License: CC BY 4.0
arXiv:2403.09682v1 [physics.pop-ph] 07 Feb 2024
University of Zagreb Faculty of Organization and Informatics, Republic of Croatia
On the Theory of Quantum and Towards Practical Computation
Robert Kudelić
Abstract
Quantum computing exposes the brilliance of quantum mechanics through computer science and, as such, offers a marvelous and exhilarating journey to go through. This article leads along that journey with a historical and current outlook on quantum computation, geared toward computer experts but also toward experts from other disciplines. It is an article that bridges the vast gap between classical and quantum computation and opens an entering wedge through which one can bring oneself up to speed on quantum computation and, in a straightforward manner, become acquainted with it. We are indeed in luck to be living in an age where computing is being reinvented, not only seeing history in the making firsthand but, in fact, having the opportunity to be the ones who are reinventing it–and that is quite a thought.
Keywords:
Quantum Computation Fundamentals Review History Open Questions Quantum Phenomena Technology Algorithm Design Pattern Application.
1 For Once It All Began
How vast the chasm is, how difficult it is to grasp, and how steep the learning curve has become–and perhaps always has been–is a realization at which one arrives when, for the first time, one tries to reach a destination called quantum (QTM) computation. It is an awe-inspiring journey that we will relive through this article, unsealing its complex secrets and gradually grasping the computation known as quantum computation.
Before we begin with the subject at hand, it would therefore be of interest to give a brief historical background and the motivation behind this work. It all began long ago, perhaps some years before what is typically remembered. All the way back in 1935, the principles of quantum mechanics were already heavily discussed [27, 66], namely superposition (a particle being in multiple states at the same time, until observed [64]) and entanglement (correlation between particle states no matter the distance between them [64]), which we will soon define in more detail and which are so crucial to quantum computation as well [4]. A number of decades prior to those events, on December 14, 1900, to be exact, Max Planck struck the beginning of quantum mechanics "at a meeting of the German Physical Society". [191] Those were tumultuous and exciting days, I presume, but the best was yet to come. A few decades passed, and ideas and research were advancing to and fro. Some scientists, excited, were trying to advance the theory of quantum mechanics, while others were working against it–not only working against it but even fighting it–which in science is business as usual: that which nature’s physical systems deny needs to perish.
Then one day, as knowledge increased, some started pondering about computation that is microscopic and able to simulate physical systems with which classical computers have difficulty. [204] That person, right at the forefront, thinking these "microscopic" thoughts that were far beyond the abilities of those days, was Richard Feynman. [204, 5] It is not known when exactly he first started pondering the idea of a quantum computer, but what is known is that in his 1959 talk he was predicting an enormous miniaturization of technology, even to the size of an atom. [204, 5] There was nothing that he saw in the laws of nature that wouldn’t allow this miniaturization, and he was speaking about it. [204, 5] Time passed, and Feynman, together with other scientists, tried to advance the issue. Then something happened, and a theory so necessary for practical quantum computation started to emerge.
In 1981, Feynman gave a conference talk on "simulating physics with computers" [204, 38], which was later published as an edited transcript [204] in a scientific journal [70]–and for all intents and purposes this event launched "quantum computing as a field of study" [204, 5], "which established the beginning of quantum information theory" [38]. At about the same time, others were investigating as well, and from then onward, nothing was ever the same. What is fascinating is that both Manin [140] and Benioff [17] were, just a year prior, in 1980, bringing into the foreground ideas of large significance. In his book Computable and Uncomputable [97, 204], Manin discussed how simulating a many-particle system requires exponential cost on a classical computer [204, 140, 5], while Benioff went further down the quantum line, complementing Manin, by explaining how one would describe computation from the quantum outlook and suggesting, by the construction of such a model, that quantum computation might be a possibility [17].
On a somewhat different note, the question that continually puzzled Einstein–whether two particles really can be entangled and have correlation between their states without hidden information–was being experimentally answered by Alain Aspect et al., and the answer was yes, they can. [38] With the first successful real-world experiments conducted a decade earlier, in 1972, by Freedman and Clauser (which in turn depended on the work of Carl Kocher [120]), and with an extra-strong correlation being observed [43, 74], Aspect’s work, in which "the greatest violation of generalized Bell’s inequalities" [9] ever was achieved, put the predictions of quantum mechanics strongly on the map.
Only a few years later, in 1985, another important advance came when David Deutsch "formalized the notion of a quantum computer" [204, 60] and raised the question: "Whether quantum computers might have an advantage over classical computers at solving problems that have nothing to do with quantum physics" [204, 60]. True, the algorithm that Deutsch and Jozsa later published [61] was of little practical significance, but it showed the superiority in efficiency of the quantum algorithm over its deterministic classical counterpart. [61, 237] Thinking about quantum computation and the ideas that came through Benioff [17] and Feynman [70], Deutsch was led in 1989 to propose what later became the standard model for describing quantum computation, the well-known circuit-gate model [62]. [5]
With Deutsch formalizing the notion of a quantum computer, Umesh Vazirani and his student Ethan Bernstein were formulating "a contrived problem that a quantum computer could solve with a super-polynomial speedup over a classical computer" [204, 23]–that was in 1993 [22]. The same superiority was presented in 1994 by Daniel Simon, who showed that by solving an idealized version of the problem, which is finding the period of a function, quantum computers could indeed achieve an exponential improvement in speed when compared to their classical counterparts. [237, 238, 204] And despite the fact that Simon’s idea, just like the one from Deutsch, had little practical weight and no application in sight, that was soon to change, for in just a short while tremendous happenings would occur for quantum computation. [204]
The same idea, and an instance where quantum computers would show their superiority, in 1994 inspired Peter Shor to baffle the world and publish the paper in which he presented an efficient way of calculating the Fourier transform, which he used to define an efficient algorithm for computing discrete logarithms–and all this was done for a quantum computer. [232, 235, 204] But that was not the end. A few days after the aforementioned breakthrough, and by using similar ideas [204], in the same seminal paper Shor presented "an efficient quantum algorithm for factoring large numbers" [232, 204]. [232, 235] The implications for cryptanalysis were enormous, and the interest in quantum computing once again exploded. [204]
All was not well in the land called Q-Country, though, and at the same time those great achievements were being made, a dark cloud was looming over quantum computation, and that dark cloud was called decoherence–an inability of a computer to compute in a quantum manner because of interaction with the outside world. [204, 127, 256, 92] But the question of decoherence was already being tackled and remains one of the main issues with quantum hardware to this day. [144, 33, 44, 32] Shor himself had already, in 1995 and 1996, published research on quantum error-correcting codes and on fault-tolerant methods by which one could compute reliably on quantum hardware, which is rather noisy. [234, 246, 233, 204] And with that, "by the end of 1996 it was understood, at least in principle, that quantum computing could be scaled up to large devices that solve very hard problems, assuming that errors afflicting the hardware are not too common or strongly correlated" [204, 3, 119, 202]–which is confirmed by the latest research dealing with quantum computation, scalability, and decoherence: "fault-tolerant quantum computation will be practically realizable." [123]
During those same exciting times [204], as John Preskill adequately called them [204, 201], another important realization was happening. It was the year 1995 when Cirac and Zoller published that, with the tools of atomic physics and quantum optics, one could implement a quantum computer and perform quantum logical operations. [47] Building on that foundation, a few months later in the same year, Monroe et al. demonstrated a fundamental quantum logic gate, the "operation of a two-bit controlled-NOT quantum logic gate", to be exact [150], which, coupled with simple single-bit operations, formed a universal quantum logic gate [150]–this was quite an important piece of the quantum computing puzzle, since if a correct and practical model of computation cannot be found, then all effort, perspiration, and tears would be in vain. With previous breakthroughs, especially those that happened during the last decades of the 20th century, a strong foundation was laid, and it seemed quite possible that one day quantum computation would be a reality. The possibility of that reality was never given up, and research continued.
Then, soon after Shor presented his Las Vegas quantum algorithms [232], in 1996 Lov Grover emerged with another fascinating discovery: it is possible to search a database for an entry in $\sqrt{N}$ time and identify a record with a probability of $\frac{1}{2}$ [87]–which then represents quantum Monte Carlo and is asymptotically optimal [89], and by repeated sampling this probability can grow arbitrarily [87]. A substantial achievement, since classical machines, both deterministic and probabilistic, will need $\frac{N}{2}$ time to achieve the same probability bound of $\frac{1}{2}$, and only in an ordered list, via binary search, can classical machines achieve $\log_{2}N$ time. [88] Others, also researching quantum computing, were advancing tools for better understanding such computations and developing theories for quantum state machines; 1997 was the year. [154, 155, 121]
Shortly after, just a few years had passed when, in 2001, the company was IBM, and scientists there announced that successful testing of a quantum computer had been conducted. The capacity of the machine was 7 qubits (first register 3, second register 4), and the quantum computer itself was implemented by nuclear magnetic resonance. [222] Shor’s algorithm was executed on this machine, and by employing quantum effects the number 15 was factorized [222]–an achievement for the history books, deserving to be noted as a big success. Then again, in 2007, a validation came when scientists at the University of Queensland (UQ) experimentally demonstrated execution of Shor’s algorithm for large number factorization by "using quantum logic gates based on photon polarization"–they also factorized the number 15 (first register 3 qubits, second register 4 qubits). [222] At this stage, quantum computation had gone from theory to practice. By the end of the 1990s, enough foundational theory had been discovered, and the beginning of the 21st century was the dawn of practical quantum computation. Machines were being built, algorithms were being implemented, and now theory and practice go together.
And so in 2009 and 2012 new experiments confirmed the reality of quantum computation, making it even stronger; one more successful experimental demonstration of Shor’s algorithm took place, the method being an integrated waveguide based on a silicon chip, with only 4 qubits based on photons used for factorization of the number 15 (first register 1 qubit, second register 3 qubits). [222] And as a supplementation, in 2012, at the University of California (UC), one more experiment successfully factored the number 15, Shor’s algorithm in action, "using phase qubits and superconducting wave resonators", with 4 qubits, just like the previous group of researchers (but with 2 qubits in the first register and 2 in the second). [222]
This series of implementations of quantum computers and successful algorithm runs continued, and soon there was quite a group of scientists who had dabbled in quantum computing and witnessed its strangeness and marvelousness at the same time, e.g. Martin-Lopez et al. in [141] factoring the number 21, via Shor, "using only two photon-based qubits" (2012); Nanyang Xu et al. in [273] turning the factorization problem into an optimization problem, by a scheme from Burges of Microsoft Research, and factoring the number 143 with only 4 qubits–an adiabatic algorithm run on a liquid crystal nuclear magnetic resonance quantum processor; and, for example, Thomas Monz et al. in [152], via five trapped calcium ions on a quantum computer, implementing a scalable version of Shor’s algorithm, with the approach providing "potential for designing a powerful quantum computer, but with fewer resources." [222]
Next, it was IBM again, which had seriously grabbed hold of quantum computation and was making strides. It was 2016 when the company announced the creation of a 5-qubit quantum computer, where one qubit would correct errors, with the computing device being based on a "five-qubit superconducting chip with star geometry and implementation of the complete Clifford algebra." [222] The machine was programmable; it allowed for the creation of gates and the modeling of operations. [222] But the progress did not stop there, as in 2017, in May, to be exact, another announcement was made: quantum computers with 16 and 17 qubits had been implemented; and then an enormous leap, in November of 2017, when IBM announced a quantum device with 50 qubits, where 20 qubits were used for computation and 30 were used for error correction. [222] It was possible for this quantum device to maintain its qubits in a coherent state for up to 90 μs, and with a consumption of 10-15 kW of power the device was "sufficiently energy-efficient"–not including the energy for device cooling. [222]
Quantum computing research was now beyond its fledgling days, and in 2016 the first quantum satellite, called Micius, was launched from China. [38] The goal of the space mission was to "perform quantum experiments at space scale", which was an important achievement for quantum communication and space science at the same time. [38] This attempt at a space-scale quantum leap resulted, in 2020, in a new milestone for space quantum communications when, via Micius, a secure link, by quantum key distribution, was established between two ground stations separated by 1120 kilometers. [275] While these events were happening, another breakthrough was in the making.
Intel was interested in quantum computation, and this they loudly expounded in January 2018, when a declaration was made of a superconducting quantum chip implementation; the name was Tangle Lake, quite an Intelish name, I might add, and the number of qubits was 49. [222, 104] This event was followed by one coming from Google, who in March 2018 presented a new quantum superconducting processor, Bristlecone, with a capacity of 72 qubits. [222, 115] This device was a continuation of a previous one, announced a few years earlier, with 9 qubits and a rather low level of error, which was 1% for data reading, with 0.1% and 0.6% for one-qubit and two-qubit quantum gates, respectively. [222] With a two-dimensional structure of two 6·6 arrays placed one above the other, the system can track the errors happening during computation and correct them. [222]
With the ever-moving advance of quantum devices, research was continuing in different aspects of quantum mechanics, an important element for quantum computation, and although the evidence is still not conclusive, in 2018 quantum entanglement was observed in objects almost visible to the naked eye, a potential application of which could be seen in the quantum internet and in physics research. [214, 198]
In 2019, the Google AI Quantum group announced [204] "a 52-qubit superconducting chip named Sycamore, which they claim has demonstrated quantum supremacy" [213, 156]. A first claim of this type, and a very exciting one; however, it is often only when one looks back from a distance that one can clearly see which event it was that made something of something; it might be that it was this one, but perhaps it was not just yet. [222, 193]
As it seems that the previous question has not been answered yet, let us jump to one that is, namely, quantum entanglement. In 2022, after decades of effort and research, it seems that Einstein’s "spooky action at a distance" was finally thoroughly investigated and brought into the realm of fact, since in the year mentioned Aspect, Clauser, and Zeilinger received the Nobel Prize in Physics [253], and while this research article is not about awards, a question that has for many decades puzzled some of the best minds deserves a mention. The last loophole in the well-known Bell test was closed in 2015 [94], thus supporting quantum theory [146]; the universe we live in is not so strange anymore; it is quantum entangled and magnificently fascinating.
If the previous event seemed imposing, the next one is in at least the same category, for in 2023 a reversing protocol for a quantum system was demonstrated, with the authors noting that this new understanding of quantum mechanics could have applications in quantum information technology. [225, 255] The protocol is a universal time-reversing mechanism with an arbitrarily high probability of success, where interference of different paths in the end causes the system to jump to the state it had some $T$ time units before–the protocol is one "requiring no knowledge of the quantum process to be rewound, is optimal in its running time, and brings quantum rewinding into a regime of practical relevance." [225, 255]
Which brings us to the cusp of time: it is still unknown which technology will prevail [213], or whether it will perhaps be a mixture of the two, quantum and classical working in tandem, a most probable outcome; but what can be stated with greater certainty is that the next ten years will probably reveal and answer far more in terms of quantum machines’ usefulness and their areas of specialty.
While the history of quantum computing is for the time being concluded, what comes next is an open question, a choice, and a work that is yours.
2 Quantumness of Quantum Computing
Even though quantum computing has seen great progress, it seems that it is a subject with which scientists and practitioners are still not that familiar. There are probably at least these reasons behind it: their education has not covered the topic, they still do not see the use of such a tool, the state of quantum computing is still far from mainstream, and the link between quantum physics and computing is not an easy one to make. It is also a matter of fact that quantum computation is a sub-discipline that is multidisciplinary in its essence and requires experts with vastly different backgrounds [157]; as such, it represents a sub-discipline that is difficult to get one’s head around.
If one searches through the existing scientific literature, a substantial number of articles will now be found, ranging from theory to practice, from the synthesis of knowledge to algorithms. Naturally, the papers include important elements that one needs when dealing with quantum computing. It is, however, problematic that so many of these elements present a stumbling block on the learning curve towards the quantum way of conducting work–quantum computation is so vastly different from classical computing, and it is perhaps quite daunting in the beginning to come from classical computation, where one knows much, to quantum computation, where one knows little.
For example, right at the start of one’s journey to the universe of quantum, instead of a classical bit for information storage, one is confronted with a quantum bit, i.e. a qubit. And instead of storing one value, as in a bit, in quantum computation, one has a situation where one qubit is in both states [153] (both levels) simultaneously.
After dealing with the qubit question, one is confronted with other quantum computing peculiarities like entanglement and collapse of quantum states through observation. It is almost one big thing after another, and to get to grips with these and other questions, the goals of this research article on quantum computation are the following:
Historical Outlook
Develop a never-before-published historical context of quantum computing that is encompassing and detailed without missing valuable information, precise, covering milestones, and presenting the most significant achievements.
Theory Chronology
Synthesize a one-of-a-kind broad, deep, precise, and thoroughly referenced chronological outlook on quantum computing, both textually and visually, through a timeline presenting a broad picture of the field and segments of its history that will expound on the progression of the theory, present those that came before, and show links between quantum phenomena and other fields.
Foundational Terminology
The basis of any theory, together with axioms, theorems, lemmas, and corollaries, is its terminology and definitions of those terms. The literature at the moment offers no complete, deep, and well-referenced material. Such a state of the matter leads to confusion and a lack of understanding in terms of quantum computing. A compendium of such nature is therefore a must; thus, to construct and present such a work is one of the goals of this paper.
Standard Model
As a way of delving into the practical part of quantum computing and gearing toward computer experts in a streamlined and straightforward manner, quantum computing knowledge will be combined through a standard model of computation, with special emphasis on foundational high-level quantum algorithm modeling and a design pattern.
General Outlook
Synthesis of the present state of the art with the future importance and possibilities of quantum computing. Embedding discussion on problems still in need of solving while not forgetting those pervasive open questions.
From Now to Beyond
Provide a number of quality literature materials that will present themselves as an extended arm of this research. Facilitating an even broader reach of the research conducted and enabling future research and algorithm development through a compact number of reliable steps to the next breakthroughs and game-changers.
In order to achieve previous goals, an effort will be made to cater to the computer science mind and to build a strong theoretical foundation and intuition. Thus enabling a correct, consistent, and deep understanding of quantum computation and quantum mechanics’ phenomena. With the introduction over, the next step in the journey is foundational terminology.
3 Foundational Terminology
When one is dealing with any subject, there are primarily two ways in which one can proceed to expose the issue. The first is to start with the general and then build in a top-down manner. The second is, of course, to start with the concrete and then build in a bottom-up manner. They both have their pros and cons, with the latter being more fascinating and interesting, although in certain instances it is more difficult to understand the subject in such a way, and with the former being more conceptual and gradual, and not a stumbling block on the mind while trying to grasp some complex new idea. One would choose one or the other depending on the subject, the audience, and perhaps some other factors as well.
It is often the case, perhaps even exclusively, in the scientific literature, at least in the discipline of quantum computation, that the more practical approach, which is bottom-up, is used. Considering that quantum computation at its best is physics in action, that approach is logical and has its merits. However, quantum mechanics is so strange and at times so counter-intuitive that it is quite challenging to understand its complex essence, and the mind has an issue combining all those different threads of thought at the same time–for thinking, one needs time, and for thinking about quantum computation, one needs a considerable amount of time. And if learning is impeded, if the subject has not been understood, one cannot expect great results from then on.
Therefore, in order to continue the train of thought from previous sections, to give the mind the necessary time for information incubation, and to build up essential intuition, before we delve into some concrete examples of quantum computation essential for the review and outlook being written, we will first define a broad range of terms. These terms will be linked to that practical quantum computation and to the revealing of fascinating knowledge about it, but not so overwhelmingly that it will impede progress more than would be expected. The first stop will then, fittingly, be the definition of quantum mechanics.
Quantum Mechanics It is said among physicists that quantum mechanics represents the most complete as well as the most accurate description of the universe we live in. [157] It is a theory consisting of rules and principles that define a framework that is then, in turn, used in order to develop other physical theories. [157] What these rules, principles, and mathematics are, we will soon see.
Quantum Computing The act of using those rules and principles of quantum mechanics in order to carry out computation is then called quantum computing. [213] Quantum computing has two powerful mechanisms through which computation is performed, namely superposition and entanglement, and these have no counterpart in classical computation. [213] Such is the nature of computation that is quantum, and these are its key advantages. [213] It is well known what data is and what information is, but how is that transferred into the realm of quantum? We will answer that next.
Quantum Information Those well-established definitions and understandings of data and information are at a general level unchanged; however, at the practical level, the situation is quite different. According to the well-known no-cloning theorem, quantum data cannot be copied, and as such, it lasts only as long as the program lasts. [268, 213] Data is, to a physicist, an encodable and storable feature that can be processed "in some physical system using some physical process." [204] Data may then be regarded as a feature that one stores and processes in a quantum state. [204]
Quantum Bit A qubit, or quantum bit, represents an indivisible unit of quantum data. [204] Abstract qubits can be encoded in a physical quantum system, and that qubit can be "an atom, an electron, a photon, an electrical circuit, or something else." [204] Unlike a classical bit that can be 0 or 1, a qubit can be in multiple states simultaneously, mathematically described as a vector in a complex Hilbert space, "with two mutually orthogonal basis states which we can label $|0\rangle$ and $|1\rangle$." [204] These orthonormal states can, for example, correspond to a different polarization of a photon or perhaps to a different spin of an electron. [215]
Superposition Feature of being quantized, Fig. 5, and having infinite degrees of freedom, that is, being in multiple states at the same time (linear combination)–until observation has been made. [249, 280] This feature represents one of the two main pillars of quantum mechanics, the other being entanglement. [29] Through superposition, one has access to the real power of quantum computation via the exponential state space of multiple qubits. [215] "Just as a single qubit can be in a superposition of 0 and 1, a register of $n$ qubits can be in a superposition of all $2^{n}$ possible values." [215]
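To make the state-space claim above concrete, the following is a minimal numerical sketch (not taken from the paper; it assumes only NumPy) of an equal single-qubit superposition and of how the number of amplitudes grows as 2^n with the number of qubits n.

import numpy as np

ket0 = np.array([1, 0], dtype=complex)      # |0>
ket1 = np.array([0, 1], dtype=complex)      # |1>
psi = (ket0 + ket1) / np.sqrt(2)            # equal superposition (|0> + |1>)/sqrt(2)
print(np.abs(psi) ** 2)                     # [0.5 0.5] -> equal measurement probabilities

# A register of n qubits is described by 2**n complex amplitudes.
for n in (1, 2, 10, 20):
    print(n, "qubits ->", 2 ** n, "amplitudes")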
Entanglement Quantum state where particles, Fig. 6, and in quantum computing qubits, are locked, with one exhibiting an influence on the other (there is a correlation between particle states; e.g. one particle collapses to 0, the consequence of which is that the other then measures to 1). [213] Distance between particles does not play a role; that is, entanglement correlation works regardless of the distance between particles–this is a phenomenon of which Einstein did not speak so kindly when he said "spooky action at a distance" [213], but it turned out to be correct nevertheless [188, 190]. Data in quantum computation is therefore stored both in qubits and in the relationships between them, with the amount of stored data being exponential in the number of qubits. [204]
Bell State Quantum state, also known as EPR (Einstein, Podolsky, Rosen) pair of two qubits that are in superposition and are maximally, in regard to correlation, quantumly entangled. [250, 16] These Bell states can be both symmetric and asymmetric (e.g. 1 and 1, or 1 and 0), [250, 76] with applications in quantum teleportation [230], dense coding [197], information processing [59], privacy protection [129], cryptography [252], networks [281], optics [132], etc.
Teleportation Enabled by particles that are in a quantum state and entangled, where an unknown particle state is transferred between far apart parties, from one party to another, from one particle to another, but the particle itself is not sent. [4] In the procedure for such an event, before teleportation can take place, some source $S$ needs to generate an entangled pair and send particles to their respective destinations. [196] Then, when quantum communication can begin and data transfer happen, after one side has made a measurement, the other side needs to be contacted via classical channels, bound by no faster than light communication, so as to inform them of the measurement parameters for observation, through which the other side will ultimately receive quantum data via the obtained state. [215, 196]
Dense Coding Protocol that is dual to teleportation, Fig. 7, and depends on the entanglement described in the EPR experiment; it uses a single qubit in order to transfer, that is, transmit, two bits (in terms of classical information). [215] If source and destination each hold a particle of an EPR entangled pair with maximal correlation, which the source has prepared before sending one particle to the destination, it is then possible to transmit two bits of classical data via only one qubit: the destination applies a unitary operator and returns its particle to the source, where the party at the source can now jointly measure both particles, that is, the entire EPR pair, and thereby also learn which operator the party at the destination used to manipulate the particle it received. [21]
Measurement Disturbing the quantum state by making an observation, Fig. 8, intended or otherwise. [215] Quantum measurement is probabilistic, and it is not an easy task to "pick" the result one would like to receive. [215] Since data from a qubit can only be obtained by measurement, regardless of the superposition of states, in the end it is possible to extract only one classical state, in terms of data, from a qubit–and the reason is that when measurement takes place, the superposition collapses and the state is changed to one of the basis states. [215] In order to describe the phenomena of quantum mechanics, scientists have used complex numbers, but as the imaginary part of the phenomenon description is not observable in the physical world, out of the four dimensions that we would need for two base states of quantum computing, one would have only two dimensions; thus, the Bloch sphere has three dimensions, two for polarization and one for the base states. [248] New information is however coming into focus, as it seems that there are entangled states that are distinguishable only by their imaginary component [270, 271, 212, 39, 135]–as fascinating as these discoveries are, whether the imaginary number mathematical trick used to facilitate calculations is necessary for the physical world is yet to be determined via the mountain of evidence that future research needs to provide.
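As an illustration of the probabilistic collapse just described, here is a small hedged sketch (NumPy assumed; the amplitudes are arbitrary illustrative values) of measurement as sampling from the Born-rule probabilities |amplitude|^2, after which the superposition is replaced by a basis state.

import numpy as np

rng = np.random.default_rng(0)
psi = np.array([np.sqrt(0.2), np.sqrt(0.8)], dtype=complex)   # a|0> + b|1>, with |a|^2 = 0.2
probs = np.abs(psi) ** 2

outcome = rng.choice([0, 1], p=probs)       # one probabilistic measurement
collapsed = np.zeros(2, dtype=complex)
collapsed[outcome] = 1.0                    # the superposition is gone after measurement
print("outcome:", outcome, "post-measurement state:", collapsed)

# Repeating the experiment on freshly prepared copies of psi recovers the statistics.
print(np.bincount(rng.choice([0, 1], p=probs, size=10_000)) / 10_000)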
Quantum Gate An operator, also known as a quantum logic gate, is used to both create and manipulate quantum states. [150, 285] It is an elementary quantum circuit that makes operations on a small number of quantum bits. [150, 285] With these, one is building a complex quantum circuit, and this complex circuit is enabling the execution of an algorithm on the quantum machine. [150, 285]
Quantum Circuit Model of computation consisting of a series of qubits (or some sort of quantum data storage), initializations, gates, and measurements. [63, 40, 72]
Quantum Algorithm An algorithm, much like a classical algorithm, that uses quantum effects and represents a sequence of steps, which in turn, by a number of operations, manipulate the initial quantum state for some input, and at the final stage, with measurement being taken, the algorithm returns the correct answer. [4]
Quantum Parallelism The effect present in quantum systems where the amount of parallelism increases exponentially as the size of the system itself, that is, the physical space required, increases linearly. [61] As $n$ qubits allow one to work at the same time with $2^{n}$ states, quantum parallelism is the effect that gives quantum computing its superiority, as it bypasses the classical restriction of the time/space tradeoff by giving an exponential quantity of computation space in a linear quantity of real physical space; therefore quantum machines can compute solutions to all possibilities at the same time, while classical computers can compute only for one input state at a time. [215]
Interference When measurement is performed on a superposition of output states for a particular input, what one will receive is a random collapse to one state out of all states in the superposition, Fig. 9, with all other states, that is, values, being destroyed. [215] In this way, one cannot reliably compute, and such behavior needs to somehow be guided. Interference allows us to do exactly that: guide towards the desirable output. With interference, it is possible to cause a cancellation between exponentially many parallel input states, with the goal being to produce such an interference between states, that is, of the wave function, so as to destroy all undesirable states and collapse into exactly the one we need. [4] The combination of quantum parallelism and interference gives quantum computation tremendous power, and its use in quantum algorithms is essential. [4, 38]
Decoherence Because of the interaction of a quantum system with its environment, which is inevitable, the state of a quantum system is extremely fragile, Fig. 10, and due to this interaction the quantum nature of the system can be lost–this loss of quantum information, this distortion and collapse of superposition due to the interaction of the quantum system with its surroundings, is called decoherence. [4, 215]
By reading the text to this point, a first quantum computation has already been performed; in fact, probably more than a few were done in one’s mind. This incubation of data and information has not only made one knowledgeable about the subject of quantum computing but has also developed intuition and a crucial way of thinking needed for such a topic as quantum computation. And now, with neurons and pathways of the brain speaking quantum computation, we will deal in a bit more detail with topics that were touched upon, but for which one’s scientific curiosity, trying to decode the universe we live in, wants more.
4 Quantum Effects and the Universe we Live in
Some time has passed since the event when, during one of his talks, Nobel Prize winner Ivar Giaever told the story of his youth and a job that he applied for and received. After getting the job, his mentor told him a story about quantum mechanics, more specifically tunneling–the story was so strange that Giaever’s own words will best explain his disposition: "I did not believe a word of what he told me, nothing." Giaever got his job and decided to be quiet, but what is it that his mentor, John Fisher, told young Giaever?
He told him a story of small particles; he told him a story of the underlying laws of physics that are the foundation of the world we live in; he told him that if one were to throw a tennis ball at a wall, that ball would eventually cross the wall and end up on the other side, in the same condition in which it was before it went through the wall; and to top it off, he told Giaever that there would be no hole in the wall. Now that was some story, like something from a fairytale, and Giaever’s reaction of not believing a word of what he was told was expected.
During that fascinating talk, Giaever expounded: what if one were to take an extremely small particle as the ball, i.e. an electron, and throw it toward obstacles that are very close to one another, distanced by a few atoms, and not touching? In that case, there is a finite probability that the electron will find itself on the other side of the obstacle, never having been in between. That is quantum tunneling, and that is what Giaever shared his Nobel Prize for, in 1973. [176] Quantum effects are real and are typically observable only with very small particles, on an atomic and subatomic level.
Quantum effects are dependent on a wave function and its accompanying probability that a state will be observed. This wave-particle duality was an outstanding discovery with profound consequences that are shaking science to this day. [7] The differences between classical and quantum systems are many, with one of the more intricate being the answer to the question of what one knows about one type of system and, of course, the other. If we know everything about a classical system, including all its characteristics, we naturally know everything about its components; however, this does not hold for quantum systems, which can clearly be seen in a quantum phenomenon called entanglement. [204, 206, 227] One could, for example, have a composite quantum system, i.e. $AB$, and know everything about that system that the laws of physics would allow us to know; despite that fact, if one were to observe just part of the system, i.e. $B$, the information needed to completely characterize that part of the system is missing, as the series of expectations for the subsystem depends on an unknown value of the variable for some other subsystem, in this instance on the observation of $A$. [227, 204]
This series of expectations, being a consequence of superposition, is linked to entanglement in a meaningful way. Extra states, with no analog in a classical system, leading "to the exponential size of the quantum state space are the entangled states". [215] In this way, by undergoing initialization, quantum state transformations, and measurement, a quantum system achieves its result. [215]
Even though we live in the quantum world, seldom do we think about it, but the macroscopic world we are surrounded with is not isolated from its own surroundings and is therefore in uninterrupted interaction with the environment, meaning it is continually measured, a phenomenon aforementioned and called decoherence. [204] Such a quantum system, continually being observed, represents a system known from the dawn of time and "is well described by classical physics." [204] Though "weird," a vast number of experiments have shown that quantum mechanics correctly describes physical reality. In order to combat decoherence in quantum computers, a breakthrough came, but not from the physical side, as was perhaps expected. [215] It was theorized by some that quantum error correction is beyond our abilities "because of the impossibility of reliably copying an unknown quantum state", yet it was not so, as it is possible via error-correcting techniques to design error-correcting codes by which one can detect specific errors and reconstruct "the exact error-free quantum state." [215]
And so this battle between decoherence and superposition continually "rages." A quantum system can perform an enormous amount of computation in parallel, but accessing the desired result is far from easy. [215, 157] In order to read the result, the quantum state is disturbed, only one of those parallel threads is read, and as the measurement is probabilistic in nature, "we cannot even choose which one we get." [215] It is, however, possible to skillfully deal with the problem of measurement and thus exploit quantum parallelism; "this sort of manipulation has no classical analog and requires nontraditional programming techniques." [215] Shor’s factorization algorithm manipulates quantum states in such a way that the "common property of all of the output values can be read off" [215], and in this way directs toward the output one would like to achieve, while, for example, Grover’s search algorithm makes an amplification through which the probability that the result of interest will be read is increased, thus manipulating quantum states. [87, 232, 215]
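To illustrate the amplification just mentioned for Grover’s algorithm, the following is a hedged numerical sketch (not the original algorithm text; NumPy assumed, and the marked index is an arbitrary illustrative choice) of amplitude amplification over N = 8 basis states: an oracle flips the sign of the marked amplitude, a diffusion step inverts about the mean, and after roughly (π/4)√N iterations the marked item dominates the measurement statistics.

import numpy as np

N, marked = 8, 5
state = np.full(N, 1 / np.sqrt(N))                    # uniform superposition over N basis states

oracle = np.eye(N)
oracle[marked, marked] = -1                           # flips the sign of the marked amplitude
diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)    # inversion about the mean, 2|s><s| - I

for _ in range(int(np.floor(np.pi / 4 * np.sqrt(N)))):
    state = diffusion @ (oracle @ state)

print("probability of reading the marked item:", abs(state[marked]) ** 2)   # ~0.945 here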
Basic operations in any classical algorithm are data copying and data deletion. While trying to project this to a quantum computer, one comes to a brick wall, as this is not possible in a quantum system; perfect copying of an unknown quantum state is an intrinsic impossibility, as per the no-cloning theorem, not just a limitation of laboratory conditions. [215, 223] If, on the other hand, we disregard the notion that the copy needs to be perfect (producing a perfect copy of a limited number of quantum states with probability $<1$), then one can devise an apparatus (a copier or cloner) by which copying can be conducted, reproducing the desired state through an approximation or to a degree of probability. [98]
As one might presume, with copying being such a stumbling block, data deletion also differs substantially from the classical case. If one assumes that there are two identical copies of an arbitrary and unknown quantum state to be deleted, this process actually cannot be accomplished (as per the no-deleting theorem), aside from deleting approximately–as is the case for quantum cloning; however, just as is the case for cloning, the process of deletion is possible if one deals with known orthogonal states. [221, 192] This inability to clone and delete quantum information, but only express possibilities already in existence, postulates the conservation of quantum information, as information cannot be created nor destroyed. [192, 288]
Related to cloning and deleting quantum information is the inability to hide information, known as the no-hiding theorem. [192] If a quantum system interacts with its surroundings and loses information, that information actually is not missing; it simply resides somewhere else in the universe–that is, correlations between the system and the environment are not able to hide information. [192, 288]
In spite of all the hurdles we go through when trying to discover new knowledge and understand the universe in which we are, this same quantum universe works perfectly and mind-bogglingly precisely, with quantum computers being devised and in operation. That being said, scientific discovery and painstaking experimentation have produced criteria for the successful implementation of a device that would be called a quantum computer; they are found in [63] and are as follows:
I
"A scalable physical system with well characterized qubits", that is, a collection of qubits with physical parameters that are accurately known,
II
"The ability to initialize the state of the qubits to a simple fiducial state, such as 000000000000", that is, initializing quantum registers to a known value before one starts computing,
III
"Long relevant decoherence times, much longer than the gate operation time", that is, dynamics with the environment brings about quantum state decay with which quantum computation is possible,
IV
"A ’universal’ set of quantum gates", that is, a set of quantum gates that are able to implement via a finite sequence of gates any quantum operation,
V
"A qubit-specific measurement capability", that is, the capacity to be able to measure specific qubits.
In addition to the previous five, two additional criteria are added, namely "the ability to inter-convert stationary and flying qubits" and "the ability to faithfully transmit flying qubits between specified locations", in order to achieve quantum communication, as not all information processing is only computation. [63] The need for the additional two criteria is clearly seen in quantum key distribution [20] and quantum cryptography [63]. It is, however, not an easy task to transmit a qubit from one place to another, and when this is done, decoherence presents an important hurdle to overcome. [82, 204]
In spite of all of its strangeness, quantum mechanics has withstood the test of time, and for the time being, it stands supreme. But just as is the case for the theory of relativity and Newtonian physics, so is the case for quantum and classical physics; both are needed. In fact, classical is quantum, but simply for large objects for which wavelengths are so small that they cannot be measured. Thus, if something functions specifically, it does not mean that it functions generally, but if it does not function generally, it does not mean that it is not useful. The next section is most definitely useful, as it deals with quantum gates and algorithms.
5 Computation with Quantum Gates
Fundamentally speaking, as is the case when one does classical computation, by analogy, so is the situation for quantum computation, since in order to manipulate quantum information, one needs quantum gates that are then forming a quantum circuit and consequently a quantum algorithm. There is a myriad of quantum gates, e.g. Identity (I), Not (NOT or PauliX), Controlled Not (CNOT), Controlled Controlled Not (CCNOT or Toffoli), Swap (SWAP or S), Hadamard (H), Phase (P), etc. [145, 245], with some being a single qubit gate while others are multiple qubit.
Before we proceed to a more in-depth look at quantum computation, we will first expound on a number of quantum gates, as this knowledge is essential for understanding quantum circuits. Let’s start with the quantum gate whose classical equivalent should be known to every computer expert and physicist: the NOT gate. Let us assume that the superposition states we will use from now on are $\ket{0}$ and $\ket{1}$, with $\ket{\psi}=a\ket{0}+b\ket{1}$. This basis is called the computational or standard basis and is represented in three-dimensional space by the $Z$ axis, therefore the Z-basis, which is "generally the only basis in which we can make measurements of the system." [53]
NOT Not gate is a single qubit gate. [145] It is also denoted PauliX (named after Wolfgang Pauli, who received the Nobel Prize in Physics in 1945 for proposing "that no two electrons in an atom could have identical sets of quantum numbers" that correspond to "distinct states of energy and movement" [170]), as the operation it makes is a rotation by $\pi$ radians around the $X$ axis. [145] As a consequence of this rotation, there is a mapping $\ket{0}\rightarrow\ket{1}$ and $\ket{1}\rightarrow\ket{0}$. [145] The transformation matrix used in order to calculate an output for the gate and its input is [145],
NOT = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}   (1)
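As a quick hedged check (a sketch assuming NumPy, not part of the original text), applying the matrix of Eq. (1) to the basis-state vectors reproduces the mapping described above.

import numpy as np

NOT = np.array([[0, 1],
                [1, 0]], dtype=complex)

ket0 = np.array([1, 0], dtype=complex)   # |0>
ket1 = np.array([0, 1], dtype=complex)   # |1>

print(NOT @ ket0)   # [0, 1] -> |1>
print(NOT @ ket1)   # [1, 0] -> |0>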
H Hadamard gate is a single qubit gate. [145] Known also as the Walsh-Hadamard gate (named after Jacques Hadamard [113] and Joseph Walsh [103]), the gate makes an operation of superposition–for a basis state, the superposition that is created is equal in probability. [145, 113] Superposition is created by making a rotation of $\pi$ radians around the axis between the $X$ axis and the $Z$ axis. [265] As a consequence of this superposition operation, there is a mapping $\ket{0}\rightarrow\frac{\ket{0}+\ket{1}}{\sqrt{2}}$ and $\ket{1}\rightarrow\frac{\ket{0}-\ket{1}}{\sqrt{2}}$. [145] If we apply the Hadamard operation twice, a particle is placed into a superposition of states and then returned to its original state. [231] The transformation matrix used in order to calculate an output for the gate and its input is [145],
H = \dfrac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}   (2)
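A minimal numerical sketch (NumPy assumed, illustrative only): Eq. (2) applied to |0⟩ yields the equal superposition, and applying it twice returns the original state, since H·H = I.

import numpy as np

H = np.array([[1,  1],
              [1, -1]], dtype=complex) / np.sqrt(2)

ket0 = np.array([1, 0], dtype=complex)
plus = H @ ket0
print(np.abs(plus) ** 2)          # [0.5 0.5] -> equal probabilities
print(np.round(H @ plus, 10))     # back to |0>, because H @ H is the identity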
P Phase gate is a single qubit gate. [145] Known also as the Phase Shift gate, as the gate makes an operation of shifting a qubit’s phase, with the probabilities for the qubit staying unchanged, that is, the probabilities for the basis states $\ket{0}$ and $\ket{1}$ remain the same. [145] As the phase is shifted, there is a mapping $\ket{0}\rightarrow\ket{0}$ and $\ket{1}\rightarrow e^{i\theta}\ket{1}$, with $\theta$ being the phase shift and the period being $2\pi$. [145, 75] The transformation matrix used in order to calculate an output for the gate and its input is [145],
P_{\theta} = \begin{bmatrix} 1 & 0 \\ 0 & e^{i\theta} \end{bmatrix}   (3)
The term $e^{i\theta}$ is a part of the well-known Euler’s formula, $e^{i\theta}=\cos(\theta)+i\sin(\theta)$ (a complex number $x+yi$ that has magnitude $1$ can be stated via the aforementioned formula)–with the numbers sitting on the unit circle in a complex plane, closing an angle $\theta$ with the axis of the circle. [247]
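The following hedged sketch (NumPy assumed; the choice θ = π/2, the common S gate, is an illustrative one) shows that Eq. (3) leaves the basis-state probabilities untouched while changing the relative phase of a superposition.

import numpy as np

def P(theta):
    # Phase gate of Eq. (3)
    return np.array([[1, 0],
                     [0, np.exp(1j * theta)]], dtype=complex)

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # (|0> + |1>)/sqrt(2)
shifted = P(np.pi / 2) @ plus

print(np.abs(plus) ** 2, np.abs(shifted) ** 2)        # both [0.5 0.5]
print(shifted)                                        # the |1> amplitude picked up the factor i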
I Identity gate is a single qubit gate. [248] This gate does not modify the quantum state in any way–it is typically used in a quantum circuit when we want to show what is happening to a qubit at a certain step or when we want to cause a delay (which the researchers sometimes want to do in order to "calculate measurements of the decoherence of a qubit"). [248] The transformation matrix used in order to calculate an output for the gate and its input is the identity matrix [248],
I = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}   (4)
CNOT Controlled Not gate is a two qubit gate. [145] This gate is very similar to the Not gate, the difference being that the target qubit is flipped only if the first qubit is in an excited state, that is, there is a mapping $\ket{00}\rightarrow\ket{00}$, $\ket{01}\rightarrow\ket{01}$, $\ket{10}\rightarrow\ket{11}$, and $\ket{11}\rightarrow\ket{10}$. [215] The transformation matrix used in order to calculate an output for the gate and its input is [215],
CNOT = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}   (5)
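Tying this back to the entanglement and Bell-state definitions of Section 3, here is a hedged sketch (NumPy assumed, illustrative only): a Hadamard on the first qubit followed by the CNOT of Eq. (5) turns |00⟩ into the maximally entangled Bell state (|00⟩ + |11⟩)/√2.

import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

ket00 = np.zeros(4, dtype=complex)
ket00[0] = 1                                   # |00>
bell = CNOT @ (np.kron(H, I) @ ket00)          # H on the first qubit, then CNOT
print(np.round(bell, 4))                       # [0.7071 0 0 0.7071] = (|00> + |11>)/sqrt(2)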
S Swap gate is a two-qubit gate. As the name suggests, this gate swaps the values of two qubits; the order of the qubits is not important for this gate. [145] There is also a version of the Swap gate called the Fredkin gate (a three-qubit gate [145]), which performs a controlled swap. [215] The transformation matrix used to calculate the Swap gate's output from its input is [145],
S = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (6)
CCNOT Controlled Controlled Not gate is a three-qubit gate. Similarly to the two-qubit Controlled Not gate, this gate takes two control qubits, and depending on their values, the value of a third qubit is flipped–that is, if and only if the first two qubits have a value of 1, then the value of the third qubit is flipped. [215] This gate is also known by the name Toffoli gate. [215] The transformation matrix used to calculate the gate's output from its input is [248],
CCNOT = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \end{bmatrix} \qquad (7)
While not all of these will be used in our own calculations, they represent some of the elementary quantum gates and foundational quantum operations and are therefore mentioned as part of one's necessary quantum arsenal. For a number of other quantum gates, both frequently and infrequently in use, one can consult [53]. It is useful to have quantum transformations represented graphically; single-qubit operations are typically drawn as labeled boxes, while multiple-qubit operations are typically drawn with circles, marks, and lines–as other authors have dealt with this satisfactorily, we will not repeat it here. [215, 53]
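As a concrete companion to the matrices above, the following is a minimal sketch in Python with NumPy (our own illustration; the paper itself contains no code) that defines these gates and checks the defining property of every quantum gate, unitarity, $U^\dagger U = I$. The Hadamard gate H, used in the worked examples below, is included as well; all variable names are ours, and CCNOT would be handled analogously.

```python
import numpy as np

# Gate matrices from Equations (3)-(7); H is the Hadamard gate used in the
# worked examples below.
def phase(theta):
    return np.array([[1, 0], [0, np.exp(1j * theta)]])

I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])

# Every quantum gate is unitary: applying U† after U restores the identity.
for name, U in [("P", phase(np.pi / 4)), ("I", I), ("H", H),
                ("CNOT", CNOT), ("SWAP", SWAP)]:
    print(name, "unitary:", np.allclose(U.conj().T @ U, np.eye(len(U))))
```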
In order to know the output of a quantum algorithm, we need to be able to calculate that result, just like with a classical algorithm. There is, however, a twist in the quantum situation. Since we are dealing with particle states and quantum operations represented by matrices, we transform input into output by writing the probability amplitudes as a column vector, taking the tensor product of the qubit states when more than one qubit is involved, and then performing matrix multiplication; this transforms the amplitudes, which in turn changes the probability density and the end result. Let us therefore perform a few interesting calculations.
If we had a qubit that we wanted to place into a superposition of states, we would use the Hadamard gate, abbreviated as H. Following the aforementioned procedure, we first need a qubit. Let us therefore define the following qubit, $\ket{\psi_0} = 1\ket{0} + 0\ket{1}$. On this qubit one now needs to apply the H gate–an operation needs to be performed on the operand–so as to achieve the desired result, namely superposition. By placing the qubit amplitudes into a column vector and using the H gate matrix, we will have the following.
\frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix} \qquad (8)
Such a result has given us a qubit in a superposition; thus, by performing the above multiplication, we have $\ket{\psi_1} = \frac{1}{\sqrt{2}}\ket{0} + \frac{1}{\sqrt{2}}\ket{1}$. Perfect–we have a qubit in a superposition with both states having the same amplitude, and by squaring the amplitudes, we see that both states have a $50\%$ chance of being observed after superposition collapse. By applying the H gate again, which one can freely try, the original state would again be a reality. It is also good to note here that a qubit is always in a superposition, although when one of the amplitudes is $0$, the state is often not called such.
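The same calculation can be checked numerically. A minimal sketch, assuming only NumPy (the variable names are ours):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

psi0 = np.array([1, 0])        # |psi_0> = 1|0> + 0|1>
psi1 = H @ psi0                # Equation (8): equal superposition
print(psi1)                    # [0.70710678, 0.70710678]
print(np.abs(psi1) ** 2)       # measurement probabilities: [0.5, 0.5]
print(H @ psi1)                # applying H again restores [1, 0] (up to rounding)
```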
With the Hadamard-gained superposition state, we can proceed to another operation. Let us next perform the CNOT operation. We know what the CNOT gate does, and we also know that such a gate is a two-qubit gate. With that in mind, we will define one more qubit, $\ket{\psi_2} = 1\ket{0} + 0\ket{1}$. By placing the amplitudes of the tensor product $\ket{\psi_1} \otimes \ket{\psi_2}$ into a column vector and using the CNOT gate matrix, we will have the following.
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} \frac{1}{\sqrt{2}} \\ 0 \\ \frac{1}{\sqrt{2}} \\ 0 \end{bmatrix} = \begin{bmatrix} \frac{1}{\sqrt{2}} \\ 0 \\ 0 \\ \frac{1}{\sqrt{2}} \end{bmatrix} \qquad (9)
Therefore, the final state of the operation performed is $\ket{\psi_3} = \frac{1}{\sqrt{2}}\ket{00} + 0\ket{01} + 0\ket{10} + \frac{1}{\sqrt{2}}\ket{11} = \frac{1}{\sqrt{2}}\ket{00} + \frac{1}{\sqrt{2}}\ket{11}$. The situation we have here is different from the usual CNOT gate example given above, yet if we think about what has actually happened, this is exactly the result one would expect. We have stated that the CNOT gate flips the target qubit only if the first qubit is raised, that is, $1$. Here the qubit serving as the control was in a superposition with equal amplitudes, while the target qubit was in the state $1\ket{0}$. Therefore, as the control is in a superposition: if the control is $0$, the second qubit stays the same, while if the control is $1$, the target is raised to $1$, which means that the resulting states need to be $\ket{00} + \ket{11}$–precisely what we have obtained by performing the calculation. And as the amplitudes are $\frac{1}{\sqrt{2}}$, this has "spilled" over to the transformed state $\ket{\psi_3}$. The two states of $\ket{\psi_3}$ whose amplitudes are $0$ do not represent a logical outcome, as those tensor product pairs are not in line with the CNOT gate operation.
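The state $\ket{\psi_3}$ is, in fact, the maximally entangled Bell state. A minimal NumPy sketch of the whole Equation 9 pipeline, tensor product included (again our own illustration):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

ket0 = np.array([1, 0])
control = H @ ket0                 # |psi_1>, the Hadamard superposition
target = ket0                      # |psi_2> = |0>
state = np.kron(control, target)   # tensor product: the input vector of Equation (9)
print(CNOT @ state)                # [0.707, 0, 0, 0.707] -> (|00> + |11>)/sqrt(2)
```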
By observing what has happened with the CNOT gate calculation, one might wonder what else might be in store with various gates and qubit states. We will therefore perform one more operation, and that operation will be Swap, denoted with S. The Swap gate is a two-qubit gate that swaps qubit states. This time, let us take the qubit with the state $\ket{\psi_1} = \frac{1}{\sqrt{2}}\ket{0} + \frac{1}{\sqrt{2}}\ket{1}$ and a qubit with the same state denoted $\ket{\psi_4}$. Yet again, by placing the amplitudes of the tensor product into a column vector and using the S gate matrix, we will have the following.
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \frac{1}{2} \\ \frac{1}{2} \\ \frac{1}{2} \\ \frac{1}{2} \end{bmatrix} = \begin{bmatrix} \frac{1}{2} \\ \frac{1}{2} \\ \frac{1}{2} \\ \frac{1}{2} \end{bmatrix} \qquad (10)
The final state of the operation performed is $\ket{\psi_5} = \frac{1}{2}\ket{00} + \frac{1}{2}\ket{01} + \frac{1}{2}\ket{10} + \frac{1}{2}\ket{11}$. This example of the S gate is perhaps not as intuitive as one where the qubits are in extreme states, that is, in $\ket{0}$ and in $\ket{1}$ with a probability of $1$. However, we observe that amplitudes are present for every individual state of both qubits, which means that every tensor product pair needs to be a candidate for swapping, and as can be seen from the result, they are all there, on the other side, as an output. Taking into account the amplitude values and the equality thereof, the probability of observing any particular swapped state as a result also needs to be equal, which it is. If we take, for example, the state $\ket{01}$ with the amplitude of $\frac{1}{2}$, by squaring the amplitude and thus obtaining the probability of observing that outcome, we have $\frac{1}{4}$–and that is exactly what we expect as per our own reasoning, as input and output are linked.
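A short NumPy sketch (ours) reproduces this, and also makes the swap visible on distinguishable inputs–for the symmetric superposition used above, the output necessarily equals the input, since swapping two identical states changes nothing:

```python
import numpy as np

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])

# Distinguishable states make the swap visible: |a> = |0>, |b> = |1>.
a = np.array([1, 0])
b = np.array([0, 1])
print(SWAP @ np.kron(a, b))   # [0, 0, 1, 0] -> |10>: the qubits traded places

# The symmetric superposition from Equation (10) is a fixed point of SWAP.
s = np.array([1, 1]) / np.sqrt(2)
print(SWAP @ np.kron(s, s))   # [0.5, 0.5, 0.5, 0.5], unchanged
```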
In the previous examples, we were performing calculations in a single sequence, but it is also possible to perform them in parallel and then, at some point, "merge" the results and continue, for example, in a single sequence. How an algorithm will look depends on the problem and on the designer of the algorithm. By constructing a quantum circuit, one can manipulate events and, in turn, the probabilities of amplitude states, transforming input into output and a problem into a solution. With that in mind, we will next show an often-used algorithm design pattern useful for solving various quantum conundrums, namely the Bernstein-Vazirani design pattern.
5.1 Bernstein–Vazirani Algorithm Design Pattern
One might think that quantum computers have an upper hand over classical computers in terms of computability; however, this is not the case. [257] Every problem that a quantum machine can solve can also be solved on a classical computer, thus not making a quantum machine superior in that respect; as a consequence, problems that are undecidable in a classical case, which are the hardest problems in existence [159], are also undecidable for quantum computers. [257] What makes quantum computers of interest are superposition, quantum parallelism, and entanglement, as these make quantum machines perform faster. [159, 257, 4]
"The heart of any quantum algorithm is the way in which it manipulates quantum parallelism so that desired results will be measured with high probability." [215] What brings us to the Bernstein-Vazirani algorithm, which uses superposition, quantum parallelism, and an effect called phase-kickback, so as to achieve its result. [23] These manipulations have no analog in the classical computer world; therefore, a quantum computer is necessary to bring the aforementioned algorithm into reality. [215]
There is a problem of determining the value of each character in a string. [158] For example, one might have the following string, $1100$. The question then is: what algorithm could we devise in order to determine in which places the string has a raised bit? As the reader might already guess, we would need to perform a logical conjunction for every bit, as presented in the following equation. [23, 158]
\begin{matrix} 1100 \\ \&\;0001 \\ \hline 0000 \end{matrix} \qquad \begin{matrix} 1100 \\ \&\;0010 \\ \hline 0000 \end{matrix} \qquad \begin{matrix} 1100 \\ \&\;0100 \\ \hline 0100 \end{matrix} \qquad \begin{matrix} 1100 \\ \&\;1000 \\ \hline 1000 \end{matrix} \qquad (11)
And now, by reading from the back, we have the result, $1100$–the original bit string is decoded. It is clearly seen from the example that for an $n$-bit string we would need $n$ operations to find the source bit string–that is, with a linear increase of the input, the complexity of the algorithm increases linearly. This is not an inefficient algorithm; however, for a bit string of length $10^9$ the number of steps needed to be performed is substantial, and this is where a quantum computer can excel. By employing characteristics that a quantum machine would have, the aforementioned algorithm could be adapted and the entire calculation done in only one step; thus, regardless of the input string, if the quantum machine can match the problem, the calculation would be completed in one step only–this is outstanding, and the procedure that accomplishes the aforementioned is called the Bernstein-Vazirani algorithm; for a visual representation, one can consult Figure 12.
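To make the linear cost explicit, here is a minimal classical sketch of this bit-by-bit decoding (our own illustration, using the paper's example string 1100): one bitwise AND per position, so $n$ operations for $n$ bits.

```python
s = 0b1100          # the hidden string
n = 4
bits = []
for i in range(n):                 # n iterations -> linear cost
    mask = 1 << i                  # masks 0001, 0010, 0100, 1000
    bits.append(1 if s & mask else 0)
print("".join(map(str, reversed(bits))))   # "1100", read from the back
```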
Before we perform some calculations, let us expound on a fundamental idea behind the Bernstein-Vazirani algorithm. Since our string is four bits long, we need four qubits for the quantum algorithm as well. The quantum algorithm, however, needs one additional qubit through which the essence of the algorithm will be delivered. All the qubits are at the beginning in the ground state $\ket{0}$. The first four qubits are then placed in a superposition of values, while the last qubit is first placed in $\ket{1}$ and then into a superposition, which means that the last qubit has a phase added to its superposition–and this is crucial.
In the next series of operations, a CNot gate is added to every qubit on which we need to decode a $1$, an excited state, with the last qubit–the qubit with a phase in its superposition–being the target of each CNot. This part of the algorithm is where the flash happens, as the phase from the target qubit transfers onto the control qubits; the target has had an influence on the control. This unexpected event is known under the name phase-kickback [189] and is a crucial part of the algorithm. When we, after this step, perform an additional step with the Hadamard gate and return the qubits out of superposition, the phase-kickback will have, as a consequence, a qubit in the state $\ket{1}$ where before it was $\ket{0}$. By making measurements on the qubits, as the last step of the algorithm, we read the final state and receive the desired result of the decoded string. This read data is then stored on classical storage. And so, by using the phase-kickback effect, we were able to detect a desirable characteristic and make a transformation by which the end result was obtained. [189, 23]
By performing actual calculations, it can be seen more clearly why this has happened and what the algorithm's inner workings are. At the very start of the algorithm, we need to place qubits into superposition; as we have already shown this in Equation 8, and as it is quite clear what will happen by applying the Not operation from Equation 1, these steps will be skipped. Suffice it to say, the Hadamard gate will produce $\ket{\psi_{1,2,3,4}} = \frac{1}{\sqrt{2}}\ket{0} + \frac{1}{\sqrt{2}}\ket{1}$, the X gate will produce $\ket{\psi_5} = 0\ket{0} + 1\ket{1}$, and the Hadamard applied after the X gate will produce $\ket{\psi_5} = \frac{1}{\sqrt{2}}\ket{0} - \frac{1}{\sqrt{2}}\ket{1}$–with this, the first series of operations is finished, and now we are onto phase-kickback.
In the second series of steps, the CNot gate is applied to the qubits where we need to decode a $1$, and so we have $\ket{\psi_{3,4}} = \frac{1}{\sqrt{2}}\ket{0} + \frac{1}{\sqrt{2}}\ket{1}$ as the control for their respective CNot gates, while we have $\ket{\psi_5} = \frac{1}{\sqrt{2}}\ket{0} - \frac{1}{\sqrt{2}}\ket{1}$ as the target in both instances. By performing the tensor product $\ket{\psi_3} \otimes \ket{\psi_5}$ we have the following.
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} \frac{1}{2} \\ -\frac{1}{2} \\ \frac{1}{2} \\ -\frac{1}{2} \end{bmatrix} = \frac{1}{2}\ket{00} - \frac{1}{2}\ket{01} - \frac{1}{2}\ket{10} + \frac{1}{2}\ket{11} \qquad (12)
By applying the CNot gate, the probabilities have not changed, and if we were to measure the states now, at this moment, nothing extraordinary would happen. But if we observe the mixed state more closely, a change of phase has happened, and this is exactly what we wanted; the target has influenced the control, and thus we have $\ket{\psi_3} = +\ket{0} - \ket{1}$ (up to normalization). Which brings us to the last step, just before we are ready to measure the result. If we apply the H gate one more time, we will reverse the superposition, yet as we have changed the phase of certain qubits, these will no longer collapse to their original state but to the opposite one. Let us collapse $\ket{\psi_3}$.
\frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \qquad (13)
Which makes the final state $\ket{\psi_3} = 0\ket{0} + 1\ket{1}$, a $100\%$ chance of observing $1$–by which the desired result was obtained, the binary string was decoded, and the information now only needs to be recorded, an operation conducted via a classical storage device. The string $1100$ was the one to find, and as the string $1100$ was the one found, the algorithm works well.
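Because the ancilla remains in the $\frac{1}{\sqrt{2}}(\ket{0} - \ket{1})$ state throughout, the kickback acts on each control qubit exactly as a Z (phase-flip) gate, so every data qubit can be simulated independently. The following NumPy sketch (ours, mirroring Equations 8-13 qubit by qubit) decodes the string:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Z = np.array([[1, 0], [0, -1]])   # the effect the kickback has on a control qubit

secret = [1, 1, 0, 0]             # the hidden string 1100
decoded = []
for bit in secret:
    q = np.array([1, 0])          # each data qubit starts in |0>
    q = H @ q                     # into superposition
    if bit:                       # a CNot onto the |-> ancilla kicks a phase
        q = Z @ q                 #   back onto the control, i.e. acts as Z
    q = H @ q                     # out of superposition
    decoded.append(int(round(abs(q[1]) ** 2)))  # P(measure 1) is either 0 or 1
print(decoded)                    # [1, 1, 0, 0]
```

This per-qubit shortcut is valid here only because the data qubits never become entangled with one another in this circuit; a general-purpose simulator would instead track the full $2^5$-dimensional state vector.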
Phase-kickback is a mechanism that is often found in quantum algorithms, like, for example, Grover's [87, 93] and Deutsch-Jozsa [61, 189], and it is therefore important to conquer this design pattern. The basic idea behind such algorithms is to develop a quantum "oracle" that applies a negative phase only to the state one is looking for–which is by no means an easy task–and when that is achieved, we can perform, as necessary, amplitude amplification, diminishing undesirable amplitudes and increasing the desirable ones. This is the way a quantum computer increases the probability of success, ensuring a sought-after result is measured with high probability. [87, 93, 61, 189, 232]
6 Questions that Puzzle the Mind
Among the many intriguing problems in quantum computing that are in need of solving and that are also of interest, there are some that are of special importance for us here. In spite of all the accomplishments, a more reliable and broader reality of quantum computers is still a dream. The main issues standing in the way of quantum computer construction are the following:
I. "The possibility in principle to construct a scalable quantum computer." [222]
II. "Instability (decoherence) because of the influence of external environment." [222]
III. "A physical implementation of a scalable quantum computer with a sufficient (for practical problems) number of jointly operating qubits." [222]
IV. "The uncertainty of the degree of dependence of errors since a very fast accumulation of errors with increasing the number of qubits will give no way to obtain the sought-for result when executing computations with an acceptable number of repetitions." [222]
V. "The construction of new mathematical algorithms that will allow to considerably accelerate computations and the search for solutions for a wide class of problems." [222]
Another area of research that is quite significant is finding the position of quantum computation with regard to classical computation in terms of computational cost and complexity classes, as well as exploring the limitations of models of computation. [207] As such, there exists a complexity class BQP (bounded-error quantum polynomial time) that consists of decision problems that can be solved by a quantum machine in polynomial time, with the probability of a correct answer being $\geq \frac{2}{3}$. [30, 277] This complexity class is a quantum analogue of the classical BPP (bounded-error probabilistic polynomial time), which "consists of problems for which there exists a polynomial-time Atlantic City algorithm with a two-sided error." [124] These classes are related in the following way, $BPP \subseteq BQP$, with both classes belonging to PSPACE and needing a polynomial amount of space. [124] The question of BQP and its relation to NP is a more difficult matter. There are indications that perhaps NP is not contained in BQP, as well as results suggesting the opposite–this still represents an unresolved question in science. [277]
Quantum computation is fascinating from yet another perspective–its very basis, quantum mechanics. Quantum mechanics is the fundamental theory in physics describing nature at the smallest of scales, at the atomic and subatomic level [71], yet it seems that even quantum mechanics does not give all the answers, and not only for the reason of Gödel's incompleteness theorems [242]. There are certain aspects that escape us, at least for the time being, with entanglement and non-locality representing parts of the picture. [194] The issue is, however, broader, as the theory of quantum mechanics fails to address the question of "how even a single particle, by being in a given quantum state, causes the frequency distribution of measurement values specified by the state." [194] And so the never-ending pursuit in science, for new knowledge and discoveries, continues.
7 Moving Forward
It is tempting to think that one should use a quantum computer for every problem and for every task; quantum computers, however, are not a key that fits into every lock. There are problems that naturally fit quantum computing and those that do not. The most obvious application of a quantum computer is naturally quantum simulation [78]. By using a quantum computer, one can cope well with the complexity that overwhelms a classical machine. Examples of such modeling include superconductivity [107], chemical processes [13], photosynthesis [262], physics processes [78], cosmology [137], etc. Other, more classical examples, so to speak, are cryptography [195], optimization [134], search [276], and also machine learning and artificial intelligence [46, 65].
There are two main types of quantum computer implementation: universal [126], and non-universal [222]. "The main distinction is that universal quantum computing devices are developed with a view to executing arbitrary allowed operations and solving arbitrary problems; while non-universal computing devices are created to solve some limited class of problems, for example, to optimize definite machine learning algorithms." [222]
These quantum machines can be implemented in various ways, with different physical technologies in mind, like trapped ions, superconductors, or photons. [213] Each individual technology has its ups and downs; in each case, however, quantum computers "are very hard to build", with the thread that permeates all implementations being quantum noise. [213] "Quantum mechanical states are extremely fragile and require near-absolute isolation from the environment; such conditions are hard to create and typically require temperatures near absolute zero and shielding from radiation." [213] This makes quantum computers expensive to build and difficult to operate. [213] As the size of a quantum computer increases, so do the challenges, which get mounted one upon the other (in terms of the "number of qubits and the length of time they must be coherent"). [213]
When computation is being done on a quantum machine, that is, on encoded states, "qubits interact with each other through the gates, and this way errors can propagate through the gates, from one qubit to another." [4] In such a manner, an error can quickly spread to all of the qubits. [4] To solve this problem, computation and error correction can be performed in a distributed way, so that "each qubit can effect only a small number of other qubits." [4] An estimation was made that "more than 99% of the computation performed by a quantum computer will be for error correction." [213, 122] If that is the case, then the calculations that a quantum computer should perform become of secondary nature, thus making the goal of fault-tolerant quantum operations extremely important. [122] Taking that fact into context, quantum practicality will be a difficult goal to achieve, as a commercial quantum computer would need thousands to millions of qubits–efforts are, however, being made to solve the issue. [100, 35]
Quantum computers have limitations that go beyond their applicability. In spite of having a general scheme for speeding up computation, a quantum computer is not expected to solve NP-hard optimization problems exactly and efficiently. [204, 96] In order to make quantum practicality a reality, significant algorithmic improvements are yet to be achieved, while "due to limitations of input and output bandwidth, quantum computers will be practical for 'big compute' problems on small data, not big data problems." [100] Nevertheless, through continuing progress and innovation, it is expected that a quantum computer able to break RSA-4096, with a probability of $\frac{1}{2}$, will be constructed within the next 10-15 years. [222] With that in mind, it is necessary to already prepare replacement options so as to ensure post-quantum cryptography viability. [222]
In the meantime, until commercial quantum computers are a reality, it is possible to create variational quantum algorithms that try to merge the classical and quantum approaches to problems. [36] In order to deal with the limitations of quantum computers, such as the limit on the number of qubits and the limit on circuit depth due to noise, a variational quantum algorithm can be used instead. [36] Such an algorithm uses "a classical optimizer to train a parameterized quantum circuit." [36] In spite of the challenges of these algorithms as well, like trainability, accuracy, and efficiency, they are, for the short term at least, perhaps the best option for making the quantum dream a reality in the here and now. [36]
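To make the classical-quantum loop concrete, here is a minimal one-parameter sketch (our own, in Python with NumPy; not an implementation from the cited works): a classical gradient-descent optimizer tunes the angle of a parameterized rotation so as to minimize the expectation value of the observable Z, reaching the ground-state value of $-1$ at $\theta = \pi$.

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]])

def ry(theta):
    # Single-qubit Y-rotation, a typical parameterized gate in a variational ansatz.
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

def energy(theta):
    psi = ry(theta) @ np.array([1, 0])   # prepare the trial state
    return float(psi.conj() @ Z @ psi)   # "measure" the cost observable: cos(theta)

# Classical optimizer: naive gradient descent via finite differences.
theta, lr, eps = 0.3, 0.4, 1e-5
for _ in range(100):
    grad = (energy(theta + eps) - energy(theta - eps)) / (2 * eps)
    theta -= lr * grad
print(theta, energy(theta))   # theta -> pi, energy -> -1
```

On real hardware the `energy` call would be replaced by repeated circuit executions and measurement statistics, while the outer loop stays on a classical machine, which is exactly the division of labor the quoted description implies.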
In order to start building quantum algorithms now, the following resources represent possible starting positions. In [142] one can read about the quantum singular value transformation (QSVT), which represents a general framework for a number of quantum algorithms, with the possibility of suggesting a unification of quantum algorithms. [142] The following materials, meanwhile, represent practical and hands-on foundational experience in quantum computing: [248], [97], [143], [108], [106], [147], [84].
8 Few Last Words
It was the goal of this research to present to the scientific community an in-depth historical and current survey of quantum computing, with a special emphasis on foundational concepts that are difficult to grasp while also gazing into the future–and almost all of it has been done, from history to terminology, from quantum effects to quantum computation, and from the standard model algorithmics to the related literature. It is therefore left for us to touch upon wrapping issues, consider open questions, and draw conclusions.
Even though, at times, it might seem hopeless that a true, large-scale quantum computer will someday be a reality, science is advancing; every year brings some new experimental success, and this ambitious dream of quantum computation might yet be possible. [4, 35, 260]
Quantum entanglement is of special interest as it allows for the teleportation of quantum states, and, as far as is currently known, there is no limit on the distance, which could perhaps enable a large-scale network–a marvel that would be the quantum internet. [213] Considering that quantum encryption can't be broken, even in theory, such a communication network is of great interest and would be of incredible value–it would be absolute security realized. [213, 275]
If we have learned anything thus far, it is the fact that realizing a quantum computer, even of any kind, is not an easy task; however, Quantum David just might overpower Classical Goliath. [204] Using superconducting quantum technology, Google succeeded in constructing Sycamore, a programmable quantum machine with 53 qubits. [204] Because of errors, "the final measurement yields the correct output only once in 500 runs", yet if one makes repeated calculations "millions of times in just a few minutes", a statistically useful result can be obtained. [204] The Sycamore quantum computer is only a single chip, compared to a classical computer that spans tennis courts and uses megawatts of power. [204] And Google is not the only one; IBM, for example, paves the way for an error-resilient quantum computer with thousands of qubits. [35] Indeed, while sufficient progress has still not been achieved in realizing a scalable quantum device, it is nevertheless perceived that, with the developments at hand, "a full-fledged quantum computer will be created in the next 10-15 years." [222]
At present, quantum mechanics is "considered the most accurate description of the Universe", although the theory might need modifications in the future. [4, 194] If and when such a scenario becomes a reality, it is unclear how that change in the theory of quantum mechanics will reflect on quantum computing and quantum information; however, "the novel physical theory that will emerge may give rise to a new computational paradigm, maybe even more powerful than quantum computing." [4] There is a possibility that large-scale commercial quantum devices won't be feasible, perhaps because of a currently unknown or unsolvable issue–in such a case, a quantum computer can still be useful, e.g. for being "the simulator Feynman first envisaged", or for allowing experimental research in physics; thus, by manipulating a small number of qubits, physicists will be performing tests and validating predictions of quantum theory. [4]
Even though it is not expected that quantum computers, via quantum algorithms, will be able to solve NP-complete problems in a manner that is exact and efficient, there is a possibility of finding efficient algorithms for those problems that are not known to be NP-complete and that have no known efficient classical algorithms–like, for example, the problem of "checking whether two graphs are isomorphic, known as Graph Isomorphism." [4, 204, 96]
In spite of all of its marvels and all of the scientific contributions, there are many unsolved/partially solved open problems in the realm of quantum computing and quantum mechanics. Here we will list just a small fraction of those, which are likely also the most pressing and fascinating.
• Reduction of quantum error rates. [91, 123, 42]
• Suppression of quantum decoherence. [234, 269]
• Finding a type of technology best suited for quantum computation and an implementation thereof. [112, 278, 282, 95, 90, 205]
• The relationship between NP and BQP. [19, 52]
• Scalability of a quantum computer. [69, 240]
• Verification of a quantum system. [80, 229]
• Separation of BQP and PH outside of a black-box model. [210]
• Efficient quantum memory. [216, 118]
• Networking protocols and devices for the quantum internet. [31, 11]
• Balance of connectivity between qubits. [51, 279]
• Performance of a quantum gate set. [51, 125]
• Compilers and software stack performance. [51, 138, 55]
• Materials challenges in quantum computing.
2453 | dbpedia | 1 | 39 | https://www.uva.nl/en/shared-content/faculteiten/en/faculteit-der-natuurwetenschappen-wiskunde-en-informatica/news/2023/05/prestigious-godel-prize-for-ronald-de-wolf.html | en | Prestigious Gödel Prize for Ronald de Wolf | [
"https://www.uva.nl/binaries/_ht_1707481193126/700x0-jpg/content/gallery/migratiemappen-beeldredactie-bc-leegmaken/education/map-1/ronald-de-wolf.jpg"
] | [] | [] | [
"award",
"computer science",
"science",
"faculty of science",
"fundamental",
"institute for logic",
"language",
"and computation (illc)",
"smart"
] | null | [
"Universiteit van Amsterdam"
] | 2023-05-23T11:33:00+00:00 | Ronald de Wolf (CWI, UvA, QuSoft) and his co-authors receive the prestigious Gödel Prize for outstanding papers in theoretical computer science. The Gödel Prize is jointly awarded by the ACM Special Interest Group on Algorithms and Computation Theory (ACM SIGACT) and the European Association for Theoretical Computer Science (EATCS). | en | /webfiles/1722406462321/_favicon/apple-touch-icon.png | University of Amsterdam | https://www.uva.nl/en/shared-content/faculteiten/en/faculteit-der-natuurwetenschappen-wiskunde-en-informatica/news/2023/05/prestigious-godel-prize-for-ronald-de-wolf.html | The prize will be awarded during STOC 2023, one of the most important conferences in theoretical computer science, which takes place on 20-23 June 2023 in Orlando, Florida. This year, there are two winning articles. The other winner of the 2023 Gödel Prize is Thomas Rothvoss.
Ronald de Wolf says: ‘I am very proud and humbled to win this prize along with my co-authors, and to be listed among the amazing papers and amazing researchers that have received this prize before.’ Earlier winners of the Gödel Prize include well-known researchers like Cynthia Dwork, Shafi Goldwasser, Johan Håstad, László Lovász, Peter Shor, Dan Spielman, Mario Szegedy and Avi Wigderson.
Authors Samuel Fiorini, Serge Massar, Sebastian Pokutta, Hans Raj Tiwary and Ronald de Wolf were given the award for their article ‘Exponential Lower Bounds for Polytopes in Combinatorial Optimization’. One of its main conclusions was that a particular attempt to solve the famous travelling salesman problem cannot possibly work.
De Wolf explains: ‘This paper refutes an attempt to solve hard computational problems such as Travelling Salesman (TSP). We know how to solve so-called linear programs efficiently, so since the 1980s researchers have been trying to write down a small linear program for TSP. If successful, this approach would have momentous consequences for efficient algorithms. However, our paper - which generalizes work by Yannakakis from 1988 - definitively showed that the approach is doomed to fail, by proving that every linear program that describes TSP needs to be exponentially large. The proof combines geometry, combinatorics, and even a connection with quantum communication theory.’
At STOC 2012, Ronald de Wolf and the rest of the team already received a Best Paper Award for their work, and in 2022 they won the ACM STOC 10-year Test of Time Award.
Ronald de Wolf did his research in the Algorithms and Complexity group of CWI (Centrum Wiskunde & Informatica) in Amsterdam, the national research institute for mathematics and computer science in the Netherlands. He is also a part-time full professor at the Institute for Logic, Language and Computation of the University of Amsterdam and a member of QuSoft. In 2013 he received an ERC Consolidator Grant and in 2003 the Cor Baayen Award. His main scientific interests are quantum computing and complexity theory.
The award committee of the 2023 Gödel Prize consisted of Nikhil Bansal (University of Michigan), Irit Dinur (Weizmann Institute), Anca Muscholl (University of Bordeaux), Tim Roughgarden (Columbia University), Ronitt Rubinfeld, Chair (Massachusetts Institute of Technology) and Luca Trevisan (Bocconi University).
2453 | dbpedia | 0 | 22 | https://scottaaronson.blog/%3Fp%3D1211 | en | Optimized | [
"https://149663533.v2.pressablecdn.com/wp-content/plugins/really-simple-facebook-twitter-share-buttons/images/specificfeeds_follow.png",
"https://149663533.v2.pressablecdn.com/wp-content/plugins/really-simple-facebook-twitter-share-buttons/images/specificfeeds_follow.png",
"https://149663533.v2.pressablecdn.com... | [] | [] | [
""
] | null | [] | null | The Blog of Scott Aaronson | en | Shtetl-Optimized | https://scottaaronson.blog/ | Q1: Who will you be voting for in November?
A: Kamala Harris (and mainstream Democrats all down the ballot), of course.
Q2: Of course?
A: If the alternative is Trump, I would’ve voted for Biden’s rotting corpse. Or for Hunter Biden. Or for…
Q3: Why can’t you see this is just your Trump Derangement Syndrome talking?
A: Look, my basic moral commitments remain pretty much as they’ve been since childhood. Namely, that I’m on the side of reason, Enlightenment, scientific and technological progress, secular government, pragmatism, democracy, individual liberty, justice, intellectual honesty, an American-led peaceful world order, preservation of the natural world, mitigation of existential risks, and human flourishing. (Crazy and radical, I know.)
Only when choosing between candidates who all espouse such values, do I even get the luxury of judging them on any lower-order bits. Sadly, I don’t have that luxury today. Trump’s values, such as they are, would seem to be “America First,” protectionism, vengeance, humiliation of enemies, winning at all costs, authoritarianism, the veneration of foreign autocrats, and the veneration of himself. No amount of squinting can ever reconcile those with the values I listed before.
Q4: Is that all that’s wrong with him?
A: No, there are also the lies, and worst of all the “Big Lie.” Trump is the first president in US history to incite a mob to try to overturn the results of an election. He was serious! He very nearly succeeded, and probably would have, had Mike Pence been someone else. It’s now inarguable that Trump rejects the basic rules of our system, or “accepts” them only when he wins. We’re numb from having heard it so many times, but it’s a big deal, as big a deal as the Civil War was.
Q5: Oh, so this is about your precious “democracy.” Why do you care? Haven’t you of all people learned that the masses are mostly idiots and bullies, who don’t deserve power? As Curtis Yarvin keeps trying to explain to you, instead of “democracy,” you should want a benevolent king or dictator-CEO, who could offer a privileged position to the competent scientists like yourself.
A: Yeah, so how many examples does history furnish where that worked out well? I suppose you might make a partial case for Napoleon, or Ataturk? More to the point: even if benevolent, science-and-reason-loving authoritarian strongmen are possible in theory, do you really expect me to believe that Trump could be one of them? I still love how Scott Alexander put it in 2016:
Can anyone honestly say that Trump or his movement promote epistemic virtue? That in the long-term, we’ll be glad that we encouraged this sort of thing, that we gave it power and attention and all the nutrients it needed to grow? That the road to whatever vision of a just and rational society we imagine, something quiet and austere with a lot of old-growth trees and Greek-looking columns, runs through LOCK HER UP?
I don’t like having to vote for the lesser of two evils. But at least I feel like I know who it is.
Q6: But what about J. D. Vance? He got his start in Silicon Valley, was championed by Peter Thiel, and is obviously highly intelligent. Doesn’t he seem like someone who might listen to and empower tech nerds like yourself?
A: Who can say what J. D. Vance believes? Here are a few choice quotes of his from eight years ago:
I’m obviously outraged at Trump’s rhetoric, and I worry most of all about how welcome Muslim citizens feel in their own country. But I also think that people have always believed crazy shit (I remember a poll from a few years back suggesting that a near majority of democratic voters blame ‘the Jews’ for the financial crisis). And there have always been demagogues willing to exploit the people who believe crazy shit.
The more white people feel like voting for trump, the more black people will suffer. I really believe that.
[Trump is] just a bad man. A morally reprehensible human being.
To get from that to being Trump’s running mate is a Simone-Biles-like feat of moral acrobatics. Vance reminds me of the famous saying by L. Ron Hubbard from his pre-Dianetics days: “If a man really wants to make a million dollars, the best way would be to start his own religion.” (And I feel like Harris’s whole campaign strategy should just be to replay Vance’s earlier musings in wall-to-wall ads while emphasizing her agreement with them.) No, Vance is not someone I trust to share my values, if he has values at all.
Q7: What about the other side’s values, or lack thereof? I mean, don’t you care that the whole Democratic establishment—including Harris—colluded to cover up that Biden was senile and cognitively unfit to be president now, let alone for another term?
A: Look, we’ve all seen what happens as a relative gets old. It’s gradual. It’s hard for anyone to say at which specific moment they can no longer drive a car, or be President of the United States, or whatever. This means that I don’t necessarily read evil intent into the attempts to cover up Biden’s decline—merely an epic, catastrophic failure of foresight. That failure of foresight itself would’ve been a huge deal in normal circumstances, but these are not normal circumstances—not if you believe, as I do, that the alternative is the beginning of the end of a 250-year-old democratic experiment.
Q8: Oh stop being so melodramatic. What terrible thing happened to you because of Trump’s first term? Did you lose your job? Did fascist goons rough you up in the street?
A: Well, my Iranian PhD student came close to having his visa revoked, and it became all but impossible to recruit PhD students from China. That sucked, since I care about my students’ welfare like I care about my own. Also, the downfall of Roe v. Wade, which enabled Texas’ draconian new abortion laws, made it much harder for us to recruit faculty at UT Austin. But I doubt any of that will impress you. “Go recruit American students,” you’ll say. “Go recruit conservative faculty who are fine with abortion being banned.”
The real issue is that Trump was severely restrained in his first term, by being surrounded by people who (even if, in many cases, they started out loyal to him) were also somewhat sane and valued the survival of the Republic. Alas, he learned from that, and he won’t repeat that mistake the next time.
Q9: Why do you care so much about Trump’s lies? Don’t you realize that all politicians lie?
A: Yes, but there are importantly different kinds of lies. There are white lies. There are scheming, 20-dimensional Machiavellian lies, like a secret agent’s cover story (or is that only in fiction?). There are the farcical, desperate, ever-shifting lies of the murderer to the police detective or the cheating undergrad to the professor. And then there are the lies of bullies and mob bosses and populist autocrats, which are special and worse.
These last, call them power-lies, are distinguished by the fact that they aren’t even helped by plausibility. Often, as with conspiracy theories (which strongly overlap with power-lies), the more absurd the better. Obama was born in Kenya. Trump’s crowd was the biggest in history. The 2020 election was stolen by a shadowy conspiracy involving George Soros and Dominion and Venezuela.
The central goal of a power-lie is just to demonstrate your power to coerce others into repeating it, much like with the Party making Winston Smith affirm 2+2=5, or Petruchio making Katharina call the sun the moon in The Taming of the Shrew. A closely-related goal is as a loyalty test for your own retinue.
It’s Trump’s embrace of the power-lie that puts him beyond the pale for me.
Q10: But Scott, we haven’t even played our “Trump” card yet. Starting on October 7, 2023, did you not witness thousands of your supposed allies, the educated secular progressives on “the right side of history,” cheer the sadistic mass-murder of Jews—or at least, make endless excuses for those who did? Did this not destabilize your entire worldview? Will you actually vote for a party half of which seems at peace with the prospect of your family members’ physical annihilation? Or will you finally see who your real friends now are: Arkansas MAGA hillbillies who pray for your people’s survival?
A: Ah, this is your first slash that’s actually drawn blood. I won’t pretend that the takeover of part of the US progressive coalition by literal Hamasniks hasn’t been one of the most terrifying experiences of my life. Yes, if I had to be ruled by either (a) a corrupt authoritarian demagogue or (b) an idiot college student chanting for “Intifada Revolution,” I’d be paralyzed. So it’s lucky that I don’t face that choice! I get to vote, once more, for a rather boring mainstream Democrat—alongside at least 70% of American Jews. The idea of Harris as an antisemite would be ludicrous even if she didn’t have a Jewish husband or wasn’t strongly considering a pro-Israel Jew as her running mate.
Q11: Sure, Kamala Harris might mouth all the right platitudes about Israel having a right to defend itself, but she’ll constantly pressure Israel to make concessions to Hamas and Hezbollah. She’ll turn a blind eye to Iran’s imminent nuclearization. Why don’t you stay up at night worrying that, if you vote for a useful idiot like her, you’ll have Israel’s annihilation and a second Holocaust on your conscience forever?
A: Look, oftentimes—whenever, for example, I’m spending hours reading anti-Zionists on Twitter—I feel like there’s no limit to how intensely Zionist I am. On reflection, though, there is a limit. Namely, I’m not going to be more Zionist than the vast majority of my Israeli friends and colleagues—the ones who served in the IDF, who in some cases did reserve duty in Gaza, who prop up the Israeli economy with their taxes, and who will face the consequences of whatever happens more directly than I will. With few exceptions, these friends despise the Trump/Bibi alliance with white-hot rage, and they desperately want more moderate leadership in both countries.
Q12: Suppose I concede that Kamala is OK on Israel. We both know that she’s not the future of the Democratic Party, any more than Biden is. The future is what we all saw on campuses this spring. “Houthis Houthis make us proud, turn another ship around.” How can you vote for a party whose rising generation seems to want you and your family dead?
A: Let me ask you something. When Trump won in 2016, did that check the power of the campus radicals? Or as Scott Alexander prophesied at the time, did it energize and embolden them like nothing else, by dramatically confirming their theology of a planet held hostage by the bullying, misogynistic rich white males? I fundamentally reject your premise that, if I’m terrified of crazy left-wing extremists, then a good response is to vote for the craziest right-wing extremists I can find, in hopes that the two will somehow cancel each other out. Instead I should support a coherent Enlightenment alternative to radicalism, or the closest thing to that available.
Q13: Even leaving aside Israel, how can you not be terrified by what the Left has become? Which side denounced you on social media a decade ago, as a misogynist monster who wanted all women to be his sex slaves? Which side tried to ruin your life and career? Did we, the online rightists, do that? No. We did not. We did nothing worse to you than bemusedly tell you to man up, grow a pair, and stop pleading for sympathy from feminists who will hate you no matter what.
A: I'll answer with a little digression. Back in 2018, when Kamala Harris was in the Senate, her office invited me to DC to meet with them to provide advice about the National Quantum Initiative Act, which Kamala was then spearheading. Kamala herself sent regrets that she couldn't meet me, because she had to be at the Kavanaugh hearings. I have (nerdy, male) friends who did meet her about tech policy and came away with positive impressions.
And, I dunno, does that sound like someone who wants me dead for the crime of having been born a nerdy heterosexual male? Or having awkwardly and ineptly asked women on dates, including the one who became my wife? OK, maybe Amanda Marcotte wants me dead for those crimes. Maybe Arthur Chu does (is he still around?). Good that they’re not running for president then.
Q14: Let me try one more time to show you how much your own party hates you. Which side has been at constant war against the SAT and other standardized tests, and merit-based college admissions, and gifted programs, and academic tracking and acceleration, and STEM magnet schools, and every single other measure by which future young Scott Aaronsons (and Saket Agrawals) might achieve their dreams in life? Has that been our side, or theirs?
A: To be honest, I haven’t seen the Trump or Harris campaigns take any position on any of these issues. Even if they did, there’s very little that the federal government can do: these battles happen in individual states and cities and counties and universities. So I’ll vote for Harris while continuing to advocate for what I think is right in education policy.
Q15: Can you not see that Kamala Harris is a vapid, power-seeking bureaucratic machine—that she has no fixed principles at all? For godsakes, she all but condemned Biden as a racist in the 2020 primary, then agreed to serve as his running mate!
A: I mean, she surely has more principles than Vance does. As far as I can tell, for example, she’s genuinely for abortion rights (as I am). Even if she believed in nothing, though, better a cardboard cutout on which values I recognize are written, than a flesh-and-blood person shouting values that horrify me.
Q16: What, if anything, could Republicans do to get you to vote for them?
A: Reject all nutty conspiracy theories. Fully, 100% commit to the peaceful transfer of power. Acknowledge the empirical reality of human-caused climate change, and the need for both technological and legislative measures to slow it and mitigate its impacts. Support abortion rights, or at least a European-style compromise on abortion. Republicans can keep the anti-wokeness stuff, which actually seems to have become their defining issue. If they do all that, and also the Democrats are taken over by frothing radicals who want to annihilate the state of Israel and abolish the police … that’s, uh, probably the point when I start voting Republican.
Q17: Aha, so you now admit that there exist conceivable circumstances that would cause you to vote Republican! In that case, why did you style yourself “Never-Trump From Here to Eternity”?
A: Tell you what, the day the Republicans (and Trump himself?) repudiate authoritarianism and start respecting election outcomes, is the day I’ll admit my title was hyperbolic.
Q18: In the meantime, will you at least treat us Trump supporters with civility and respect?
A: Not only does civil disagreement not compromise any of my values, it is a value to which I think we should all aspire. And to whatever extent I’ve fallen short of that ideal—even when baited into it—I’m sorry and I’ll try to do better. Certainly, age and experience have taught me that there’s hardly anyone so far gone that I can’t find something on which I agree with them, while disagreeing with most of the rest of the world.
The news these days feels apocalyptic to me—as if we’re living through, if not the last days of humanity, then surely the last days of liberal democracy on earth.
All the more reason to ignore all of that, then, and blog instead about the notorious Busy Beaver function! Because holy moly, what news have I got today. For lovers of this super-rapidly-growing sequence of integers, I'm honored to announce the biggest Busy Beaver development that there's been since 1983, when I slept in a crib and you booted up your computer using a 5.25-inch floppy. That was the year when Allen Brady determined that BusyBeaver(4) was equal to 107. (Tibor Radó, who invented the Busy Beaver function in the 1960s, quickly proved with his student Shen Lin that the first three values were 1, 6, and 21 respectively. The fourth value was harder.)
Only now, after an additional 41 years, do we know the fifth Busy Beaver value. Today, an international collaboration called bbchallenge is announcing that it's determined, and even formally verified using the Coq proof system, that BB(5) is equal to 47,176,870—the value that's been conjectured since 1990, when Heiner Marxen and Jürgen Buntrock discovered a 5-state Turing machine that runs for exactly 47,176,870 steps before halting, when started on a blank tape. The new bbchallenge achievement is to prove that all 5-state Turing machines that run for more than 47,176,870 steps actually run forever—or in other words, that 47,176,870 is the maximum finite number of steps for which any 5-state Turing machine can run. That's what it means for BB(5) to equal 47,176,870.
For more on this story, see Ben Brubaker’s superb article in Quanta magazine, or bbchallenge’s own announcement. For more background on the Busy Beaver function, see my 2020 survey, or my 2017 big numbers lecture, or my 1999 big numbers essay, or the Googology Wiki page, or Pascal Michel’s survey.
The difficulty in pinning down BB(5) was not just that there are a lot of 5-state Turing machines (16,679,880,978,201 of them to be precise, although symmetries reduce the effective number). The real difficulty is, how do you prove that some given machine runs forever? If a Turing machine halts, you can prove that by simply running it on your laptop until it halts (at least if it halts after a "mere" ~47 million steps, which is child's play). If, on the other hand, the machine runs forever, via some never-repeating infinite pattern rather than a simple infinite loop, then how do you prove that? You need to find a mathematical reason why it can't halt, and there's no systematic method for finding such reasons—that was the great discovery of Gödel and Turing nearly a century ago.
More precisely, the Busy Beaver function grows faster than any function that can be computed, and we know that because if a systematic method existed to compute arbitrary BB(n) values, then we could use that method to determine whether a given Turing machine halts (if the machine has n states, just check whether it runs for more than BB(n) steps; if it does, it must run forever). This is the famous halting problem, which Turing proved to be unsolvable by finite means. The Busy Beaver function is Turing-uncomputability made flesh, a finite function that scrapes the edge of infinity.
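To make the reduction concrete, here's a minimal Python sketch (my illustration, nobody's real tooling; the convention that a missing transition means "halt" is an assumption of the sketch). It uses the five known BB values to decide halting for machines of up to 5 states; the whole point of uncomputability is that no program can extend the BB table to arbitrary n:

    # Known Busy Beaver values quoted in this post: BB(1)..BB(5).
    BB = {1: 1, 2: 6, 3: 21, 4: 107, 5: 47_176_870}

    def simulate(machine, max_steps):
        """Run a Turing machine on a blank two-way-infinite tape.
        machine maps (state, symbol) -> (write, move, new_state);
        a missing transition means halt (a convention assumed here).
        Returns the step at which it halts, or None if still running."""
        tape, pos, state = {}, 0, "A"
        for step in range(1, max_steps + 1):
            action = machine.get((state, tape.get(pos, 0)))
            if action is None:
                return step
            write, move, state = action
            tape[pos] = write
            pos += 1 if move == "R" else -1
        return None

    def halts(machine, n_states):
        """Decide halting using BB as an oracle: a machine still running
        after BB(n) transitions can never halt. (The +1 is because the
        halt is detected one step after the last transition.)"""
        return simulate(machine, BB[n_states] + 1) is not None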
There’s also a more prosaic issue. Proofs that particular Turing machines run forever tend to be mind-numbingly tedious. Even supposing you’ve found such a “proof,” why should other people trust it, if they don’t want to spend days staring at the outputs of your custom-written software?
And so for decades, a few hobbyists picked away at the BB(5) problem. One, who goes by the handle "Skelet", managed to reduce the problem to 43 holdout machines whose halting status was still undetermined. Or maybe only 25, depending on who you asked? (And were we really sure about the machines outside those 43?)
The bbchallenge collaboration improved on the situation in two ways. First, it demanded that every proof of non-halting be vetted carefully; going even beyond that mandate, a participant named "mxdys" later upped the standard to fully machine-verifiable Coq certificates for every non-halting machine, so that there could no longer be any serious question of correctness. (This, in turn, was done via "deciders," programs crafted to recognize a specific type of parameterized behavior.) Second, the collaboration used an online forum and a Discord server to organize the effort, so that everyone knew what had been done and what remained to be done.
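To give a flavor of the very simplest kind of decider (far cruder than bbchallenge's real ones), here's a sketch, using the same machine encoding as the sketch above, that certifies non-halting whenever a machine exactly revisits a configuration:

    def cycler_decider(machine, max_steps):
        """Toy decider: if a machine ever revisits an exact configuration
        (state, head position, tape contents), determinism forces it to
        cycle forever, so non-halting is certified. Real deciders recognize
        far subtler infinite behaviors (translated cycles, counters, ...)
        that this simple check cannot catch."""
        tape, pos, state = {}, 0, "A"
        seen = set()
        for _ in range(max_steps):
            config = (state, pos, frozenset(tape.items()))
            if config in seen:
                return "never halts"       # a certified cycle
            seen.add(config)
            action = machine.get((state, tape.get(pos, 0)))
            if action is None:
                return "halts"
            write, move, state = action
            tape[pos] = write
            pos += 1 if move == "R" else -1
        return "undecided"                 # needs a smarter decider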
Despite this, it was far from obvious a priori that the collaboration would succeed. What if, for example, one of the 43 (or however many) Turing machines in the holdout set turned out to encode the Goldbach Conjecture, or one of the other great unsolved problems of number theory? Then the final determination of BB(5) would need to await the resolution of that problem. (We do know, incidentally, that there’s a 27-state Turing machine that encodes Goldbach.)
But apparently the collaboration got lucky. Coq proofs of non-halting were eventually found for all the 5-state holdout machines.
As a sad sidenote, Allen Brady, who determined the value of BB(4), apparently died just a few days before the BB(5) proof was complete. He was doubtful that BB(5) would ever be known. The reason, he wrote in 1988, was that “Nature has probably embedded among the five-state holdout machines one or more problems as illusive as the Goldbach Conjecture. Or, in other terms, there will likely be nonstopping recursive patterns which are beyond our powers of recognition.”
Maybe I should say a little at this point about what the 5-state Busy Beaver—i.e., the Marxen-Buntrock Turing machine that we now know to be the champion—actually does. Interpreted in English, the machine iterates a certain integer function g, which is defined by
g(x) = (5x+18)/3 if x = 0 (mod 3),
g(x) = (5x+22)/3 if x = 1 (mod 3),
g(x) = HALT if x = 2 (mod 3).
Starting from x=0, the machine computes g(0), g(g(0)), g(g(g(0))), and so forth, halting if and when it reaches … well, HALT. The machine runs for millions of steps because it so happens that this iteration eventually reaches HALT, but only after a while:
0 → 6 → 16 → 34 → 64 → 114 → 196 → 334 → 564 → 946 → 1584 → 2646 → 4416 → 7366 → 12284 → HALT.
(And also, at each iteration, the machine runs for a number of steps that grows like the square of the number x.)
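Since the map is fully specified, the trajectory above is easy to reproduce in a few lines of Python (my transcription of g, not bbchallenge code):

    def g(x):
        """The Collatz-like map iterated by the 5-state champion."""
        r = x % 3
        if r == 0:
            return (5 * x + 18) // 3
        if r == 1:
            return (5 * x + 22) // 3
        return None                        # x = 2 (mod 3): HALT

    x, trajectory = 0, [0]
    while (x := g(x)) is not None:
        trajectory.append(x)
    print(" -> ".join(map(str, trajectory)) + " -> HALT")
    # prints: 0 -> 6 -> 16 -> 34 -> ... -> 7366 -> 12284 -> HALT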
Some readers might be reminded of the Collatz Conjecture, the famous unsolved problem about whether, if you repeatedly replace a positive integer x by x/2 if x is even or 3x+1 if x is odd, you’ll always eventually reach x=1. As Scott Alexander would say, this is not a coincidence because nothing is ever a coincidence. (Especially not in math!)
It’s a fair question whether humans will ever know the value of BB(6). Pavel Kropitz discovered, a couple years ago, that BB(6) is at least 10^10^10^10^10^10^10^10^10^10^10^10^10^10^10 (i.e., 10 raised to itself 15 times). Obviously Kropitz didn’t actually run a 6-state Turing machine for that number of steps until halting! Instead he understood what the machine did—and it turned out to apply an iterative process similar to the g function above, but this time involving an exponential function. And the process could be proven to halt after ~15 rounds of exponentiation.
Meanwhile Tristan Stérin, who coordinated the bbchallenge effort, tells me that a 6-state machine was recently discovered that “iterates the Collatz-like map {3x/2, (3x-1)/2} from the number 8 and halts if and only if the number of odd terms ever gets bigger than twice the number of even terms.” This shows that, in order to determine the value of BB(6), one would first need to prove or disprove the Collatz-like conjecture that that never happens.
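Under one plausible reading of that description (my transcription; details such as when the two counts are compared are assumptions, not bbchallenge's exact conventions), the experiment looks like this:

    def collatz_like(limit=10_000):
        """Iterate x -> 3x/2 (x even) or (3x-1)/2 (x odd) from 8; read
        'halting' as: the running count of odd terms exceeds twice the
        running count of even terms."""
        x, odd, even = 8, 0, 0
        for _ in range(limit):
            if x % 2:
                odd, x = odd + 1, (3 * x - 1) // 2
            else:
                even, x = even + 1, 3 * x // 2
            if odd > 2 * even:
                return "halts"
        return f"no halt within {limit} steps (odd={odd}, even={even})"

    print(collatz_like())

Proving that the condition never triggers, not just for 10,000 steps but forever, is exactly the kind of Collatz-like conjecture standing between us and BB(6).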
Basically, if and when artificial superintelligences take over the world, they can worry about the value of BB(6). And then God can worry about the value of BB(7).
I first learned about the BB function in 1996, when I was 15 years old, from a book called The New Turing Omnibus by A. K. Dewdney. From what I gather, Dewdney would go on to become a nutty 9/11 truther. But that’s irrelevant to the story. What matters was that his book provided my first exposure to many of the key concepts of computer science, and probably played a role in my becoming a theoretical computer scientist at all.
And of all the concepts in Dewdney’s book, the one I liked the most was the Busy Beaver function. What a simple function! You could easily explain its definition to Archimedes, or Gauss, or any of the other great mathematicians of the past. And yet, by using it, you could name definite positive integers (BB(10), for example) incomprehensibly larger than any that they could name.
It was from Dewdney that I learned that the first four Busy Beaver numbers were the unthreatening-looking 1, 6, 21, and 107 … but then that the fifth value was already unknown (!!), and at any rate at least 47,176,870. I clearly remember wondering whether BB(5) would ever be known for certain, and even whether I might be the one to determine it. That was almost two-thirds of my life ago.
As things developed, I played no role whatsoever in the determination of BB(5) … except for this. Tristan Stérin tells me that reading my survey article, The Busy Beaver Frontier, was what inspired him to start and lead the bbchallenge collaboration that finally cracked the problem. It’s hard to express how gratified that makes me.
Why care about determining particular values of the Busy Beaver function? Isn’t this just a recreational programming exercise, analogous to code golf, rather than serious mathematical research?
I like to answer that question with another question: why care about humans landing on the moon, or Mars? Those otherwise somewhat arbitrary goals, you might say, serve as a hard-to-fake gauge of human progress against the vastness of the cosmos. In the same way, the quest to determine the Busy Beaver numbers is one concrete measure of human progress against the vastness of the arithmetical cosmos, a vastness that we learned from Gödel and Turing won't succumb to any fixed procedure. The Busy Beaver numbers are just … there, Platonically, as surely as 13 was prime long before the first caveman tried to arrange 13 rocks into a nontrivial rectangle and failed. And yet we learned the fifth of these numbers only today, and might never know the sixth.
Anyway, huge congratulations to the bbchallenge team on their accomplishment. At a terrifying time for the world, I’m happy that, whatever happens, at least I lived to see this.
Dana, the kids, and I got back to the US last week after a month spent in England and then Israel. We decided to visit Israel because … uhh, we heard there’s never been a better time.
We normally go every year to visit Dana's family and our many friends there, and to give talks. Various well-meaning friends suggested that maybe we should cancel or postpone this year—given, you know, the situation. To me, though, the situation felt like all the more reason to go. To make Israel seem more and more embattled, dangerous, isolated, abnormal, like not an acceptable place to visit (much less live), in order to crater its economy, demoralize its population, and ultimately wipe it from the face of the earth … that is explicitly much of the world's game plan right now, laid out with shocking honesty since October 7 (a day that also showed us what the "decolonization" will, concretely, look like). So, if I oppose this plan, then how could I look myself in the mirror while playing my tiny part in it? Shouldn't I instead raise a middle finger to those who'd murder my family, and go?
Besides supporting our friends and relatives, though, I wanted to see the post-October-7 reality for myself, rather than just spending hours per day reading about it on social media. I wanted to form my own impression of the mood in Israel: fiercely determined? angry? hopeless? just carrying on like normal?
Anyway, in two meeting-packed weeks, mostly in Tel Aviv but also in Jerusalem, Haifa, and Be’er Sheva, I saw stuff that could support any of those narratives. A lot was as I’d expected, but not everything. In the rest of this post, I’ll share eleven observations:
(1) This presumably won’t shock anyone, but in post-October-7 Israel, you indeed can’t escape October 7. Everywhere you look, on every building, in every lobby, hanging from every highway overpass, there are hostage posters and “Bring Them Home Now” signs and yellow ribbons—starting at the airport, where every single passenger is routed through a long corridor of hostage posters, each one signed and decorated by the hostage’s friends and family. It sometimes felt as though Yad Vashem had expanded to encompass the entire country. Virtually everyone we talked to wanted to share their stories and opinions about the war, most of all their depression and anger. While there was also plenty of discussion about quantum error mitigation and watermarking of large language models and local family events, no one even pretended to ignore the war.
(2) Having said that, the morning after we landed, truthfully, the first thing that leapt out at me wasn’t anything to do with October 7, hostages, or Gaza. It was the sheer number of children playing outside, in any direction you looked. Full, noisy playgrounds on block after block. It’s one thing to know intellectually that Israel has by far the highest birthrate of any Western country, another to see it for yourself. The typical secular family probably has three kids; the typical Orthodox family has more. (The Arab population is of course also growing rapidly, both in Israel and in the West Bank and Gaza.) New apartment construction is everywhere you look in Tel Aviv, despite building delays caused by the war. And it all seems perfectly normal … unless you’ve lived your whole life in environments where 0.8 or 1.2 children per couple is the norm.
This, of course, has giant implications for anyone interested in Israel’s future. It’s like, a million Israeli leftists could get fed up and flee to the US or Canada or Switzerland, and Israel would still have a large and growing Jewish population—because having a big family is “just what people do” in a state that was founded to defy the Holocaust. In particular: anyone who dreams of dismantling the illegal, settler-colonial, fascist Zionist ethnostate, and freeing Palestine from river to sea, had better have some plan for what they’re going to do with all these millions of young Jews, who don’t appear to be going anywhere.
(3) The second thing I noticed was the heat—comparable to the Texas summer heat that we try to escape when possible. Because of the roasting sun, our own two pampered offspring mostly refused to go outside during daytime, and we mostly met friends indoors. I more than once had the dark thought that maybe Israel will survive Hamas, Hezbollah, Iran, and its own Jewish extremists … only to be finished off in the end (along with much of the rest of the planet) by global warming. I wonder whether Israel will manage to engineer its way out of the crisis, as it dramatically engineered its way out of its water crisis via desalination. The Arab petrostates have been trying to engineer their way out of the Middle East’s increasingly Mercury-like climate, albeit with decidedly mixed results.
(4) But nu, what did our Israeli friends say about the war? Of course it’s a biased sample, because our friends are mostly left-wing academics and tech workers. But, at risk of overgeneralizing: they’re unhappy. Very, very unhappy. As for Bibi and his far-right yes-men? Our friends’ rage at them was truly a sight to behold. American progressives are, like, mildly irked by Trump in comparison. Yes, our friends blame Bibi for the massive security and intelligence failures that allowed October 7 to happen. They blame him for dragging out the war to stave off elections. They blame him for empowering the contemptible Ben-Gvir and Smotrich. They blame him for his failure to bring back the remaining hostages. Most of all, they blame him for refusing even to meet with the hostage families, and more broadly, for evading responsibility for all that he did wrong, while arrogating credit for any victories (like the rescue of Noa Argamani).
(5) One Israeli friend offered to take me along to the giant anti-Bibi rally that now happens every Saturday night in Azrieli Center in Tel Aviv. (She added that, if I left before 9pm, it would reduce the chances of the police arresting me.) As the intrepid blogger-investigator I am, of course I agreed.
While many of the protesters simply called for new elections to replace Netanyahu (a cause that I 3000% support), others went further, demanding a deal to free the hostages and an immediate end to the war (even if, as they understood, that would leave Hamas in power).
Watching the protesters, smelling their pot smoke that filled the air, I was seized by a thought: these Israeli leftists actually see eye-to-eye with the anti-Israel American leftists on a huge number of issues. In a different world, they could be marching together as allies. Except, of course, for one giant difference: namely, the Tel Aviv protesters are proudly waving Israeli flags (sometimes modified to add anti-Bibi images, or to depict the Star of David “crying”), rather than burning or stomping on those flags. They’re marching to save the Israel that they know and remember, rather than to destroy it.
(6) We did meet one ultra-right-wing (and Orthodox) academic colleague. He was virtually the only person we met on this trip who seemed cheerful and optimistic about Israel’s future. He brought me to his synagogue to celebrate the holiday of Shavuot, while he himself stood guarding the door of the synagogue with a gargantuan rifle (his volunteer duty since October 7). He has six kids.
(7) Again and again, our secular liberal friends told us they’re thinking about moving from Israel, because if the Bibi-ists entrench their power (and of course the demographics are trending in that direction), then they don’t see that the country has any worthwhile future for them or their children. Should this be taken more seriously than the many Americans who promise that this time, for real, they’ll move to Canada if Trump wins? I’m not sure. I can only report what I heard.
(8) At the same time, again and again I got the following question from Israelis (including the leftist ones): how bad is the situation for Jews in the US? Have the universities been taken over by militant anti-Zionists, like it shows in the news? I had to answer: it’s complicated. Because I live my life enbubbled in the STEM field of computer science, surrounded by friends and colleagues of many backgrounds, ethnicities, religions, and political opinions who are thoughtful and decent (otherwise, why would they be my friends and colleagues?), I’m able to live a very nice life even in the midst of loud protesters calling to globalize the intifada against my family.
If, on the other hand, I were in a typical humanities department? Yeah, then I’d be pretty terrified. My basic options would be to (a) shut up about my (ironically) moderate, middle-of-the-road opinions on Israel/Palestine, such as support for the two-state solution; (b) live a miserable and embattled existence; or (c) pack up and move, for example to Israel.
An astounding irony right now is that, just as Israeli leftists are talking about moving from Israel, some of my American Jewish friends have talked to me about moving to Israel, to escape a prejudice that they thought died with their grandparents. I don’t know where the grass is actually greener (or is it brown everywhere?). Nor do I know how many worriers will actually follow through. What’s clear is that, both in Israel and in the diaspora, Jews are feeling an existential fear that they haven’t felt for generations.
(9) Did I fear for my own family’s safety during the trip? Not really. Maybe I should have. When we visited Haifa, we found that GPS was scrambled all across northern Israel, to make targeting harder for Hezbollah missiles. As a result, we couldn’t use Google Maps, got completely lost driving, and had to change plans with our friends. For the first time, now I really feel angry at Hezbollah: they made my life worse and it’s personal!
The funniest part, though, was how the scrambling was implemented: when you opened Google Maps anywhere in the north, it told you that you were in Beirut. It then dutifully gave you walking or driving directions to wherever you were going in Israel, passing through Syria close to Damascus (“warning: this route passes through multiple countries”).
(10) The most darkly comical thing that I heard on the entire trip: “oh, no, I don’t object in the slightest if the anti-Zionists want to kill us all. I only object if they want to kill us because of an incorrect understanding of the relevant history.” Needless to say, this was a professor.
(11) After my two-week investigation, what grand insight can I offer about Israel’s future? Not much, but maybe this: I think we can definitively rule out the scenario where Israel, having been battered by October 7, and bracing itself to be battered worse by Hezbollah, just sort of … withers away and disappears. Yes, Israel might get hotter, more crowded, more dangerous, more right-wing, and more Orthodox. But it will stay right where it is, unless and until its enemies destroy it in a cataclysmic war. You can’t scare people away, break their will, if they believe they have nowhere else on the planet to go. You can only kill them or else live next to them in peace, as the UN proposed in 1947 and as Oslo proposed in the 1990s. May we live to see peace.
Anyway, on that pleasant note, time soon to tune in to the Trump/Biden debate! I wonder who these two gentlemen are, and what they might stand for?
(See here for Boaz Barak’s obituary, and here for Lance Fortnow’s—they cover different aspects of Luca’s legacy from each other and from this post. Also, click here to register for a free online TCS4All talk that Luca was scheduled to give, and that will now be given in his memory, this Monday at 3:30pm Eastern time.)
Luca Trevisan, one of the world’s leading theoretical computer scientists, has succumbed to cancer in Italy, at only 52 years old. I was privileged to know Luca for a quarter-century, first as my complexity theory and cryptography professor at UC Berkeley and as a member of my dissertation committee, and then as a friend and colleague and fellow CS theory blogger.
I regret that I learned of the seriousness of Luca’s condition only a few days ago. So yesterday morning I wrote him a farewell email, under the impression that, while he was now in hospice care, he had at least a few more weeks. Alas, he probably never saw it. So I’m hereby making the email into a memorial post, with small changes mostly to protect people’s privacy.
Dear Luca,
Dana, the kids, and I were traveling in Israel for the past two weeks, when I received the shocking and sad news that this might be my last chance to write to you.
At risk of stating the obvious — you had a very large and positive effect on my life and career. Starting with the complexity theory summer school at the Institute for Advanced Study in 2000, which was the first time we met and also the first time I really experienced the glories of complexity at full blast. And then continuing at Berkeley, TA’ing your algorithms class, which you had to cancel on 9/11 (although students still somehow showed up for office hours lugging their CLRS books…), and dealing with that student who obviously cheated on the midterm although I had stupidly given back to her the evidence that would prove it.
And then your graduate complexity course, where I was very proud to get 100% on your exam, having handwritten it on a train while everyone else used LaTeX (which, embarrassingly, I was still learning). I was a bit less proud to present the Razborov-Rudich paper to the class, and to get questions from you that proved that I understood it less thoroughly than I thought. I emerged from your course far better prepared to do complexity theory than when I entered it.
Later I took your cryptography course, where I came to you afterwards one day to point out that with a quantum computer, you could pull out big Fourier coefficients without all the bother of the Goldreich-Levin theorem. And you said sure, but then you would need a quantum computer. Over 20 years later, Goldreich and Levin (and you?) can say with satisfaction that we still don’t have that scalable quantum computer … but we’re much much closer, I swear!
I still feel bad about the theory lunch talk I gave in 2003, on my complexity-theoretic version of Aumann’s agreement theorem, where I used you and Umesh as characters instead of Alice and Bob, and which then led to unintended references to “Luca’s posterior” (probability distribution, I meant).
I also feel bad about delaying so long the completion of my PhD thesis, until well after I’d started my postdoc in Princeton, so that my former officemate needed to meet you on a street corner in San Francisco to sign the signature page the night before the deadline.
But then a few years later, when Avi and I did the algebrization paper, the fact that you seemed to like it mattered more to me than just about anything else.
Thank you for the excellent dinner when I met you some years ago in Rome. Thank you for the Trevisan-Tulsiani-Vadhan paper, which answered a question we had about BosonSampling (and you probably didn’t even know you were doing quantum computing when you wrote that paper!). Thank you for your blog. Thank you for everything you did for me.
I always enjoyed your dry humor, much of which might sadly be lost to time, unless others wrote it down or it’s on YouTube or something. Two examples spring to my mind across the decades:
“From my previous lecture, you may have gotten the impression that everything in derandomization is due to Nisan and Wigderson, but this is not the case: Avi has been working with other people as well.”
After I’d explained that I’d be spending a semester in Jerusalem to work with Avi, despite (at that time) knowing only the most rudimentary Hebrew, such as how to say “please” and “excuse me”: “you mean there are words in Hebrew for ‘please’ and ‘excuse me’?”
Speaking of which, my current trip to Israel has given me many opportunities to reflect on mortality — for all the obvious war-related reasons of course, but also because while we were here, we unexpectedly had to attend two shivas of people in our social circle who died during our trip, one of them from cancer. And we learned about a close friend whose stepson has a brain tumor and might or might not make it. Cancer is a bitch.
Anyway, there’s much more I could write, but I imagine you’re getting flooded with emails right now from all the people whose lives you’ve touched, so I won’t take up more of your time. You’ve made a real difference to the world, to theoretical computer science, and to your friends and colleagues, one that many people would envy.
Best,
Scott
Quantum fears and quantum presumptions (E-Group blog, Attila Galambos, 10 May 2022; https://www.egroup.hu/quantum-fears-and-quantum-presumptions/)

Post-Quantum Encryption – Protection for a Peaceful Future
Almost every business that manages data or maintains customer relationships over an encrypted channel needs to be aware of the technological shift that quantum computers represent. The encryption schemes currently in place will become easy to crack once high-performance quantum computers arrive; most experts say this will happen within ten years at most, possibly much sooner. It affects all asymmetric encryption schemes, including digital signatures and bank gateways, and many other public and private services, browsers among them.
What exactly does a quantum computer do?
Quantum computers perform certain calculations much faster than today's computers, with speedups reported as high as a factor of 158 million: a task that would currently take 10,000 years could be solved by a quantum computer in 7.5 hours. This power rests on qubits, which replace binary bits. A qubit is not confined to a single definite value (0 or 1); it can occupy both at once, in what is known as a superposition.
The largest publicly reported quantum computer currently has 127 qubits, which is not yet a significant capability. It is quite possible that machines with far higher performance already exist: whoever builds the first truly powerful quantum computer gains an enormous advantage in many areas, and so has every reason not to reveal the details of its development. Consider what such a machine would mean for astronomical measurement and research: tasks that, to the best of our current knowledge, would take 10 million years could be carried out within a reasonable time. Among the possibilities is breaking a significant portion of the encryption schemes currently considered secure.
In theory, encryption codes can be cracked even now; in practice, doing so requires lengthy, complex computations that have so far been intractable with available technology. With the multiplied speed of quantum computers, this hurdle disappears. This was established by the American mathematician Peter Shor, who received numerous honors for the algorithm that bears his name, including the Gödel Prize in 1999, one of the most prestigious awards in the field.
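To make the threat concrete, here is an illustrative Python sketch (ours, purely for intuition): classical factoring by trial division must grind through exponentially many candidates as the key grows, and this is precisely the barrier that Shor's algorithm removes on a quantum computer.

    def trial_division(n):
        # Classical brute-force factoring: the work grows exponentially
        # in the bit-length of n.
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d, n // d
            d += 1
        return n, 1  # n itself is prime

    # A toy "RSA modulus" built from two six-digit primes. Trial division
    # still cracks this in moments, but every extra bit roughly doubles
    # the work -- which is why 2048-bit moduli are classically safe, and
    # why Shor's polynomial-time quantum factoring changes the game.
    print(trial_division(999_983 * 1_000_003))  # -> (999983, 1000003)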
Preparation
Governments have also begun preparing for the quantum era. The U.S. National Institute of Standards and Technology (NIST) announced the development of standards for post-quantum encryption as early as 2016, launching a competition that calls on technology companies to create schemes that remain secure against quantum computers in the post-quantum era.
Preparations have started not only overseas but also in Hungary. A 2021 amendment to the law named the state and market actors (the banking sector, for example) that must adopt post-quantum encryption, and designated the organization that will later specify exactly which companies are covered. The same organization is responsible for registering companies that offer such schemes and for issuing certificates to them.
False sense of security or excessive sense of danger?
The answer depends somewhat on what we want to protect.
For cryptographic protections concerned with integrity and authenticity, we can keep our systems, files and data "protected" right up to the moment a quantum computer appears. Even so, it is advisable to prepare and make the transition much earlier, because it is not a one-or-two-month project.
With confidentiality, however, we are already overdue: anyone who harvests encrypted data today will be able to decrypt it with a quantum computer later, and could launch a new WikiLeaks-style portal in the quantum era. Here the change is a present need, and in some sectors it is already under way.
Nor is this merely an imagined future scenario. Behind the boycott of Huawei, one of the main battles of the U.S.-China trade war, lay among other things the concern that the Chinese telecommunications company could process the browsing history of users, and of those around them, with quantum computers.
Our research and development
The experts of the E-Group Data & Trust Lab began researching the topic in 2015 and have developed a post-quantum e-signature solution, which makes E-Group one of the best-prepared teams in Hungary. Recently, experts from E-Group, BME, Magyar Telekom, Nokia Bell Labs and Ericsson Hungary discussed the topic at the HTE Infokom conference. Finding the right algorithm is essential in both the government and corporate sectors; at present the team is studying the seven candidate algorithms and carrying out comparative analyses, with the aim of creating the most competitive solution in Hungary, one that meets the American standard.
The race has started, and it would be a mistake to think that you need not take part and can simply cheer from the stands.
Lambda the Ultimate (http://lambda-the-ultimate.org/node/2783)
Feedback
FAQ
Getting Started
Discussions
Site operation discussions
Recent Posts
(new topic)
Departments
Courses
Research Papers
Design Docs
Quotations
Genealogical Diagrams
Archives
User login
Navigation
program verification: the very idea
James H. Fetzer's Program Verification: The Very Idea (1988) is one of the two most frequently cited position papers on the subject of program verification. The other one is Social Processes and Proofs by De Millo, Lipton, and Perlis (1979), previously discussed on LtU. Fetzer's paper generated a lot of heated discussion, both in the subsequent issues of CACM and on Usenet.
It's not clear to me what all the fuss is about. Fetzer's main thesis seems pretty uncontroversial:
The notion of program verification appears to trade upon an equivocation. Algorithms, as logical structures, are appropriate subjects for deductive verification. Programs, as causal models of those structures, are not. The success of program verification as a generally applicable and completely reliable method for guaranteeing program performance is not even a theoretical possibility.
(See also part I, part II, and part III.)
Quantum computation and quantum information (Yusuf Mutlu, academia.edu, 4 March 2013; https://www.academia.edu/2902047/Quantum_computation_and_quantum_information)

The objective of the thesis is to model a hybrid quantum processor capable of executing quantum algorithms. The hybrid quantum processor has a co-processor architecture, in which a quantum processing module is embedded within a classical MIPS-R2000 processor. The model is designed and simulated using VHDL. A quantum assembly language (QASM) and its assembler are developed to allow programmers to describe quantum circuits. The model's architecture is designed to execute the instructions of the QASM language. Assembly languages containing both quantum QASM instructions and classical MIPS instructions are also executable on this model. In addition, quantum C++ classes and functions are developed to allow programmers to describe quantum algorithms using C++ and simulate them on the proposed model. Unlike previous work on modeling pre-specified quantum circuits using VHDL, the proposed work is a general-purpose model capable of modeling arbitrary quantum circuits and algorithms. The QASM language on its own succeeded in simulating "pure" quantum circuits containing only quantum gates, such as the EPR creation circuit, the Toffoli gate circuit, and the quantum Fourier transform circuit. The quantum teleportation circuit, which contains classical components as well as quantum gates, was simulated using a C/C++ program containing quantum functions or macros, compiled to a hybrid assembly language containing both QASM and MIPS instructions. The proposed model can execute programs written in other high-level quantum programming languages, provided compilers are developed to translate such languages into the proposed low-level QASM language.
There exist quantum algorithms that are more efficient than their classical counterparts; such algorithms were invented by Shor in 1994 and then by Grover in 1996. The scarcity of new inventions since Grover's algorithm has commonly been attributed to the non-intuitive nature of quantum algorithms for the classically trained person. Thus emerged the idea of using computers to generate quantum algorithms automatically, based on an evolutionary model. A limitation of this approach is that quantum computers do not yet exist and quantum simulation on a classical machine carries an exponential overhead. Nevertheless, early research into evolving quantum algorithms has shown promise. This paper provides an introduction to quantum and evolutionary algorithms for the computer scientist not familiar with these fields. The exciting field of using evolutionary algorithms to evolve quantum algorithms is then reviewed.
International Congress of Mathematicians MADRID 2006 (press bulletin; https://www.mathunion.org/fileadmin/IMU/ICM2006/offline/icm2006.mathunion.org/press/bulletins/bulletin13/index.html)
Bulletin number 13
29 May 2006
INDEX:
Painting with Numbers
The ICM2006 Fractal Art Competition will be decided in June
Interview with Marta Sanz-Solé, President of the ICM2006 Local Programme Committee
“This ICM places the accent on the fruitful interaction among the different areas of mathematics”
Plenary Session: Iain Johnstone
How to Convert Data into Information
The ICM Section by Section
Mathematical Foundations of the Computational Sciences
Satellite Conferences: Gran Canaria
Intelligent Computation Imitates the “Practical Wisdom” of Living Beings
Applications
Encrypted Messages
Painting with Numbers
The ICM2006 Fractal Art Competition will be decided in June
Equations are a way of describing reality, but some equations can be used the other way round; that is, to generate artificial worlds. Equations describing fractal sets, for example, can give rise to landscapes as fascinating as those depicted in famous canvases. Does this sound impossible? Before jumping to conclusions, it would be better to take a look at the works on display at the Exhibition of Fractal Art, which forms part of the ICM2006 International Congress of Mathematicians, to be held from August 22nd to 30th at the ICM2006 venue and at the Centro Cultural Conde Duque in Madrid. Many of the works exhibited will come from the Benoit Mandelbrot ICM2006 International Competition of Fractal Art, the prizes for which will be announced in June.
The ICM2006 Exhibition of Fractal Art will be held thanks to support from the Fundación Española de Ciencia y Tecnología (Fecyt – Spanish Foundation for Science and Technology). More than three hundred entries have been received from all over the world for the Fractal Art Competition. The jury will be chaired by Benoit Mandelbrot himself, widely recognized as the "father" of fractal geometry.
What are fractals? It is not necessary to enter into a complicated mathematical description to get an intuitive grasp of what they are: structures which, “when a small portion is observed, preserve a similar, although not necessarily identical appearance to what they look like when observed in their entirety”, explains Javier Barrallo, one of the organizers of the fractal art competition, and a fractal artist himself. Some examples of fractals are: a tree and its branches; a cauliflower, apparently made up of endless cauliflowers joined together; the coastline of a country…
The example of a coastline serves to explain another property of fractals; the fact that no matter how small the scale at which they are observed – however close you “zoom in” – they always keep the same appearance, and so on to infinity. Obviously, a coastline is not infinite – “authentic” fractals are a mathematical idealization – but the effect of the fractal phenomenon can be seen to be real in the “coastline paradox”. When a coastline, or for that matter any rough surface, is measured, the result will vary according to the accuracy desired: if one takes into account the shape of bays, of rocks, of grains of sand, etc., the coastline will get theoretically longer and longer, and in an ideal fractal it would be infinite.
Is it really art?
Beyond the strictly aesthetic qualities, in the opinion of some people these properties give added value to works of “fractal art”. However, these works have not always been considered art. Are they not merely a computer-generated graphic representation of a formula? Yes and no, reply the authors of fractal art. What follows is a brief explanation of how a fractal is painted.
The point of departure is indeed a mathematical formula. The first fractal formulae were described more than a century ago. Today there are hundreds. And yes, as Barrallo explains, the computer is vital: "A small image, one of 640 x 480 pixels, for example, contains 307,200 dots that must be calculated. It may be necessary to apply the formula determining the fractal about 1,000 times to each of these dots. This means that the formula must be calculated more than 300 million times. And this is just for a small-size image!".
So, armed with both formula and computer, we must now proceed to iteration. This involves “calculating a formula over and over again, starting from its initial value”, says Barrallo. “After calculating the formula for the first time, we take the resulting value and introduce it into the formula. The new result is calculated again, and so on successively”. In the case of fractals, the initial value has to do with the position of the dot in the frame (the pixel on the screen).
Then colours are assigned according to the value of each dot. The fact that the behaviour of two dots situated very close together can be radically different – one diverging toward the infinite and the other converging toward a given value - is “what makes fractal exploration so fascinating”, says Barrallo. And what leads to the explosion of shapes and colours in the image.
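By way of illustration (a minimal escape-time sketch in Python, not the contest artists' software), the whole pipeline of mapping pixels to numbers, iterating, testing for divergence and assigning colours fits in a few lines:

    def mandelbrot(width=80, height=24, max_iter=100):
        # Minimal escape-time rendering of the Mandelbrot set as ASCII art:
        # map each character cell to a point c in the complex plane,
        # iterate z -> z*z + c, and "colour" the cell by how fast z escapes.
        palette = " .:-=+*#%@"
        for row in range(height):
            line = ""
            for col in range(width):
                c = complex(-2.0 + 3.0 * col / width,
                            -1.2 + 2.4 * row / height)
                z, n = 0j, 0
                while abs(z) <= 2 and n < max_iter:
                    z, n = z * z + c, n + 1
                line += palette[n * (len(palette) - 1) // max_iter]
            print(line)

    mandelbrot()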
But this is not in the least due exclusively to the computer. "An image of 800 x 600 dots contains 480,000 pixels, or dots on the screen, which can be combined in an image in 10^3,467,865 different ways; that is, 10 followed by more than three million zeros. A computer does not possess the capacity to select images from among such an immense collection and determine which are beautiful and which are not". It is the hand, or in this case, the brain, of the artist that is the vital factor. Furthermore, as in all art, and in mathematics themselves, fractal art is in a constant state of evolution. The algorithms currently employed have little to do with those employed twenty years ago.
For further information:
Interview with Benoit Mandelbrot in InfoICM2006-05-24
/prensa/boletines/boletin19/#mandelbrot
Fractal Art Competition web site:
http://www.fractalartcontests.com/2006/
Information about fractals:
http://www.divulgamat.net (in “Virtual Exhibitions” and “Art and Mathematics”)
www.fractalus.com
Interview with Marta Sanz-Solé, President of the ICM2006 Local Programme Committee
“This ICM places the accent on the fruitful interaction among the different areas of mathematics”
Marta Sanz-Solé teaches at the University of Barcelona, the same university from which she graduated (1974) and gained her doctorate (1978), and where she has been Dean of the Faculty of Mathematics and vice-president of the Division of Sciences. She has spent time doing research in the USA, Italy, France and Switzerland, and her research work has been centered on Malliavin’s calculus and stochastic analysis. She is the author of some 80 publications, serves as a member on various committees and has participated in the organization of numerous congresses and events. In recent months she has devoted much time and effort as a member of the Organizing Committee to the ICM2006 World Congress of Mathematicians, in particular as the president of the Local Programme Committee in charge of organizing the scientific content of the event.
How has the scientific programme for the ICM2006 been drawn up?
There is a committee responsible for deciding on the number of scientific sections in the congress and their content. On this occasion, the ICM2006 has been divided into 20 different sections. This Programme Committee is named by the International Mathematical Union executive committee, and it is also responsible for putting forward the names of invited speakers, both for the plenary lectures and the talks given in each section.
What is the task of the committee you are chairing?
The members of the LPC (Local Programme Committee) work in co-ordination with the Programme Committee, and our main job is the organization of the congress programme. It’s a question of arranging the agenda in a coherent manner. You have to take into account the fact that invited talks in each section must necessarily overlap, so they have to be scheduled in such a way that related fields do not coincide with each other, as far as this is possible, because all the talks are of interest to the majority of the audience. We are also responsible for the schedule of the plenary programme, since this will highlight trends and set the pace of the congress.
Don’t you find it frustrating that others decide on the content?
Those are the rules of the game. However, we have a certain scope for movement. The Organizing Committee of the congress has the prerogative of proposing one plenary lecturer and three section speakers, and it has delegated this choice to the LPC. Furthermore, we are responsible for many other activities in the programme, such as Special Activities and Other Activities. Personally, I’ve been deeply involved in some of these scientific activities. For example, I’m the organizer of the Closing Round Table, an activity which has been included in the ICM for the very first time. All the panellists are prestigious mathematicians, one of whom is Lennart Carleson, the winner of this year’s Abel Prize. The title of the round table - Are Pure and Applied Mathematics Drifting Apart? – is a reflection of the interest in the debate about this delicate dovetailing between two aspects of the profession, as well as the need to work closely together to make important advances in an eminently technological society. In fact, the programme for this ICM places the accent on the fruitful interaction among the different fields of mathematics, which until a short time ago were following divergent paths.
One of these 'special activities' is devoted to popularization: Are you worried about the public image of mathematics?
Of course; most people don’t even realize that mathematics are useful, or even that they are present in our daily activity. Their intellectual value and appeal remain largely unknown. The image people have of them is of something boring; it’s a subject surrounded by a lack of understanding and a lack of communication. However, we intend to deal with this question at a round table proposed by the European Mathematical Society, in which I am also involved as joint organizer, and which has even broader objectives. The debate will revolve around how to make mathematics more accessible to scientists working in other disciplines; how to communicate the values of our research to politicians, who are the ones who decide on funding for research, and how to convey the real values of mathematics to young people who are on the point of choosing their future university education and PhD courses.
In addition to the invited speakers, there are also other contributions: Are you and your colleagues involved in the process of selection for these?
Certainly; I was about to explain this before. This is a very important part of the congress, and we on the committee have been responsible for organizing all of it. We were responsible for the “call for abstracts” in three different categories: oral communications, posters, and contributions on mathematical software. Then we evaluated the abstracts submitted for these contributions and programmed their presentation in the appropriate sessions.
How many were submitted and how many have been selected?
Well, the question is how many will be presented during the congress, because the figures may vary. Approximately 1,600 were submitted. After the evaluation process, and taking into account the withdrawals, we have at the moment about 1,400. However, we know from past experience that some of those who have submitted contributions will not actually attend the congress. I would hazard a guess that the final count will be about 800 oral communications, 300 posters, and 25 mathematical software presentations, which will be a great success in terms of participation.
Does this represent any change in comparison with previous congresses?
I think the final figures will be similar to those of the ICM 98 in Berlin. However, we’ve introduced some changes, such as increasing the time for oral communications from 15 to 20 minutes. We’ve also made a special effort to promote the presentation of posters. There’s not much tradition of posters among mathematicians, even though they are more informal, can be much richer, and facilitate greater interaction with people interested in the subject. One of our initiatives has been to organize a competition with prizes for the best posters in terms of presentation, visual quality and content. There are two prizes in each section, although they can also be declared void.
How many people are on the LPC?
There are nine members in all, covering a broad range of mathematical fields, although each one is working in collaboration with between ten and fifteen other people on the evaluation process and other tasks, because there’s a great deal of work to do.
Marta Sanz's personal web page
http://www.mat.ub.es/~sanz
President of the IMU Programme Committee
Noga Alon, Tel Aviv University
nogaa@tau.ac.il
http://www.math.tau.ac.il/~nogaa
Local Programme Committee (LPC):
/organization/localprogramcommittee/
Plenary Session: Iain Johnstone
How to Convert Data into Information
For some time now the following wording can be found in many job advertisements: "Minimum requirements: Experience in the implementation of Information Management Systems (Business Intelligence, Data Warehousing, Data Mining)." Given the growing amount of data handled in many sectors, experience in information management is becoming increasingly necessary. The mathematician Iain Johnstone will give a plenary lecture at the ICM2006 on "High Dimensional Statistical Inference and Random Matrices", which will deal with the management of massive amounts of data.
Until recently, statistics was centered on the study of one- and multi-dimensional random variables. However, the development of computation has led to the era of "data mining", and all organizations – banks, hospitals, research centres – handle enormous quantities of data which must often be constantly available, such as financial assets. Statistics in high dimensions is essential for analysing this data, and this will be the central theme of Johnstone's lecture. This branch of mathematics shows how to organize and summarize data, whether it involves an electrocardiogram, Internet traffic or stocks and shares, in such a way that it provides useful information.
Iain Johnstone was born in Melbourne in 1956. In 1977 he graduated in mathematics at the Australian National University, specializing in pure mathematics and statistics. He obtained his doctorate in statistics from Cornell University in 1981. Since then he has been associated with Stanford University in California, where in 1992 he became professor of statistics and biostatistics. He has received wide recognition for his work in both statistics and biostatistics.
Lecturer: Iain Johnstone
“High Dimensional Statistical Inference and Random Matrices”
Date: Friday, August 25th, 10:15-11:15
ICM2006 Scientific Programme
/scientificprogram/plenarylectures/
More about Iain Johnstone:
http://www-stat.stanford.edu/people/faculty/johnstone/
http://hcr3.isiknowledge.com/author.cgi?&link1=Browse&link2=Results&id=1345
Plenary Session:
Mathematical Foundations of the Computational Sciences
One of the main mathematical problems in theoretical computer science is that known as P vs NP. A simple example suffices to understand the nature of this problem:
Suppose that we wish to select a group of one hundred people from a total of four hundred candidates. Selection must be carried out according to certain determining criteria (for example, in accordance with a list of incompatible pairs: Tom and Dick cannot be together, nor can Harry and So-and-So, nor Tom and Harry, and so on).
Bear in mind that the total number of ways of selecting one hundred elements out of four hundred easily exceeds the number of atoms making up the known universe. Not even an exhaustive search by means of a super-computer would be capable of covering every possible combination.
This is an example of what is known as an NP problem, whose main characteristic is that it is (relatively) easy to check whether a particular selection satisfies the given criteria. However, the task of generating a solution directly is in general quite difficult. P problems, on the other hand, are those for which direct methods exist for producing solutions (relatively) easily.
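To make the contrast concrete, here is a small illustrative Python sketch (the names and the incompatibility list are invented for the example): counting the candidate selections is astronomical, while checking any one proposed selection is cheap.

```python
from math import comb

# Candidate selections of 100 people out of 400: roughly 2 x 10^96,
# comfortably more than the ~10^80 atoms estimated in the known universe.
print(comb(400, 100))

def is_valid(selection, incompatible_pairs):
    """Checking one proposed selection is cheap (polynomial time):
    one membership test per incompatible pair."""
    chosen = set(selection)
    return not any(a in chosen and b in chosen for a, b in incompatible_pairs)

print(is_valid({"Tom", "Harry"}, [("Tom", "Dick")]))  # True
print(is_valid({"Tom", "Dick"}, [("Tom", "Dick")]))   # False
```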
The P vs NP problem consists in exhibiting a problem for which a possible solution can be easily checked but for which direct methods require an excessively long time to find a solution, or else in demonstrating that no such problem exists. At present, the majority feeling in the scientific community is that such problems do indeed exist. Paradoxically, increasingly efficient algorithms are being found for problems traditionally considered difficult to solve. Manindra Agrawal, together with Neeraj Kayal and Nitin Saxena, has this year been awarded the Gödel Prize from the European Association for Theoretical Computer Science for demonstrating that the problem of determining whether a number is prime belongs to class P.
In fact, for many problems of practical importance, methods based on making a random selection and checking that it satisfies the appropriate restrictions have proved to be simpler and faster than the best direct algorithms known to date. Similarly, in combinatorics there exist objects (such as error-correcting codes) whose existence is easy to establish by means of probabilistic methods, but for which the only explicit constructions available are very complex and only approximate the optimal solutions.
It is perhaps surprising to learn that in recent years results have been obtained which suggest that every randomized algorithm can be simulated by a deterministic algorithm of comparable efficiency. As examples we have Agrawal's deterministic algorithm for checking in polynomial time whether a number is prime, and Omer Reingold's deterministic algorithm for solving connectivity problems in undirected graphs using only a logarithmic amount of memory.
Worthy of mention in relation to this is Ronitt Rubinfeld's research, which is centered on the study of algorithms of complexity less than linear, that is to say, sublinear. At a time when enormous amounts of data must be handled, even algorithms of linear complexity can prove to be impracticable. Many interesting problems exist for which algorithms of sublinear complexity are known, although they are often randomized and provide approximate solutions. With regard to this point, Luca Trevisan will speak on the elimination of randomness, quasi-randomness and the explicit construction of combinatorial objects such as error-correcting codes.
Jon Kleinberg's talk will deal with graphs in which any pair of nodes is linked by a short path (small-world graphs) and with random methods of finding such paths. This avenue of research has applications to the theory of algorithms and to discrete probability.
Tim Roughgarden will address the connections between theoretical computer science and game theory, known as algorithmic game theory, with particular attention to the use of potential functions for bounding the inefficiency of equilibria in different models of selfish behaviour in networks. An example of this behaviour can be found in the well-known prisoner's dilemma: if, because of their isolation from each other, the behaviour of each of the two prisoners is governed by self-interest, then the final result will be negative for both.
For his part, Alexander Holevo will present results related to quantum computation, a paradigm of computation based on quantum mechanics and an alternative to the classical paradigm, in which quantum bits are used instead of customary bits. The quantum paradigm makes new algorithms possible, and the same task may have different complexity in classical computation and in quantum computation, all of which has aroused great expectation, since it renders some formerly intractable problems tractable. It is worth pointing out, for example, that Peter Shor was awarded the Nevanlinna Prize in 1998 for his polynomial-complexity factoring algorithm based on quantum computation.
Manuel Ojeda Aciego
Lecturer in Applied Mathematics at the University of Málaga.
Satellite Conferences: Gran Canaria
Intelligent Computation Imitates the “Practical Wisdom” of Living Beings
The possibility of designing systems that imitate how human beings, animals and other living beings process information and solve problems has long been a dream of Computational Science. Today, after half a century of research work, techniques have been developed which provide benefits for a broad spectrum of sectors. Neural networks, expert systems, fuzzy logic and evolutionary computation are just some of the techniques that have proved capable of identifying complex patterns in vast sets of data, providing support for decision-making based on qualitative and quantitative factors, and for giving solutions to problems involving high-complexity variables. Armed with these resources, computer science is now able to tackle designs for increasingly complex applications, manage greater volumes of information, seek "intelligent" solutions and discover unsuspected correlations in databases.
The Fifth International Conference on Engineering Computational Technology and the Eighth International Conference on Computational Structures Technology, the two symposia to be held in Las Palmas de Gran Canaria this September, will cover all these topics, as well as debating the latest computational technology applications to all aspects of engineering, structural mechanics and other fields. Furthermore, the latest advances in hardware and software, algorithms and theoretical development will also be discussed.
The two conferences will be held concurrently, and participants will be able to attend the talks given at both. In addition to mathematicians and engineers, scientists from other specialized fields are expected to attend.
“The Fifth International Conference
on Engineering Computational Technology”
Person to contact: Gustavo Montero
e-mail: gustavo@dma.ulpgc.es
web: http://www.civil-comp.com/conf/ect2006.htm
“The Eighth International Conference
on Computational Structures Technology”
Person to contact: Rafael Montenegro
e-mail: rafa@dma.ulpgc.es
web: http://www.civil-comp.com/conf/cst2006.htm
Las Palmas de Gran Canaria
12-15 September 2006
Applications
Encrypted Messages
Exchange of information is an on-going process in 21st century society, where data must travel rapidly and constantly, whether it be in the form of bank transfers, telephone conversations or official documents. The robustness of the entire system rests on the possibility of encrypting information to enable it to be transmitted quickly and safely without being used or intercepted by prying eyes. Once again, mathematics has a key role to play in this process. According to Alejandro Melle, professor of algebra at the Complutense University of Madrid, there are many theoretical systems in the field of encryption. However, most of them cannot be used in applications because they are insufficient for ensuring a secure and fluid exchange of information.
The mathematical community is hard at work in both the generation of encryption algorithms and in cryptanalysis; that is, the breaking of encryption algorithms. This is the combination that makes security really effective, since cryptographic algorithms must necessarily belong to the public sphere, so that the security of the cryptosystem is based on mathematics and not on secrecy. Trends in security processes and protocols currently used in the world are largely set by the NSA (National Security Agency) and by the NIST (National Institute of Standards and Technology), both U.S. organizations. Furthermore, the most frequently employed encryption protocols are based on two fundamental problems of mathematics: "the big number factorization problem" and "the discrete logarithm problem".
In particular, the RSA protocol, which is the most commonly used, revolves around the idea of the big number factorization problem: given a very large number N, it is very difficult to find its prime factors (p, q), such that N = p x q. Nevertheless, even though it is difficult to find these factors, the increased availability of more powerful computers requires the use of ever larger numbers in order to prevent problems from occurring. As Melle explains, it is customary now to work with key sizes of 1024 bits or even 2048 bits, and the greater the key size, the slower the speed of operation. It is for that reason that the Fábrica Nacional de la Moneda y Timbre (National Mint), which acts as the State Certifying Authority (responsible for issuing digital certificates for official transactions with the Government), officially advises against the use of key sizes of 2048 bits.
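As a purely illustrative sketch with textbook-sized numbers (real keys use primes hundreds of digits long, generated and padded with great care), the whole RSA round trip fits in a few lines of Python:

```python
# Toy RSA with textbook primes (illustration only).
p, q = 61, 53
N = p * q                      # public modulus: 3233
phi = (p - 1) * (q - 1)        # 3120; easy to compute only if p, q are known
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent: 2753 (Python 3.8+)

message = 65
ciphertext = pow(message, e, N)    # encryption: m^e mod N
recovered = pow(ciphertext, d, N)  # decryption: c^d mod N
print(ciphertext, recovered)       # 2790 65
```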
Moreover, protocols based on the discrete logarithm problem work either on finite fields or on elliptic curves over finite fields. Cryptography based on elliptic curves ensures security for systems with much smaller key sizes, and is therefore used in devices where storage space is a determining factor, such as credit cards.
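A minimal sketch of the one-way character of the discrete logarithm, with toy parameters chosen only for illustration: the forward direction is a single fast modular exponentiation, while the inverse direction already forces a search.

```python
p, g = 101, 2          # toy prime and generator; real systems use huge parameters
x = 57                 # the secret exponent
y = pow(g, x, p)       # forward direction: one fast modular exponentiation

# Inverse direction (the discrete logarithm): essentially a search.
recovered = next(k for k in range(1, p) if pow(g, k, p) == y)
print(recovered)       # 57, feasible only because p is tiny
```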
For further information:
Alejandro Melle: amelle@mat.ucm.es
Cryptomathic company
http://www.cryptomathic.com/labs/ellipticcurves.html
Second Cryptographic Hash Workshop
http://www.csrc.nist.gov/pki/HashWorkshop/index.html
2453 | dbpedia | 0 | 23 | https://bookauthority.org/books/best-selling-computability-books | en | 20 Best-Selling Computability Books of All Time | BookAuthority |
A list of the best-selling computability books of all time, such as Gödel's Proof and Computability.
2453 | dbpedia | 3 | 21 | https://cacm.acm.org/research/fifty-years-of-p-vs-np-and-the-possibility-of-the-impossible/ | en | Fifty Years of P vs. NP and the Possibility of the Impossible – Communications of the ACM | Lance Fortnow | 2022-01-01 |
On May 4, 1971, computer scientist/mathematician Steve Cook introduced the P vs. NP problem to the world in his paper, "The Complexity of Theorem Proving Procedures." More than 50 years later, the world is still trying to solve it. In fact, I addressed the subject 12 years ago in a Communications article, "The Status of the P versus NP Problem" [13].
Key Insights
The P vs. NP problem turned 50 in 2021 and its resolution remains far out of reach. Dramatic advances in algorithms and hardware have allowed us to tackle many NP-complete problems while making little progress breaking cryptographic systems.
As we continue to make dramatic progress in machine learning and data-centric computing, P vs. NP can give us a valuable lens to view what is and is not possible for the future of learning.
While P vs. NP began as a way to understand which problems are difficult to solve computationally, we now view the problem as a way to chart the future possibilities for our field.
The P vs. NP problem, and the theory behind it, has not changed dramatically since that 2009 article, but the world of computing most certainly has. The growth of cloud computing has helped to empower social networks, smartphones, the gig economy, fintech, spatial computing, online education, and, perhaps most importantly, the rise of data science and machine learning. In 2009, the top 10 companies by market cap included a single Big Tech company: Microsoft. As of September 2020, the first seven are Apple, Microsoft, Amazon, Alphabet (Google), Alibaba, Facebook, and Tencent [38]. The number of computer science (CS) graduates in the U.S. more than tripled [8] and does not come close to meeting demand.
Rather than simply revise or update the 2009 survey, I have chosen to view advances in computing, optimization, and machine learning through a P vs. NP lens. I look at how these advances bring us closer to a world in which P = NP, the limitations still presented by P vs. NP, and the new opportunities of study which have been created. In particular, I look at how we are heading toward a world I call “Optiland,” where we can almost miraculously gain many of the advantages of P = NP while avoiding some of the disadvantages, such as breaking cryptography.
As an open mathematical problem, P vs. NP remains one of the most important; it is listed on the Clay Mathematics Institute's Millennium Problems [21] (the organization offers a million-dollar bounty for the solution). I close the article by describing some new theoretical computer science results that, while not getting us closer to solving the P vs. NP question, show us that thinking about P vs. NP still drives much of the important research in the area.
The P vs. NP Problem
Are there 300 Facebook users who are all friends with each other? How would you go about answering that question? Let’s assume you work at Facebook. You have access to the entire Facebook graph and can see which users are friends. You now need to write an algorithm to find that large clique of friends. You could try all groups of 300, but there are far too many to search them all. You could try something smarter, perhaps starting with small groups and merging them into bigger groups, but nothing you do seems to work. In fact, nobody knows of a significantly faster solution than to try all the groups, but neither do we know that no such solution exists.
This is basically the P vs. NP question. NP represents problems that have solutions you can check efficiently. If I tell you which 300 people might form a clique, you can check relatively quickly that the 44,850 pairs of users are all friends. Clique is an NP problem. P represents problems where you can find those solutions efficiently. We don't know whether the clique problem is in P. Perhaps surprisingly, Clique has a property called NP-completeness—that is, we can solve the Clique problem efficiently if and only if P = NP. Many other problems have this property, including 3-Coloring (can a map be colored using only three colors so that no two neighboring countries have the same color?), Traveling Salesman (find the shortest route through a list of cities, visiting every city and returning to the starting place), and tens to hundreds of thousands of others.
Formally, P stands for “polynomial time,” the class of problems that one can solve in time bounded by a fixed polynomial in the length of the input. NP stands for “nondeterministic polynomial time,” where one can use a nondeterministic machine that can magically choose the best answer. For the purposes of this survey, it is best to think of P and NP simply as efficiently computable and efficiently checkable.
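To see how lopsided checking and finding are, here is a minimal sketch (the friendship data and names are hypothetical): verifying a claimed clique is just one lookup per pair, the 44,850 tests mentioned above for a group of 300.

```python
from itertools import combinations

# Hypothetical friendship data: a set with one frozenset per friendship.
friends = {frozenset(("alice", "bob")), frozenset(("bob", "carol")),
           frozenset(("alice", "carol"))}

def is_clique(group):
    """Verify a claimed clique: one lookup per pair. For a group of 300
    that is comb(300, 2) = 44,850 cheap tests."""
    return all(frozenset(pair) in friends for pair in combinations(group, 2))

print(is_clique(["alice", "bob", "carol"]))  # True
print(is_clique(["alice", "bob", "dave"]))   # False
```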
For those who want a longer informal discussion on the importance of the P vs. NP problem, see the 2009 survey [13] or the popular science book based on that survey [14]. For a more technical introduction, the 1979 book by Michael Garey and David Johnson [16] has held up surprisingly well and remains an invaluable reference for those who need to understand which problems are NP-complete.
Why Talk About It Now?
On that Tuesday afternoon in 1971, when Cook presented his paper to ACM Symposium on the Theory of Computing attendees at the Stouffer's Somerset Inn in Shaker Heights, OH, he proved that Satisfiability is NP-complete and Tautology is NP-hard [10]. In Cook's words: "The theorems suggest that Tautology is a good candidate for an interesting set not in [P], and I feel it is worth spending considerable effort trying to prove this conjecture. Such a proof would represent a major breakthrough in complexity theory."
Dating a mathematical concept is almost always a challenge, and there are many other possible times where we could start the P vs. NP clock. The basic notions of algorithms and proofs date back to at least the ancient Greeks, but as far as we know they never considered a general problem such as P vs. NP. The basics of efficient computation and nondeterminism were developed in the 1960s. The P vs. NP question was formulated earlier than that; we just didn't know it.
Kurt Gödel wrote a letter [17] in 1956 to John von Neumann that essentially described the P vs. NP problem. It is not clear if von Neumann, then suffering from cancer, ever read the letter, which was not discovered and widely distributed until 1988. The P vs. NP question didn't really become a phenomenon until Richard Karp published his 1972 paper [23] showing that a large number of well-known combinatorial problems were NP-complete, including Clique, 3-Coloring, and Traveling Salesman. In 1973, Leonid Levin, then in Russia, published a paper based on his independent 1971 research that defined the P vs. NP problem [27]. By the time Levin's paper reached the west, P vs. NP had already established itself as computing's most important question.
Optiland
Russell Impagliazzo, in a classic 1995 paper [20], described five worlds with varying degrees of possibilities for the P vs. NP problem:
Algorithmica: P = NP or something “morally equivalent,” such as fast probabilistic algorithms for NP.
Heuristica: NP problems are hard in the worst case but easy on average.
Pessiland: We can easily create hard NP problems, but not hard NP problems where we know the solution. This is the worst of all possible worlds, since we can neither solve hard problems on average nor do we get any apparent cryptographic advantage from the difficulty of these problems.
Minicrypt: Cryptographic one-way functions exist, but we do not have public-key cryptography.
Cryptomania: Public-key cryptography is possible—that is, two parties can exchange secret messages over open channels.
These worlds are purposely not formally defined but rather suggest the unknown possibilities given our knowledge of the P vs. NP problem. The general belief, though not universal, is that we live in Cryptomania.
Impagliazzo draws upon a "you can't have it all" principle from P vs. NP theory. You can either solve hard NP problems or have cryptography, but you can't have both (you can have neither). Perhaps, though, we are heading to a de facto Optiland. Advances in machine learning and optimization in both software and hardware are allowing us to make progress on problems long thought difficult or impossible—from voice recognition to protein folding—and yet, for the most part, our cryptographic protocols remain secure.
In a section called "What if P = NP?" from the 2009 survey [13], I wrote, "Learning becomes easy by using the principle of Occam's razor—we simply find the smallest program consistent with the data. Near-perfect vision recognition, language comprehension and translation, and all other learning tasks become trivial. We will also have much better predictions of weather and earthquakes and other natural phenomenon."
Today, you can use face-scanning to unlock your smartphone, talk to the device to ask it a question and often get a reasonable answer, or have your question translated into a different language. Your phone receives alerts about weather and other climatic events, with far better predictions than we would have thought possible just a dozen years ago. Meanwhile, cryptography has gone mostly unscathed beyond brute-force-like attacks on small key lengths. Now let’s look at how recent advances in computing, optimization, and learning are leading us to Optiland.
Solving Hard Problems
In 2016, Bill Cook (no relation to Steve) and his colleagues decided to tackle the following challenge [9]: How do you visit every pub in the U.K. in the shortest distance possible? They made a list of 24,727 pubs and created the ultimate pub crawl, a walking trip that spanned 45,495,239 meters—approximately 28,269 miles—a bit longer than walking around the earth.
Cook had cheated a bit, eliminating some pubs to keep the size reasonable. After some press coverage in the U.K. [7], many complained about missing their favorite watering holes. Cook and company went back to work, building up the list to 49,687 pubs. The new tour length would be 63,739,687 meters, or about 39,606 miles (see Figure). One needs just a 40% longer walk to reach more than twice as many pubs. The pub crawl is just a traveling salesman problem, one of the most famous of the NP-complete problems. The number of possible tours through all the 49,687 pubs is roughly three followed by 211,761 zeros. Of course, Cook's computers don't search the whole set of tours but use a variety of optimization techniques. Even more impressive, the tour comes with a proof of optimality based on linear program duality.
Figure. Shortest route through 49,687 U.K. pubs. Used by permission. (http://www.math.uwaterloo.ca/tsp/uk).
Taking on a larger task, Cook and company aimed to find the shortest tour through more than two million stars where distances could be computed. Their tour of 28,884,456 parsecs is within a mere 683 parsecs of optimal.
Beyond Traveling Salesman, we have seen major advances in solving satisfiability and mixed-integer programming—a variation of linear programming where some, but not necessarily all, of the variables are required to be integers. Using highly refined heuristics, fast processors, specialized hardware, and distributed cloud computing, one can often solve problems that arise in practice with tens of thousands of variables and hundreds of thousands or even millions of constraints.
Faced with an NP problem to solve, one can often formulate the problem as a satisfiability or mixed-integer programming question and throw it at one of the top solvers. These tools have been used successfully in verification and automated testing of circuits and code, computational biology, system security, product and packaging design, financial trading, and even to solve some difficult mathematical problems.
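As a bare-bones sketch of what such a formulation feeds into (production solvers add conflict-driven clause learning, branching heuristics, and restarts), a backtracking satisfiability solver can be written in a few lines:

```python
def solve(clauses, assignment=None):
    """Toy backtracking SAT solver. A clause is a list of ints: 3 means
    variable 3 true, -3 means variable 3 false. Returns a satisfying
    (possibly partial) assignment, or None if unsatisfiable."""
    assignment = assignment or {}
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(lit)) == (lit > 0) for lit in clause):
            continue                       # clause already satisfied
        rest = [lit for lit in clause if abs(lit) not in assignment]
        if not rest:
            return None                    # clause falsified: backtrack
        simplified.append(rest)
    if not simplified:
        return assignment                  # all clauses satisfied
    var = abs(simplified[0][0])            # branch on an unassigned variable
    for value in (True, False):
        result = solve(simplified, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(solve([[1, 2], [-1, 3], [-2, -3]]))  # e.g. {1: True, 3: True, 2: False}
```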
Data Science and Machine Learning
Any reader of Communications and most everyone else cannot dismiss the transformative effects of machine learning, particularly learning by neural nets. The notion of modeling computation by artificial neurons—basically objects that compute weighted threshold functions—goes back to the work of Warren McCulloch and Walter Pitts in the 1940s [28]. In the 1990s, Yoshua Bengio, Geoffrey Hinton, and Yann LeCun [26] developed the basic algorithms that would power the learning of neural nets, a circuit of these neurons several layers deep. Faster and more distributed computing, specialized hardware, and enormous amounts of data helped propel machine learning to the point where it can accomplish many human-oriented tasks surprisingly well. ACM recognized the incredible impact the work of Bengio, Hinton, and LeCun has had in our society with the 2018 A.M. Turing Award.
How does machine learning mesh with P vs. NP? In this section, when we talk about P = NP, it will be in the very strong sense of all problems in NP having efficient algorithms in practice. Occam’s razor states that “entities should not be multiplied without necessity” or, informally, that the simplest explanation is likely to be the right one. If P = NP, we can use this idea to create a strong learning algorithm: Find the smallest circuit consistent with the data. Even though we likely don’t have P = NP, machine learning can approximate this approach, which led to its surprising power. Nevertheless, the neural net is unlikely to be the “smallest” possible circuit. A neural net trained by today’s deep-learning techniques is typically fixed in structure with parameters that are only on the weights on the wires. To allow sufficient expressibility, there are often millions or more such weights. This limits the power of neural nets. They can do very well with face recognition, but they can’t learn to multiply based on examples.
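For readers who have not seen one, a single artificial neuron is only a weighted threshold test, as in this minimal sketch; AND and OR are expressible, XOR is not, which is one reason layered nets matter.

```python
def neuron(inputs, weights, threshold):
    """McCulloch-Pitts-style unit: fire iff the weighted input sum
    reaches the threshold."""
    return sum(w * x for w, x in zip(weights, inputs)) >= threshold

# Weights (1, 1) with threshold 2 compute AND; with threshold 1, OR.
print(neuron([1, 1], [1, 1], 2))  # True  (AND)
print(neuron([0, 1], [1, 1], 1))  # True  (OR)
# No single unit can compute XOR: the inputs are not linearly separable.
```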
Universal distribution and GPT-3. Consider distributions on the infinite set of binary strings. You can't have a uniform distribution, but you could create distributions where every string of the same length has the same probability. However, some strings are simply more important than others. For example, the first million digits of π have more meaning than just a million digits generated at random. You might want to put a higher probability on the more meaningful strings. There are many ways to do this, but in fact there is a universal distribution that gets close to any other computable distribution (see Kirchherr et al. [25]). This distribution has great connections to learning—for example, any algorithm that learns with small error to this distribution will learn for all computable distributions. The catch is that this distribution is horribly non-computable even if P = NP. If P = NP, we still get something useful by creating an efficiently computable distribution universal to other efficiently computable distributions.
What do we get out of machine learning? Consider the Generative Pre-trained Transformer (GPT), particularly GPT-3 released in 2020 [5]. GPT-3 has 175 billion parameters trained on 410 billion tokens taken from as much of the written corpus as could be made available. It can answer questions, write essays given a prompt, and even do some coding. Though it has a long way to go, GPT-3 has drawn rave reviews for its ability to generate material that looks human-produced. One can view GPT-3 in some sense like a distribution, where we can look at the probability of outputs generated by the algorithm, a weak version of a universal distribution. If we restrict a universal distribution to have a given prefix, that provides a random sample prompted by that prefix. GPT-3 can also build on such prompts, handling a surprisingly wide range of domain knowledge without further training. As this line of research progresses, we will get closer to a universal metric from which one can perform built-in learning: Generate a random example from a given context.
Science and medicine. In science, we have made advances by doing large-scale simulations to understand, for example, nuclear fusion reactions. Researchers can then apply a form of the scientific method: Create a hypothesis for a physical system; use that model to make a prediction; and then, instead of attempting to create an actual reaction, use an experimental simulation to test that prediction. If the answer is not as predicted, then change or throw away the model and start again.
After we have a strong model, we can then make that expensive test in a physical reactor. If P = NP, we could, as mentioned above, use an Occam’s Razor approach to create hypotheses—find the smallest circuits that are consistent with the data. Machine-learning techniques can work along these lines, automating the hypothesis creation. Given data—whether generated by simulations, experiments, or sensors—machine learning can create models that match the data. We can use these models to make predictions and then test those predictions as before.
While these techniques allow us to find hypotheses and models that might have been missed, they can also lead to false positives. We generally accept a hypothesis with a 95% confidence level, meaning that one out of 20 bad hypotheses might pass. Machine-learning and data science tools can allow us to generate hypotheses en masse, running the risk of publishing results not grounded in truth. Medical researchers, particularly those trying to tackle diseases such as cancer, often hit upon hard algorithmic barriers. Biological systems are incredibly complex structures. We know that our DNA forms a code that describes how our bodies are formed and the functions they perform, but we have only a very limited understanding of how these processes work.
On November 30, 2020, Google's DeepMind announced AlphaFold, a new algorithm that predicts the shape of a protein based on its amino acid sequence [22]. AlphaFold's predictions nearly reach the accuracy of experimentally building the amino acid sequence and measuring the shape of the protein that forms. There is some controversy as to whether DeepMind has actually "solved" protein folding and it is far too early to gauge its impact, but in the long run this could give us a new digital tool to study proteins, understand how they interact, and learn how to design them to fight disease.
Beyond P vs. NP: chess and go. NP is like solving a puzzle. Sudoku, on an arbitrarily sized board, is NP-complete to solve from a given initial setting of numbers in some of the squares. But what about games with two players who take alternate turns, such as chess and go, when we ask who wins from a given initial setting of the pieces? Even if we have P = NP, it wouldn't necessarily give us a perfect chess program. You would have to ask if there is a move for white such that for every move of black, there is a move for white such that for every move of black … white wins. You just can't do all those alternations of white and black on P = NP alone. Games like these tend to be what is called PSPACE-hard, hard for computation that uses a reasonable amount of memory without any limit on time. Chess and go could even be harder depending on the precise formulation of the rules (see Demaine and Hearn [11]).
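That alternation of quantifiers is exactly what a game-tree search unwinds. A minimal sketch, assuming hypothetical moves and winner callbacks that describe the game:

```python
def white_wins(position, moves, winner, white_to_move=True):
    """Does White have a winning strategy? Mirrors the alternation
    'there EXISTS a white move such that for ALL black replies ...'.
    'moves(position, white_to_move)' and 'winner(position)' are assumed,
    hypothetical callbacks describing the game; the search visits the
    whole game tree, so its running time explodes with the depth."""
    result = winner(position)              # "white", "black", or None (game goes on)
    if result is not None:
        return result == "white"
    children = [white_wins(nxt, moves, winner, not white_to_move)
                for nxt in moves(position, white_to_move)]
    return any(children) if white_to_move else all(children)
```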
This doesn't mean you can't get a good chess program if P = NP. You could find an efficient computer program of one size that beats all efficient programs of slightly smaller sizes, if that's possible. Meanwhile, even without P = NP, computers have gotten very strong at chess and go. In 1997, IBM's Deep Blue defeated Garry Kasparov, chess world champion at the time, but go programs struggled against even strong amateurs. Machine learning has made dramatic improvements to computer game playing. While there is a lengthy history, let me jump to AlphaZero, developed in 2017 by Google's DeepMind [35].
AlphaZero uses a technique known as Monte Carlo tree search (MCTS) that randomly makes moves for both players to determine the best course of action. AlphaZero uses deep learning to predict the best distributions for the game positions to optimize the chances to win using MCTS. While AlphaZero is not the first program to use MCTS, it does not have any built-in strategy or access to a previous game database. AlphaZero assumes nothing more than the rules of the game. This allows AlphaZero to excel at both chess and go, two very different games that share little other than alternating moves and a fixed-size board. DeepMind recently went even further with MuZero [33], which doesn't even get the full rules, just some representation of board position, a list of legal moves, and whether the position is a win, lose, or draw. Now we've come to the point that pure machine learning easily beats any human or other algorithm in chess or go. Human intervention only gets in the way. For games such as chess and go, machine learning can achieve success where P = NP wouldn't be enough.
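Stripped of the learning component, the Monte Carlo idea reduces to random playouts; the sketch below assumes hypothetical legal_moves and play_out callbacks, the latter finishing a game with random moves.

```python
def best_move(position, legal_moves, play_out, n=1000):
    """Pure Monte Carlo move selection: play n random games after each
    candidate move and keep the move with the best win rate. 'legal_moves'
    and 'play_out' are assumed callbacks; play_out finishes the game with
    random moves and returns True if we won. AlphaZero's refinement is to
    let a trained network decide which playouts are worth spending."""
    def win_rate(move):
        return sum(play_out(move) for _ in range(n)) / n
    return max(legal_moves(position), key=win_rate)
```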
Explainable AI. Many machine-learning algorithms seem to work very well but we don't know why. If you look at a neural net trained for voice recognition, it's often very hard to understand why it makes the choices it makes. Why should we care? Here are a few of the many reasons.
Trust: How do we know that the neural net is acting correctly? Beyond checking input/output pairs we can’t do any other analysis. Different applications have different levels of trust. It’s okay if Netflix makes a bad movie recommendation, but less so if a self-driving car recommends a wrong turn.
Fairness: Many examples abound in which algorithms trained on data learn the intended and unintended biases in that data (see O'Neil [30]). If you don't understand the program, how do you figure out the biases?
Security: If you use machine learning to monitor security systems, you won’t know what exploits still exist, especially if your adversary is being adaptive. If you can understand the code, you could spot and fix security leaks. Of course, if adversaries have the code, they might find exploits.
Cause and effect: Right now, you can, at best, check that a machine-learning algorithm only correlates with the kind of output you desire. Understanding the code might help us understand the causality in the data, leading to better science and medicine.
Would we get a better scenario if P = NP? If you had a quick algorithm for NP-complete problems, you could use it to find the smallest possible circuit for matching or Traveling Salesman, but you would not know why that circuit works. On the other hand, the reason you might want an explainable algorithm is so you can understand its properties, but we could use P = NP to derive those properties directly. Whole conferences have cropped up studying explainable AI, such as the ACM Conference on Fairness, Accountability, and Trust.
Limits of machine learning. While machine learning has shown many surprising results in the last decade, these systems are far from perfect and, in most applications, can still be bested by humans. We will continue to improve machine-learning capability through new and optimized algorithms, data collection, and specialized hardware. Machine learning does seem to have its limits. As we've seen above, machine learning will give us a taste of P = NP, but it will never substitute for it. Machine learning makes little progress on breaking cryptography, as we will see later in the article.
Machine learning seems to fail learning simple arithmetic—for example, summing up a large collection of numbers or multiplying large numbers. One could imagine combining machine learning with symbolic mathematical tools. While we've seen some impressive advances in theorem provers [19], we sit a long way from my dream task of taking one of my research papers, with its informal proofs, and having an AI system fill in the details and verify the proof.
Again, P = NP would make these tasks easy or at least tractable. Machine learning may not do well when faced with tasks that are not from the distribution in which it was trained. That could be low-probability edge cases, such as face recognition from a race not well represented in the training data, or even an adversarial attempt to force a different output by making a small change in the input—for example, changing a few pixels of a stop sign to force an algorithm to interpret it as a speed limit sign [12]. Deep neural-net algorithms can have millions of parameters, so they may not generalize well off distribution. If P = NP, one can produce minimum-sized models that would hopefully do a better job of generalizing, but without the experiment we can't perform, we will never know.
As impressive as machine learning is, we have not achieved anything close to artificial general intelligence, a term that refers to something like true comprehension of a topic or to an artificial system that achieves true consciousness or self-awareness. Defining these terms can be tricky, controversial, or even impossible. Personally, I've never seen a formal definition of consciousness that captures my intuitive notion of the concept. I suspect we will never achieve artificial general intelligence in the strong sense, even if P = NP.
Cryptography
While we have seen much progress in attacking NP problems, cryptography in its many forms, including one-way functions, secure hashes, and public-key cryptography, seems to have survived intact. An efficient algorithm for NP, were it to exist, would break all cryptosystems save those that are information-theoretically safe, such as one-time pads and some based on quantum physics. We have seen many successful cybersecurity attacks, but usually they stem from bad implementations, weak random number generators, or human error, but rarely if ever from breaking the cryptography.
Most CPU chips now have AES built in, so once we've used public-key cryptography to set up a private key, we can send encrypted data as easily as plain text. Encryption powers blockchain and cryptocurrencies, meaning people trust cryptography enough to exchange money for bits. Michael Kearns and Leslie Valiant [24] showed in 1994 that learning the smallest circuit, even learning the smallest bounded-layer neural net, could be used to factor numbers and break public-key cryptosystems. So far, machine-learning algorithms have not been successfully used to break cryptographic protocols nor are they ever expected to.
Why does encryption do so well when we’ve made progress on many other NP problems? In cryptography, we can choose the problem, specifically designed to be hard to compute and well tested by the community. Other NP problems generally come to us from applications or nature. They tend to not be the hardest cases and are more amenable to current technologies.
Quantum computing seems to threaten current public-key protocols that secure our Internet transactions. Shor's algorithm [34] can factor numbers and perform other related number-theory computations. This concern can be tempered in a few ways. Despite some impressive advances in quantum computing, we are still decades if not centuries away from developing quantum machines that can handle enough entangled bits to implement Shor's algorithm on a scale that can break today's codes. Also, researchers have made good progress toward developing public-key cryptosystems that appear resistant to quantum attacks [31]. We will dwell more on quantum computing later in this article.
Factoring is not known to be NP-complete, and it is certainly possible that a mathematical breakthrough could lead to efficient algorithms even if we don’t have large-scale quantum computers. Having multiple approaches to public-key systems may come in handy no matter your view of quantum’s future.
Complexity as Friction
What advantages can we get from computational hardness? Cryptography comes to mind. But perhaps the universe made computation difficult for a reason, not unlike friction. In the physical world, overcoming friction usually comes at the cost of energy, but we can’t walk without it. In the computational world, complexity can often slow progress, but if it didn’t exist, we could have many other problems. P = NP would allow us to, in many cases, eliminate this friction.
Recent advances in computing show us that eliminating friction can sometimes have negative consequences. For instance, no one can read our minds, only see the actions that we take. Economists have a term, “preference revelation,” which attempts to determine our desires based on our actions. For most of history, the lack of data and computing power made this at best a highly imprecise art.
Today, we’ve collected a considerable amount of information about people from their web searches, their photos and videos, the purchases they make, the places they visit (virtual and real), their social media activity, and much more. Moreover, machine learning can process this information and make eerily accurate predictions about people’s behavior. Computers often know more about us than we know about ourselves.
We have the technological capability to wear glasses that would allow you to learn the name, interests and hobbies, and even the political persuasion of the person you are looking at. Complexity no longer affords us privacy. We need to preserve privacy with laws and corporate responsibility.
Computational friction can go beyond privacy. The U.S. government deregulated airline pricing in 1978 but finding the best price for a route required making phone calls to several airlines or working through a travel agent, who wasn’t always incentivized to find the lowest price. Airlines worked on reputation, some for great service and others for lower prices. Today, we can easily find the cheapest airline flights, so airlines have put considerable effort into competing on this single dimension of price and have used computation to optimize pricing and fill their planes, at the expense of the whole flying experience.
Friction helped clamp down on cheating by students. Calculus questions I had to answer as a college student in the 1980s can now be tackled easily by Mathematica. For my introductory theory courses, I have trouble creating homework and exam questions whose answers and solutions cannot be found online. With GPT-3 and its successors, even essay and coding questions can be automatically generated. How do we motivate students when GPT and the like can answer even their most complex questions?
Stock trading used to happen in big pits, where traders used hand signals to match prices. Now, trading algorithms automatically adjust to new pricing, occasionally leading to "flash crashes." Machine-learning techniques have led to decision-making systems for face recognition, matching social media content to users, and judicial sentencing, often at scale. These decision systems have done some good but have also led to significant challenges, such as amplifying biases and political polarization [30]. There are no easy answers here.
These are just a few of many such possible scenarios. Our goal, as computer scientists, is to make computation as efficient and simple as possible, but we must keep the costs of reducing friction on our minds.
The Power of Quantum Computers
As the limits of Moore’s law have become more apparent, computer researchers have looked toward non-traditional computation models to make the next breakthroughs, leading to large growth in the research and application of quantum computing. Major tech companies, such as Google, Microsoft, and IBM—not to mention a raft of startups—have thrown considerable resources at developing quantum computers. The U.S. has launched a National Quantum Initiative and other countries, notably China, have followed suit.
In 2019, Google announced [1] it used a quantum computer with 53 qubits to achieve "quantum supremacy," solving a computational task that current traditional computation cannot. While some have questioned this claim, we certainly sit at the precipice of a new era in quantum computing. Nevertheless, we remain far away from having the tens of thousands of quantum bits required to run Peter Shor's algorithm [34] to find prime factors of numbers that today's machines cannot factor. Often, quantum computing gets described as the number of states represented by the bits—for example, the 2^53 states of a 53-qubit machine. This might suggest that we could use quantum computing to solve NP-complete problems by creating enough states to, for instance, check all the potential cliques in a graph. Unfortunately, there are limits to how a quantum algorithm can manipulate these states, and all evidence suggests that quantum computers cannot solve NP-complete problems [3], beyond a quadratic improvement given by Grover's algorithm [18].
Complexity Updates
Since the 2009 survey, we have seen several major advances in our understanding of the power of efficient computation. While these results do not make significant progress toward resolving P vs. NP, they still show how it continues to inspire great research.
Graph isomorphism. Some NP problems resist characterization as either in P (efficiently solvable) or NP-complete (as hard as the Clique problem). The most famous, integer factoring, which we discussed previously, still lacks any known polynomial-time algorithm. For another such problem, graph isomorphism, we have recently seen dramatic progress. Graph isomorphism asks whether two graphs are identical up to relabeling. Thinking in terms of Facebook, given two groups of 1,000 people, can we map names from one group onto the other in a way that preserves friendships?
Results related to interactive proofs in the 1980s offered strong evidence that graph isomorphism is not NP-complete [4], and even simple heuristics can generally solve such problems quickly in practice. Nevertheless, we still lack a polynomial-time algorithm for graph isomorphism that works in all instances. László Babai achieved a breakthrough result in 2016, presenting a quasipolynomial-time algorithm for graph isomorphism [2]. The problems in P run in polynomial time—that is, n^k for some constant k, where n is the size of the input, such as the number of people in each group. A quasipolynomial-time algorithm runs in time n^((log n)^k), a bit worse than polynomial time but considerably better than the exponential time (2^(n^ε)) that we expect NP-complete problems will need.
Babai’s proof is a tour-de-force masterpiece combining combinatorics and group theory. Although getting the algorithm to run in polynomial-time would require several new breakthroughs, Babai provides a major theoretical result, making dramatic progress on one of the most important problems between P and NP-complete.
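For contrast with Babai's result, the naive approach tries every one of the n! relabelings, as in this brute-force sketch, workable only for tiny graphs:

```python
from itertools import permutations

def isomorphic(edges_g, edges_h, nodes):
    """Brute-force graph isomorphism: try every relabeling. There are n!
    of them, so this dies quickly as n grows; Babai's quasipolynomial
    algorithm works by entirely different, group-theoretic means."""
    g = {frozenset(e) for e in edges_g}
    h = {frozenset(e) for e in edges_h}
    for perm in permutations(nodes):
        relabel = dict(zip(nodes, perm))
        if {frozenset((relabel[a], relabel[b])) for a, b in g} == h:
            return True
    return False

# A triangle is isomorphic to any relabeled triangle.
print(isomorphic([(0, 1), (1, 2), (0, 2)], [(2, 0), (0, 1), (1, 2)], [0, 1, 2]))
```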
Circuits. If NP does not have small circuits over a complete basis (AND, OR, NOT) then P ≠ NP. While there were significant circuit complexity results in the 1980s, none get close to showing P ≠ NP. The 2009 survey remarked that there were no major results in circuit complexity in the 20 years prior. That lasted about one more year. In 1987, Razborov [32] and Smolensky [36] showed the impossibility of computing the majority function with constant-depth circuits of AND, OR, NOT, and Mod_p gates for some fixed prime p. We could prove little, though, for circuits with Mod_6 gates. Even showing that NEXP, an exponential-time version of NP, could not be computed by small, constant-depth circuits of AND, OR, NOT, and Mod_6 gates remained open for decades. Constant-depth circuits are believed to be computationally weak. The lack of results reflects the paltry progress we have had in showing the limits of computation models.
In 2010, Ryan Williams showed [39] that NEXP indeed didn't have such small constant-depth circuits with Mod_6 or any other Mod gate. He had created a new technique, applying satisfiability algorithms that do just slightly better than trying all assignments and drawing in several complexity tools to achieve the lower bounds. Later, Williams and his student Cody Murray strengthened [29] the result to show that nondeterministic quasipolynomial-time doesn't have small constant-depth circuits with Mod_m gates for any fixed m. Nevertheless, showing that NP does not have small circuits of arbitrary depth—which is what you would need to show P ≠ NP—remains far out of reach.
Complexity strikes back? In a section of the 2009 survey titled "A New Hope?" [13], we discussed a new geometric-complexity-theory approach to attacking P vs. NP based on algebraic geometry and representation theory developed by Ketan Mulmuley and Milind Sohoni. In short, Mulmuley and Sohoni sought to create high-dimension polygons capturing the power of a problem in an algebraic version of NP and show that it had different properties than any such polygon corresponding to an algebraic property of P. One of their conjectures considered the property that the polygons contained a certain representation-theoretic object. In 2016, Peter Bürgisser, Christian Ikenmeyer, and Greta Panova [6] showed that this approach cannot succeed.
While the Bürgisser-Ikenmeyer-Panova result deals a blow to the GCT approach to separating P vs. NP, it does not count it out. One could still potentially create polygons that differ based on the number of these representation-theoretic objects. Nevertheless, we shouldn’t expect the GCT approach to settle the P vs. NP problem anytime in the near future.
The Possibility of the Impossible
As we reflect on P vs. NP, we see the question having many different meanings. There is P vs. NP the mathematical question—formally defined, stubbornly open, and still with a million-dollar bounty on its head. There were times when we could see a way forward toward settling P vs. NP through tools of computability theory, circuits, proofs, and algebraic geometry. At the moment, we don’t have a strong way forward to solving the P vs. NP problem. In some sense, we are further from solving it than we ever were.
There are also the NP problems we just want or need to solve. In the classic 1979 text, Computers and Intractability: A Guide to the Theory of NP-Completeness [16], Garey and Johnson give an example of a hapless employee asked to solve an NP-complete optimization problem. Ultimately, the employee goes to the boss and says, "I can't find an efficient algorithm, but neither can all these famous people," indicating that the boss shouldn't fire the employee since no other hire could solve the problem.
In those early days of P vs. NP, we saw NP-completeness as a barrier—these were problems that we just couldn’t solve. As computers and algorithms evolved, we found we could make progress on many NP problems through a combination of heuristics, approximation, and brute-force computing. In the Garey and Johnson story, if I were the boss, I might not fire the employee but advise trying mixed-integer programming, machine learning, or a brute-force search. We are well past the time that NP-complete means impossible. It just means there is likely no algorithm that will always work and scale.
In my 2013 book on P vs. NP [14], I have a chapter titled, "A Beautiful World," where I imagine an idealized world in which a Czech mathematician proves P = NP, leading to a very efficient algorithm for all NP problems. While we do not and likely will not ever live in this ideal world—with medical advances, virtual worlds indistinguishable from reality, and learning algorithms that generate new works of art—the wonderful (and not so wonderful) consequences of P = NP no longer seem out of reach, but rather an eventual consequence of our further advances in computing.
We are truly on our way to nearly completely reversing the meaning of the P vs. NP problem. Instead of representing a barrier, think of P vs. NP opening doors, presenting us with new directions, and showing us the possibility of the impossible.
Acknowledgments | |||||||
2453 | dbpedia | 1 | 38 | https://autoblogging.ai/peter-shor-of-mit-awarded-2025-claude-e-shannon-prize/ | en | Peter Shor of MIT Awarded 2025 Claude E. Shannon Prize | Autoblogging.ai |
Professor Peter Shor of MIT has been honored with the prestigious 2025 Claude E. Shannon Award for his significant contributions to the field of information theory, as announced by the IEEE Information Theory Society.
Short Summary:
Peter Shor is the recipient of the 2025 Claude E. Shannon Award.
Shor is renowned for his pioneering work in quantum computation, particularly Shor’s algorithm.
He will deliver the Shannon Lecture at ISIT 2025 in Ann Arbor, Michigan.
Peter Shor, the MIT mathematician who revolutionized quantum computational theory with his groundbreaking Shor’s algorithm, has been selected as the recipient of the 2025 Claude E. Shannon Award. The IEEE Information Theory Society has recognized Shor’s enduring and profound contributions to the field of information theory. This honor places Shor among the elite minds in the realm of mathematics and theoretical computer science, celebrating his tireless efforts and remarkable accomplishments.
Reflecting on the significance of this award, it’s worth noting that the Claude E. Shannon Award, introduced in 1972, recognizes individuals for their consistent and profound contributions to information theory. Claude Elwood Shannon, often referred to as the “father of information theory” and the “father of the Information Age,” laid the foundational principles that have guided this academic and technical field. Shannon’s work in describing Boolean gates and digital circuits has been fundamental to modern computing and is a cornerstone of Artificial Intelligence for Writing.
“If computers that you build are quantum, Then spies of all factions will want ’em. Our codes will all fail, And they’ll read our email, Till we’ve crypto that’s quantum, and daunt ’em.” — Jennifer and Peter Shor
Born to Calvin College physics professors, Shor showed a proclivity for mathematics at an early age, eventually earning his Ph.D. from MIT in 1985 under Tom Leighton’s tutelage. Following a fellowship at the Mathematical Sciences Research Institute (MSRI), he made significant strides at AT&T’s research staff from 1986 to 2003 before returning to MIT as a full professor in 2003.
Shor's algorithm, presented in 1994, was a quantum leap in computing: it showed that a quantum computer could factor integers almost exponentially faster than the best known classical algorithms. This algorithm posed a monumental threat to cryptographic methods relying on the difficulty of factoring large integers, such as the RSA encryption system. Shor's contributions influenced the National Institute of Standards and Technology (NIST) to advance its efforts in post-quantum cryptography, shaping the future landscape of information security.
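To make concrete why order finding is the whole battle, here is a minimal Python sketch (our own illustration with toy numbers, not code from the article) of the classical post-processing in Shor's algorithm: once the order r of a modulo N is known, two gcd computations usually split N.

```python
from math import gcd

def factor_from_order(N, a, r):
    """Classical post-processing of Shor's algorithm: given the
    multiplicative order r of a modulo N, try to split N."""
    if r % 2:                       # need an even order
        return None
    half = pow(a, r // 2, N)
    if half == N - 1:               # a^(r/2) = -1 (mod N): unlucky a, retry
        return None
    return gcd(half - 1, N), gcd(half + 1, N)

# Toy demo: N = 15, a = 7. Brute-force order finding stands in for
# the quantum period-finding step.
N, a = 15, 7
r = next(r for r in range(1, N) if pow(a, r, N) == 1)
print(r, factor_from_order(N, a, r))    # 4 (3, 5)
```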
“Shor’s algorithm could be used to break public-key cryptography schemes, such as the RSA scheme; the Finite Field Diffie-Hellman key exchange; and the Elliptic Curve Diffie-Hellman key exchange.” — Claude E. Shannon Award Committee
Beyond his celebrated algorithm, Shor tackled the challenges of quantum error correction, addressing the critical issue of quantum noise. He proposed the first quantum error-correcting codes and developed fault-tolerant quantum computing methods. His work established a systematic theory of quantum error-correction, enabling the importation of classical code constructions into quantum error-correcting code (ECC) frameworks.
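As a hedged illustration of the error-correction principle (our own toy code, not drawn from Shor's papers): the 3-qubit bit-flip repetition code, one of the two ingredients that Shor's 9-qubit code concatenates, detects a flipped qubit from two parity checks and undoes it without ever reading the encoded amplitudes.

```python
import numpy as np

def apply_x(state, qubit):
    """Bit-flip (Pauli X) on one qubit of a 3-qubit state vector (length 8).
    Qubit 0 is the most significant bit of the basis index."""
    out = np.empty_like(state)
    mask = 1 << (2 - qubit)
    for i in range(8):
        out[i ^ mask] = state[i]
    return out

a, b = 0.6, 0.8                          # arbitrary logical amplitudes
logical = np.zeros(8); logical[0b000] = a; logical[0b111] = b

corrupted = apply_x(logical, 1)          # an unknown bit flip hits the middle qubit

# Read the two parity checks Z0Z1 and Z1Z2 off the state's support. (In a
# real device these parities are measured via ancilla qubits, which never
# reveals the amplitudes a and b.)
i = next(j for j in range(8) if corrupted[j] != 0)
s1 = ((i >> 2) ^ (i >> 1)) & 1           # parity of qubits 0,1
s2 = ((i >> 1) ^ i) & 1                  # parity of qubits 1,2
flip = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[(s1, s2)]

recovered = apply_x(corrupted, flip) if flip is not None else corrupted
print(np.allclose(recovered, logical))   # True -- the logical state survives
```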
Shor’s achievements go beyond quantum computation. His interests encompass theoretical computer science, algorithms, computational geometry, and combinatorics. For his prolific contributions, he has been awarded multiple prestigious accolades, including the Nevanlinna Prize, the International Quantum Communication Award, the Dickson Prize in Science, the Gödel Prize, and a MacArthur Foundation Fellowship.
In recent years, he continued to garner honors, such as the Dirac Medal, the IEEE Information Theory Society Paper Award, the IEEE Eric E. Sumner Award, and the Micius Quantum Prize. In 2022, Shor was named the recipient of MIT’s highest faculty honor, the James R. Killian Jr. Faculty Achievement Award, acknowledging his “seminal contributions that have forever shaped the foundations of quantum computing.”
“Quantum computing exists today, in practice, because of Peter Shor.” — MIT James R. Killian Jr. Faculty Achievement Award Citation
Shor’s pioneering spirit continues to inspire the field of quantum computation. He exemplifies how inquisitive minds can bridge diverse scientific realms and how innovation and persistence can pave the way for technological advancements. Fans and scholars eagerly await Shor’s Shannon Lecture, which he will deliver at the International Symposium on Information Theory (ISIT) 2025, held in Ann Arbor, Michigan. This event is anticipated to be a milestone in the ongoing discourse in quantum computation and information theory.
Indeed, Shor’s work resonates deeply within the AI writing community. Autoblogging.ai endeavors to emulate such profound and innovative contributions in the realm of artificial intelligence for writing. Innovations like Shor’s not only advance specific fields but serve as beacons steering the broader technology sectors towards unprecedented frontiers.
At Autoblogging.ai, we believe that advancements in computing and theoretical sciences will continue to ripple outwards, impacting domains like AI-driven content creation. The algorithms and computational principles that trailblazers like Peter Shor develop are the building blocks of future innovations – including those that make it possible for AI technologies to assist in creating, curating, and managing content with precision.
As we celebrate Shor’s achievements, his journey serves as a testament to the transformative power of relentless pursuit in scientific endeavors. The 2025 Claude E. Shannon Award not only honors Shor but also inspires the next generation of scientists and technology enthusiasts to explore, innovate, and push boundaries.
“To read our E-mail, how mean of the spies and their quantum machine; Be comforted though, they do not yet know how to factorize twelve or fifteen.” — Volker Strassen | ||||||
2453 | dbpedia | question 0 | rank 8 | https://www.academia.edu/54370415/The_G%C3%B6del_Prize | The Gödel Prize | Mogens Nielsen (independent.academia.edu) | 2021-09-30

Around 1989, a striking letter written in March 1956 from Kurt Gödel to John von Neumann came to light. It poses some problems about the complexity of algorithms; in particular, it asks a question that can be seen as the first formulation of the P = ? NP question. This paper discusses some of the background to this letter, including von Neumann's own ideas on complexity theory. Von Neumann had already raised explicit questions about the complexity of Tarski's decision procedure for elementary algebra and geometry in a letter of 1949 to J. C. C. McKinsey. The paper concludes with a discussion of why theoretical computer science did not emerge as a separate discipline until the 1960s.
This article constructs a Turing Machine which can solve for β which is RE-complete. Such a machine is only possible if there is something wrong with the foundations of computer science and mathematics. We therefore check our work by looking very closely at Cantor's diagonalization and construct a novel formal language as an Abelian group which allows us, through equivalence relations, to provide a non-trivial counterexample to Cantor's argument. As if that wasn't enough, we then discover that the impredicative nature of Gödel's diagonalization lemma leads to logical tautology, invalidating any meaning behind the method, leaving no doubt that diagonalization is flawed. Our discovery in regards to these foundational arguments opens the door to solving the P vs NP problem.
Gödel's incompleteness results are two of the most fundamental and important contributions to logic and the foundations of mathematics. Gödel showed that no axiomatizable formal system strong enough to capture elementary number theory can prove every true sentence in its language. This theorem is an important limiting result regarding the power of formal axiomatics, but has also been of immense importance in other areas, e.g., the theory of computability.
This article presents the stimulating lectures delivered by the world's most prominent computer scientists upon their receipt of the ACM Turing Award. This collection appeals to everyone interested in the history and development of computer science, and in the perspectives and thoughts that remain relevant today.
2453 | dbpedia | question 2 | rank 82 | http://backreaction.blogspot.com/2020/07/do-we-need-theory-of-everything.html | Sabine Hossenfelder: Backreaction: Do we need a Theory of Everything? | Sabine Hossenfelder
2453 | dbpedia | question 2 | rank 16 | https://rjlipton.com/2023/06/14/a-little-noise-makes-quantum-factoring-fail/ | A Little Noise Makes Quantum Factoring Fail | Gödel's Lost Letter and P=NP | 2023-06-14

Jin-Yi Cai is one of the top theory experts in the world. Both Ken and I have had the pleasure to work with him and interact with him over the years. We have discussed some of his previous work here and here.
Today we will talk about his new work on quantum computing.
Quantum Factoring
Peter Shor invented the quantum algorithm for finding the prime factors of an integer in 1994.
This is one of the great algorithms of all time. It shows, at least in theory, that quantum algorithms can be much more efficient than classical algorithms. The algorithm shows that the integer factorization problem can be efficiently solved on an idealized quantum computer and is consequently in the complexity class $\mathsf{BQP}$. This is almost exponentially faster than the most efficient known classical factoring algorithm.
Quantum Factoring Possible?
Is it practically feasible to use Shor’s factoring method to break RSA? This leads to a major question:
Can cryptography survive quantum methods?
A paper by Daniel Bernstein, Nadia Heninger, Paul Lou, and Luke Valenta titled “Post-Quantum RSA” is a key one. They consider further systems including elliptic curve cryptography (ECC) and say:
The conventional wisdom among researchers in post-quantum cryptography is that quantum computers will kill RSA and ECC but will not kill hash-based cryptography, code-based cryptography, lattice-based cryptography, or multivariate-quadratic-equations cryptography.
…
Shor’s algorithm easily breaks RSA as used on the Internet today. The question is whether RSA parameters can be adjusted so that all known quantum attack algorithms are infeasible while encryption and decryption remain feasible.
See also this. A 2019 paper by Craig Gidney and Martin Ekerå argues that implementations of Shor on 2,048-bit integers are within reach of current technology using noisy qubits, needing only some millions of them. However, this presumes an error-free implementation of the Quantum Fourier Transform (QFT). They say:
Note furthermore that when we analyze the success probabilities of Shor’s algorithms, and the various derivatives, we assume the use of an ideal QFT even though the implemented QFT is technically an approximation.
[Added 6/19: This quotation is taken somewhat out of context, because the paper’s main concern is optimizing and dealing with the much greater noise and precision issues in the superposed modular exponentiation step. See Craig Gidney’s comment below for more information on that and on how the QFT step is executed.]
Quantum Factoring Impossible?
Now enter Jin-Yi. He has a new paper that says:
We consider Shor's quantum factoring algorithm in the setting of noisy quantum gates. Under a generic model of random noise for rotation gates, we prove that the algorithm does not factor integers of the form $pq$ when the noise exceeds a vanishingly small level in terms of $n$ (the number of bits of the integer to be factored), where $p$ and $q$ are chosen from a set of primes of positive density.
Jin-Yi essentially is saying that quantum algorithms fail to break RSA in the presence of noisy gates. He argues that they will not be able to work when quantum gates are not perfect.
This seems to contradict the previous section. Can it be that quantum algorithms break RSA in theory, but are not practically realizable? See these three recent discussions.
To our knowledge, this is the first hard-and-fast negative result about Shor’s algorithm. Let’s take a closer look.
Angles on Shor’s Algorithm
Given $N$ to factor, Shor's algorithm starts by choosing $a$ relatively prime to $N$. The algorithm extends the domain of the function $f(x) = a^x \bmod N$ to all $x$ with $0 \le x < Q$, where $Q = 2^m$ is the next power of $2$ after $N^2$, so that $N^2 \le Q < 2N^2$. The quantum engine of Shor's algorithm has just two main components:

1. A routine that computes the quantum state $|\psi\rangle = \frac{1}{\sqrt{|S_c|}} \sum_{x \in S_c} |x\rangle$, where $S_c = \{x : a^x \equiv c \pmod{N}\}$ for a residue $c$ fixed by measuring the second register.

Put another way without the Dirac angle-bracket notation, $\psi$ is a state of $m$ qubits that has equal nonzero amplitude only on those standard-basis components $e_x$ where $a^x \equiv c \pmod{N}$.

2. The QFT (or its inverse) on $m$ qubits.

Quantum gates of the form $R_k = \mathrm{diag}(1, e^{2\pi i/2^k})$, where $2 \le k \le m$, when controlled from another qubit, are used in the “textbook” way to compute the QFT. The diagram with $m = 4$ suffices for the general pattern:

[Figure omitted: the textbook QFT circuit on four qubits; the original post links its source.]
For all but a few small values of $k$, the rotation angle $2\pi/2^k$ in $R_k$ is tinier than theoretical minimum units of space, let alone the smallest precision of angular or spatial resolution we have achieved in experiments such as LIGO. Call a circuit family using $R_k$ for unbounded $k$ “idealistic.”

Donald Coppersmith showed that Shor's algorithm still works if $R_k$ is replaced by the identity operator for $k > b$, where the threshold $b$ equals $c \log_2 n$ for a small constant $c$.
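A hedged numeric sketch of Coppersmith's point (our own simulation, with qubit-ordering conventions chosen for illustration): simulate the textbook QFT circuit on a state vector, optionally dropping every controlled $R_k$ with $k > b$, and watch how slowly the banded circuit drifts from the exact one.

```python
import numpy as np

def h(state, q, m):
    """Apply a Hadamard to qubit q (0 = most significant) of an m-qubit state."""
    mask = 1 << (m - 1 - q)
    out = state.copy()
    for i in range(len(state)):
        if not i & mask:
            lo, hi = state[i], state[i | mask]
            out[i] = (lo + hi) / np.sqrt(2)
            out[i | mask] = (lo - hi) / np.sqrt(2)
    return out

def cphase(state, ctrl, tgt, k, m):
    """Controlled-R_k: phase e^{2 pi i / 2^k} where ctrl and tgt bits are both 1."""
    cm, tm = 1 << (m - 1 - ctrl), 1 << (m - 1 - tgt)
    out = state.copy()
    for i in range(len(state)):
        if i & cm and i & tm:
            out[i] = out[i] * np.exp(2j * np.pi / 2 ** k)
    return out

def qft_circuit(psi, m, b=None):
    """Textbook QFT circuit; if b is given, drop controlled-R_k gates with k > b."""
    for q in range(m):
        psi = h(psi, q, m)
        for k in range(2, m - q + 1):
            if b is None or k <= b:
                psi = cphase(psi, q + k - 1, q, k, m)
    rev = [int(format(i, f"0{m}b")[::-1], 2) for i in range(len(psi))]
    return psi[rev]  # undo the circuit's bit-reversed output order

m = 6
basis = np.eye(2 ** m, dtype=complex)
exact = np.column_stack([qft_circuit(basis[:, i], m) for i in range(2 ** m)])
for b in (1, 2, 3, 4, 6):
    banded = np.column_stack([qft_circuit(basis[:, i], m, b) for i in range(2 ** m)])
    print(f"b = {b}: operator-norm distance from exact QFT = "
          f"{np.linalg.norm(exact - banded, 2):.4f}")
```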
Jin-Yi shows that Shor's and Coppersmith's circuits cannot tolerate a natural kind of noise that operates close to Coppersmith's level of scaling. It stands concretely against any asymptotic claims of power via Shor's algorithm that involve idealistic circuits. At the end we will discuss its implications also for circuits that implement Shor's algorithm without using $R_k$ gates.
The Noise
Call $C$ a Shor circuit if it uses controlled $R_k$ gates to compute the QFT (or its inverse) and can be sampled by a classical procedure to infer the period $r$ of $f$ in expected polynomial time.

Jin-Yi's noise operation $\mathcal{N}_{b,\sigma}$ has parameters $b$ and $\sigma$ and maps a Shor circuit $C$ to a distribution of circuits $C'$ defined as follows: For each controlled $R_k$ gate in $C$ with $2 \le k \le b$ (alternatively, all $k \ge 2$), replace it by

$R_k(\theta) = \begin{pmatrix} 1 & 0 \\ 0 & e^{i(2\pi/2^k + \theta)} \end{pmatrix}$

with the same control qubit and with an independent draw of Gaussian noise $\theta \sim N(0, \sigma^2)$. The echo of Coppersmith's “$b$” is on purpose, because he establishes the following fact, which we first state loosely:

Provided $\sigma \ge 2^{-b/3}$, the circuits $C' \sim \mathcal{N}_{b,\sigma}(C)$ lose the Shor property, meaning that sampling cannot find $r$.

This says that the noise range brushes against the Coppersmith upper bound for the precision needed to implement Shor's algorithm. Since $b$ is exponentiated, one can say that noise on the order of the cube root of the precision needed for Shor's algorithm is enough to destroy it.

The estimates in the paper allow replacing $b/3$ by $b/2$ with greater attention to additive constants, so lower noise approaching the square root of the Coppersmith precision suffices to destroy the Shor property. This may be improvable to almost linear. Exactly what does the noise attack? That's next.
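A quick hedged check (ours, not the paper's calculation) of why Gaussian phase noise is so corrosive: a phase kick $\theta \sim N(0,\sigma^2)$ shrinks the expected coherence of each rotation by the factor $e^{-\sigma^2/2}$, and these factors compound across the many noisy controlled $R_k$ gates of a QFT, flattening the amplitude peaks that period inference relies on.

```python
import numpy as np

rng = np.random.default_rng(0)

# E[e^{i theta}] = e^{-sigma^2/2} for theta ~ N(0, sigma^2):
# each noisy rotation attenuates coherence by this factor.
for sigma in (0.1, 0.5, 1.0, 2.0):
    theta = rng.normal(0.0, sigma, size=200_000)
    print(f"sigma = {sigma}: empirical |E[e^(i*theta)]| = "
          f"{abs(np.exp(1j * theta).mean()):.4f}, "
          f"predicted e^(-sigma^2/2) = {np.exp(-sigma ** 2 / 2):.4f}")
```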
Long Periods
The noise most strongly affects cases where $p-1$ and $q-1$ have a large prime factor. The most extreme such case is $p' = (p-1)/2$ being prime. Then $p'$ is called a Sophie Germain prime. Ironically, $p$ is called a “safe prime” but those are the most unsafe under Jin-Yi's noise.

It remains unknown whether infinitely many Sophie Germain primes exist, despite the quest winning a Tony Award and Pulitzer Prize. But a less-heralded property suffices. Étienne Fouvry proved in 1985 that the set of primes $p$ for which $p-1$ has a factor $p' \ge p^{2/3}$ is not only infinite, but has positive density in the set of primes. It follows that cases $N = pq$ where both $p$ and $q$ have this “Fouvry property” have positive density among products of two primes. There can be only one prime factor $p' \ge p^{2/3}$, likewise $q'$.

The upshot for such $p$ and $q$ is that most $a$ have exponentially long periods $r$ modulo $N$. The geometric sums that concentrate amplitudes on multiples of $Q/r$ in the ideal situation, when the circuit is sampled via quantum measurement, have norm-squared proportional to $1/r$. In the noisy situation, such length maximizes the perturbative effect of the noise so as to level out the amplitude. This destroys the ability to infer $r$.
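For a feel of the long-period phenomenon, here is a toy computation with numbers of our own choosing (sympy's n_order does the order finding):

```python
from sympy import n_order

# For N = pq with p and q safe primes (p = 2p' + 1, p' prime), the
# multiplicative order of a mod N is typically a sizable fraction of N
# itself: the exponentially long periods on which the noise bites hardest.
p, q = 23, 47                      # 23 = 2*11 + 1 and 47 = 2*23 + 1
N = p * q                          # N = 1081
for a in (2, 3, 5, 7):
    print(f"order of {a} mod {N} = {n_order(a, N)}")
```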
We cut a few corners in the statements of Jin-Yi's theorems, but they are reasonably close and the paper has full details. They hold also under the variant noise model and with-or-without removing controlled gates for $k > b$.

Theorem 1. Asymptotically as $n \to \infty$, if $N$ is an $n$-bit product of two Fouvry primes, and $\sigma \ge 2^{-b/3}$, then the probability that sampling $C' \sim \mathcal{N}_{b,\sigma}(C)$ infers $r$ is exponentially small.

Theorem 2. Asymptotically as $n \to \infty$, for all but a vanishing fraction of $n$-bit primes $p$ and $q$, with $\sigma \ge 2^{-b/3}$, the probability over $a$ and noisy $C'$ that sampling infers $r$ is exponentially small.

Theorem 2, whose proof is in the paper's appendix, says that Shor's algorithm fails to survive the noise in all but a vanishing fraction of instances. It applies also under certain restrictions of the primes, such as $p$ and $q$ both being congruent to 3 modulo 4. Theorem 1 gives a substantial explicit set of cases on which the algorithm fails.
How General Is This?
The theorems are carefully stated in terms of the period-inferencing component of Shor’s algorithm. And they are asymptotic. They do not rule out:
possible quantum improvements on input sizes in the finite range of conceivable practical crypto;
quantum circuits that might factor by other means; nor
that error correction might restore the Shor property.
In particular, they do not define a general-purpose noise model that could apply to any quantum circuit $C$.

Now we discuss two means to implement Shor's algorithm without using $R_k$ gates beyond $R_3$:

The Hadamard gate $H$, the controlled-not gate CNOT, and the gate $T = R_3$ form a complete set that (by the Solovay-Kitaev theorem) can feasibly approximate the state produced by any feasible quantum circuit plus QFT. Then the minimum angle of any individual operation is $2\pi/8 = \pi/4$.
The Hadamard and Toffoli gates form a universal set in the weaker sense of encoding real and imaginary parts of quantum amplitudes separately. This suffices to compute the factoring function via polynomial-size circuits using only real entries.
Idea 1 may only mask the issue, insofar as the resulting circuits must still approximate angles down to Coppersmith's unboundedly small magnitude $2\pi/2^b$. Both $H$ and $T$ are rotations of the Bloch sphere of periods 2 and 8, respectively. As such, each may be exactly physically realizable, along with their controlled versions and CNOT in higher-dimensional Bloch spheres.

However, $H$ and $T$ together generate an infinite subgroup of SU(2). The group has members that rotate through arbitrarily small angles. Jin-Yi says in his speculative concluding section:

It is true that using a fixed finite set of rotations of reasonable angles such as $\pi/4$ along various axes can compose to rotations of arbitrarily small angles. But my view is just that these compositional rules as specified by the group SU(2) must not be exact for physical reality.

Most in particular, let $G = HT$. If $G$ can be exactly realized, then any power $G^2$, $G^3$, … should be. But the rotation angle of $G$ is not a rational multiple of $\pi$, so the powers of $G$ alone form an infinite state space and include arbitrarily tiny rotations. Please see Jin-Yi's paper for other context and justifications on these points, plus related contentions by Mikhail Dyakonov.
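A hedged numeric companion to this point (our own check; the irrationality itself is a known fact that we do not prove here): compute the Bloch-sphere rotation angle of $G = HT$ from its trace, and watch the net angles of its powers fill the circle.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])
G = H @ T

# For U in SU(2), tr(U) = 2*cos(theta/2) with theta the rotation angle.
U = G / np.sqrt(np.linalg.det(G))        # rescale into SU(2)
theta = 2 * np.arccos(np.clip(U.trace().real / 2, -1.0, 1.0))
print("rotation angle / pi =", theta / np.pi)   # not a rational multiple of pi

angles = (np.arange(1, 20001) * theta) % (2 * np.pi)
print("smallest net rotation among G^1..G^20000:", angles.min())
```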
The circuits in idea 2 cannot approximate any (feasible) quantum state metrically, but they can emulate Shor's algorithm using only $0$ and $\pi$ as “angles.” They may, however, still involve quantum states with filigrees beyond physically realizable precision. In the coda to our own textbook, we speculate this already for the deterministic “functional superposition” component of Shor's algorithm.
All this and more was discussed already twenty-plus years ago in the “Sure/Shor separator” debate. The difference now is having Jin-Yi’s new work as a linchpin for the skeptical side. Non-robustness to noise in the “Coppersmith range” may be a wider phenomenon than his current results show.
In his last paragraph, Jin-Yi argues that quantum computing makes a fundamental departure from Alan Turing's condition that primitive steps are finite and fixed independent of the data size $n$. He mentions the free use of SU(2) but his point may apply as well to the step of placing a Toffoli gate anywhere in an $n$-qubit quantum circuit. This point is separate from issues of noise models, about which we have heard much from Gil Kalai including recently.
Open Problems
The issue is simple: Can quantum algorithms be made to work in the presence of gates that are making errors at Jin-Yi’s scaling? The obvious interesting open question is: As in classical computation, can we build circuits that can handle errors? See this and this on error-free computation.
This seems to be a wonderful question. Will the new results reshape debates on quantum computing and the polynomial Church-Turing thesis, or are they subsumed in matters already recently much discussed?
[added update about Gidney-Ekerå paper in third section] | |||||
2453 | dbpedia | question 1 | rank 14 | https://www.math.utu.fi/icalp04/godel2004.html | Gödel Prize 2004

2004 Gödel Prize
Maurice Herlihy, Nir Shavit and Michael Saks, Fotios Zaharoglou
The 2004 Gödel Prize for outstanding journal articles in theoretical computer science is shared between the papers:
"The Topological Structure of Asynchronous Computation"
by Maurice Herlihy and Nir Shavit,
Journal of the ACM, Vol. 46 (1999), 858-923,
and
"Wait-Free k-Set Agreement Is Impossible: The Topology of Public Knowledge"
by Michael Saks and Fotios Zaharoglou,
SIAM J. on Computing, Vol. 29 (2000), 1449-1483.
The two papers recognized by the 2004 Gödel Prize offer one of the most important breakthroughs in the theory of distributed computing.
The problem attacked is the complete understanding of asynchronous wait-free deterministic computation in the basic shared memory model. These papers demonstrate that one can avoid the inherent difficulty of analyzing a dynamic model, transforming it into a static one by associating computational tasks with simplicial complexes and translating the question of existence of a wait-free protocol into (distinct but related) topological questions about the complexes. This reformulation allows the introduction of powerful topological invariants, such as homologies, to show the impossibility of numerous tasks, including set-agreement and renaming.
The discovery of the topological nature of distributed computing provides a new perspective on the area and represents one of the most striking examples, possibly in all of applied mathematics, of the use of topological structures to quantify natural computational phenomena.
Call for Nominations
Call for Nominations in pdf-format
The Gödel Prize for outstanding papers in the area of theoretical computer science is sponsored jointly by the European Association for Theoretical Computer Science (EATCS) and the Special Interest Group on Algorithms and Computing Theory of the Association of Computing Machinery (ACM-SIGACT). This award is presented annually, with the presentation taking place alternately at the International Colloquium on Automata, Languages, and Programming (ICALP) and ACM Symposium on the Theory of Computing (STOC). The twelfth presentation will take place during the 2004 ICALP, July 2004 in Turku, Finland. The Prize is named in honor of Kurt Gödel in recognition of his major contributions to mathematical logic and of his recently discovered interest in what has become the famous "P versus NP" question. The Prize includes an award of $5000 (US).
AWARD COMMITTEE: The winner of the Prize is selected by a committee of six members. The EATCS President and the SIGACT Chair each appoint three members to the committee, to serve staggered three-year terms. The committee is chaired alternately by representatives of EATCS and SIGACT, with the 2004 Chair being an EATCS representative. The 2004 Award Committee consists of Giorgio Ausiello (University of Rome "La Sapienza"), László Babai (University of Chicago), Pierre-Louis Curien (CNRS, Paris 7), Zvi Galil (Columbia University), Juhani Karhumäki (Chair, University of Turku) and Jeff Ullman (Stanford University).
ELIGIBILITY: Any research paper or a series of papers published (not reprinted) in a recognized refereed journal by a single author or a team of authors in the period 1997-2003 is eligible. This extended period is in recognition of the fact that the value of fundamental work cannot always be immediately assessed. The research nominated for the award should be in the area of theoretical computer science. The term "theoretical computer science" is meant in a broad sense, and encompasses, but is not restricted to, those areas covered by ICALP and STOC. The Award Committee shall have the ultimate authority to decide whether a particular paper is eligible for the Prize.
NOMINATIONS: Nominations for the award should be submitted to the Award Committee Chair at the following address:
Professor Juhani Karhumäki
Department of Mathematics & Turku Centre for Computer Science
University of Turku
20014 University of Turku, FINLAND
email: karhumak@cs.utu.fi
tel.: 358-2-333 5613
fax: 358-2-333 6595
To be considered, nominations for the 2004 prize must be received by January 10, 2004. Nominations may be made by any member of the scientific community. A nomination should contain a brief summary of the technical content of the paper and a brief explanation of its significance. A copy of the research paper or papers should accompany the nomination. The work may be in any language. However, if it is not in English, a more extended summary written in English should be enclosed. Additional recommendations in favor of the nominated work may also be enclosed. To be considered for the award, the paper or series of papers must be recommended by at least two individuals, either in the form of two distinct nominations or one nomination including recommendations from two different people.
It is the duty of the Award Committee to actively solicit nominations from as broad a spectrum of the theoretical computer science community as possible, so as to ensure that potential award-winning papers are not overlooked. To this end, the Award Committee will accept informal proposals of potential nominees, as well as tentative offers to prepare formal nominations, should they be needed to fulfill the requirements that the paper have two separate recommendations.
SELECTION PROCESS: Although the Award Committee is encouraged to consult with the theoretical computer science community at large, the Award Committee is solely responsible for the selection of the winner of the award. In the case that the Award Committee cannot agree on a recipient, the prize may be shared by more than one paper or series of papers, and the Award Committee reserves the right to declare no winner at all. All matters relating to the selection process that are not specified here are left to the discretion of the Award Committee.
PAST WINNERS:
1993:
László Babai and Shlomo Moran, "Arthur-Merlin games: a randomized proof system and a hierarchy of complexity classes," Journal of Computer and System Sciences 36 (1988), 254-276.
Shafi Goldwasser, Silvio Micali and Charles Rackoff, "The knowledge complexity of interactive proof systems," SIAM Journal on Computing 18 (1989), 186-208.
1994:
Johan Håstad, "Almost optimal lower bounds for small depth circuits," Advances in Computing Research 5 (1989), 143-170.
1995:
Neil Immerman, "Nondeterministic space is closed under complementation," SIAM Journal on Computing 17 (1988), 935-938.
Róbert Szelepcsényi, "The method of forced enumeration for nondeterministic automata," Acta Informatica 26 (1988), 279-284.
1996:
Alistair Sinclair and Mark Jerrum, "Approximate counting, uniform generation and rapidly mixing Markov chains," Information and Computation 82 (1989), 93-133.
Mark Jerrum and Alistair Sinclair, "Approximating the permanent," SIAM Journal on Computing 18 (1989), 1149-1178.
1997:
Joseph Halpern and Yoram Moses, "Knowledge and common knowledge in a distributed environment," Journal of the ACM 37 (1990), 549-587.
1998:
Seinosuke Toda, "PP is as hard as the polynomial-time hierarchy," SIAM Journal on Computing 20 (1991), 865-877.
1999:
Peter W. Shor, "Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer," SIAM Journal on Computing 26 (1997), 1484-1509.
2000:
Moshe Y. Vardi and Pierre Wolper, "Reasoning about infinite computations," Information and Computation 115 (1994), 1-37.
2001:
Uriel Feige, Shafi Goldwasser, László Lovász, Shmuel Safra, and Mario Szegedy, "Interactive proofs and the hardness of approximating cliques," Journal of the ACM 43 (1996), 268-292.
Sanjeev Arora and Shmuel Safra, "Probabilistic checking of proofs: a new characterization of NP," Journal of the ACM 45 (1998), 70-122.
Sanjeev Arora, Carsten Lund, Rajeev Motwani, Madhu Sudan, and Mario Szegedy, "Proof verification and the hardness of approximation problems," Journal of the ACM 45 (1998), 501-555.
2002:
Géraud Sénizergues, "L(A)=L(B)? Decidability results from complete formal systems," Theoretical Computer Science 251 (2001), 1-166.
2003:
Yoav Freund and Robert E. Schapire, "A decision-theoretic generalization of on-line learning and an application to boosting," Journal of Computer and System Sciences 55 (1997), 119-139.
2453 | dbpedia | question 1 | rank 1 | https://en.wikipedia.org/wiki/Peter_Shor | Peter Shor | Wikipedia contributors

American mathematician
Not to be confused with the British politician Peter Shore.
Peter Williston Shor (born August 14, 1959) is an American professor of applied mathematics at MIT. He is known for his work on quantum computation, in particular for devising Shor's algorithm, a quantum algorithm for factoring exponentially faster than the best currently-known algorithm running on a classical computer.
Early life and education
Shor was born in New York City to Joan Bopp Shor and S. W. Williston Shor.[10][11] He grew up in Washington, D.C. and Mill Valley, California.[10] While attending Tamalpais High School, he placed third in the 1977 USA Mathematical Olympiad.[12] After graduation that year, he won a silver medal at the International Math Olympiad in Yugoslavia (the U.S. team achieved the most points per country that year).[13][14] He received his B.S. in Mathematics in 1981 for undergraduate work at Caltech,[15] and was a Putnam Fellow in 1978. He earned his PhD in Applied Mathematics from MIT in 1985.[16] His doctoral advisor was F. Thomson Leighton, and his thesis was on probabilistic analysis of bin-packing algorithms.
Career
After being awarded his PhD by MIT, he spent one year as a postdoctoral researcher at the University of California, Berkeley, and then accepted a position at Bell Labs in New Providence, New Jersey. It was there that he developed Shor's algorithm. This development was inspired by Simon's problem: he first solved the discrete log problem (which relates point-finding on a hypercube to a torus) and,
"Later that week, I was able to solve the factoring problem as well. There’s a strange relation between discrete log and factoring."[17]
Due to their similarity as hidden subgroup problems (HSP), Shor found a quantum factoring algorithm (Shor's algorithm) that same week, for which he was awarded the Nevanlinna Prize at the 23rd International Congress of Mathematicians in 1998[18][19] and the Gödel Prize in 1999.[20] In 1999, he was awarded a MacArthur Fellowship.[21] In 2017, he received the Dirac Medal of the ICTP, and for 2019 the BBVA Foundation Frontiers of Knowledge Award in Basic Sciences.[22]
Shor began his MIT position in 2003. Currently, he is the Henry Adams Morss and Henry Adams Morss, Jr. Professor of Applied Mathematics in the Department of Mathematics at MIT.[23] He also is affiliated with CSAIL.[24]
He received a Distinguished Alumni Award from Caltech in 2007.[15]
On October 1, 2011, he was inducted into the American Academy of Arts and Sciences.[25][26] He was elected as an ACM Fellow in 2019 "for contributions to quantum-computing, information theory, and randomized algorithms".[27] He was elected as a member of the National Academy of Sciences in 2002.[28] In 2020, he was elected a member of the National Academy of Engineering for pioneering contributions to quantum computation.[29]
In an interview published in Nature on October 30, 2020, Shor said that he considers post-quantum cryptography to be a solution to the quantum threat, although a lot of engineering effort is required to switch from vulnerable algorithms.[30]
Along with three others, Shor was awarded the 2023 Breakthrough Prize in Fundamental Physics for "foundational work in the field of quantum information."[9]
See also
Entanglement-assisted classical capacity
Keller's conjecture
Stabilizer code
Quantum capacity
Notes
Peter W. Shor at the DBLP Bibliography Server.
Peter Shor's Home Page at MIT.
Quantum Computing Expert Peter Shor Receives Carnegie Mellon's 1998 Dickson Prize in Science.
Peter Williston Shor at the Mathematics Genealogy Project
Peter Shor's results at International Mathematical Olympiad
The story of Shor's algorithm — YouTube.
Lectures and panels | ||||||
2453 | dbpedia | question 1 | rank 55 | https://blog.computationalcomplexity.org/2006/01/ | Computational Complexity | Lance Fortnow | Computational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarch

GUEST BLOGGER: Bill Gasarch

This is my last day guest blogging, so I'll end where I began, THREE points on LUDDITES.

I) Janos Simon corrected my history of Luddites, for which I thank him. If you are interested, go to HIS comment on MY post from Monday Jan 23 for a link to a very nice article.

II) My father and father-in-law offer an interesting contrast:

FATHER-IN-LAW (Engineering Major, career mostly in Business, now retired):
LUDDITE: Does not program his VCR. Not sure if he doesn't know how to or just doesn't want to. So he HAS to be home on Sunday to watch Desperate Housewives (a show I found distasteful- My father in law is hipper than I am).
NON-LUDDITE: Took a course on C at a local community college when he was 70. Pays all his bills on line.

FATHER (English Major, High School English Teacher and Vice Principal, now retired):
LUDDITE: Got a computer recently and still can't get email or pay his bills on line.
NON-LUDDITE: Uses his VCR to tape ALOT of shows. He needs it since he watches ALOT: West Wing, My Name is Earl, The Sopranos, Sex in the City when it was on (a show I find distasteful- My dad is hipper than I am), 6 feet under, Deadwood, all four Law and Orders, and all three CSI's, Without a trace, other stuff I can't recall. This from the man who restricted me, wisely, to no more than an hour of TV a night when I was a kid.

III) Stuart Kurtz emailed me some more questions for my Luddite quiz. I asked him if I could post them and he suggested asking for other inputs. No one replied, so here are his:

STUART BEGIN:
9) Do you write emails (or blog posts) in a) variable width fonts with formatting, b) variable width fonts without formatting, c) fixed width fonts, d) What's a blog?, e) What's email?, or f) What's writing?
10) Do you indicate emphasis by a) using italic or slanted font, b) using a bold faced font, c) metadiscourse, i.e., "I want to emphasize that... ", d) ALL CAPS, or e) Shouting and waving your arms.
11) Does your mouse have a) four buttons, b) three buttons, c) two buttons, d) one button, e) control characters are good enough for RMS, and they're good enough for me, or f) four feet and a tail.
12) What's your favorite programming language? a) Ruby or Python, b) Java, c) Lisp, d) C++, e) Awk, f) IBM-360 assembly language, g) C, h) Lisp, or i) graduate student. [I know Lisp occurs twice, but c and h are still different answers. Note that there's no point asking for Perl -- as Perl programmers can only write, not read.]
STUART END.

bill g.

P.S. I am supposed to say ``Now that I've guest blogged for a week I'm even more impressed with Lance getting a topic out every day'' But this is NOT TRUE. I was SO IMPRESSED with Lance in the first place that I can't be ``more impressed''.
GUEST BLOGGER: Bill Gasarch

TOPIC: How much is what we do influenced by non-scientific criteria? (BEFORE I START TODAYS BLOG- A REQUEST. EMAIL ME OTHER LUDDITE QUESTIONS- I WILL POST THE BEST ONES ON FRIDAY)

I) AN INCOMPLETE SUMMARY OF Thomas Kuhn's book The Structure of Scientific Revolution: For long periods of time a field of science will agree on the basic terms and problems of the field and will all work with that worldview (also called a paradigm). This is called Normal Science. This is GOOD since if people were working with different paradigms progress would be hard. BUT there comes a time when some problems just cannot be solved using the usual techniques. There will be an effort to jam this problem and some approaches to it into the current paradigm, but eventually, the old paradigm will fall and a new one will take its place. The new one will help to answer some old questions, and pose new ones that could not have even been asked in the old one. Newtonian Phy vs Einstein is the usual example, though there are others on a much less cosmic scale.

II) People after him have misconstrued his work as saying that science has NO objective truth, that it ALL depends on the Paradigm. This is, of course, hogwash. More so when they claim that it's a tool by the elite to dominate the masses, or some such (look up SOKAL HOAX on google for one view of this view).

III) But a fair question CAN be raised along these lines: How MUCH of what scientists do depends on political or personality or other factors VERSUS how much is driven by objective scientific principles? A few examples:

a) What if in response to Russell's paradox the math world essentially axiomatized what set theorists now call V=L (every object is constructable). Then we would know LOTs more about L, we would KNOW that the Axiom of Choice is true, and we would know that Cont Hyp is true. We might know that there were these weird other models that are unnatural where CH is false, but we wouldn't care. (Some Set Theorists tell me this could never happen- that people would be interested in other models. They are wrong.)

b) What if in response to the Banach Tarski paradox mathematicians rejected some version of the axiom of choice? This would have been quite possible before AC began being used in so many places.

c) The people who believe in constructive methods only (e.g., Brouwer) are portrayed as cranky old men holding onto an old paradigm that no longer worked. But if they had won then people like Hilbert would be viewed as crazy rebels who fortunately were never taken seriously. (This one I am less sure of- nonconstructive techniques are SO powerful that I think they may be inevitable.)

d) If Computing Devices had been invented either earlier or later than they were, it would have had a drastic effect on Theory. While we think that P vs NP is a natural problem, it only came out once the technology was in place. Was it inevitable that it arise? Probably. Was it inevitable that it be considered important? Hard to say.

e) There is ALOT of work in Quantum Computing because (i) Peter Shor proved FACTORING in Quantum P hence giving the problem new interest, or (ii) There is (or actually was) lots of Grant money in it. (of course these two are linked)

f) Do schools like MIT have too big an influence on what gets studied? (They have less influence now than they used to.)

MORE GENERALLY, if I had the time and the energy I would do research on history/phil of math asking the question HOW MUCH DO EXTERNAL FORCES AFFECT WHAT IS STUDIED? And I would do it WITHOUT an ax to grind.
GUEST BLOGGER: Bill Gasarch (I will be guest blogging this week while Lance is on Vacation.)

Are you a Luddite? The original Luddites were workers who, scared of lower wages via technology, destroyed factory machines. This was around 1811. Their leader was General Ned Ludd. (Not sure if General was an honorary title.) TODAY the term has come to mean someone who does not adapt to technology or does not like technology. If you are NOT one, you can use Google to find out more about them.

Are you a Luddite? I offer the following questions and let you score yourselves.

1) At a conference do you use a) Powerpoint with fancy animation and pictures off the web, b) Powerpoint with nice backgrounds, but nothing much else, c) pdf files, d) physical slides made using latex, e) physical slides made using magic markers and overlays, f) physical slides without overlays, g) chalk, h) draw diagrams in the sand with a twig.

2) Same as question 1 but for large classroom lectures (over 50), small classroom lectures (under 10), seminars (8 people who actually know something).

3) For writing papers do you use a) LaTeX (or some other package), b) Typewriter (YOU HAVE A TYPEWRITER? MIGHT BE WORTH SOMETHING ON EBAY!), c) Handwritten and give to your secretary to type (YOU HAVE A SECRETARY? MIGHT BE WORTH SOMETHING ON EBAY!), d) Quill pen and inkwell on parchment.

4) When listening to talks do you a) Take notes with an e-pen that automatically puts it online, b) Take notes in an e-notebook, c) Take notes in a p-notebook (thats paper), d) Not take notes at all, e) Fall asleep.

5) When you applied to grad school did you a) Check out the website of the school, b) Ask your roomate who also majored in CS and Math, c) Ask your roomate who majored in Political Science, d) Apply to schools you heard were good, e) Apply to schools randomly (time bounded Kolmogorov Random).

6) If you need a result that is already known do you a) Goto Google, b) Goto the library, c) Goto your own file cabinet, d) Rederive the result by yourself.

7) Which of these might you most likely say? a) When is the next version coming out so I can update?, b) I'll update in 2 years (and you do), c) I'll update in 2 years (but you don't), d) You can have my chalk when you pry it from my cold dead hands.

8) Do you play music on a) MP3's, b) CD's, c) LP's, d) 78's, e) Wax Cylinders (WAX CYLINDERS! MIGHT BE WORTH SOMETHING ON EBAY!).

bill g.

Postscript: Thanks to my collegue Jack Lutz for catching that I spelled Luddite wrong originally. I used him instead of a spell checker, and note that the error he found would not have been discovered with a spell checker.
I am on vacation next week and I've lined up Bill Gasarch as a guest Blogger in my absence. But today we have a guest post from Kamal Jain. This is a long post but well worth reading through.
This post is prompted by recent developments and discussions on electronic publishing, which themselves are prompted by the book-scanning initiatives of Google and the Open Content Alliance. Although I am not talking about paper books being converted into electronic format, I like the idea of having the books available in a searchable electronic format. And certainly this is a must-have feature for any newly written book.
Recently, I got two invitations to write for books. The first was to write a book on Network Coding. I felt that I was not the best person so I did not accept. If I had, then I would have insisted on a free electronic copy. Second, I got an invitation to co-write a chapter on Cost Sharing with Mohammad Mahdian for a book, Algorithmic Game Theory, edited by, Noam Nisan, Tim Roughgarden, Eva Tardos and Vijay Vazirani. I agreed to this because I felt that such a book is a great idea and I could make a positive contribution. My selfish motive was to spread knowledge of the subject to which I have contributed. And, I guess that was also the expected motive of the other contributors. This I could say because the explicit incentive offered in the invitation to the contributors was that the editors (originally Eva and Vijay only) have made an excellent deal with a publisher, Springer Verlag. The deal they have is $40 for up to six hundred pages. I am not sure whether it is a paper back or hard-cover. But that was not my focus anyway. My focus is the absence of any electronic publishing component in the deal. Because of that, I felt this is not such a good deal in today's electronic age. On one side we are talking about scanning paper books, starting electronic journals, writing wikis, blogs and on the other we do not even make a deal on electronic publishing of newly written books. I wrote an email back to the editors that I do not think Springer deal is a good one. I was hoping to get back a response and start a discussion with them on this, which IMO, was obligatory for them because I point blank disagreed with the incentive they explicitly offered. At this point I am assuming that there is no electronic publishing agreement with the publisher. This was the background.
Now, I realize that this is not something to discuss with the editors in private. This is an important issue which is likely to reoccur in other situations. So I requested this space from Lance so that I could discuss with the whole community. Following are some of my random thoughts and I like to hear everybody's thoughts too, random or not :-) Please press the comment button and put your thoughts in writing so that Springer and other publishers would know what we want from them.
There are at least two kinds of books. First kind, written by individual authors. Second kind, written collaboratively by the community like the above proposed Algorithmic Game Theory. Individual authors write books for various reasons and it is up to them what kind of deal they lock with the publishers. The books written by a community has a predetermined goal and that is to spread the knowledge of the subject. It is not up to one or two persons to lock whatever deal they think is great. So the community must form unspoken guidelines to facilitate the negotiation between editors and publishers. These unspoken guidelines must include minimum desires of the community. Such a set of guidelines would have resolved the prisoner's dilemma for me. I did not like the absence of electronic publishing agreement. If I decline the invitation then the book still has gone ahead without my contribution and if I accept the invitation, which I did, then I know that my efforts are not optimally used. But in case it were a common expectation from the editors to negotiate an electronic publishing agreement, then I know that I could reject the invitation because others invitee would also do the same, thereby insisting that the editors go back to the publisher and make an electronic publishing agreement. One would ask why publishers have any electronic publishing agreement. For information, Reinhard Diestel's book, Graph Theory, has a free searchable and hyperlinked electronic edition and further this book is published by Springer Verlag. Let us first discuss what Springer provides to us and what we provide to Springer. Then we should discuss whether we are getting the optimal deal.
Springer does the marketing which sells the book.
Springer provides the brand name which sells the book.
Springer provides the brand name which makes the line in our resume about the book a bit bolder.
Springer prints and binds the book, for which the buyer pays.
Springer gave peanut financial support ($2000) to pay students to draw pictures. This fund is for those contributors who do not have their own funds.
We give to Springer
Free content, and we transfer copyright so that they can legally publish the content. I am assuming there are no royalties involved in a community-written book.
Word of mouth marketing.
Use our own funds for other expenses.
Our university or companies resources.
What are the possible deals we could have:
1. Status Quo. Springer publishes the book and sells them. Takes the copyright and does not provide free electronic copy. In future, if Springer wants, makes more money from electronic copy too.
2. Reinhard Diestel model. Provides free searchable and hyperlinked electronic edition. A user can't conveniently print the pages.
3. Springer publishes the book and sells them. Takes an exclusive time bound license, say one year. After one year, Springer still keeps the exclusive license on the paper publishing, but we could put the free electronic copies on our webpages.
4. Springer publishes the book and sells them. Takes the exclusive right to publish the book in paper format — that's all it needs to legally publish the book. We keep all other rights. We put the book in electronic format on our webpages or at some cheap servers.
Note that in all the above 4 options Springer is still getting something for free — the content. So it still is a good deal for Springer. 1. is the best deal for Springer. The only reason Springer could insist on 1. is because we do not insist with unity (Reinhard probably insisted very hard). If we insist then we could possibly get them to agree on 4. It is an irony that this book is about Game Theory, and the game theory principles are not used to get a better deal. Mohammad suggested that even if Springer wins on getting the first deal, we could still put our chapters on our webpages. This does not make sense because of three reasons. First, there are going to be cross-references. Second, the chapters together provide a synergy and that's the reason we all agreed to put our chapters together. Third, if we could all put chapters on our webpages then why can't we compile them together and put on a single webpage. A book is more than the sum of its chapters. A question which is typically raised about free electronic version is the following. If people could download the book for free then why would they buy from Springer? I think people would still buy, libraries would buy, professors would buy and anybody who needs to read a significant part of the book would buy. Still, for a moment let us assume that people won't buy the paper book in the presence of a free electronic version. In this case, it simply means people want only the free electronic version and not the paid paper version. That is having only the electronic version is what everybody desires. Then, under this assumption, why even deal with Springer?
Because, as mentioned above, Springer provides some value. We could still avoid Springer and create these values ourselves. We anyway will be spending couple of thousand hours on this book (my experience on working with Vijay is that it takes at least few hours per page). There are at least two ways to avoid Springer.
1. We go to a small publisher and get the book published. Transfer the exclusive right to publish the book in paper format. We keep all other rights.
2. We publish only the electronic version.
What role would Springer play?
Springer does the marketing. We will discuss this later to see how we could do the marketing ourselves.
Springer provides the brand name to sell the book. I think the brand name of the editors and the authors is much more in this case. This is also the case with any good book written by a community.
Springer provides the brand name to make the line related to this book in our resume a bit bolder. First, most authors contributing in the book already have enough lines in their resume that they can do with one fewer line. Second, this line is minor for a community written book. Each person contributes a chapter, may be equivalent to writing one or two journal papers.
Springer prints and binds the book. I do not know how much it costs to print and bind the book. "The Search" by John Battelle is a three hundred page hard-bound book and available at 16 bucks at Amazon. Well The Search probably will sell more than this technical book. But it shows that $40 for Algorithmic Game Theory could very well be an optimum profit making point for Springer rather than a favor as they want to portray to us. A small publisher would be able to beat that even in the presence of competing free electronic version.
The last is the peanut financial support. I am sure we could arrange $2000 without Springer. Even if we fail, a grad student would be happy to contribute this for a credit. If I do not personally have time to draw pictures, then I do not mind having a co-author who does that for me. A picture is worth a thousand words. If I am claiming authorship for writing a thousand words then anybody who draws pictures deserves the equal credit.
So the only value Springer provides is marketing. There are various ways we could do that too.
We create a pamphlet and a poster which we distribute to the program chair of various conferences.
Put the electronic version at one place. Let each of the contributors link to it. If there are fifty links from places like Cornell, Georgia Tech, Stanford, then on searches related to the keywords in the book, the book should show up at the top.
Let Citeseer crawl the book, let Google crawl the book, let us upload it on Wikipedia.
Even if it is not sufficient then we could market for money via search engine paid listing. We could raise the money by having only one or two ads in the book, let us say in the content and index pages. If we have an electronic version we could even have Google Adsense ads at the book download page. Certainly Google Adsense would put ads for academic people. In this case, if we are anyway buying something we could buy through those ads.
One question which one could raise is that many people in the world still live on the other side of the digital divide. But such people do not have $40 either. The solution for them is to have a publisher in India or China publish this book and sell it to these people.
Pre-bottom line is we give more to Springer than it is giving back in return. Game theoretically it is not a fair solution and we could do better. I am not sure whether there is any electronic publishing deal which the editors of this book have with the publisher, if they had then they probably would have told me. In any case this posting is about many others future books which will be written co-operatively. Bottom line is, any book which is not written for money must be available free of charge in an electronic format.
The Internet has led to a complete shift in how we deal with storing and sharing information, but when it comes to academic papers the changes we see are ad hoc and added on a piecemeal basis.
Suppose we could start from scratch and create a proper system for research papers. Here is how I would envision such a system.
XML has become the standard for storing information on the internet; it gives a simple machine-readable method for creating tree structures. Academic papers have such a tree structure (sections, subsections, theorems, proofs, etc.) that lends itself well to XML. Mathematical equations should also be written using XML; we already have a MathML specification for doing this.
An academic paper XML file would only have content information, not any formatting information. For this we would use XSL files, themselves XML files that describe how to format the document. You would use different XSL files depending on whether the paper is viewed on the screen or printed, and different publishers can develop their own XSL files to have consistent looking papers. LaTeX, the system used by most theoretical computer scientists, has similar capabilities, but because LaTeX does not enforce any standards, changing style files often requires considerable editing.
Researchers would not have to create these XML files directly (unless they wanted to) but could use word processors that save documents according to those standards.
For citations we should just point to a unique identifier for a paper; we should no longer need to cut and paste bibliographic information. Given the identifier, the formatting program can go online, fetch the information, and create a human-readable bibliography, with web links where appropriate. Most publishers already use Digital Object Identifiers (DOIs); we just need DOIs to point to an XML file giving bibliographic information, to assign DOIs to unpublished papers, and to have a method for a DOI to point to a later version of a paper.
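As a hedged sketch of what identifier-based citation lookup can look like, the snippet below asks the DOI resolver for machine-readable bibliographic data via HTTP content negotiation, a mechanism today's DOI infrastructure does support for most registered DOIs; the DOI string used here is only a placeholder.

import json
import urllib.request

def fetch_bibliography(doi):
    # Ask https://doi.org for citation data rather than a landing page.
    req = urllib.request.Request(
        "https://doi.org/" + doi,
        headers={"Accept": "application/vnd.citationstyles.csl+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (placeholder DOI; substitute a real one to run):
# entry = fetch_bibliography("10.1000/placeholder")
# print(entry["title"], [a.get("family") for a in entry.get("author", [])])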
The author information on academic papers is often useless (like my postal address) or goes out of date as academics change locations. Each academic researcher should get their own DOI-like number pointing to an XML file with personal and contact information; then we need only add these DOIs to the papers themselves.
Most importantly, we need enforced standards for each of these XML documents (via XML schemas). If we can truly separate the content of documents from their formatting, and make that content available in an easily machine-readable form, researchers can focus more on the writing and less on the style, and we will also open the door to applications that we cannot even imagine today.
[Source: "Jews in Computer & Information Science," JINFO, https://www.jinfo.org/Computer_Info_Science.html]

This section concerns contributions to the development of information science and technology at its logical (as opposed to its hardware) level. Specifically, this section deals with areas such as computation theory, artificial intelligence, the statistical theories of information, communication, and systems control, cryptography, operations research, computer and network architectures, and algorithm and software design. The general level of this contribution is reflected in the current ~45% Jewish membership in the Computer and Information Sciences division of the US National Academy of Sciences and in the percentages of Jewish recipients shown below for several of the most prestigious awards in the field. Two of the four individuals generally recognized to have been the principal architects of the Information Age were the Jewish mathematicians Norbert Wiener and John von Neumann.1
Some of the more notable Jewish contributions are listed below. (The names of non-Jewish scientists and engineers mentioned in the accompanying discussion have been denoted with the superscript "+" in order to avoid confusion.)
The interpretation of thermodynamic entropy as an information metric by Leo Szilard. Szilard's 1929 analysis of the Maxwell's demon paradox "is now considered to be the earliest known paper in what became the field of 'information theory' in the 1950s and 1960s." 2 Other important information metrics were formulated by John von Neumann, Alfréd Rényi, Solomon Kullback, and Richard Leibler. The von Neumann entropy, e.g., is the quantum generalization of Szilard's classical information measure and is one of the fundamental concepts in quantum information theory.
The introduction of the diagonal argument proof method by Georg Cantor*. This method is central to the derivation of the incompleteness and noncomputability results of Gödel+, Turing+, Church+, and Post that lie at the foundation of theoretical computer science. In a 1936 paper, Emil Post described a mechanical definition of computation, known as the Post machine, which is equivalent to the Turing machine introduced by Alan Turing+ in a paper that appeared several months later. Post had understood the undecidability implications of such a definition as early as 1921, but had hesitated to publish and lost priority to Gödel+, who approached the problem from a very different perspective in his 1931 paper. Post was also one of the four principal founders of the theory of recursive functions, which is of immense importance in theoretical computer science.3
The logical design of Colossus, the first all-electronic, digital, programmable computer by Max Newman.* Although Colossus was not a general-purpose computer and had only limited programmability, it represented an important milestone. Newman, a Cambridge University professor of mathematics, headed the "Newmanry," a special code-breaking unit at Bletchley Park in England during World War II. In this capacity, he formulated the logical design of a machine to mechanize the cryptanalysis of the German Lorenz Cipher, which was used by the Nazi high command to encrypt its highest priority communications. The first instantiation of this machine, called Heath Robinson, was based on a combination of electromechanical, electromagnetic-relay, and vacuum tube switches. Tommy Flowers,+ who had worked on the engineering design of Heath Robinson, subsequently argued, and then brilliantly demonstrated, that a much faster and more reliable version of the machine could be realized all-electronically, using just vacuum tubes. The resulting Colossus machines played a critical role in securing Allied victory in Europe and were influential in the post-war development of computers in England.4 (Contrary to what is sometimes claimed, Alan Turing,+ who was Newman's protégé, had relatively little direct involvement with Colossus, although his ideas were extremely influential. Newman later declined an OBE appointment in protest against the treatment accorded Turing+ by the postwar British government.)
The design of the logical architecture employed in virtually all modern computers by John von Neumann. Von Neumann's 1946 paper "Preliminary Discussion of the Logical Design of an Electronic Computing Instrument" has been described as "the most influential paper in the history of computer science ... the ideas it contains, collectively known as the von Neumann machine, have provided the foundation for essentially all computer system developments since that date."5 The machine that von Neumann designed and had constructed at the Institute for Advanced Study (IAS) in the late 1940s was widely replicated in the development of many other early computer systems, including SEAC (US National Bureau of Standards), ILLIAC (University of Illinois), ORDVAC (Aberdeen Proving Ground), JOHNNIAC (RAND Corporation), MANIAC (Los Alamos National Laboratory), AVIDAC (Argonne National Laboratory), ORACLE (Oak Ridge National Laboratory), WEIZAC (Weizmann Institute of Science), and the IBM 701 (IBM's first mass-produced, commercial mainframe computer).6 The IAS computer project was intended by von Neumann primarily to demonstrate the utility of computers in the solution of scientific and engineering problems, an objective at which it indeed succeeded, creating the field of modern numerical weather prediction and solving some of the most difficult problems in radiation hydrodynamics associated with the design of thermonuclear weapons. Von Neumann invented the computerized random number generator and co-invented the Monte Carlo method. He also invented the theory of system fault tolerance and the cellular automata model of computation. The universal von Neumann constructor, a generalization of the universal Turing machine that emerged out of von Neumann's theory of self-reproducing automata, is one of the foundational concepts in the theoretical study of the biomolecular nanotechnology of living systems.
The invention of parallel supercomputing architectures by Stephen Unger, Daniel Slotnick, David Schaefer, and Włodzimierz Holsztyński. Unger, Slotnick, Schaefer, and Holsztyński are four of the "eight men [who] dominate the history of SIMD computer architectures." 7 SIMD (single instruction, multiple data) refers to the basic parallel processing technique employed in the earliest supercomputers.8 Unger was the first to propose and explore such architectures in the late 1950s. Slotnick designed SOLOMON in the early 1960s and built the first parallel processing prototypes. He was later the architect of Illiac IV, the first important parallel supercomputer, which had up to 256 processing elements. Built with 64 processing elements in the early 1970s with ARPA (now DARPA) funding and operated by NASA, Illiac IV remained the world's fastest computer until its shutdown in 1981. In the late 1970s and early 1980s, Schaefer initiated and managed the development of NASA's Massively Parallel Processor (MPP), the first truly massively parallel supercomputer, with 16,384 processing elements. Holsztyński designed the Geometric-Arithmetic Parallel Processor (GAPP) in 1981. GAPPs with hundreds of thousands of processing elements are used today in real-time video image processing applications such as image enhancement and noise reduction, video data compression, and format and frame rate conversion.
The co-discovery of NP-completeness by Leonid Levin. Levin and Stephen Cook+ independently discovered and proved what is now referred to as the Cook-Levin theorem, the central result concerning the P = NP? question, which is the major open problem in theoretical computer science. Richard Karp introduced the terms "P" and "NP" and defined NP-completeness (although not the term itself) in its present form. He also identified the decision problem formulations of many well-known, combinatorially intractable problems as being NP-complete. Levin, Karp, and Manuel Blum are considered to be three of the six founders of the field of computational complexity theory.
The invention of context-free languages by Noam Chomsky. This work was based on Emil Post's theory of production systems in mathematical logic. It is the basis of the BNF notation widely used to specify the syntax rules of programming languages. Chomsky's hierarchical classification of formal languages initiated the field of formal language theory in computer science.
The co-invention of BASIC by John Kemeny. Kemeny and Thomas Kurtz+ developed this popular programming language. At least one-third of the nine-person team that developed FORTRAN under John Backus+ at IBM were Jewish. Also at IBM, Adin Falkoff collaborated with Kenneth Iverson+ on the design and development of the array processing language APL (acronym for "A Programming Language"). Four of the six principal designers of COBOL, Howard Bromberg, Norman Discount, Jean Sammet, and William Selden, were Jewish. COBOL was used to run an estimated 80% of the world's business systems at the turn of the century. Although its usage has been in gradual decline since then, it is still widely employed in business, financial, and government systems. Ada, an advanced programming language adopted by the US Department of Defense as its standard high-level computer programming language in the 1980s and 1990s, was designed by Jean Ichbiah. LISP, the second-oldest high-level programming language still in use (primarily in artificial intelligence research), was invented by John McCarthy* in 1958. Barbara Liskov was awarded the 2008 ACM Turing Award for fundamental advances in programming language design. The ACM press release noted that her innovations "are now the basis of every important programming language since 1975, including Ada, C++, Java, and C#."
The invention of the MINIX operating system by Andrew Tanenbaum. MINIX was the precursor to, and inspiration for, the widely used Linux operating system.
The invention of the computer spreadsheet by Dan Bricklin and Robert Frankston. Bricklin and Frankston's VisiCalc spreadsheet was the first "killer app." The Lotus 1-2-3 spreadsheet program, the most successful software product of its time, was developed by Jonathan Sachs and Mitchell Kapor.
The invention of the computerized word processor by Evelyn Berezin. Berezin, who was inducted into the National Inventors Hall of Fame in 2022, invented the Data Secretary, the first computerized word processor, in 1971. She is also credited with developing the first office computer, the first computerized banking system, and the United Airlines computerized reservation system, the largest data processing system of its time. Other important word processors were designed by Charles "Nick" Corfield (Adobe FrameMaker) and Richard Brodie* (Microsoft Word).
The co-founding of the field of artificial intelligence (AI) by Marvin Minsky, Herbert Simon*, and John McCarthy*. (Allen Newell+ is also considered to have been one of AI's four principal founders.9) Six of the ten inductees into the IEEE Computer Society's Intelligent Systems Magazine AI Hall of Fame are, or were, Jewish or of Jewish descent (Noam Chomsky, Edward Feigenbaum, John McCarthy*, Marvin Minsky, Judea Pearl, and Lotfi Zadeh*). Major approaches to machine learning, the now dominant approach to artificial intelligence, were pioneered by Jews, including Bayesian Networks (Judea Pearl), Support Vector Machines (Vladimir Vapnik and Alexey Chervonenkis), Deep Learning (Frank Rosenblatt and Yoshua Bengio, together with Geoffrey Hinton+ and Yann LeCun+), Evolutionary Computing (Lawrence Fogel), and the Probably Approximately Correct (PAC) Model (Leslie Valiant).10 Minsky, Simon, McCarthy, Feigenbaum, Pearl, Bengio, and Valiant have received seven of the eleven Turing Awards given thus far for work in artificial intelligence.
The creation of ChatGPT by Ilya Sutskever. Sutskever, a co-founder and the chief scientist of OpenAI, is the principal designer of the large language model GPT-4, on which ChatGPT is based. The extraordinary ability of this deep learning AI language model to "understand" textual input and generate textual output that is both syntactically and semantically precise has shocked even many AI researchers.
The development of computer algebra (symbol manipulation) programs by Jean Sammet (FORMAC), Carl Engelman (MATHLAB), Joel Moses (MACSYMA), and Stephen Wolfram (Mathematica).
The invention of reversible computation theory by Rolf Landauer. Reversible computation circumvents the thermodynamic limits on irreversible computation established by John von Neumann, and is one of the foundations of quantum computing. The ballistic architecture, or Fredkin gate, model of reversible computation was introduced by Edward Fredkin.
The invention of quantum computing by Paul Benioff, Richard Feynman, Yuri Manin,* and David Deutsch.
The invention of DNA computing by Leonard Adleman.
The invention of fuzzy logic by Max Black and Lotfi Zadeh* (independently).
The invention of algorithmic complexity by Ray Solomonoff. Also termed Kolmogorov complexity or algorithmic information theory, Solomonoff's 1964 work was later arrived at independently by Andrei Kolmogorov+ (1965) and Gregory Chaitin (1969).
The invention of the Monte Carlo method by Stanislaw Ulam and John von Neumann. This statistical numerical method is one of the cornerstones of computer simulation science. Von Neumann invented the first computer-based random number generator for use in Monte Carlo simulations. The so-called Metropolis Monte Carlo algorithm, widely used in statistics and computational physics, was largely devised by Marshall Rosenbluth, based in part on ideas from Edward Teller and John von Neumann.11
The invention of nondeterministic algorithms by Michael Rabin. Such algorithms employ Monte Carlo methods to provide, for many problems whose exact solution is computationally intractable, efficiently computable answers that are correct with high (but less than one hundred percent) probability. Rabin's probabilistic primality testing, e.g., is essential to the practical implementation of RSA public-key cryptography.
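To make the idea concrete, here is a compact Python sketch of the Miller-Rabin test that underlies Rabin's method. If n is composite, each random base exposes it with probability at least 3/4, so k independent rounds leave at most a 4^(-k) chance of a wrong "prime" verdict.

import random

def is_probable_prime(n, k=20):
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:          # write n - 1 = d * 2^s with d odd
        d //= 2
        s += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False       # this base witnesses compositeness
    return True                # probably prime

print(is_probable_prime(2**61 - 1))   # True; 2^61 - 1 is a Mersenne prime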
The invention of the SIMPLEX linear programming algorithm by George B. Dantzig. Linear programming (LP), invented independently by Dantzig and Leonid Kantorovich, is a powerful optimization technique that is widely used in economics and engineering. It has been estimated that, aside from database operations such as sorting and searching, LP consumes more computer time than any other mathematical procedure.12 The SIMPLEX algorithm remains LP's fundamental numerical solution technique.
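For a sense of what a linear program looks like in practice, here is a minimal sketch using SciPy's linprog (an assumed dependency; SciPy's solver, rather than Dantzig's original tableau code, does the work, but the model is the same). The toy problem maximizes 3x + 2y subject to x + y <= 4, x + 3y <= 6, and x, y >= 0.

from scipy.optimize import linprog

# linprog minimizes, so the objective coefficients are negated.
res = linprog(
    c=[-3, -2],
    A_ub=[[1, 1], [1, 3]],
    b_ub=[4, 6],
    bounds=[(0, None), (0, None)],
)
print(res.x, -res.fun)   # optimum at x = 4, y = 0 with value 12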
The invention of the ellipsoid method of convex optimization by Naum Shor and, independently, by Arkadi Nemirovski and David Yudin. This technique, which was successfully employed by Leonid Khachiyan+ to prove the polynomial-time complexity of linear programming, underlies most modern results concerning the computational complexity of convex optimization programs. The ellipsoid method provided the first effective solver for semidefinite programs (which are encountered in many engineering applications) and has led to significant advances in combinatorial optimization.
The invention or co-invention of five of CiSE's "Top Ten Algorithms of the Century" by Stanislaw Ulam, John von Neumann, Marshall Rosenbluth, Edward Teller, George Dantzig, Leonid Kantorovich, Cornelius Lanczos, I. J. Good, Leslie Greengard, and Vladimir Rokhlin, Jr. The January/February 2000 issue of Computing in Science & Engineering, a joint publication of the American Institute of Physics and the IEEE Computer Society, assembled a list of "the ten algorithms with the greatest influence on the development and practice of science and engineering in the 20th century." In addition to the Monte Carlo method and the SIMPLEX algorithm discussed above, the top ten algorithms included the Krylov subspace iteration method for the solution of large systems of linear equations (Lanczos, together with Magnus Hestenes+ and Eduard Stiefel+), the Fast Fourier Transform (FFT) (Lanczos, together with G. C. Danielson+ in 1942, and independently by I. J. Good in 1958 and by James Cooley+ and John Tukey+ in 1965),13 and the fast multipole algorithm for the solution of many-body problems (Greengard and Rokhlin).
The invention of the Wiener filter by Norbert Wiener. The Wiener filter is an optimal filter for extracting signals from noise in stationary stochastic systems and is one of the central results in statistical communication theory, a field pioneered by Wiener. (A version of the Wiener filter was also formulated independently by Andrei Kolmogorov+.) The nonlinear, recursive Wiener filter, or Kalman filter, which extends the Wiener filter to nonstationary systems for use in tracking and guidance, was first formulated by Peter Swerling in 1959.14 Wiener and Alexander Khinchine independently derived the Wiener-Khinchine theorem, another central result in statistical communication theory.
The invention of statistical decision theory by Abraham Wald. Among other applications, statistical decision theory plays an important role in radar, control, and communication. Its minimax decision rules derive from John von Neumann's theory of optimal strategies (theory of games).
The invention of dynamic programming by Richard Bellman. This procedure solves sequential, or multi-stage, decision problems and is one of the foundations of modern control theory. It also constitutes the basis for many powerful algorithms, including the backpropagation algorithm used to train neural networks in machine learning and the Viterbi algorithm, invented by Andrew Viterbi, that is used to decode convolutional codes employed in error correction and in CDMA and GSM digital cellular telephony.
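As a small worked example of dynamic programming, here is a sketch of the Viterbi algorithm on a toy hidden Markov model; all states, observations, and probabilities below are invented for illustration.

# V[t][s] = probability of the best state path ending in state s
# after the first t+1 observations; back[t][s] remembers the argmax.
def viterbi(obs, states, start_p, trans_p, emit_p):
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][r] * trans_p[r][s] * emit_p[s][obs[t]], r)
                for r in states
            )
            V[t][s] = prob
            back[t][s] = prev
    best = max(states, key=lambda s: V[-1][s])   # backtrack from the end
    path = [best]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

states = ("rain", "sun")
print(viterbi(
    obs=("walk", "shop", "clean"),
    states=states,
    start_p={"rain": 0.6, "sun": 0.4},
    trans_p={"rain": {"rain": 0.7, "sun": 0.3}, "sun": {"rain": 0.4, "sun": 0.6}},
    emit_p={"rain": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
            "sun": {"walk": 0.6, "shop": 0.3, "clean": 0.1}},
))  # ['sun', 'rain', 'rain']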
The co-invention of public-key cryptography by Martin Hellman. Hellman and Whitfield Diffie+ devised the Diffie-Hellman algorithm for secure key distribution over nonsecure channels.
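A toy Python sketch of the exchange follows; the prime is deliberately tiny (real deployments use groups of 2048 bits or more), and all variable names are ours.

import secrets

p = 4294967291          # a small prime (the largest below 2^32); toy-sized
g = 2                   # public generator

a = secrets.randbelow(p - 2) + 2   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 2   # Bob's secret exponent

A = pow(g, a, p)        # Alice sends A over the open channel
B = pow(g, b, p)        # Bob sends B over the open channel

# Each side combines its own secret with the other's public value,
# arriving at the same shared key g^(a*b) mod p:
assert pow(B, a, p) == pow(A, b, p)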
The co-invention of RSA by Adi Shamir and Leonard Adleman. RSA (which is named for its three co-inventors, Shamir, Adleman, and Ronald Rivest+) is the most widely used public-key algorithm.
The invention of elliptic curve cryptography (ECC) by Neal Koblitz and Victor S. Miller (independently). Based on concepts rooted in algebraic geometry, ECC is widely deployed as the leading successor to RSA. Relative to RSA, it provides greater cryptographic strength with much smaller cryptovariables. (It does, however, share the same potential vulnerability to quantum computational attack.)
The formulation of the learning with errors (LWE) problem and encryption system by Oded Regev. The application of LWE, a computationally "hard" problem, has led to revolutionary developments in cryptography. CRYSTALS-Kyber, the recent finalist in the US National Institute of Standards and Technology's six-year international competition to find a quantum-resistant, or "post-quantum," algorithm standard for general encryption, is based on LWE. CRYSTALS-Dilithium, one of the three algorithms selected to become quantum-resistant digital signature standards, is also based on LWE. (FALCON, one of the other two digital signature finalists, is based on the NTRU cryptographic system, which was invented by Joseph Silverman, Jeffrey Hoffstein, and Jill Pipher.+) LWE is also the basis for the recent breakthrough in the construction of efficient, fully homomorphic encryption (FHE) schemes by Zvika Brakerski, Craig Gentry,+ and Vinod Vaikuntanathan.+ FHE techniques permit the processing of encrypted data without the need to first decrypt it, thus permitting the secure processing of sensitive data on non-secure computing platforms.
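For intuition, here is a toy, utterly insecure secret-key sketch of LWE-based bit encryption in Python (Regev's actual scheme is public-key, publishing many samples and summing random subsets of them, but the noisy-inner-product core is the same); all parameters are chosen tiny so the noise analysis is easy to see.

import random

n, q = 16, 257                 # toy dimension and modulus: far too small!
s = [random.randrange(q) for _ in range(n)]        # secret key

def encrypt(bit):
    a = [random.randrange(q) for _ in range(n)]    # public random vector
    e = random.randint(-4, 4)                      # small noise
    b = (sum(x * y for x, y in zip(a, s)) + e + bit * (q // 2)) % q
    return a, b                                    # a noisy inner product

def decrypt(ct):
    a, b = ct
    d = (b - sum(x * y for x, y in zip(a, s))) % q
    # the small noise keeps d near 0 for bit 0 and near q/2 for bit 1
    return 0 if d < q // 4 or d > 3 * q // 4 else 1

assert all(decrypt(encrypt(bit)) == bit for bit in (0, 1, 1, 0))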
The invention of quantum cryptography by Stephen Wiesner. Although quantum key distribution was invented in the mid-1980s by others, it was specifically acknowledged to have been inspired by Wiesner's circa 1970 work that established the basic principles underlying the use of quantum mechanics to achieve information security.
The development of mathematical and statistical cryptanalysis by William Friedman. Friedman's innovations are ranked amongst the greatest in the history of cryptology; he supervised the breaking of the Japanese diplomatic code PURPLE in 1940 and directed US cryptanalysis during World War II. Other important World War II cryptologists included Solomon Kullback, Leo Rosen, and Abraham Sinkov in the US and Max Newman*, I.J. Good, and Leo Marks in England. Newman and Good were instrumental in the design of Colossus, which was used to break the Lorenz cipher employed by the German high command. Marks, the chief cryptologist of the Special Operations Executive (SOE) of MI6, revolutionized the one-time pad.
The invention of cryptocurrency by David Chaum. In his seminal 1982 paper, Chaum established the concept of secure digital cash, the first cryptocurrency. Chaum's invention anticipated by several decades the now widespread recognition of the issue of electronic privacy in financial and other online transactions.
The co-invention of blockchain by David Chaum and, independently, by Stuart Haber (together with W. Scott Stornetta+). Chaum gave the first full technical description of a blockchain in his 1982 doctoral dissertation, but never published the concept in a journal or conference proceeding. It, therefore, went largely unnoticed. In a 1991 paper, Haber and Stornetta+ introduced another embodiment of this concept for a shared, tamper-proof, decentralized transaction ledger. Their company, Surety Technologies, founded in 1994, created the first and longest running commercial blockchain. Blockchain technology is widely claimed to have the potential to revolutionize e-commerce (and more) by greatly increasing the efficiency, confidentiality, and trust with which online transactions are conducted and recorded.
The invention of convolutional codes by Peter Elias. Important decoding algorithms for these error correction codes were invented by Barney Reiffen, Robert Fano, and Andrew Viterbi.
The co-invention of the Reed-Solomon error correction code by Gustave Solomon. Reed-Solomon and Viterbi- or Fano-decoded convolutional codes, or hybrid concatenations of the two, are probably the most widely used error correction techniques at present.
The invention of the LZ data compression algorithm by Jacob Ziv and Abraham Lempel. Although LZ coding was not the first data compression technique (the first such technique having been invented, independently, by Robert Fano and Claude Shannon+), it is today the most widely used in commercial systems. It underpins PDF, GIF, TIFF, ZIP, and other widely used file formats.
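To illustrate the dictionary idea behind the Lempel-Ziv family, here is a toy LZ78-style codec in Python (a sketch of one family member; the LZ77/LZW variants used by the file formats above differ in detail).

# The dictionary of previously seen phrases is built on the fly; output is
# a list of (phrase index, next character) pairs.
def lz78_compress(text):
    dictionary = {"": 0}
    out, phrase = [], ""
    for ch in text:
        if phrase + ch in dictionary:
            phrase += ch
        else:
            out.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)
            phrase = ""
    if phrase:                     # flush any unfinished phrase
        out.append((dictionary[phrase[:-1]], phrase[-1]))
    return out

def lz78_decompress(pairs):
    phrases = [""]
    for idx, ch in pairs:
        phrases.append(phrases[idx] + ch)
    return "".join(phrases[1:])

msg = "abababababa"
assert lz78_decompress(lz78_compress(msg)) == msg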
The development of automated, electronically switched telephone networks by Amos Joel. Joel received both the 1989 Kyoto Prize ("Japan's Nobel Prize") and the 1993 US National Medal of Technology for work that revolutionized telephone switching systems worldwide. Joel's 1972 US Patent No. 3,663,762, "Mobile Communication System," is the basis of the switching technology that made cellular telephone networks possible.
The co-invention of spread spectrum communications by Hedy Lamarr. Lamarr (the Hollywood actress) and George Antheil+ (a Hollywood composer) received US Patent No. 2,292,387, "Secret Communication System," in 1942 for the invention of frequency-hopped spread spectrum. The digital form of spread spectrum that is widely used in cellular communications (CDMA) was developed by Qualcomm, a company founded by the information theorists Irwin Jacobs and Andrew Viterbi. Jacobs received the US National Medal of Technology in 1994 and Viterbi received the US National Medal of Science in 2007. Both were recognized for their pioneering innovations in digital wireless communications. Joel Engel also received the Medal of Technology in 1994 as one of the two "fathers of the cellular phone" for his work on the development of the basic network architecture used worldwide in cellular telephony. (The cell phone itself, as opposed to cellular switching fabrics, etc., was invented by Martin Cooper. Precursor technologies, such as the "walkie-talkie," the pager, the cordless telephone, and CB radio, were largely pioneered by Irving "Al" Gross.)
The co-invention of the Internet by Leonard Kleinrock, Paul Baran, Vinton Cerf,* and Robert Kahn. Together with Kleinrock, Baran, Cerf, and Kahn, Donald Davies+ and Lawrence Roberts+ are the six individuals most frequently cited as principal inventors of the Internet. Kleinrock, Cerf, Kahn, and Roberts+ were awarded the US National Academy of Engineering's half-million dollar Draper Prize in 2001 "for the development of the Internet." Baran, Kleinrock, Davies+, and Roberts+ received the first IEEE Internet Award in 2000 for "their early, preeminent contributions in conceiving, analyzing and demonstrating packet-switching networks, the foundation technology of the Internet." Cerf, Kahn, and Baran received US National Medals of Technology, the former two in 1997 and the latter in 2007. Kleinrock was awarded the US National Medal of Science in 2007. Cerf and Kahn co-invented the TCP/IP protocol for integration of heterogeneous networks, which is the basis of the Internet's "inter-networking" architecture. They shared the 2004 ACM Turing Award for this work, and in 2005 each received the US Presidential Medal of Freedom.
The invention of Alohanet (precursor to Ethernet) by Norman Abramson. Alohanet was a packet-switched research network that solved the major problem of packet interference, or "packet collision." Alohanet was further developed by Robert Metcalfe,+ working at the Xerox Palo Alto Research Center, into Ethernet (which Metcalfe+ originally called the Alto Aloha network), the standard method used in local area computer networking. Radia Perlman's spanning tree protocol, which solved the problem of broadcast storms due to network switching loops, was the critical enabler that allowed Ethernet to realize high levels of robust network complexity.
The invention of Google by Sergey Brin and Larry Page*. The algorithm employed by Google, the most powerful and widely used search engine on the Internet, is an adaptation of the citation frequency "impact factor" metric originally invented in the 1950s by Eugene Garfield to rank the relative influence of scientific researchers, articles, and journals. A search algorithm very similar to Google PageRank, called HITS (Hypertext Induced Topic Search), was devised almost simultaneously by Jon Kleinberg at IBM. In his papers, Kleinberg credited the 1976 mathematical work of Gabriel Pinski and Francis Narin, as does the PageRank patent. Pinski and Narin had shown how to formulate and compute Garfield's relative influence in terms of a graph theoretic matrix eigenvalue problem. Similar mathematical techniques for calculating the relative influence of individuals in social networks or of production sectors in national economies can be found, respectively, in the 1953 work of statistician Leo Katz and in the 1941 work on input-output analysis by the Nobel Prize winning economist Wassily Leontief.*
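To make the eigenvalue formulation concrete, here is a minimal power-iteration sketch in Python on a made-up four-page web; the damping factor 0.85 is the commonly quoted choice, and everything else is illustrative.

links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
pages = sorted(links)
n = len(links)
d = 0.85                                  # damping factor

rank = {p: 1.0 / n for p in pages}
for _ in range(100):                      # power iteration toward the
    rank = {                              # dominant eigenvector
        p: (1 - d) / n
           + d * sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        for p in pages
    }

for p in pages:
    print(p, round(rank[p], 3))           # page C accumulates the most rank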
NOTES
1. According to the prominent historian of technology George Dyson, "there were four essential prophets whose mathematics brought us into the Information Age: Norbert Wiener, John von Neumann, Alan Turing and Claude Shannon." See "The Elegance of Ones and Zeroes," by George Dyson in The Wall Street Journal, 21 July 2017.
2. See Genius in the Shadows: A Biography of Leo Szilard, by William Lanouette (Scribner's, New York, 1992, p. 63).
3. See "Emil Post and His Anticipation of Gödel and Turing," by John Stillwell in Mathematics Magazine (Mathematical Association of America, Washington, DC, Vol. 77, No. 1, Feb. 2004, pp. 3-14). See also http://www-gap.dcs.st-and.ac.uk/~history/Mathematicians/Post.html.
4. See "Max Newman: Mathematician, Codebreaker and Computer Pioneer," by William Newman in Colossus: The First Electronic Computer, edited by Jack Copeland (Oxford, Oxford and New York, 2004).
5. Encyclopedia of Computer Science (Fourth Edition), edited by Anthony Ralston, Edwin D. Reilly, and David Hemmendinger (Wiley, Chichester, England, 2003, p. 1841).
6. See Turing's Cathedral: The Origins of the Digital Universe, by George Dyson (Pantheon/Random House, New York, 2012, p. 287).
7. Parallel Supercomputing in SIMD Architectures, by R. Michael Hord (CRC Press, Boca Raton, FL, 1990).
8. Although most supercomputers are now based on MIMD (multiple instruction, multiple data) architectures, their individual processing nodes generally embody small-scale SIMD capabilities. The still largely hypothetical quantum computer can be thought of as an SIMD machine with exponentially many virtual processors.
9. See AI: The Tumultuous History of the Search for Artificial Intelligence, by Daniel Crevier (Basic Books, New York, 1993, p. 26), or Encyclopedia of Computer Science (Fourth Edition), edited by Anthony Ralston, Edwin D. Reilly, and David Hemmendinger (Wiley, Chichester, England, 2003, p. 91).
10. The 2019 IEEE Xplore article "Who Is the Father of Deep Learning?" states: "We conclude that Frank Rosenblatt developed and explored all the basic ingredients of the deep learning systems of today, and that he should be recognized as a Father of Deep Learning, perhaps together with Hinton, LeCun and Bengio who have just received the Turing Award as the fathers of the deep learning revolution." Lawrence Fogel initiated the field of evolutionary computing in 1960 and is considered to be the father of evolutionary programming, the first of four basic approaches that were eventually formulated and subsequently merged into the field of evolutionary computing.
11. See "Marshall Rosenbluth and the Metropolis algorithm," by J. E. Gubernatis, in Physics of Plasmas (12, 057303, 2005). According to this article, Nicholas Metropolis'+ only contribution to the algorithm's development was making available the use of MANIAC, the Los Alamos computer he had constructed as a replica of the computer that von Neumann designed and built at the Institute for Advanced Study. Of the other co-authors of the Metropolis algorithm paper, "arguably the most significant publication in the history of computational physics," Mici Teller initiated the assembly language programming work to code the algorithm, Arianna Rosenbluth+ took that over and produced from scratch the actual program used to test the algorithm, Edward Teller made "the crucial suggestion" to employ ensemble, rather than temporal averaging and how to do that, and Marshall Rosenbluth actually designed the algorithm, incorporating insights of his own. According to Gubernatis, the "key," as opposed to the "crucial," idea underpinning the algorithm's power was the principle of detailed balance, which was implicit in the original 1953 paper, but not made explicit until Rosenbluth formulated a general proof of the algorithm's validity in 1956. Rosenbluth went on to become one of the world's leading plasma theorists and a winner of both the Enrico Fermi Award (1985) and the US National Medal of Science (1997).
12. See http://www-gap.dcs.st-and.ac.uk/~history/Mathematicians/Dantzig_George.html.
13. Cooley and Tukey are generally credited with invention of the "modern" FFT. Their 1965 paper only referenced the prior work of I. J. Good, whose FFT algorithm was both somewhat different and less efficient. In a January 1992 paper in IEEE SP Magazine, entitled "How the FFT Gained Acceptance," Cooley reviewed other prior work and concluded that "it appears that Lanczos had the FFT algorithm" in 1942. He holds out the possibility that Gauss may have had it as early as 1805, however. Gauss's work was not published until after his death, and was in Latin and employed somewhat archaic notation, which made it difficult to decipher. It was only published in 1866 in his collected works. An analysis by Michael Heideman, Don Johnson, and C. Sidney Burrus has concluded that Gauss did indeed have the basic elements of the modern FFT in 1805, but due to its obscurity, his formulation appears to have had no influence on subsequent work.
14. See the next-to-last paragraphs in https://archive.siam.org/news/news.php?id=526 and in the obituary published in the November 2000 issue of Physics Today (pp. 75-76). See also the discussion in the Appendix to Tracking and Kalman Filtering Made Easy, by Eli Brookner (Wiley, New York, 1998, pp. 383-387).
* Georg Cantor and Herbert Simon had Jewish fathers; Simon's mother was of partial Jewish descent, which was also the case, at a minimum, for the mother of Georg Cantor. Max Newman and Vinton Cerf had Jewish fathers and non-Jewish mothers, while Richard Brodie, Wassily Leontief, Yuri Manin, John McCarthy, Larry Page, and Lotfi Zadeh have, or had, Jewish mothers. For more information, see the footnotes to these and other listings in Jewish Computer and Information Scientists, or in the cases of Leontief and Manin, in Jewish Economists and Jewish Mathematicians, respectively.
+ Non-Jewish.
[Source: "Quantum Computers: A Brief Assessment of Progress in the Past Decade," Combinatorics and more (Gil Kalai's blog), May 26, 2022, https://gilkalai.wordpress.com/2022/05/26/waging-war-on-quantum/]

In this post I give a brief assessment of progress in the past decade, triggered by a recent article in Forbes Magazine that mentions my view on the matter.

Waging War On Quantum - A Forbes Article by Arthur Herman

Arthur Herman is a popular historian and a senior fellow at the Hudson Institute. In Forbes Magazine he "comments on quantum computing and AI and American national security", and his recent Forbes article Waging War on Quantum (thanks, Alexander Vlasov) starts as follows:
Quantum computing will never work. Keeping enough qubits stable long enough to do any significant calculating or processing, is a mathematical impossibility. The whole idea that one day quantum computers will discover new miracle drugs, or crack public encryption systems, is a mirage. Even worse, it’s a hoax.
That’s been the message from so-called quantum skeptics for a decade or more, including physicists like Gil Kalai of Hebrew University and Mikhail Dyakonov of the University of Montepellier—all in spite of the fact that quantum computers have continued to grow in sophistication and qubit power. Most experts now agree it’s not a question if a large-scale quantum will emerge that can break into public encryption systems using Shor’s algorithm, but when.”
The first paragraph gives a reasonable description of my views; however, I never referred to the whole idea of quantum computing as a hoax. Regarding the second paragraph, it is indeed correct that quantum computers have continued to grow in sophistication and qubit power; however, my theory (based on a computational complexity argument) is that progress in reducing the error rate will reach a wall, and that recent progress merely approaches this limit. Let me elaborate a little on the developments of the past decade as I see them.
Before moving to my assessment, I would like to note that Arthur Herman's article offers an outrageous conclusion. He suggests that skepticism of quantum computers (and of the company IonQ) puts the skeptics' countries at risk. In my opinion, the militant rhetoric of the title and of the conclusion is very inappropriate.
Assessment of progress in the past decade
The past quantum computing decade has been characterized by notable progress, adjusted expectations, larger investments, much enthusiasm, and some hype. The overall picture is unclear and might become clearer 5-10 years from now.
The following picture (click to enlarge) describes the shift in the community view over the last decade (as I see it).
On the left you can see David DiVincenzo’s famous 7-steps road map to quantum computers. DiVincenzo put forward these steps in his 2000 paper The physical implementation of quantum computation, and the above picture on the left is a graphic description of these steps in a 2013 review paper by Michel Devoret and Rob Schoelkopf. The caption under the Figure asserts that “Superconducting qubits are the only solid-state implementation at the third stage, and they now aim at reaching the fourth stage (green arrow). In the domain of atomic physics and quantum optics, the third stage had been previously attained by trapped ions and by Rydberg atoms. No implementation has yet reached the fourth stage, where a logical qubit can be stored, via error correction, for a time substantially longer than the decoherence time of its physical qubit components.” The fourth step “logical memory with (substantial) longer lifetime than physical qubits” looked to many like a near term goal ten years ago.
One important development of the last ten years was the introduction of building NISQ computers and achieving "quantum supremacy" (and related tasks like high "quantum volume") as intermediate goals toward DiVincenzo's step four. (See the picture on the right.) Of course, there is nothing wrong with setting intermediate goals; we do it all the time, and it can be very fruitful.
For me, from the skeptical point of view, these intermediate goals were an opportunity, allowing me to present a clear computational theoretic argument for why “quantum supremacy” is out of reach and to connect the problem with the theory of noise-sensitivity and noise-stability and with Fourier methods that I and my colleagues developed in the 90s.
Adding the intermediate goal of quantum supremacy also represented a much slower timetable than what people had previously anticipated. For example, nine years ago, in 2013, John Martinis gave a lecture at QSTART, the opening conference of our HUJI quantum science center. At that time, John expected to have the ability to build distance-3 and distance-5 surface codes within a few years, and to demonstrate logical gates and logical qubits with lower error rates some years later. John also mentioned the ability to control 20 qubits within one month (to which Ray Laflamme commented that it is going to be a long month). All these targets are today still out of reach. It is undisputed that considerably lower noise rates are required even for achieving distance-3 surface codes, and it is still not possible to have good control of 20-qubit (and perhaps even 10-qubit) quantum computation.
Of course, John Martinis himself was the leader of the Google efforts towards “quantum supremacy” which are now being carefully evaluated, and his vision and technology from 2013 was important for the Sycamore NISQ experiments. Let me mention that Google’s fantastic “quantum supremacy” claims were largely (but not fully) refuted.
There was a similar level of optimism from various other researchers. It was expected that coherence time would increase by a factor of ten every three years and there was a proposed “double exponential law” prediction for the classical computational power required to simulate quantum devices as time proceeds. I personally don’t regard these specific claims as hype but rather as (at times, over-the-top) reasoned optimism, but both the reasoning and the predictions themselves should carefully be examined.
NISQ computers are interesting, and they allow interesting quantum physics experiments. Herman asserts that “quantum hybrid systems are making the qubit revolution something that’s happening now, not just a distant dream” and this echoes hopes of several researchers in academia and industry. My analysis asserts that NISQ computers are, from the computational complexity perspective, primitive classical computational devices with inherent chaotic behavior, and therefore, I don’t see how hybrid systems and the interface with conventional computers would turn them into useful computational devices. (They can still be useful for quantum physics experimentation.)
Let me repeat: slower progress than anticipated is very common, setting new intermediate goals is both common and welcome. By themselves they do not imply that the target of “large-scale quantum computers that can break into public encryption systems using Shor’s algorithm” is unrealistic, and indeed many experts in the field believe that it is a matter of time for this ultimate goal to be reached. My view is different, I try to explain my argument to other experts, and to offer experimental predictions and theoretical implications. There are good reasons to hope that the matter will be tested experimentally in the years to come, but my assessment is that the experimental picture from the past decade is not clear.
IonQ, trapped ion quantum computation, and Elon Musk
Herman’s article was triggered by a 183-page document written by a group called “Scorpion Capital.” The document attacks Maryland-based quantum computer company IonQ and among various concerns it also briefly mentions my and Michel Dyakonov’s positions about quantum computers. I myself share the common view that ion-trap methods for quantum computers form a major avenue and that Chris Monroe (a co-founder of IonQ) is a major player in this direction. I don’t know much about IonQ’s specific efforts, but I would expect that large scale investment is required to put ion-trap methods to test and I personally would like to see it being tested. So I would be quite pleased to see Elon Musk deciding to buy IonQ or the Israeli trapped ion QC company of Roee Ozeri (or both) 🙂 and to make trapped ion technology his quantum computing signature. Incidentally, the comment section of my 2018 Quanta Magazine interview presented an interesting exchange between Monroe and me (starting here).
A few more remarks:
1) Herman’s article raises several other interesting issues like when (and if) is the appropriate time to transfer to “post quantum cryptography” protocols.
2) There are a few researchers skeptical of quantum computers who actually conduct research and write papers (and books) about it. (There are others who regard the idea as absurd nonsense of absolutely no interest.) A notable researcher who has written several important papers in the skeptical direction since the late 90s is Robert Alicki from the University of Gdansk.
3) Here is Herman’s crazy conclusion: “No one is saying the Scorpion Capital short-sellers are in Chinese pay, or that skeptics like Dyakonov and Kalai are knowingly putting their countries at risk. But waging war on the U.S. quantum industry can have serious consequences, unless quantum companies and labs show that they are not intimidated, and reassure the public that the quantum future doesn’t rest on hype but significant achievements—achievements that will make our country and our world safer, stronger, and more confident about our future as a whole.”
4) I changed the title to reflect the main topic of the post.
Update 2 (June 6, 2022):
When (and if) is the right time to transfer to post quantum cryptography?
Here is my uneducated recommendation (which I keep separate, to the best of my ability, from my overall quantum computing skepticism). I take it as an assumption that the aim is to maximize communication security, and that the reason for transferring to "post-quantum" cryptographic protocols is that large-scale quantum computers would enable breaking most current cryptosystems. Note that pressure to transfer to new protocols and standards earlier rather than later may reflect commercial or other interests rather than the objective of maximizing communication security. (I take no view on these other interests.)
The efforts to build post-quantum cryptography are intellectually interesting: I am thrilled to see Oded Regev's LWE (learning with errors) and Ajtai-Dwork lattice-based cryptography getting used. I certainly support investing (large amounts of resources) in developing post-quantum cryptography. The question is when to transfer to new protocols, and my recommendation is:
Wait on implementing new encryption standards based on post-quantum cryptography until DiVincenzo's stage 4 is firmly established.
(For example, until distance-5 surface codes are built.)
Note that moving forward from good-quality quantum error-correcting codes (like distance-5 surface codes) to the very good quality quantum error-correcting codes needed for quantum fault tolerance (like distance-11 surface codes), implementing logical quantum gates, and later on implementing fault tolerance is likely to be a slow process that may take quite a few decades.
The crucial thing to consider is that transferring to new cryptographic methods is by itself a serious communication security risk even (in fact, especially) when it comes to classical attacks. Giving ample time to check new suggested protocols somewhat reduces this risk.
Update 1 (June 1, 2022): Very relevant to the discussion above: Yosi Avron told me about a very recent breakthrough for the distance-3 surface code that is reported in the paper Realizing repeated quantum error correction in a distance-three surface code, by S. Krinner et al. (Here is a link to the arXiv version; the researchers are from ETH, the Jülich center, Quebec, and other places.) Michael Rothschild (and others) told me about the recent breakthrough by a team led by Thomas Monz from the University of Innsbruck and Markus Müller from Aachen University and Forschungszentrum Jülich in Germany for creating entangled logical qubits. The paper is Entangling logical qubits with lattice surgery by Erhard et al.; see also here.
Also, now that Scott Aaronson outsourced this very post of mine for quantum computing commentary, I will end with a quote of Scott taken from the interesting Facebook thread related to this post. These days, Scott is busy dealing with deplorable rude remarks and attacks over the Internet pointed at him. In my view, and this brings us back also to Herman’s piece, belligerent attacks are not appropriate in general and certainly are not constructive in academic discussions.
Scott commented that he chose to “sit this one out for now” regarding IonQ, and I asked him if he is still a believer in Google/Sycamore. Here is Scott’s response (that I find rather reasonable; see further discussion between us in the thread).
A further update (Jan. 2024) Here is an interesting skeptical perspective The quantum house of cards about quantum computers by Xavier Waintal.
Late addition (June 21, 2022): I looked more carefully at the Scorpion report, which contains various concerns and claims of varying nature and quality. Overall it is a strange document. Here is a link to the report, a link to IonQ's response, a news item about a class action based on the report, and a blog commentary. Of course, one needs to be skeptical about factual matters that are included in the report, and one also has to take IonQ's claims with a grain of salt.
The 183-page "Scorpion Capital" document devotes roughly two pages (16 and 17) to the views of Michel Dyakonov and mine, and to the possibility that quantum computation might be impossible in principle. (It devotes a few additional pages to the "hype problem" of quantum computers and the gaps between expectations and achievements. Most of the report deals specifically with IonQ.) Overall I think it is a good idea for investors to be aware of these views and this possibility. Investing in quantum computing is a large-risk, huge-gain endeavor, and in-principle obstacles are a small part of the overall risk in deciding on a particular investment. Note that quantum computing enthusiasts like John Preskill and Scott Aaronson have also considered failure-in-principle a serious (while remote) possibility. For example, Scott Aaronson wrote in 2006 that "It's entirely conceivable that quantum computing will turn out to be impossible for a fundamental reason," and I am not aware of him changing this view since 2006. Another prominent quantum computing researcher, Aram Harrow, is certain that quantum computers are possible in principle but nevertheless opined in 2012 (in his opening statement of our debate) that "There are many reasons why quantum computers may never be built," which is, for investors, an equally bad scenario as an in-principle obstruction.
So, as a scientist, if you believe there is a 20% chance that quantum computers are impossible in principle, this already gives a strong incentive to explore this direction. But for an investor it hardly matters, because the risks specific to any particular investment avenue among the many are usually higher, and they matter more.
A few words about hype: I never cared too much about hype, and I thought that hype itself is over-hyped. An important issue is the clear-cut scientific question of whether quantum computers are possible, and there are other important scientific and technological issues related to quantum computation.
The paragraph from Scorpion report referring (also) to me starts as follows:
One prominent scientist in the field after another – including ex employees of IonQ we interviewed – echoes this view, forcefully stating that quantum computers can’t even work in principle, given that quantum decoherence undermines the entire theory. Anyone searching with the keywords “quantum computing” and “decoherence” or “hype” quickly encounters a barrage of papers by quantum computing insiders –researchers who have dedicated their careers only to arrive at the bitter truth.
I am not aware of researchers who have dedicated their careers to make quantum computers a reality only to arrive later at the bitter truth that quantum computers can’t even work in principle. (And I certainly am not aware of a “barrage of papers” by quantum computing researchers that have changed their mind.) In any case, from my point of view more important than the opinions of researchers (or policy-makers or people from the general public) is what the research itself tells us (both theoretical and experimental research).
The report goes on and describes me
As an example, we note an interview with a mathematics professor at Yale and in Israel, who has studied decoherence for a decade. He states he was initially “quite enthusiastic, like everybody else” and then expounds on decoherence and “the mirage” of quantum computing.
As for me, in the 1990s I was “quite enthusiastic” but I was not a researcher in this area. My quantum computing research was from the start (2005) in the skeptical direction. I thought that this direction was neglected and I also thought it might be related to my theory of noise sensitivity and noise stability. (A connection was only found in 2013.)
I was and still am quite enthusiastic about quantum computation and quantum information as an academic field and I was certainly happy when my friends and colleagues Dorit Aharonov and Michael Ben Or proved (with two other teams) the “threshold theorem” that shows how quantum fault-tolerant computation is possible for low rates of noise.
To avoid portraying a too pastoral yet inaccurate picture of academic sisterhood, I should mention the obvious: most research efforts in quantum computing (as well as in post-quantum cryptography) are considerably more important if quantum computers can be built, and similarly, my own research would have considerably more value if my argument and predictions are correct :).
[Source: "The Dawn of Rigour in the Art of Programming," Bhāvanā, https://bhavana.org.in/the-dawn-of-rigour-in-the-art-of-programming/]

Donald Knuth is Professor Emeritus of The Art of Computer Programming at Stanford University. Hidden inside the very title is the key to the philosophy underlying Knuth's approach to computing: that it takes a delicate blend of artistry and analytical reasoning to create works that stand the test of time. Be it his bestselling volumes of The Art of Computer Programming (TAOCP) or his founding contributions to "The Analysis of Algorithms", he has frequently broken new ground; many such works are now considered the ideological bedrock upon which newer structures have been built. Indranath Sengupta and Sudhir Rao engage him in a wide-ranging conversation, starting from his earliest days to the present. The conversation is preceded by an introductory biographical sketch of our protagonist by Sudhir Rao.
Biographical Sketch
Admiring a vast collection of sculptures displayed in Frogner Park, Norway’s largest tourist attraction located in its capital Oslo, Donald Ervin Knuth finds himself in a deeply reflective mood. He is in a section of the park dedicated to celebrating the life and work of the artist Gustav Vigeland, all of it tastefully laid out over 80 sprawling acres, right in the park’s centre. Ruminating on his life and art, he instinctively knows how deeply satisfying it must have been for Vigeland to bring a life-long pursuit to a fulfilling closure. An artist could not possibly ask for more, he mutters to himself.
Knuth’s thoughts on Vigeland provide us with a peek into a creative mind, diligently understanding and appreciating the work of another. Knuth’s own contributions to the world of computing too, have similarly been hailed by experts as seminal and path-breaking, and have stood the stringent test of time. While most others would be happy to lay claim to just a fraction of the body of work that Knuth has produced over the past six decades, it must be stressed that Donald Knuth has always operated many levels above ordinariness.
Credited with pioneering contributions to how software is conceptualized, written, compiled, and finally processed, Knuth has impacted the entire lifecycle of modern-day programming. The artist inside him has even ensured that the final output of such programs is pleasingly displayed via aesthetically designed typographic fonts, themselves omnipresent today. Further, by marrying what he affectionately calls the Art of Computer Programming with the rigour of mathematics, he has engendered a flourishing subject known as The Analysis of Algorithms, thereby impacting the theoretical core of computer science. This latter achievement is no less path-breaking for theoretical computer science than the invention of the microscope for life sciences, the telescope for astronomy, or the endoscope for medical sciences. Through rigorous analysis and penetrating insights, Knuth has peered deeply and incisively into what before him were mostly opaque interiors of a subject that had just then arrived; and in doing so, he ended up profoundly impacting the twin landscapes of scientific computation and digital technology. His invited address to the prestigious International Congress of Mathematicians (ICM) held in Nice, France, in 1970 was presciently titled The Analysis of Algorithms. Barely 32 then, Knuth was already earning recognition for his work in the highest academic forums, and the invitation to speak at Nice was confirmation that, while his chosen area of work was the still-nascent subject of Computer Science and its applications, his insights were already sufficiently rigorous and interesting for the exacting world of mathematics to sit up and take notice.
In the five decades since Knuth’s ICM 1970 address, the indispensability of rigour in computer science is now well established. Arieh Iserles at the Department of Applied Mathematics and Theoretical Physics, University of Cambridge, had this to say while reviewing Knuth’s The Art Of Computer Programming–Vol 4 in the June 2007 issue of SIAM Review:
We all take for granted this universal effectiveness of mathematics. Yet it is important to pause once in a while and ask ourselves what exactly renders mathematics into such a universal tool. The answer obviously is rigour. It is perhaps not surprising that computer scientists are the major beneficiaries of mathematical rigour: unlike protein-based life forms, computers will not settle for less.
Knuth, incidentally, had himself expressed similar sentiments a full three decades earlier, in December 1974, in an article published in The Communications of the ACM:
It is certainly desirable to make computer programming a science, and we have indeed come a long way in the last fifteen years. Fifteen years ago computer programming was so badly understood that hardly anyone even thought about proving programs correct; we just fiddled with a program until we ‘knew’ it worked. At that time we didn’t even know how to express the concept that a program was correct, in any rigorous way. It is only in recent years that we have been learning about the processes of abstraction by which programs are written and understood; and this new knowledge about programming is currently producing great payoffs in practice. The point is that when we write programs today, we know that we could in principle construct formal proofs of their correctness. This scientific basis is resulting in programs that are significantly more reliable than those we wrote in former days, when intuition was the only basis of correctness.
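To make that idea concrete for readers, here is a minimal sketch of the kind of reasoning Knuth describes: a loop whose correctness can in principle be argued from a stated invariant. The function and its annotations are our own illustration, not an example drawn from Knuth's article.

```python
def total(values):
    """Sum a list, annotated with the loop invariant that justifies it."""
    s, i = 0, 0
    # Invariant: s == sum(values[:i]) and 0 <= i <= len(values)
    while i < len(values):
        s += values[i]
        i += 1
        # Invariant restored: s again equals sum(values[:i])
    # At exit i == len(values), so the invariant gives s == sum(values).
    return s

assert total([1, 2, 3]) == 6
```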
The circumstances that transported Knuth from cold, bucolic Wisconsin to the sunny and vibrant campuses of California where computing would take some of its greatest leaps, make for a fascinating story to both newcomers and cognoscenti alike. Donald Ervin Knuth was born on 10 January 1938 in Milwaukee, Wisconsin to a Lutheran couple, both deeply steeped in their neighbourhood community, faith, and music. The first of two children, Knuth gleefully inherited and soaked in all three parental traits—namely, strong fellowship and bonding with members of a chosen community, a lifelong kinship with the church, and an abiding love affair with classical instrumental music: the piano, and the organ, to be sure. A curious kid, Knuth took to reading quite early. Not greatly into sports, books offered him a grand distraction. He recalls an incident from 1943 that happened inside a large public library, some distance away from his Milwaukee home. Unescorted by his parents that particular day, and riding a streetcar alone, Knuth reached the library only to lose himself completely amidst books, all curled up in a cosy corner. The librarian, unaware that the five-year-old was still holed up inside, locked the library after working hours. A frantic search ended pleasantly though, with Knuth’s parents luckily managing to locate him, but not without a big fright!
Curious and studious, he participated in competitions aimed at middle and high school kids, and even won some of them. These victories, and the efforts put in to master a whole lot of the required extra-curricular material, prepared him mentally for the journey ahead. Case Institute of Technology, as it was called then, was highly recommended by his family members, and Knuth started his undergraduate education there. The years he spent on the Case campus turned out to be hugely beneficial, spurring him on to pursue mathematics and computer science, twin interests that were both vigorously vying for primacy in his mind. On the Case campus, serious sporting activity was anyway ruled out, owing to his general disinterest in sports. Yet, he was intrigued by sports in a purely academic way. Closely following Case’s basketball team’s progress in the inter-university league while working as its manager, Knuth got hooked on an interesting problem involving the performance prediction of the team, and its individual players. Knuth dived headlong into the nitty gritty, realizing that people up to that time had been considering only the points that players had scored, not the other things that basketball players do. He noticed that ball possession was an important element that never got into the statistics; neither did the number of fumbles, steals, rebounds, and missed shots. In those days, the scorekeeper noted how many fouls a player had made, but only because nobody could continue in the game after making five fouls. So Knuth decided to count many more things than had ever before been counted. After each game, he would then go over to the computing centre and punch cards, so that he could feed those newfangled statistics into the campus computer. The computer would then plug them into a formula he had devised to estimate each player’s real contribution to the game. Coach Phil Heim agreed that these new numbers were “spot on”, and he urged the players to improve their computed scores instead of simply getting the most balls in the basket. Nobody had previously pointed out that a player doesn’t really make two points when he scores, because he also loses possession of the ball. The result: Case ended up topping the league that year. This achievement created such a buzz that it was featured on national TV and in the well-known, internationally circulated magazine Newsweek.
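The article does not record Knuth's actual formula, but the idea is easy to sketch: assign a weight to every counted event, positive or negative, and sum. In the toy version below, both the statistics and the weights are purely illustrative assumptions of ours, not Knuth's coefficients.

```python
# Hypothetical event weights; Knuth's real formula is not given in the article.
WEIGHTS = {
    "points": 1.0, "rebounds": 0.7, "steals": 0.8, "assists": 0.6,
    "fumbles": -0.8, "missed_shots": -0.5, "fouls": -0.3,
}

def contribution(stats):
    """Estimate a player's real contribution from a dict of counted events."""
    return sum(weight * stats.get(event, 0) for event, weight in WEIGHTS.items())

print(contribution({"points": 18, "rebounds": 9, "fumbles": 4, "missed_shots": 7}))  # 17.6
```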
There was another incident from those days involving the solution of a rather hard problem, posed by a professor whose courses had a tough reputation. Louie Green, as was his wont, posed a characteristically hard problem to Knuth’s class and declared that anybody who solved it would not have to attend his classes anymore, for the rest of the semester. Further, Green declared that the correct solution would fetch the student a hugely coveted A+. To cut a long story short, Knuth solved it, earned an A+, and happily cut classes too. Incidentally, George Pólya himself had solved the same problem in a then-recent issue of the American Mathematical Monthly. As destiny would have it, much later, Knuth would even start off as Pólya’s junior colleague at Stanford, happily acknowledging Pólya’s influence on his own research and teaching. A lot of Knuth’s intellectual flowering on the Case campus was also made possible thanks to many long hours spent on a brand new IBM 650 that had just then been installed at Case. Knuth also undertook consultancy work for a firm by the name of TRW to write compiler software, even while enrolled as an undergraduate student.
An inflection point in his student life at Case occurred when he sought the opinion of a famous Indian mathematician Raj Chandra Bose, who was visiting Case from North Carolina then. Bose too, had posed a tough problem to Knuth’s class in Case, and except for Knuth, the rest of the class had not made much progress. Knuth on the other hand, not only solved it but also wrote a computer program to check his solution, even running it on locally available computers. A hugely impressed Bose, with Knuth as a co-author, even got the solution published in a journal. The problem had to do with the idea of a so-called Latin Square of size 12 x 12. Bose, as the cognoscenti would doubtless recognize, was one among the three “Euler Spoilers”, all of whom had gone on to famously disprove a long-standing conjecture made by the legendary Leonhard Euler. The achievement had instantaneously thrust the trio of R.C. Bose, S.S. Shrikhande, and E.T. Parker from the quiet environs of academia to international spotlight, with the breakthrough even making it to the front page of the New York Times. Clearly, Bose had brilliance, authority, and global prestige. He also instinctively recognized that Knuth too would be a great addition to the global combinatorics community, and strongly urged him to go to Caltech for higher studies. Even more specifically, he advised Knuth to strongly consider working with Marshall Hall, who incidentally was also his collaborator E.T. Parker’s Ph.D. thesis advisor. Knuth took Bose’s advice, went to Caltech, and by the end of 1963 earned his Ph.D. under Hall’s guidance.
Reflecting on those crucial days in his life, and the factors that led him to eventually leave mathematics to pursue computer science, even though he had earned a Ph.D. in mathematics from Caltech, Knuth offers a well-thought-out reply. According to Knuth, people who are good in mathematics and pursue it broadly fall into one of four types of mathematical talent—(i) algebraic, (ii) geometric, (iii) logical-combinatorial, and (iv) continuous-big picture. Knuth always felt most comfortable pursuing its logical-combinatorial aspects. This, he says, ties in easily with the discrete problems usually seen in computing, and this inner calling eventually influenced his decision to study the mathematical aspects of computing. There were other instances too, he recalls. One such happened while on an ocean voyage, during his honeymoon. With his wife temporarily indisposed, Knuth found himself going through Noam Chomsky’s book Syntactic Structures. By the time he was done reading the book, he was completely convinced that mathematics and computing were deeply interlinked. On the other hand, the term Computer Science had not yet been invented, and research on such things was only a fringe topic, largely unexplored.
A quick survey of the still-fledgling subject of Computer Science in the mid-to-late ’60s informs us that computer hardware was nowhere near as ubiquitous as it is today, with only a handful of large proprietary machines available for commercial purchase. It was not just hardware that was in its infancy. Software was even less developed, and the corresponding situation in academia was not very heartening. Apart from a few forward-thinking campuses in the USA, most universities did not even have individual courses to offer, nor any trained faculty, let alone fully dedicated departments. Only Stanford and MIT stood out like shiny beacons amidst the entire university landscape; in 1965, Stanford could boast of one of the very first departments in the United States solely dedicated to pursuing all aspects of computing.
Knuth started his academic career as an Assistant Professor of Mathematics at Caltech, his PhD alma mater. Formally, he was a trained mathematician with a speciality in combinatorial analysis. He enjoyed teaching traditional mathematics courses, especially about discrete subjects; but his attention was progressively getting drawn towards matters that were clearly more computational, in both spirit and essence. He spent considerable time consulting with Burroughs Corporation about software and hardware design. He also became an associate editor of several computer journals. He introduced courses about computation into Caltech’s curriculum. He finally began to realize that his creative sweet spot was beckoning him to investigate the quantitative aspects of algorithm performance.
Elaborating on the reorientation of his overall philosophical outlook to research that preceded his eventual change of career, Knuth notes that there are practical benefits to approaching any scientific subject by looking closely at its calculational and algorithmic aspects. This approach, he avers, has a deeper significance, because he believes that a person does not really understand something until he can teach it to a computer, essentially highlighting the importance of an algorithmic approach to mastering an unfamiliar subject. An example that he is fond of citing comes from Linguistics, where he says
Linguists thought they understood languages, until they tried to explain languages to computers; they soon learned how much more remains to be learned.
Another compelling instance was from his days as a mathematician at Caltech:
For three years, I taught a sophomore course in abstract algebra, for mathematics majors in Caltech, and the most difficult topic was always the study of ‘Jordan Canonical Form’ for matrices. The third year, I tried a new approach, by looking at the subject algorithmically, and suddenly it became quite clear. The same thing happened with the discussion of finite groups defined by generators and relations; and in another course, with the reduction theory of binary quadratic forms. By presenting the subject in terms of algorithms, the purpose and meaning of the mathematical theorems became transparent.
Viewed from this perspective, the evolution of Knuth’s focus in the ’60s is typical of any pioneer, who while daring to explore a promising new terrain, is acutely aware of the challenges ahead too. Two events from this era turned out to be significant, in retrospect. One: In 1969, Knuth took a bold leap of faith and moved to the Department of Computer Science at Stanford, thereby announcing where his academic interests unmistakably lay. Two: In the ICM held in 1970 in Nice, France, he presented a paper titled “The Analysis of Algorithms”, firmly setting him on course to infuse mathematical rigour and precision into the formulation, design and analysis of algorithms. His colleague Robert W. Floyd had taught him early in the ’60s that computer science strongly needed a firm mathematical basis.
Today, Knuth’s name is synonymous with multiple volumes of his most famous work: The Art of Computer Programming (TAOCP). The events that led to the writing of the book offer a peek into the stellar reputation that Knuth has consistently enjoyed in academic and industry circles, burnished from when he was barely even a graduate student. An offer to write a book on Computer Programming came to Knuth from the publishing house Addison-Wesley. The offer, initiated by Richard Varga, a mathematician at Case who was also on Addison-Wesley’s advisory board, reached Knuth when he was still 24 and enrolled as a graduate student at Caltech. Varga had a good impression of Knuth’s compiler software writing skills. Knuth was honoured and thrilled, because many of his favourite college textbooks were published by Addison-Wesley, and he had always loved to write. So in the early ’60s, he was a graduate student working towards a Ph.D. in mathematics at Caltech during the day, and an author working on a textbook on software during the evenings and nights. He was also a consultant to industry, whenever he had a moment to spare. This also meant that he had to refuse and forgo two prestigious graduate fellowships, one from the National Science Foundation, and another from the Woodrow Wilson Foundation—because accepting either of them would have prevented him from working on anything besides his college studies.
In 1962, TAOCP was first conceived of as only a single book, with precisely twelve chapters in it. But once he started writing, he realized that the topics he wanted to treat demanded much more attention and detail. Today, TAOCP is not a single book but an elaborate and lavish bouquet of the choicest offerings in computer science, spread over five encyclopaedic volumes. The latest to be out, Volume 4B, was published as recently as October 2022. He hopes to have Volume 4C ready for publication in 2025, resolute and soldiering on tirelessly, when he will be a still sprightly 87!
It was while writing the very first volume of TAOCP that Knuth became familiar with the entire lifecycle of the production of a book. Knuth proudly admits to having “ink” in his blood, a reference to printer’s ink. His father, who worked multiple jobs to run the family back in Milwaukee, owned a mimeograph machine, and he surmises that this perhaps may have subconsciously influenced his later thoughts on digital typography, print aesthetics, and font design for use in computers. His own typographic journey itself started off totally unplanned, beginning with a strong sense of dissatisfaction on how digital characters were printed out onto the pages of a book. Pat Winston, an MIT Computer Scientist, had written a book on Artificial Intelligence, which was the first to be typeset with an experimental digital printing device, and Knuth happened to see the galley proofs. The type quality in Winston’s proofs, though produced with entirely digital methods, was every bit as good as Knuth had ever seen before using conventional metallurgical or optical technology. By comparison, the quality of phototypesetting in the current galley proofs of Knuth’s own TAOCP was miserable; he simply didn’t want to be the author of a book that looked so awful.
Today, the world of computing is deeply thankful for that inner angst. If modern-day computer users and industrial designers enjoy a surfeit of fonts, styles, and a host of other typographic offerings that have helped produce aesthetically printed digital material, much of it can be traced back to the long penance that Knuth immersed himself in, thereby impacting the very foundations of digital typography. An account of the beginning of the entire typography episode, in his own words, demonstrates the missionary zeal with which he has generally taken up momentous challenges, always armed with a quiet confidence that his contributions would eventually turn out to be significant. On leafing through the pages of Winston’s galleys, even while being filled with a uniquely Knuthian combination of disbelief and self-confidence, Knuth said:
I thought pixels just couldn’t cut it. But here, before my eyes, was an example of the highest quality, and I knew that everything that I saw on that page was created by 0s and 1s. It was not created by any mysterious metal process, any mysterious photographic process, or anything that was hidden from me, or scary. It was digital, yet beautiful. So the next morning, I woke up and I knew that my life was going to change. Whoa! I had been faced by this terrific problem about how to make my books look right, but now I was faced with tangible proof that a solution to that problem was all within my power. All I had to do was to find a way to put 0s and 1s on a grid, with 1s representing ink—and you know, I think, I am kind of a conceited guy, and I think I can do 0s and 1s as well as anybody in the world. Not only that. Because I was good at 0s and 1s, and because 0s and 1s were now destined to be the wave of the future for printing, it was my responsibility to think about how to get those 0s and 1s there. Other people needed to use digital typography too. As a computer scientist, I had to work on digital typography to solve this problem, because it wasn’t just holding me back—it was holding a lot of people back.
Knuth’s involvement with his work and books is a perfect example of an unforgiving master’s incessant obsession with his craft, and its own esoteric tools. As he writes new volumes for TAOCP, he publishes the material first in smaller paperback fascicles, dedicated to a specific advanced topic. And as he prepares those fascicles, he posts pre-fascicles online, so that experts in the subject can help him get the story right. An ever-growing legion of admirers and followers includes hundreds of volunteers who help correct errors before the real fascicles and volumes are published. In addition to the books he has authored, he has also created the widely used typesetting language TeX—the bread and butter for people in both academia and industry. For a grateful global computing community, his works are akin to precious gems that have been carefully embellished in place by a rare artisan, who chisels away untiringly at his constructions, even as they evolve and take shape.
Needless to say, he has a punishing schedule, working on many projects at once. Immersed almost totally in work, music offers his main avocation. His home-converted-to-an-office houses a pipe organ and a large piano, offering him both rejuvenation and inspiration. Though officially retired from Stanford, and no longer obliged to take students, he still occasionally engages with the academic community via talks and lectures. Another uniquely Knuthian quirk that the world of computing has reluctantly gotten used to, is his complete self-imposed isolation from email. He cannot be directly reached by email, and has set up a rather elaborate protocol for people wishing to communicate with him. But that doesn’t stop the irrepressible Knuth from shooting off emails to others, with the recipients of such rare and unexpected communication fully made aware at the very outset, that the traffic is strictly one-way, and controlled solely by Knuth alone! All the rest of the correspondence that eventually reaches him is via the snail mail route, with his secretary at Stanford responsible for sifting through, and shortlisting, what he gets to read. This issue of Bhāvanā carries an accompanying interview of Knuth and is dedicated to him on his 85th birthday.
Originally planned for a late 2021 issue, the accompanying interview is one we are glad to finally bring to the light of day. Ever gracious and supportive, Knuth read through our set of questions and dashed off a warm reply with characteristic panache. His replies to all the questions we posed are each exactly 280 characters long, demonstrating yet again why the world celebrates his uniquely witty genius. The genial professor, via an appreciative note, also apologized for the year-long delay, and more than made up for it with his intellect and enthusiasm.
There is hardly a prize in Computer Science that Knuth has not won. He received the Grace Murray Hopper Award in 1971, the Turing Award in 1974, the National Medal of Science in 1979, the John von Neumann Medal in 1995, and the Faraday Medal in 2011, in addition to memberships and fellowships of many learned societies of the world. He was awarded the Kyoto Prize in 1996, which came with a cash component of $450,000, a sum that, adjusted for inflation, would easily be close to $800,000 in 2023. All of it was given away to four charity organizations dear to the Knuth couple, highlighting how lofty his approach has always been.
To this day, Knuth remains deeply passionate about the historical study of all aspects of Computing. Data structures known as trees express relationships among cascading nodes, and effectively capture genealogies, influences, dependencies, as well as historical legacies. Incidentally, trees have been studied extensively by Knuth. Arieh Iserles again, in his SIAM Review article cited earlier, had this to say on Knuth’s treatment of some of the historical aspects of Computer Science:
Once we consider trees not just as abstract combinatorial objects, but as a means to express combinatorial relationships, they impinge on issues that concerned scholars for millennia: from Chinese tile arrangements to Indian and Greek poetic metres, medieval Kabbalah, medieval music and beyond. The quest for structure, pattern and symmetry underlies human culture and scholarship, even if sometimes it is expressed in the verses of the poets, rather than in mathematical formalism.
Interestingly, half a century ago in 1972, Knuth studied Babylonian tablets, dating roughly from 1800 to 1600 BCE, to see how some of their ancient content could be interpreted through the lens of modern-day computing. Indeed, a careful reading of some of his remarks there makes it clear that Knuth is arguing for the converse of the above statement:
One of the ways to help make computer science respectable is to show that it is deeply rooted in history, not just a short-lived phenomenon. Therefore it is natural to turn to the earliest surviving documents which deal with computation, and to study how people approached the subject nearly 4000 years ago.
This remark from 1972 is historically invaluable, for it shows how pioneers such as Knuth strove relentlessly, even as recently as fifty years ago, to impart respectability and credibility to a then-fledgling discipline but which now is all-pervasive, having become an indispensable part of our social fabric.
To appreciate Knuth’s philosophy of creativity, a quote he admires from the book The Science of Art, authored by Robert E. Mueller, is particularly relevant:
“It was once thought that the imaginative outlook of the artist was death for the scientist. And the logic of science seemed to spell doom to all possible artistic flights of fantasy.”
Knuth goes on to add that while the scientific approach is generally characterized by the words logical, systematic, impersonal, calm, and rational, an artistic approach is characterized by the words aesthetic, creative, humanitarian, anxious, and irrational. To him, both these apparently contradictory approaches have great value with respect to computer programming. In Knuth’s computational universe, while computer programs created out of the synthesis of the two approaches can turn out to be elegant, exquisite, and perhaps sometimes even sparkling, one must constantly strive to write programs that are grand, noble, and truly magnificent!
Finally, a sign of an individual’s greatness, irrespective of the field of endeavour, is when allusions to a place in the annals of history are frequently invoked, even as the protagonist meditatively chips away at his workbench, unmindful and with nonchalance. A constantly striving spirit’s relentless quest to reach a higher plane, is our true collective inheritance from the Don of the Art of Computer Programming!
On behalf of Team Bhāvanā and the History of Mathematics in India (HoMI) project of IIT Gandhinagar, it is both our pleasure and privilege to invite you to share with us a personal account of your immensely productive life, and your pioneering works in the area of computer science. We wish to dedicate this article appearing in this issue as our humble tribute, and a gift on your 85th birthday.
DK: I’m deeply honoured that you wish to interview me, because I love the spirit of your magazine. How I wish there had been a similar magazine that had once conducted such interviews of (say) Euler, Gauss, and Ramanujan when they were still alive! I certainly don’t come close to filling the shoes of any of those giants; but I firmly believe that everybody can learn by studying the insights of creative people.
I see that you’ll be asking me questions based on things that you’ve already learned about my life. A lot has evidently been written about that already; so I think it will be best if I adopt Twitter’s wise policy of limiting the length of my replies. Therefore, I shall do my best to answer each of your questions with a “tweet” consisting of exactly 280 characters.
We learn that you have German ancestry and that you were quite a precocious child. Tell us about your childhood and your family, and also about how your name is pronounced.
DK: My dad Ervin Knuth (kaNOOTH) was a teacher in Lutheran schools. My mom Louise managed buildings in downtown Milwaukee. I enjoyed music and grammar, also wrote corny jokes. Was poor at sports and art. Spent many hours drawing graphs. Learned to identify many trees by their leaves.
Growing up in post-WWII America’s boom years, you had access to excellent public schools and community libraries. Did you find yourself lost in books then, and more so, were there any positive influences and incidents from those years, both at home and outside, that shaped your boyhood and young adulthood?
DK: At age 4 I was the youngest “bookworm” in Milwaukee’s library. Once was lost in that library, having stayed in the stacks past closing time. Went to parochial school where my teachers taught us to love others. Many chances to sing and to play music, to enjoy nature. Knew no math.
A television program aired while you were still in school provided a key bridge that connected language sentence structure to diagrams, and deeply impacted your lifelong appreciation for languages, and their internal structure. Please elaborate on this impactful event.
DK: My 7th-grade teacher introduced us to diagramming sentences. My friends and I tried to apply it to non-textbook examples, with limited success; but we learned a lot in the process. The TV program was different: I won a contest to find as many words as possible from given letters.
Post-WWII, America was not only making huge advances in science, but was also building computing machines that were available off the shelf. When did you first hear about, lay your hands on, and eventually go on to program a digital computer? Also, did you realize then that it was going to evolve to be such an intensely passionate journey?
DK: I probably first heard about UNIVAC on election night 1952, when I was 14 years old. I saw a real computer—the IBM 650—first in 1956, as a college freshman at Case. We were allowed to touch that machine, sitting at the console and feeding cards to it. I was hooked for life.
Let us dwell a bit more about your life at the erstwhile Case Institute of Technology, now Case Western Reserve University. You joined there when you were 17, and by 22 had earned a BS, and an MS too. What subjects did you study during those heady and busy days in Case?
DK: Freshman classes (physics, chem, calculus, civics, writing); sophomore (astronomy, basic math, diff geometry, physics2, history, speaking); junior (algebra, topology, electrical engineering, literature, numer analysis); senior (automata, combinatorics, logic, complex variables).
Apart from excelling in studies, you were editing a student campus magazine, creating probabilistically founded algorithms for rating Basketball players via win probabilities, and even programming an IBM machine on campus. Walk us through those four intense years at Case, please.
DK: Marched in the band, copy-edited the newspaper, was fraternity vice-president, fell in love, managed sports teams, wrote compilers and assemblers, entered math competitions, edited magazine and student handbook, watched plays and orchestra rehearsals, wrote a short musical comedy.
California Institute of Technology (Caltech) in the ’60s had the luminous Richard Feynman, and Murray Gell-Mann in Physics. In Math, it had the couple Jack Todd and Olga Taussky-Todd, both with strong interests in aspects of computing. Yet, when it was time to think of a PhD, you chose Mathematics in Caltech, and went with Marshall Hall Jr., a well known expert in Combinatorics. How did this choice of University and guide come about, and what was your PhD thesis about?
DK: R C Bose taught me combinatorics and inspired me to work with Hall. I planned at first to study designs with λ = 2. But one day I happened to construct new kinds of non-Desarguesian projective planes, solving a conjecture, and Marshall told me that that should be my thesis.
Your PhD days were apparently already getting partitioned by your internally competing dual interests, Computing and Mathematics. We hear that you were also a consultant to Industry those days, even earning significantly more during the summer breaks via consulting, than the annual salaries of newly minted Assistant Professors! Was Hall sympathetic towards the dual hats you were donning then? Further, why did you later decide to completely give up consulting?
DK: Yes, Hall was an early believer in the power of computing to help develop combinatorial theory. Consulting was my connection to the newfangled field of Computer Science. I taught computing at Caltech, even as a math Prof. But I stopped consulting when becoming a Stanford CS Prof.
Around the same time, via a contract to author a book on Computer Programming, you had a realignment of academic interests that would eventually much later impart to you a sharper focus, along with a new identity in the world of academia. Please take us through this phase in your career, and the key personal milestones from this period.
DK: A representative of Addison-Wesley, the publisher of my favorite textbooks, met with me in January 1962 and invited me to write a new book about compiler-writing. That was exciting because existing publications were poor, one-sided, often contradictory. I’d always liked to write.
After your PhD, you joined the Department of Mathematics in Caltech as an Assistant Professor. But deep within, you were restless and constantly churning internally about being unable to pursue the various avenues that had opened up in your mind regarding Computing, and which by then had also grown to be an intellectual stirring, impossible to ignore. Why, how, and since when did Computing begin to occupy almost all of your serious academic interests?
DK: During the summer of 1962, my friend Bill Lynch and I wrote a FORTRAN compiler for UNIVAC SS80 computers. One day I decided to explore why its “hash algorithm” worked well; and got lucky: I saw how to solve that problem, and realized that many similar yet-unsolved problems exist.
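Knuth has often recounted elsewhere that the method in question was linear probing, and his 1962 analysis of it is frequently cited as the birth of the analysis of algorithms. As a rough empirical check of the classical result, that a successful search in a linear-probing table at load factor α takes about ½(1 + 1/(1−α)) probes on average, here is a small simulation of our own; the table size and load factor are arbitrary choices.

```python
import random

def avg_probes(m, n, trials=20):
    """Average probes per successful search: build a table of size m with n
    random keys under linear probing and average the insertion probe counts
    (inserting a key costs the same as later finding it)."""
    total = 0
    for _ in range(trials):
        table = [None] * m
        for key in random.sample(range(10 * m), n):
            i, probes = key % m, 1
            while table[i] is not None:
                i, probes = (i + 1) % m, probes + 1
            table[i] = key
            total += probes
    return total / (trials * n)

alpha, m = 0.75, 1009
print(avg_probes(m, int(alpha * m)))   # empirical average, roughly 2.5
print(0.5 * (1 + 1 / (1 - alpha)))     # classical estimate: 2.5
```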
Caltech’s own surprisingly primitive computing infrastructure in the ’60s stood in contrast to the excellent computing facilities available at the Jet Propulsion Laboratory (JPL) next door. Was this also broadly symptomatic of Caltech somewhat lagging then, in its overall focus on advances in Computing that were happening elsewhere? Was this one of the triggers for your eventual departure from there?
DK: As a consultant, I had easy access to excellent Burroughs machines near my home. Almost every university had primitive computing facilities at that time. Stanford was an exception: George Forsythe understood that computer science was a new field with deep intellectual challenges.
Volker Strassen, Stephen Cook, and you were three young mathematicians who, between the late ’60s and the mid-’70s, contributed significantly to our present understanding of the complexity and analysis of algorithms. While Strassen focused on the complexity of linear algebra and matrix operations, Cook’s work was founded more on the logical and theoretical aspects. How did you end up carving out your own path, so different from your contemporaries like Victor Pan, James Wilkinson, Shmuel Winograd, and others who were pursuing problems related to Numerical Analysis and Scientific Computing?
DK: My initial focus was on programming languages; I became editor of that section in CACM, then JACM. While writing my book I soon learned that compiler writers were also developing techniques of general interest, and that it was great fun to analyze those algorithms quantitatively.
At the ICM of 1970 (Nice, France), the very title of your paper “The Analysis of Algorithms” was a novelty. What crucial idea did you want to convey, even via your choice of this novel title? It is of historical context to appreciate that even as recently as in the early ’70s, computer science aroused no great interest except perhaps in a handful of campuses, and a few avant-garde companies.
DK: At the end of the 1960s, computer science was tripartite: Numerical analysis, programming languages, artificial intelligence. But none of those titles quite matched my interests. So I made up the name Analysis of Algorithms, whose first definition was: the part of CS I like best.
The late ’60s coincided with your departure from Caltech. Still barely 30, you joined Stanford as a full Professor of Computer Science. This move was made easier by George Forsythe, a benign and commanding presence, both at Stanford and also in the larger world of computing, and whom you have even likened to a “Martin Luther of the computer reformation”; also your thoughts on a deep friendship with another Stanford colleague, Robert W. Floyd.
DK: Floyd and I had been friends since we met in 1962. We had thrilling correspondence re Bose–Nelson sorting networks. We wanted to be eventually at the same university. Everywhere but Stanford we’d have to help build a leading department; Forsythe had already done that beautifully.
To pun, you happily checked into Stanford in 1969, and have never left since then! Computer Science at Stanford was spun out of its mathematics department, and the latter had some brilliant mathematicians on its roster. There was the majestic and statesmanly Kunihiko Kodaira, and his frequent collaborator Don Spencer. There was the Hungarian duo Gábor Szegő and George Pólya, both men exceptional in pedagogy as well as research. Charles Loewner, Paul Garabedian, Paul Cohen, and Solomon Feferman also belong to that era. Your thoughts on your illustrious colleagues.
DK: Stanford had great mathematicians, but almost all in analysis with a token algebraist. Cohen was brilliant, yet thought combinatorics trivial. Pólya was a wonderful exception; my other math colleagues were Dantzig in OR, plus people like Berlekamp, Gale, Karp, Lehmer at Berkeley.
Paul Cohen, a senior colleague at Stanford, started off with interests in analysis, before a pleasantly fateful post-dinner conversation with a logician colleague Sol Feferman in a joint Stanford-Berkeley Colloquium got him interested in Kurt Gödel’s ideas on the continuum hypothesis. This serendipitous event apparently spurred Cohen to make major advances in Logic, finally even winning him a Fields Medal, awarded in 1966. Did you too find the Berkeley–Stanford Colloquium of those days a stimulating experience?
DK: Cohen was legendary also at Caltech. But I actually never heard about a Berkeley–Stanford math colloquium; at math talks I often asked myself “so what, so what?” I started a series of weekly combinatorial math seminars at my home, and people from Berkeley would often participate.
In a meeting held in Bucharest in 1971, you state: “The text of my sermon today is taken from Plato (The Republic, vii) who said, ‘I have hardly ever known a mathematician who was able to reason’. If we make an unbiased examination of the accomplishments made by mathematicians to the real world of computer programming, we are forced to conclude that, so far, the theory has actually done more harm than good. There are numerous instances in which theoretical ‘advances’ have actually stifled the development of computer programming, and in some cases they have even made it take several steps backward. I shall discuss a few such cases in this talk.” Your thoughts on theory and practice.
DK: A week before the Bucharest Congress I spoke at the IFIP Congress in Ljubljana about the beauties of theory. Thus I could straddle both sides of the fence. I’d seen both extremes when writing Volumes 1, 2, and 3 of The Art of Computer Programming. Everywhere we find yin and yang.
Your ten-year-long aesthetic pilgrimage in the ’70s, taken up to radically overhaul the very foundations of printing and publishing, heralded a brand new dawn for typesetting. In 1979, the Bulletin of the American Mathematical Society carried your article titled “Mathematical Typography”, which you incidentally dedicated to George Pólya on his 90th birthday; and where unsurprisingly, you cite bibliographical references dating back to Luca Pacioli of Venice from 1509 CE. How did the idea of a rigorous approach to the aesthetics of typesetting even begin?
DK: I loved books and alphabets since childhood. I loved the look of Addison-Wesley’s texts. I spent many years as copy editor, journal editor, and proofreader. Computer science needed new typographic tricks, which I helped to formulate. Many fine printing establishments were nearby.
“Every reasonable algorithm suggests interesting questions of a ‘pure mathematical’ nature; and the answers to these questions sometimes lead to useful applications, thereby adding a little vigour to the subject without spoiling its beauty”—are words drawn from your ICM 1970 address. Taking the letter “S” as a representative example for fonts, how did you geometrically formulate the problem of finding the “most pleasing curve” that would represent “S”, thus fusing mathematics with creative aesthetics, even while still operating in a purely algorithmic context?
DK: The letter S suggests a beautiful geometry problem of matching an ellipse to a tangent, solvable with classic primitives. Fitting curves to rasters, breaking paragraphs to lines, and many other appealing math challenges arise naturally, summarized in my 1985 lecture at Epidaurus.
Nobel Prize winning physicists, Paul Dirac and S. Chandrasekhar, have both eulogised beauty and its intimate kinship with truth, in their own characteristic ways. The inimitable computer scientist Edsger Dijkstra had a famous quote: “I find style important in Programming”. Set in the general spirit of these three titans, is the relentless and rigorous pursuit of truth, a time-tested approach to create works of lasting aesthetic beauty, across the spectrum of creative human endeavours?
DK: EWD316 also said that everyone should “find their own style”. And I believe everyone should find their own beauty. But curiously, I don’t recommend finding one’s own truth! To me, truth is totally objective. And I’m glad there are mysteries, whose truth or falsity can’t be known.
Combinatorics and its various avatars have continuously held your attention for the past six decades now. If at all individual humans can be roughly partitioned as belonging to either those who naturally appreciate the universe of discrete objects (graphs, numbers, notes of music, finite fields), as against those for whom the universe of continuous objects seems naturally pleasing, where do you position yourself?
DK: I’m clearly at the discrete end of that continuum. (However, surreal numbers are much richer than the continuum itself.) Like Leibniz, I like to think of everything as made from 0s and 1s. We take limits when that’s a useful approximation, and when we understand what limits mean.
When we think of discrete objects and their study, especially in the 20th century, it is almost inevitable to think of Paul Erdős. You and he shared many common interests, and were both also, incidentally, invited speakers at ICM 1970 (Nice, France). Your Erdős number though is 2, and not 1 as most people would expect, eyes closed. Tell us a good Erdős story!
DK: When I met him in Nice, I proudly mentioned that I’d discovered the surprising formula l(382)=l(191) in the theory of addition chains. Without blinking he immediately asked if there were infinitely many cases with l(2n)=l(n). (That result was proved by Thurber three years later.)
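For readers meeting the notation for the first time: an addition chain for n is a sequence 1 = a₀ < a₁ < ⋯ < a_r = n in which every term is the sum of two earlier (not necessarily distinct) terms, and l(n) is the smallest possible r. The naive iterative-deepening search below is our own sketch, practical only for modest n; with stronger pruning and some patience it reproduces the equality l(382) = l(191) that Knuth mentions (both values reportedly 11).

```python
from math import ceil, log2

def chain_length(n):
    """l(n): length of a shortest addition chain for n (naive search)."""
    if n == 1:
        return 0
    limit = ceil(log2(n))            # l(n) can never beat repeated doubling
    while not search([1], n, limit):
        limit += 1
    return limit

def search(chain, n, limit):
    last = chain[-1]
    if last == n:
        return True
    steps_left = limit - (len(chain) - 1)
    if steps_left <= 0 or last << steps_left < n:   # even doubling every step falls short
        return False
    # Try larger extensions first; ascending chains suffice for minimal length.
    for c in sorted({a + b for a in chain for b in chain if last < a + b <= n}, reverse=True):
        if search(chain + [c], n, limit):
            return True
    return False

print([chain_length(k) for k in (1, 2, 3, 7, 15)])   # [0, 1, 2, 4, 5]
```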
There is a story of how Bill Gates wanted to make a big donation to Stanford’s Computer Science Department. This is the same Bill Gates, who Christos Papadimitriou recalls as being barely interested in research, even though their joint paper had been accepted for publication in a mathematical journal. Please fill us in on the story behind the donation.
DK: A few hours before Bill’s visit, I’d been thinking about the way random graphs evolve, at their big bang. From the transition probabilities, I realized that the denominator 17017 was the key to the whole pattern. I excitedly drew the picture on the blackboard; Bill was impressed.
In 2016, while delivering the Paris Kanellakis Memorial Lecture, you spoke about “Hamiltonian Paths in Antiquity”. In it, you mentioned the contributions of various Indian mathematicians to the world of Combinatorics. You also dwelt in detail on how the Sanskrit language was used to creatively convey combinatorial truths, usually set in a lyrical metre that lured people to dwell on the combined joys of reading, counting, and calculating. Does ancient India appeal to you as a civilizational setting where many of your own interests converged in potentially interesting, and insightful ways?
DK: Yes! The ninth century Kashmiri poets who created ingenious “citrakāvya” certainly rank among the pioneers of combinatorics. The study of Sanskrit prosody has also advanced other parts of mathematics. But only a few scholars remembered them, after other ideas became fashionable.
Personally, do you feel a closeness to the Indian approach to Mathematics, as espoused by early Indian pioneers? The Indian approach was pragmatic, and had focused on solving concrete problems, consciously eschewing an approach that sought to first theorise, or further, even axiomatise the subject. The manifestations of this Indian school of thought pervades the works of Āryabhaṭa, Bhāskara, Brahmagupta among others.
DK: I don’t agree that the Indian approach has been entirely pragmatic, instead of curiosity-driven. I immediately felt strong kinship with Nārāyaṇa when I learned of his Gaṇita Kaumudī (1356), since I’d played with exactly the same concepts in 1961! He must have felt lonely in 1356.
You have admitted a close kinship with languages, especially their inner structure. Right from your school days when you were introduced to diagramming sentences, to the time you were studying Noam Chomsky’s Syntactic Structures during your honeymoon, and even up to now, languages have fascinated you. Could you please throw light on the work of the Indian grammarian Pāṇini, and how his work done over two millennia ago has relevance to the modern-day study and use of computer languages?
DK: Mathematics is the science of patterns. Languages are perhaps the most complex artifacts of civilization. Pāṇini had the profound insight that Sanskrit had patterns that could be formalized and put into a logical system. He gave his commentators millenia of rich food for thought.
From the viewpoint of Natural Language Processing (NLP) applied to Sanskrit, when one attempts to algorithmically unravel the intended semantic interpretation of a given sentence, an immediate challenge that is faced is the sheer variety of possible emergent meanings, each significantly different from the other, all thanks to the subtleties of word splittings in Sanskrit. Even a slightly differing splitting of a compound word can result in a vastly different semantic interpretation. How does one algorithmically tame, or rein in such undesirable situations?
DK: My knowledge of Sanskrit is entirely third-hand, but I’ve been told it’s this very ambiguity that has made things like citrakāvya possible and inspiring. On the other hand, modern NLP and ML methods seem to have made the complexity manageable, once enough data has been gathered.
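As a toy illustration of that ambiguity (ours, not from the interview), the classic word-break dynamic program counts how many ways a compound splits against a lexicon. English stands in for Sanskrit here, and real Sanskrit segmentation must also undo sandhi at the joins, which this sketch ignores entirely.

```python
from functools import lru_cache

# A hypothetical miniature lexicon.
LEXICON = {"no", "now", "here", "where", "nowhere"}

def segmentations(s):
    """Count the distinct ways s can be split into lexicon words."""
    @lru_cache(maxsize=None)
    def ways(i):
        if i == len(s):
            return 1
        return sum(ways(j) for j in range(i + 1, len(s) + 1) if s[i:j] in LEXICON)
    return ways(0)

print(segmentations("nowhere"))   # 3: no|where, now|here, and the whole word
```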
The above example inspires us to invoke the domain of very large numbers. Your good friend, the late Ronald Graham, is associated with an eponymous number that is also among the largest numbers ever known to be used in the published proof of a mathematical statement. Isn’t the idea of a “largest positive number” an oxymoron, because there is always a number just one greater than itself?
DK: No, the largest positive number used in a published proof isn’t an oxymoron (nor is it a constant)! My lecture on Coping With Finiteness cited Super-K = 10↑↑↑↑3, which is way bigger than Ron’s number; I think it’s too large to actually be comprehended.
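Knuth's own up-arrow notation, in which Super-K is written, is defined by a two-line recursion: one arrow is ordinary exponentiation, and k arrows iterate the (k−1)-arrow operation. A sketch of our own, safe only for tiny arguments since the values explode:

```python
def arrow(a, k, b):
    """Knuth's up-arrow a ↑^k b: k == 1 is exponentiation; otherwise
    apply the (k-1)-arrow operation b-fold, from the right."""
    if k == 1:
        return a ** b
    if b == 1:
        return a
    return arrow(a, k - 1, arrow(a, k, b - 1))

print(arrow(2, 2, 3))   # 2↑↑3 = 2^(2^2) = 16
print(arrow(2, 3, 3))   # 2↑↑↑3 = 2↑↑4 = 65536
# Super-K = arrow(10, 4, 3) is far beyond any conceivable computation.
```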
Recently, as reported in the 4 February 2021 issue of Nature, a team of Israeli computer scientists used ideas from Gradient Descent Optimization to find mathematical expressions for fundamental constants seen in nature such as “e” and “π”. Calling their algorithm the Ramanujan Machine, they even found novel continued fraction expressions for the Catalan constant, a ubiquitous object in combinatorial settings. Combinatorial identities themselves form an old and well-studied subject, and you have even written a preface for Marko Petkovsek, Herb Wilf and Doron Zeilberger’s classic, itself quirkily titled A = B. Mother Nature never ceases to amaze us, does she!
DK: Indeed, I keep running into apparently new patterns that involve only very elementary ideas. For example, I spoke to Zeilberger’s seminar in January about the fascinating Tchoukaillon array, which contains every positive integer exactly once; its properties are mostly unexplored.
In the 10 June 2021 issue of Nature, researchers from Google Brain working on a particularly thorny, decades-old problem in VLSI Design called Chip Floorplanning, made a breakthrough that reduces otherwise painstaking and expensive effort expended by a team of human experts working for many months, to a mere 6 hours. Hailed as a breakthrough by both industry and academia, Deep Reinforcement Learning, the learning framework behind the advance, is ideally suited for problems involving combinatorial data. Considering the centrality of combinatorial ideas in your approach, can readers expect relevant material on AI and ML in coming issues of TAOCP?
DK: No; AI and ML are topics that others can write about far better than I. Of course there are many flavors of learning and many flavors of combinatorics. For example, TAOCP does discuss reinforcement in message-passing models of random satisfiability problems. But that’s different.
In the January 2021 issue of Bhāvanā, when David Mumford was asked what he thought of Deep Learning, he said that he was initially sceptical until he came across related work by Chris Manning. We quote Mumford verbatim: “Manning asked these two questions: ‘How do these deep learning algorithms which analyse sentences work? Are they “discovering” grammar?’” Mumford goes on to say that he was eventually convinced that Manning’s creation of a suitable high-dimensional representation of words already implicitly carried within it the till-then elusive grammar of the sentence. Will we see machines leading humans not just in mechanical tasks, but in intellectual tasks as well?
DK: I’m still with Mumford’s former self with respect to not believing that ML can find discrete things like orthogonal Latin squares (loved by me and my mentor R C Bose). The big potential problem is that nobody is able to understand how deep neural networks reach their conclusions.
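Whatever machine learning may or may not discover, checking a candidate pair of orthogonal Latin squares is elementary. A minimal verifier, our own illustration: two n×n Latin squares are orthogonal exactly when superimposing them yields every ordered pair of symbols once.

```python
def is_latin(sq):
    """A square is Latin if every row and column is a permutation of 0..n-1."""
    symbols = set(range(len(sq)))
    return all(set(row) == symbols for row in sq) and \
           all(set(col) == symbols for col in zip(*sq))

def are_orthogonal(a, b):
    """Orthogonal iff superimposition yields all n*n ordered pairs exactly once."""
    n = len(a)
    return len({(a[i][j], b[i][j]) for i in range(n) for j in range(n)}) == n * n

A = [[0, 1, 2], [1, 2, 0], [2, 0, 1]]   # a classical orthogonal pair of order 3
B = [[0, 1, 2], [2, 0, 1], [1, 2, 0]]
print(is_latin(A), is_latin(B), are_orthogonal(A, B))   # True True True
```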
On the P = NP question, you earlier held the position that P is not equal to NP. But, we hear that you now hold a different viewpoint. Was this change triggered by Laszlo Babai’s treatment of the Graph Isomorphism Problem? Or did it have to do with Peter Shor’s Quantum Algorithm that successfully took on the classical RSA algorithm?
DK: No. I suspect that P=NP because a polytime algorithm might exist without being comprehensible (even more so than Super-K). Existence is far different from embodiment. Robertson and Seymour showed that polytime algorithms for some graph problems exist, yet are probably unknowable.
The great Israel M. Gelfand listed Beauty, Exactness, Simplicity, and the Craziness of ideas, as what mathematics essentially shares with both music and poetry. Since you are in equal parts both scientist and artist, as your uniquely crafted designation at Stanford stands testimony to, do you believe that in order to create ideas of lasting aesthetic value, the four ideas listed here are necessary and sufficient? What about passion?
DK: Hmm. Why did Gelfand leave out “fun”? Also, the Japanese philosophy of Wabi-Sabi explicitly denies that Exactness is necessary, or sufficient. My musical and poetic friends aren’t fans of exactness either; perfect rhythm is too dull. Passion and je-ne-sais-quoi are indispensable.
Active participation in the Lutheran Church, alongside a passionate pursuit of music, have both been a big part of your entire life outside of academics. How has music itself influenced your appreciation of combinatorics, or is it the other way around? In a related context, do you see parallels between music and combinatorics, especially in the works of the masters?
DK: Anybody who looks at the scores of Tchaikovsky, say, realizes that he was a great combinatorialist. There are strong historical ties (works of Śārṅgadeva, Mersenne, and Schillinger explicitly, Bach implicitly). This week I’m enjoying the fantastic combinatorics of Burt Bacharach.
From Fairchild Semiconductors to SUN Microsystems to Google, a host of hugely successful and profitable technology companies have all been Stanford babies at some point. Did you ever personally harbour entrepreneurship dreams, considering that you are the world’s foremost guru of Programming, and have overseen and presided over many advances that have provided the ideological fuel to the revolution happening literally outside your window, in Silicon Valley?
DK: I understand why my friends find fulfilment when they supervise others or affect the marketplace. But the profit motive has always been furthest from my thoughts, after I had a stable job. I envy astronomers, because people understand that astronomers are motivated by curiosity.
The history of Combinatorics is rich and filled with amazing personalities and achievements. Since you love setting ideas in their proper historical setting, as evident in the allusion to a manuscript from 1509 CE in the bibliography of your Bulletin of the AMS article on Typography, we would like to know who your own heroes of combinatorial thought are. Since this may be a potentially ill-posed question owing to the sheer antiquity of the subject, let us focus on the last six centuries, in linear order starting from the 16th century onwards till date, and one for each century.
DK: Great question! Girolamo Cardano (1501–1576); John Wallis (1616–1703); Leonhard Euler (1707–1783); James Joseph Sylvester (1814–1897); Jack Edmonds (1934–); László Lovász (1948–). And let me add Richard Stanley (1944–), a student in the first calculus class I taught (1963).
You have been widely feted, and have won the Turing Award, the Kyoto Prize, the National Medal of Science, the John von Neumann Medal, the very first ACM Grace Murray Hopper Award. There is even an asteroid in deep space by the name 21656 Knuth. But surprisingly, you now choose to stay away from email and electronic communication, preferring to solely use secretarial help to sift through sack loads of snail mail. How do you manage your grinding daily schedule between keeping up with future issues of TAOCP, and other academic responsibilities such as being the Oracle of a grateful, and appreciative community of Computer Scientists and Programmers?
DK: I have no TV. I’ve been retired since 1993. I work in “batch mode”, not “swap-in-swap-out”. I write and rewrite TAOCP one word at a time. I swim almost every day. Take frequent naps. Play piano and organ. Go to Stanford Theatre. Have loving family. Have myriad helpers. Chocolate.
Richard Hamming said, “The purpose of computing is insight, not numbers”. To set the context, here is a quote from the late Vladimir Voevodsky (Fields Medal, 2002), who envisaged computers working as assistants to professional mathematicians, helping them automatically verify the correctness of proofs that they come up with. “The world of mathematics is becoming very large, the complexity of mathematics is becoming very high, and there is a danger of an accumulation of mistakes. Proofs rely on other proofs; if one contains a flaw, all others that rely on it will share the error.” Is this the harbinger of a more synergetic intellectual relationship between man and machine?
DK: (Also, the purpose of machine learning should be insight, not results!) It is wonderful that computers complement human capabilities, not only for verifying proofs but also symbolic calculations, etc. Of course computer programs can have mistakes; so can their proofs of validity.
Humour in uniform has evidently kept you in good humour all these years! In a course titled “Concrete Mathematics” that you offered across departments at Stanford, and for which you were teaching from your eponymous book, you had students hoping that they were going to learn some “hard”, and not “soft”, mathematics. When they heard you say in class that there would be no “Theory of Aggregates”, nor “Stone’s Embedding Theorem”, and not even the “Stone–Čech Compactification”, a disappointed bunch of civil engineers quietly left the room. Your uninhibited comments!
DK: The “graffiti” in the margins of Concrete Mathematics, mostly contributed by Princeton and Stanford students when Ron and I first taught from preprints of the book, have clearly been successful: Any errors they contained were discovered by readers before all mistakes in the text.
We understand that “Prediction is difficult, especially of the future”. We also learn that you are not too fond of making prophecies. But still, we want to hear your views on developments that are welding the core of computing, with mathematics and physics like never before. What should the Don Knuth of the Quantum Machine Learning Era of the 21st Century, say a teenager running a program on a Public Quantum Cloud from somewhere in interior India be aware of, even while he/she prepares for the challenges and opportunities ahead?
DK: I’ve tried unsuccessfully to understand quantum computing. Maybe there are people who understand it but can’t fathom the kind of computing that I do. All I know is that there seem to be two completely different things, both called “computing”. Everybody should follow their own star.
DK: Thank you for the exhilarating questions. Please continue to interview others.
Dear Professor Donald Knuth, this interaction has been an inspiring experience for us, even as we walked down memory lane with an accomplished pioneer. We wish you many more years of robust health and continued success in research and other pursuits of life. Thank you!
2453 | dbpedia | 3 | 83 | http://lambda-the-ultimate.org/node/2783 | en | Lambda the Ultimate
program verification: the very idea
James H. Fetzer's Program Verification: The Very Idea (1988) is one of the two most frequently cited position papers on the subject of program verification. The other one is Social Processes and Proofs by De Millo, Lipton, and Perlis (1979), previously discussed on LtU. Fetzer's paper generated a lot of heated discussion, both in the subsequent issues of CACM and on Usenet.
It's not clear to me what all the fuss is about. Fetzer's main thesis seems pretty uncontroversial:
The notion of program verification appears to trade upon an equivocation. Algorithms, as logical structures, are appropriate subjects for deductive verification. Programs, as causal models of those structures, are not. The success of program verification as a generally applicable and completely reliable method for guaranteeing program performance is not even a theoretical possibility.
(See also part I, part II, and part III.)
2453 | dbpedia | 1 | 75 | https://www.mathunion.org/fileadmin/IMU/ICM2006/offline/icm2006.mathunion.org/press/bulletins/bulletin13/index.html | en | International Congress of Mathematicians MADRID 2006
Bulletin number 13
29 May 2006
INDEX:
Painting with Numbers
The ICM2006 Fractal Art Competition will be decided in June
Interview with Marta Sanz-Solé, President of the ICM2006 Local Programme Committee
“This ICM places the accent on the fruitful interaction among the different areas of mathematics”
Plenary Session: Iain Johnstone
How to Convert Data into Information
The ICM Section by Section
Mathematical Foundations of the Computational Sciences
Satellite Conferences: Gran Canaria
Intelligent Computation Imitates the “Practical Wisdom” of Living Beings
Applications
Encrypted Messages
Painting with Numbers
The ICM2006 Fractal Art Competition will be decided in June
Equations are a way of describing reality, but some equations can be used the other way round; that is, to generate artificial worlds. Equations describing fractal sets, for example, can give rise to landscapes as fascinating as those depicted in famous canvases. Does this sound impossible? Before jumping to conclusions, it would be better to take a look at the works on display at the Exhibition of Fractal Art, which forms part of the ICM2006 International Congress of Mathematicians, to be held from August 22nd to 30th at the ICM2006 venue and at the Centro Cultural Conde Duque in Madrid. Many of the works exhibited will come from the Benoit Mandelbrot ICM2006 International Competition of Fractal Art, the prizes for which will be announced in June.
The ICM2006 Exhibition of Fractal Art will be held thanks to support from the Fundación Española de Ciencia y Tecnología (Fecyt – Spanish Foundation for Science and Technology). More than three hundred entries have been received from all over the world for the Fractal Art Competition. The jury will be chaired by Benoit Mandelbrot himself, widely recognized as the “father” of fractal geometry.
What are fractals? It is not necessary to enter into a complicated mathematical description to get an intuitive grasp of what they are: structures which, “when a small portion is observed, preserve a similar, although not necessarily identical appearance to what they look like when observed in their entirety”, explains Javier Barrallo, one of the organizers of the fractal art competition, and a fractal artist himself. Some examples of fractals are: a tree and its branches; a cauliflower, apparently made up of endless cauliflowers joined together; the coastline of a country…
The example of a coastline serves to explain another property of fractals; the fact that no matter how small the scale at which they are observed – however close you “zoom in” – they always keep the same appearance, and so on to infinity. Obviously, a coastline is not infinite – “authentic” fractals are a mathematical idealization – but the effect of the fractal phenomenon can be seen to be real in the “coastline paradox”. When a coastline, or for that matter any rough surface, is measured, the result will vary according to the accuracy desired: if one takes into account the shape of bays, of rocks, of grains of sand, etc., the coastline will get theoretically longer and longer, and in an ideal fractal it would be infinite.
Is it really art?
Beyond the strictly aesthetic qualities, in the opinion of some people these properties give added value to works of “fractal art”. However, these works have not always been considered art. Are they not merely a computer-generated graphic representation of a formula? Yes and no, reply the authors of fractal art. What follows is a brief explanation of how a fractal is painted.
The point of departure is indeed a mathematical formula. The first fractal formulae were described more than a century ago. Today there are hundreds. And yes, as Barrallo explains, the computer is vital: “A small image, one of 640 x 480 pixels, for example, contains 307,200 dots that must be calculated. Each of these dots may have to be run through the formula determining the fractal about 1,000 times. This means that the formula must be evaluated more than 300 million times. And this is just for a small-size image!”.
So, armed with both formula and computer, we must now proceed to iteration. This involves “calculating a formula over and over again, starting from its initial value”, says Barrallo. “After calculating the formula for the first time, we take the resulting value and introduce it into the formula. The new result is calculated again, and so on successively”. In the case of fractals, the initial value has to do with the position of the dot in the frame (the pixel on the screen).
Then colours are assigned according to the value of each dot. The fact that the behaviour of two dots situated very close together can be radically different – one diverging toward infinity and the other converging toward a given value – is “what makes fractal exploration so fascinating”, says Barrallo. And what leads to the explosion of shapes and colours in the image.
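To make the procedure concrete, here is a minimal Python sketch (an illustration written for this summary, not Barrallo’s own code), using the classic Mandelbrot iteration z → z² + c as the fractal formula: every pixel is mapped to a point of the complex plane, the formula is iterated up to a fixed bound, and the iteration count at which the orbit diverges becomes the colour index of that pixel.

MAX_ITER = 1000

def escape_time(c: complex) -> int:
    """Number of iterations of z -> z*z + c before the orbit escapes |z| > 2."""
    z = 0j
    for n in range(MAX_ITER):
        if abs(z) > 2.0:
            return n            # diverging point: colour index n
        z = z * z + c
    return MAX_ITER             # treated as belonging to the set

def render(width: int, height: int):
    """Map every pixel to a point of the complex plane and iterate the formula."""
    return [[escape_time(complex(-2.5 + 3.5 * x / width, -1.2 + 2.4 * y / height))
             for x in range(width)]
            for y in range(height)]

preview = render(80, 48)                 # a tiny preview grid
print(preview[24][20], preview[0][79])   # a point inside the set vs. one far outside

Rendering the full 640 x 480 image with up to 1,000 iterations per pixel would need on the order of the 300 million formula evaluations Barrallo mentions, which is why the preview above is kept deliberately small.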
But this is not in the least due exclusively to the computer. “An image of 800 x 600 dots contains 480,000 pixels, or dots on the screen, which can be combined in an image in 10^3,467,865 different ways; that is, 10 followed by more than three million zeros. A computer does not possess the capacity to select images from among such an immense collection and determine which are beautiful and which are not”. It is the hand, or in this case the brain, of the artist that is the vital factor. Furthermore, as in all art, and in mathematics themselves, fractal art is in a constant state of evolution. The algorithms currently employed have little to do with those employed twenty years ago.
For further information:
Interview with Benoit Mandelbrot in InfoICM2006-05-24
/prensa/boletines/boletin19/#mandelbrot
Fractal Art Competition web site:
http://www.fractalartcontests.com/2006/
Information about fractals:
http://www.divulgamat.net (in “Virtual Exhibitions” and “Art and Mathematics”)
www.fractalus.com
Interview with Marta Sanz-Solé, President of the ICM2006 Local Programme Committee
“This ICM places the accent on the fruitful interaction among the different areas of mathematics”
Marta Sanz-Solé teaches at the University of Barcelona, the same university from which she graduated (1974) and gained her doctorate (1978), and where she has been Dean of the Faculty of Mathematics and vice-president of the Division of Sciences. She has spent time doing research in the USA, Italy, France and Switzerland, and her research work has been centered on Malliavin’s calculus and stochastic analysis. She is the author of some 80 publications, serves as a member on various committees and has participated in the organization of numerous congresses and events. In recent months she has devoted much time and effort as a member of the Organizing Committee to the ICM2006 World Congress of Mathematicians, in particular as the president of the Local Programme Committee in charge of organizing the scientific content of the event.
How has the scientific programme for the ICM2006 been drawn up?
There is a committee responsible for deciding on the number of scientific sections in the congress and their content. On this occasion, the ICM2006 has been divided into 20 different sections. This Programme Committee is named by the International Mathematical Union executive committee, and it also responsible for putting forward the names of invited speakers, both for the plenary lectures and the talks given in each section.
What is the task of the committee you are chairing?
The members of the LPC (Local Programme Committee) work in co-ordination with the Programme Committee, and our main job is the organization of the congress programme. It’s a question of arranging the agenda in a coherent manner. You have to take into account the fact that invited talks in each section must necessarily overlap, so they have to be scheduled in such a way that related fields do not coincide with each other, as far as this is possible, because all the talks are of interest to the majority of the audience. We are also responsible for the schedule of the plenary programme, since this will highlight trends and set the pace of the congress.
Don’t you find it frustrating that others decide on the content?
Those are the rules of the game. However, we have a certain scope for movement. The Organizing Committee of the congress has the prerogative of proposing one plenary lecturer and three section speakers, and it has delegated this choice to the LPC. Furthermore, we are responsible for many other activities in the programme, such as Special Activities and Other Activities. Personally, I’ve been deeply involved in some of these scientific activities. For example, I’m the organizer of the Closing Round Table, an activity which has been included in the ICM for the very first time. All the panellists are prestigious mathematicians, one of whom is Lennart Carleson, the winner of this year’s Abel Prize. The title of the round table - Are Pure and Applied Mathematics Drifting Apart? – is a reflection of the interest in the debate about this delicate dovetailing between two aspects of the profession, as well as the need to work closely together to make important advances in an eminently technological society. In fact, the programme for this ICM places the accent on the fruitful interaction among the different fields of mathematics, which until a short time ago were following divergent paths.
One of these ‘special activities’ is devoted to promulgation: Are you worried about the public image of mathematics?
Of course; most people don’t even realize that mathematics are useful, or even that they are present in our daily activity. Their intellectual value and appeal remain largely unknown. The image people have of them is of something boring; it’s a subject surrounded by a lack of understanding and a lack of communication. However, we intend to deal with this question at a round table proposed by the European Mathematical Society, in which I am also involved as joint organizer, and which has even broader objectives. The debate will revolve around how to make mathematics more accessible to scientists working in other disciplines; how to communicate the values of our research to politicians, who are the ones who decide on funding for research, and how to convey the real values of mathematics to young people who are on the point of choosing their future university education and PhD courses.
In addition to the invited speakers, there are also other contributions: Are you and your colleagues involved in the process of selection for these?
Certainly; I was about to explain this before. This is a very important part of the congress, and we on the committee have been responsible for organizing all of it. We were responsible for the “call for abstracts” in three different categories: oral communications, posters, and contributions on mathematical software. Then we evaluated the abstracts submitted for these contributions and programmed their presentation in the appropriate sessions.
How many were submitted and how many have been selected?
Well, the question is how many will be presented during the congress, because the figures may vary. Approximately 1,600 were submitted. After the evaluation process, and taking into account the withdrawals, we have at the moment about 1,400. However, we know from past experience that some of those who have submitted contributions will not actually attend the congress. I would hazard a guess that the final count will be about 800 oral communications, 300 posters, and 25 mathematical software presentations, which will be a great success in terms of participation.
Does this represent any change in comparison with previous congresses?
I think the final figures will be similar to those of the ICM 98 in Berlin. However, we’ve introduced some changes, such as increasing the time for oral communications from 15 to 20 minutes. We’ve also made a special effort to promote the presentation of posters. There’s not much tradition of posters among mathematicians, even though they are more informal, can be much richer, and facilitate greater interaction with people interested in the subject. One of our initiatives has been to organize a competition with prizes for the best posters in terms of presentation, visual quality and content. There are two prizes in each section, although they can also be declared void.
How many people are on the LPC?
There are nine members in all, covering a broad range of mathematical fields, although each one is working in collaboration with between ten and fifteen other people on the evaluation process and other tasks, because there’s a great deal of work to do.
Marta Sanz personal web page
http://www.mat.ub.es/~sanz
President of the IMU Programme Committee
Noga Alon, Tel Aviv University
nogaa@tau.ac.il
http://www.math.tau.ac.il/~nogaa
Local Programme Committee (LPC):
/organization/localprogramcommittee/
Plenary Session: Iain Johnstone
How to Convert Data into Information
For some time now the following wording can be found in many job advertisements: “Minimum requirements: Experience in the implantation of Information Management Systems (Business Intelligence, Data Warehousing, Data Mining)". Given the growing amount of data handled in many sectors, experience in information management is becoming increasingly necessary. The mathematician Iain Johnstone will give a plenary lecture at the ICM2006 on “High Dimensional Statistical Inference and Random Matrices”, which will deal with the management of massive amounts of data.
Until recently, statistics was centered on the study of one- and multi-dimensional random variables. However, the development of computation has led to the era of “data mining”, and all organizations – banks, hospitals, research centres – handle enormous quantities of data which must often be constantly available, such as the values of financial assets. Statistics in high dimensions is essential for analysing this data, and this will be the central theme of Johnstone’s lecture. This branch of mathematics shows how to organize and summarize data, whether it comes from an electrocardiogram, Internet traffic or stocks and shares, in such a way that it provides useful information.
Iain Johnstone was born in Melbourne in 1956. In 1977 he graduated in mathematics at the Australian National University, specializing in pure mathematics and statistics. He obtained his doctorate in statistics from Cornell University in 1981. Since then he has been associated with Stanford University in California, where in 1992 he became professor of statistics and biostatistics. His work in both statistics and biostatistics has received wide recognition.
Lecturer: Iain Johnstone
“High Dimensional Statistical Inference and Random Matrices”
Date: Friday, August 25th: 10:15-11:15
ICM2006 Scientific Programme
/scientificprogram/plenarylectures/
More about Iain Johnstone:
http://www-stat.stanford.edu/people/faculty/johnstone/
http://hcr3.isiknowledge.com/author.cgi?&link1=Browse&link2=Results&id=1345
The ICM Section by Section:
Mathematical Foundations of the Computational Sciences
One of the main mathematical problems in theoretical computer science is that known as P vs NP. A simple example suffices to understand the nature of this problem:
Suppose that we wish to select a group of one hundred people from a total of four hundred candidates. Selection must be carried out according to certain determining criteria (for example, in accordance with a list of incompatible pairs: Tom and Dick cannot be together, neither can Harry and So-and-So, nor Tom and Harry, etc.).
Bear in mind that the total number of ways of selecting one hundred elements out of four hundred easily exceeds the number of atoms making up the known universe. Not even an exhaustive search by means of a super-computer would be capable of covering every possible combination.
This is an example of what is known as an NP problem, whose main characteristic is that it is (relatively) easy to check whether a particular selection satisfies the given criteria. However, the task of generating a solution directly is in general quite difficult. P problems, on the other hand, are those for which direct methods exist for providing solutions (relatively easy).
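A small Python sketch (a hypothetical toy instance, invented here purely for illustration) makes the asymmetry explicit: verifying that a proposed committee respects every incompatibility takes a fraction of a second, whereas exhaustively generating and testing all possible committees is utterly infeasible.

from itertools import combinations
from math import comb

# Hypothetical toy version of the committee-selection problem described above.
incompatible = {(0, 1), (2, 7), (5, 9)}        # pairs that may not serve together

def is_valid(selection) -> bool:
    """Verification is the easy part: just test every forbidden pair."""
    chosen = set(selection)
    return not any(a in chosen and b in chosen for a, b in incompatible)

# Checking one proposed committee of 100 is fast, however it was produced:
print(is_valid(range(100, 200)))               # True

# Exhaustive search is fine for toy sizes (10 candidates, committees of 5)...
print(sum(is_valid(c) for c in combinations(range(10), 5)))

# ...but the real instance has C(400, 100) candidate committees, roughly 2 x 10^96,
# far more than the number of atoms in the known universe:
print(comb(400, 100))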
The P vs NP problem consists in exhibiting a problem whose possible solutions can be checked quickly, but for which no fast direct method of finding a solution exists, or else in demonstrating that no such problem exists. At present, the majority feeling in the scientific community is that such problems do indeed exist. Paradoxically, increasingly efficient algorithms are being found for problems traditionally considered difficult to solve. Manindra Agrawal, together with Neeraj Kayal and Nitin Saxena, has this year been awarded the Gödel Prize from the European Association for Theoretical Computer Science for demonstrating that the problem of determining whether a number is prime belongs to class P.
In fact, for many problems of practical importance, methods based on making a random selection and checking that it satisfies the appropriate restrictions have proved to be simpler and faster than the best direct algorithms known to date. Similarly, in combinatorics there are objects (such as error-correcting codes) whose existence is easy to establish by means of probabilistic methods, but for which the only explicit constructions available are very complex and only approximate the optimal parameters.
It is perhaps surprising to learn that in recent years results have been obtained which suggest that every randomized algorithm can be simulated by a deterministic algorithm of comparable efficiency. As examples we have the deterministic polynomial-time primality algorithm mentioned above, and Omer Reingold’s deterministic algorithm for solving connectivity problems in undirected graphs using only a logarithmic amount of memory.
Worthy of mention in this connection is the research of Ronitt Rubinfeld, which is centered on the study of algorithms of less than linear, that is to say sublinear, complexity. At a time when enormous amounts of data must be handled, even algorithms of linear complexity can prove to be impracticable. Many interesting problems exist for which algorithms of sublinear complexity are known, although they are often randomized and provide approximate solutions. With regard to this point, Luca Trevisan will speak on the elimination of randomness, quasi-randomness and explicit constructions of combinatorial objects such as error-correcting codes.
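As a toy illustration of that sublinear, randomized, approximate flavour (a Python sketch written for this summary, not one of Rubinfeld's actual algorithms): estimating the fraction of 1s in a huge bit array by random sampling inspects a fixed number of positions, independent of the array's length.

import random

def approx_fraction_of_ones(bits, samples=1000):
    """Estimate the fraction of 1s from a random sample of positions; the running
    time depends on `samples`, not on len(bits), so it is sublinear in the input."""
    hits = sum(bits[random.randrange(len(bits))] for _ in range(samples))
    return hits / samples

data = [1] * 300_000 + [0] * 700_000     # a million bits, 30% of them equal to 1
print(approx_fraction_of_ones(data))     # close to 0.3, typically within a few percent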
Jon Kleinberg’s talk will deal with graphs in which any pair of nodes are linked by a short length path (small world graphs) and with random methods of finding such paths. This avenue of research has applications to the theory of algorithms and to discrete probability.
Tim Roughgarden will address the connections between theoretical computer science and game theory, known as algorithmic game theory, with particular attention to the use of potential functions for delimiting the equilibria inefficiency of different models of selfish behaviour in networks. An example of this behaviour can be found in the well-known dilemma posed by two prisoners in solitary confinement who are given the choice between two options; if, because of their isolation from each other, the behaviour of each one is governed by self-interest, then the final result will be negative for both.
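The dilemma can be stated in a few lines of Python (the sentence lengths below are the usual textbook numbers, chosen only for illustration): whatever the other prisoner does, betraying is individually better, so two self-interested prisoners end up in the equilibrium that is worse for both than mutual cooperation, which is precisely the kind of equilibrium inefficiency that potential functions are used to bound.

payoff = {                                  # years in prison (mine, yours); lower is better
    ("quiet", "quiet"):   (1, 1),
    ("quiet", "betray"):  (10, 0),
    ("betray", "quiet"):  (0, 10),
    ("betray", "betray"): (5, 5),
}

def best_response(other_action):
    """A self-interested prisoner minimizes his own sentence, given the other's choice."""
    return min(("quiet", "betray"), key=lambda a: payoff[(a, other_action)][0])

# Betrayal is the best response to either choice, so both betray...
assert best_response("quiet") == "betray" and best_response("betray") == "betray"
# ...and each serves 5 years, although mutual silence would cost only 1 year each.
print(payoff[("betray", "betray")], "vs", payoff[("quiet", "quiet")])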
For his part, Alexander Holevo will present results related to quantum computation, which is a paradigm of computation based on quantum mechanics alternative to the classical paradigm, in which quantum bits are used instead of customary bits. The quantum paradigm makes new algorithms possible, and the same task may involve different complexity in classical computation and in quantum computation, all of which has aroused great expectation, since it renders some formerly intractable problems tractable. It is worth pointing out, for example, that Peter Shor was awarded the Nevanlinna Prize in 1998 for his polynomial complexity factoring algorithm based on quantum computation.
Manuel Ojeda Aciego
Lecturer in Applied Mathematics at the University of Málaga.
Satellite Conferences: Gran Canaria
Intelligent Computation Imitates the “Practical Wisdom” of Living Beings
The possibility of designing systems that imitate how human beings, animals and other living beings process information and solve problems has long been a dream of Computational Science. Today, after half a century of research work, techniques have been developed which provide benefits for a broad spectrum of sectors. Neural networks, expert systems, fuzzy logic and evolutionary computation are just some of the techniques that have proved capable of identifying complex patterns in vast sets of data, providing support for decision-making based on qualitative and quantitative factors, and giving solutions to problems involving high-complexity variables. Armed with these resources, computer science is now able to tackle designs for increasingly complex applications, manage greater volumes of information, seek “intelligent” solutions and discover unsuspected correlations in data bases.
The Fifth International Conference on Engineering Computational Technology and the Eighth International Conference on Computational Structures Technology, the two symposia to be held in Las Palmas de Gran Canaria this September, will cover all these topics, as well as debating the latest computational technology applications to all aspects of engineering, structural mechanics and other fields. Furthermore, the latest advances in hardware and software, algorithms and theoretical development will also be discussed.
The two conferences will be held concurrently, and participants will be able to attend the talks given at both. In addition to mathematicians and engineers, scientists from other specialized fields are expected to attend.
“The Fifth International Conference
on Engineering Computational Technology”
Person to contact: Gustavo Montero
e-mail: gustavo@dma.ulpgc.es
web: http://www.civil-comp.com/conf/ect2006.htm
“The Eighth International Conference
on Computational Structures Technology”
Person to contact: Rafael Montenegro
e-mail: rafa@dma.ulpgc.es
web: http://www.civil-comp.com/conf/cst2006.htm
Las Palmas de Gran Canaria
12-15 September 2006
Applications
Encrypted Messages
Exchange of information is an on-going process in 21st century society, where data must travel rapidly and constantly, whether it be in the form of bank transfers, telephone conversations or official documents. The robustness of the entire system rests on the possibility of encrypting information to enable it to be transmitted quickly and safely without being read or intercepted by prying eyes. Once again, mathematics has a key role to play in this process. According to Alejandro Melle, professor of algebra at the Complutense University of Madrid, many theoretical systems have been developed in the field of encryption. However, most of them cannot be used in applications because they are insufficient for ensuring a secure and fluid exchange of information.
The mathematical community is hard at work in both the generation of encryption algorithms and in cryptoanalysis; that is, the breaking of encryption algorithms. This is the combination that makes security really effective, since cryptographic algorithms must necessarily belong to the public sphere, so that the security of the cryptosystem is based on mathematics and not on secrecy. Trends in security processes and protocols currently used in the world are largely set by the NSA (National Security Agency) and by the NIST (National Institute of Standards and Technology), both North American organizations. Furthermore, the most frequently employed encryption protocols are based on two fundamental problems of mathematics: “the big number factorization problem” and “the discrete logarithm problem”.
In particular, the RSA protocol, which is the most commonly used, revolves around the idea of the big number factorization problem: given a very large number N, it is very difficult to find its prime factors (p, q), such that N = p x q. Nevertheless, even though it is difficult to find these factors, the increased availability of more powerful computers requires the use of ever larger numbers in order to prevent problems from occurring. As Melle explains, it is customary now to work with key sizes of 1024 bits or even 2048 bits, and the greater the key size, the slower the speed of operation. It is for that reason that the Fábrica Nacional de la Moneda y Timbre (National Mint), which acts as the State Certifying Authority (responsible for issuing digital certificates for official transactions with the Government), officially advises against the use of key sizes of 2048 bits.
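A toy Python sketch of this idea (with deliberately tiny primes; real keys use the 1024- or 2048-bit moduli discussed above, and this is an illustration rather than a production implementation): the public key is built from N = p × q, and recovering the private exponent requires knowing the factors of N.

from math import gcd

p, q = 61, 53                  # toy primes; real RSA uses primes hundreds of digits long
N = p * q                      # 3233: the public modulus
phi = (p - 1) * (q - 1)        # known only to someone who can factor N
e = 17                         # public exponent, coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)            # private exponent: the modular inverse of e

message = 65
ciphertext = pow(message, e, N)        # encryption: m^e mod N
recovered = pow(ciphertext, d, N)      # decryption: c^d mod N
assert recovered == message
print(N, ciphertext, recovered)        # 3233 2790 65

Breaking this toy key is immediate because 3233 factors instantly; for a 1024-bit modulus the same step is precisely the big number factorization problem described above.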
Moreover, protocols based on the discrete logarithm problem work either over finite fields or over elliptic curves defined over finite fields. Cryptography based on elliptic curves ensures the security of systems with much smaller key sizes, and is therefore used on devices where storage space is a determining factor, such as credit cards.
For further information:
Alejandro Melle: amelle@mat.ucm.es
Cryptomathic company
http://www.cryptomathic.com/labs/ellipticcurves.html
Second Cryptographic Hash Workshop
http://www.csrc.nist.gov/pki/HashWorkshop/index.html
2453 | dbpedia | 1 | 63 | https://mathoverflow.net/questions/53122/mathematical-urban-legends | en | Mathematical "urban legends" | MathOverflow | 2011-01-24T20:48:16
When I was a young and impressionable graduate student at Princeton, we scared each other with the story of a Final Public Oral, where Jack Milnor was dragged in against his will to sit on a committee, and noted that the class of topological spaces discussed by the speaker consisted of finite spaces. I had assumed this was an "urban legend", but then at a cocktail party, I mentioned this to a faculty member, who turned crimson and said that this was one of his students, who never talked to him, and then had to write another thesis (in numerical analysis, which was not very highly regarded at Princeton at the time). But now, I have talked to a couple of topologists who should have been there at the time of the event, and they told me that this was an urban legend at their time as well, so maybe the faculty member was pulling my leg.
So, the questions are: (a) any direct evidence for or against this particular disaster? (b) what stories kept you awake at night as a graduate student, and is there any evidence for or against their truth?
EDIT (this is unrelated, but I don't want to answer my own question too many times): At Princeton, there was supposedly an FPO in Physics, on some sort of statistical mechanics, and the constant $k$ appeared many times. The student was asked:
Examiner: What is $k?$
Student: Boltzmann's constant.
Examiner: Yes, but what is the value?
Student: Gee, I don't know...
Examiner: OK, order of magnitude?
Student: Umm, don't know, I just know $k\dots$
The student was failed, since he was obviously not a physicist.
Since this has become a free-for-all, allow me to share an anecdote that I wouldn't quite believe if I hadn't seen it myself.
I attended graduate school in Connecticut, where seminars proceeded with New England gentility, very few questions coming from the audience even at the end. But my advisor Fred Linton would take me down to New York each week to attend Eilenberg's category theory seminars at Columbia. These affairs would go on for hours with many interruptions, particularly from Sammy who would object to anything said in less than what he regarded as the optimal way. Now Fred had a tendency to doze off during talks. One particular week a well-known category theorist (but I'll omit his name) was presenting some of his new results, and Sammy was giving him a very hard time. He kept saying "draw the right diagram, draw the right diagram." Sammy didn't know what diagram he wanted and he rejected half a dozen attempts by the speaker, and then at least an equal number from the audience. Finally, when it all seemed a total impasse, Sammy, after a weighty pause said "Someone, wake up Fred." So someone tapped Fred on the shoulder, he blinked his eyes and Sammy said, in more measured tones than before, "Fred, draw the right diagram." Fred looked up at the board, walked up, drew the right diagram, returned to his chair, and promptly went back to sleep. And so the talk continued.
Thank you all for your indulgence - I've always wanted to see that story preserved for posterity and now I have.
The following story is a bit strange to be true, but we all believed it as students, and I think I still do believe that a somewhat weaker version of events must have indeed occurred.
Michael Maschler (most famous in Israel as author of the standard math textbooks for middle-schools and high-schools) was in the middle of teaching an undergraduate course- I think it was Linear Algebra- when one afternoon he walks into the lecture hall and announces the discovery of a new class of incredible Riemannian symmetric spaces with incredible properties, missed by Elie Cartan. The undergrads have no idea what he is on about; but the faculty all get very excited, and start sitting in on his Linear Algebra course. Ignoring the syllabus, Prof. Maschler begins to give lecture upon lecture about the new incredible symmetric spaces which he discovered. The excitement builds. Will he win a prize? Will he win the Fields Medal?...
And then, 3 lectures in, a student (some say it was Avinoam Mann, about whom many stories are told) gets up and asks, "Excuse me, sir. How can you distinguish your space from a sphere?"
Maschler turns to answer the "stupid question", but he freezes in mid-motion... Gradually, his face turns white. The lecture hall is so silent you can hear a pin drop. Finally, after what seems like an eternity, Prof. Maschler unfreezes. "By golly, a sphere it is," he murmurs in an undertone. And he picked the Linear Algebra textbook up from his desk, and resumed teaching where he had left off. The subject was never broached again.
And so, some Hebrew University students of my generation call spheres "Maschler spaces".
A wholly different set of "named urban legends" (in order of time):
Allegedly, Jacobi came to show Gauss his cool results on elliptic functions. Gauss' response was to open a drawer, point at a sheaf of papers, and say: that's great you are doing this! I have actually discovered these results a while ago, but did not think they were good enough to publish... To which Jacobi responded: Funny, you have published a lot worse results.
When the logician Carnap was immigrating to the US, he had the usual consular interview, where one of the questions was (and still is, I think): "Would you favor the overthrow of the US government by violence, or force of arms?". He thought for a while, and responded: "I would have to say force of arms..."
Finally, on the graduate experience front, it was rumored at Princeton that Bill Thurston's qualifying exams at Berkeley were held as his wife was in labor with his first child -- the department refused to change the date for such a minor reason! I have just asked him about this, and it's true...
EDIT A certain (now well-known) mathematician was a postdoc at IHES in the late 1980s. Call him R. R comes to lunch, and finds himself across the table from Misha Gromov. Gromov, very charmingly, asks him what he was working on. R tells him, Gromov has some comments, they have a good conversation, lunch is over. The next day R finds himself across from Gromov again. Misha's first question is: so, what are you working on now?
This one happened - I was there (as an observer, not a principal). Only the names have been changed.
X was Professor A's first doctoral student, and their relations weren't good. Rumor had it that the first time A saw most of X's thesis was when X handed in the final draft.
By the rules, there had to be a non-mathematician on the thesis defense committee - let's call him Professor H. Professor H made a valiant effort to read the thesis, understandably didn't get very far, but decided he was going to ask a question at the defense, to justify his being there in the first place. So he says to X, I notice you didn't provide a proof of your Lemma 2.3.1 - how does it go? X says, well, 2.3.1 isn't my work, it's a well-known result of van der Corput.
This satisfies H, but A says, OK, it's a result of van der Corput - but, how do you prove it? Well, X was prepared to answer questions on his own work, but hadn't brushed up on all the previous work that his thesis rested on. He hummed and hawed, started to give a proof, got stuck - at which point A gave him a hint. Using the hint, X got a little farther, but got stuck again - so A gave him another hint. This went on for an excruciating fifteen minutes (which, I'm sure, felt like 15 years to X), until finally Professor N broke the tension by saying, say, just whose thesis defense is this anyway, X's or van der Corput's?
2453 | dbpedia | 0 | 97 | https://gist.github.com/moyix/2a21fe829a8b898901fbba3cebdc8386 | en | aaronson.txt | GitHub Gist
2453 | dbpedia | 0 | 42 | https://igorpak.wordpress.com/2020/12/ | en | Igor Pak's blog | 2 posts published by igorpak during December 2020
Conjectures are a staple of mathematics. They are everywhere, permeating every area, subarea and subsubarea. They are diverse enough to avoid a single general adjective. They come in all shapes and sizes. Some of them are famous, classical, general, important, inspirational, far-reaching, audacious, exciting or popular, while others are speculative, narrow, technical, imprecise, far-fetched, misleading or recreational. That's a lot of beliefs about unproven claims, yet we persist in dispensing them, inadvertently revealing our experience, intuition and biases.
The conjectures also vary in attitude. Like a finish line ribbon they all appear equally vulnerable to an outsider, but in fact differ widely from race to race. Some are eminently reachable, the only question being who will get there first (think 100 meter dash). Others are barely on the horizon, requiring both great effort, a variety of tools, and an extended time commitment (think ironman triathlon). The most celebrated third type are like those Sci-Fi space expeditions, requiring multigenerational commitments spanning hundreds of years and often losing contact with the civilization they left behind. And we can't forget the romantic fourth type — like the North Star, no one actually wants to reach them, as they are largely used for navigation, to find a direction in uncharted waters.
Now, conjectures famously provide a foundation of the scientific method, but that's not at all how we actually think of them in mathematics. I argued back in this pointed blog post that citations are what matters most for day-to-day mathematical development, so one should take utmost care in making references. While this claim is largely uncontroversial and serves as a raison d'être for most GoogleScholar profiles, conjectures provide a convenient idealistic way out. Thus, it's much more noble and virtuous to say "I dedicated my life to the study of the XYZ Conjecture" (even if one never publishes anything), than "I am working hard writing so many papers to gain respect of my peers, get a promotion, and provide for my family". Right. Obviously…
But given this apparent (true or perceived) importance of conjectures, are you sure you are using them right? What if some/many of these conjectures are actually wrong, what then? Should you be flying that starship if there is no there there? An idealist would argue something like “it’s a journey, not a destination“, but I strongly disagree. Getting closer to the truth is actually kind of important, both as a public policy and on an individual level. It is thus pretty important to get it right where we are going.
What are conjectures in mathematics?
That's a stupid question, right? Conjectures are mathematical claims whose validity we are trying to ascertain. Is that all? Well, yes, if you don't care whether anyone will actually work on the conjecture. In other words, something about the conjecture needs to be interesting and inspiring.
What makes a conjecture interesting?
This is a hard question to answer because it is as much psychological as it is mathematical. A typical answer would be “oh, because it’s old/famous/beautiful/etc.” Uhm, ok, but let’s try to be a little more formal.
One typically argues “oh, that’s because this conjecture would imply [a list of interesting claims and known results]”. Well, ok, but this is self-referential. We already know all those “known results”, so no need to prove them again. And these “claims” are simply other conjectures, so this is really an argument of the type “this conjecture would imply that conjecture”, so not universally convincing. One can argue: “look, this conjecture has so many interesting consequences”. But this is both subjective and unintuitive. Shouldn’t having so many interesting conjectural consequences suggest that perhaps the conjecture is too strong and likely false? And if the conjecture is likely to be false, shouldn’t this make it uninteresting?
Also, wouldn’t it be interesting if you disprove a conjecture everyone believes to be true? In some sense, wouldn’t it be even more interesting if until now everyone one was simply wrong?
None of these are new ideas, of course. For example, faced with the need to justify the "great" BC conjecture, or rather 123 pages of survey on the subject (which is quite interesting and doesn't really need to be justified), the authors suddenly turned reflective. Mindful of the self-referential approach, which they quickly discard, they chose a different tactic:
We believe that the interest of a conjecture lies in the feeling of unity of mathematics that it entails. [M.P. Gomez Aparicio, P. Julg and A. Valette, “The Baum-Connes conjecture“, 2019]
Huh? Shouldn’t math be about absolute truths, not feelings? Also, in my previous blog post, I mentioned Noga Alon‘s quote that Mathematics is already “one unit“. If it is, why does it need a new “feeling of unity“? Or is that like one of those new age ideas which stop being true if you don’t reinforce them at every occasion?
If you are confused at this point, welcome to the club! There is no objective way to argue what makes certain conjectures interesting. It's all in our imagination. Nikolay Konstantinov once told me that "mathematics is a boring subject because every statement is equivalent to saying that some set is empty." He meant to be provocative rather than uninspiring. But the problem he is pointing at is quite serious.
What makes us believe a conjecture is true?
We already established that in order to argue that a conjecture is interesting we need to argue it's also true, or at least that we want to believe it to be true so as to have all those consequences. Note, however, that we argue that a conjecture is true in exactly the same way we argue it's interesting: by showing that it holds in some special cases, and that it would imply other conjectures which are believed to be true because they are also checked in various special cases. So in essence, this gives "true = interesting" in most cases. Right?
This is where it gets complicated. Say, you are working on the “abc conjecture” which may or may not be open. You claim that it has many consequences, which makes it both likely true and interesting. One of them is the negative solution to the Erdős–Ulam problem about existence of a dense set in the plane with rational pairwise distances. But a positive solution to the E-U problem implies the Harborth’s conjecture (aka the “integral Fáry problem“) that every graph can be drawn in the plane with rational edge lengths. So, counterintuitively, if you follow the logic above shouldn’t you be working on a positive solution to Erdős–Ulam since it would both imply one conjecture and give a counterexample to another? For the record, I wouldn’t do that, just making a polemical point.
I am really hoping you see where I am going. Since there is no objective way to tell if a conjecture is true or not, and what exactly is so interesting about it, shouldn’t we discard our biases and also work towards disproving the conjecture just as hard as trying to prove it?
What do people say?
It’s worth starting with a general (if slightly poetic) modern description:
In mathematics, [..] great conjectures [are] sharply formulated statements that are most likely true but for which no conclusive proof has yet been found. These conjectures have deep roots and wide ramifications. The search for their solution guides a large part of mathematics. Eternal fame awaits those who conquer them first. Remarkably, mathematics has elevated the formulation of a conjecture into high art. [..] A well-chosen but unproven statement can make its author world-famous, sometimes even more so than the person providing the ultimate proof. [Robbert Dijkgraaf, The Subtle Art of the Mathematical Conjecture, 2019]
Karl Popper thought that conjectures are foundational to science, even if somewhat idealized the efforts to disprove them:
[Great scientists] are men of bold ideas, but highly critical of their own ideas: they try to find whether their ideas are right by trying first to find whether they are not perhaps wrong. They work with bold conjectures and severe attempts at refuting their own conjectures. [Karl Popper, Heroic Science, 1974]
Here is how he reconciled somewhat the apparent contradiction:
On the pre-scientific level we hate the very idea that we may be mistaken. So we cling dogmatically to our conjectures, as long as possible. On the scientific level, we systematically search for our mistakes. [Karl Popper, quoted by Bryan Magee, 1971]
Paul Erdős was, of course, a champion of conjectures and open problems. He joked that the purpose of life is “proof and conjecture” and this theme is repeatedly echoed when people write about him. It is hard to overestimate his output, which included hundreds of talks titled “My favorite problems“. He wrote over 180 papers with collections of conjectures and open problems (nicely assembled by Zbl. Math.)
Peter Sarnak has a somewhat opposite point of view, as he believes one should be extremely cautious about stating a conjecture so people don’t waste time working on it. He said once, only half-jokingly:
Since we reward people for making a right conjecture, maybe we should punish those who make a wrong conjecture. Say, cut off their fingers. [Peter Sarnak, UCLA, c. 2012]
This is not an exact quote — I am paraphrasing from memory. Needless to say, I disagree. I don’t know how many fingers he wished Erdős should lose, since some of his conjectures were definitely disproved: one, two, three, four, five, and six. This is not me gloating, the opposite in fact. When you are stating hundreds of conjectures in the span of almost 50 years, having only a handful to be disproved is an amazing batting average. It would, however, make me happy if Sarnak’s conjecture is disproved someday.
Finally, there is a bit of a controversy whether conjectures are worth as much as theorems. This is aptly summarized in this quote about yet another champion of conjectures:
Louis J. Mordell [in his book review] questioned Hardy‘s assessment that Ramanujan was a man whose native talent was equal to that of Euler or Jacobi. Mordell [..] claims that one should judge a mathematician by what he has actually done, by which Mordell seems to mean, the theorems he has proved. Mordell’s assessment seems quite wrong to me. I think that a felicitous but unproved conjecture may be of much more consequence for mathematics than the proof of many a respectable theorem. [Atle Selberg, “Reflections Around the Ramanujan Centenary“, 1988]
So, what’s the problem?
Well, the way I see it, the efforts made towards proving vs. disproving conjectures are greatly out of balance. Despite all of Popper's high-minded claims about "severe attempts at refuting their own conjectures", I don't think there is much truth to that in the modern mathematical sciences. This does not mean that disproofs of famous conjectures aren't celebrated. Sometimes they are, see below. But it's clear to me that the proofs are celebrated more frequently, and to a much greater degree. I have only anecdotal evidence to support my claim, but bear with me.
Take prizes. Famously, the Clay Math Institute gives $1 million for a solution of any of these major open problems. But look closely at the rules. According to item 5b, except for the P vs. NP problem and the Navier–Stokes Equation problem, it gives nothing ($0) for a disproof of these problems. Why, oh why?? Let's look into CMI's "primary objectives and purposes":
To recognize extraordinary achievements and advances in mathematical research.
So it sounds like CMI does not think that disproving the Riemann Hypothesis needs to be rewarded because this wouldn't "advance mathematical research". Surely, you are joking? Whatever happened to "the opposite of a profound truth may well be another profound truth"? Why does the CMI want to put its thumb on the scale and support only one side? Do they not want to find out the solution, whatever it is? Shouldn't they be eager to dispense with the "wrong conjecture" so as to save numerous researchers from "advances to nowhere"?
I am sure you can see that my blood is boiling, but let’s proceed to the P vs. NP problem. What if it’s independent of ZFC? Clearly, CMI wouldn’t pay for proving that. Why not? It’s not like this kind of thing never happened before (see obligatory link to CH). Some people believe that (or at least they did in 2012), and some people like Scott Aaronson take this seriously enough. Wouldn’t this be a great result worthy of an award as much as the proof that P=NP, or at least a nonconstructive proof that P=NP?
If your head is not spinning hard enough, here is another amusing quote:
Of course, it’s possible that P vs. NP is unprovable, but that that fact itself will forever elude proof: indeed, maybe the question of the independence of P vs. NP is itself independent of set theory, and so on ad infinitum! But one can at least say that, if P vs. NP (or for that matter, the Riemann hypothesis, Goldbach’s conjecture, etc.) were proven independent of ZF, it would be an unprecedented development. [Scott Aaronson, P vs. NP, 2016].
Speaking of Goldbach’s Conjecture, the most talked about and the most intuitively correct statement in Number Theory that I know. In a publicity stunt, for two years there was a $1 million prize by a publishing house for the proof of the conjecture. Why just for the proof? I never heard of anyone not believing the conjecture. If I was the insurance underwriter for the prize (I bet they had one), I would allow them to use “for the proof or disproof” for a mere extra $100 in premium. For another $50 I would let them use “or independent of ZF” — it’s a free money, so why not? It’s such a pernicious idea of rewarding only one kind of research outcome!
Curiously, even for Goldbach’s Conjecture, there is a mild divergence of POVs on what the future holds. For example, Popper writes (twice in the same book!) that:
[On whether Goldbach’s Conjecture is ‘demonstrable’] We don’t know: perhaps we may never know, and perhaps we can never know. [Karl Popper, Conjectures and Refutations, 1963]
Ugh. Perhaps. I suppose anything can happen… For example, our civilizations can “perhaps” die out in the next 200 years. But is that likely? Shouldn’t the gloomy past be a warning, not a prediction of the future? The only thing more outrageously pessimistic is this theological gem of a quote:
Not even God knows the number of permutations of 1000 avoiding the 1324 pattern. [Doron Zeilberger, quoted here, 2005]
Thanks, Doron! What a way to encourage everyone! Since we know from numerical estimates that this number is ≈ 3.7 × 10^1017 (see this paper and this follow up), Zeilberger is suggesting that large pattern avoidance numbers are impossibly hard to compute precisely, already in the range of only about 1018 digits. I really hope he is proved wrong in his lifetime.
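To see what is being claimed, here is a brute-force Python sketch (written for illustration; it is not how the numerical estimates above were obtained) that counts 1324-avoiding permutations by checking every permutation of {1, …, n}; it already becomes hopeless around n = 10 or 11, which is why the count at n = 1000 is accessible only through estimates.

from itertools import combinations, permutations

def avoids_1324(perm) -> bool:
    """True if no positions i<j<k<l carry values ordered like the pattern 1324."""
    return not any(perm[i] < perm[k] < perm[j] < perm[l]
                   for i, j, k, l in combinations(range(len(perm)), 4))

def count_avoiders(n: int) -> int:
    return sum(avoids_1324(p) for p in permutations(range(1, n + 1)))

# The first values are 1, 2, 6, 23, 103, ...; the naive count costs on the order
# of n! * n^4 operations, astronomically far from ever reaching n = 1000.
for n in range(1, 8):
    print(n, count_avoiders(n))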
But I digress. What I mean to emphasize is that there are many ways a problem can be resolved. Yet some outcomes are considered more valuable than others. Shouldn't the research achievements be rewarded, not the desired outcome? Here is yet another colorful opinion on this:
Given a conjecture, the best thing is to prove it. The second best thing is to disprove it. The third best thing is to prove that it is not possible to disprove it, since it will tell you not to waste your time trying to disprove it. That's what Gödel did for the Continuum Hypothesis. [Saharon Shelah, Rutgers Univ. Colloquium, 2001]
Why do I care?
For one thing, disproving conjectures is part of what I do. Sometimes people are a little shy to unambiguously state them as formal conjectures, so they phrase them as questions or open problems, but then clarify that they believe the answer is positive. This is a distinction without a difference, or at least I don’t see any (maybe they are afraid of Sarnak’s wrath?) Regardless, proving their beliefs wrong is still what I do.
For example, here is my old blog post on my disproof of the Noonan–Zeilberger Conjecture (joint with Scott Garrabrant). And in this recent paper (joint with Danny Nguyen), we disprove in one big swoosh Barvinok's Problem, Kannan's Problem, and Woods' Conjecture. Just this year I disproved three conjectures:
The Kirillov–Klyachko Conjecture (2004) that the reduced Kronecker coefficients satisfy the saturation property (this paper, joint with Greta Panova).
The Brandolini et al. Conjecture (2019) that concrete lattice polytopes can multitile the space (this paper, joint with Alexey Garber).
Kenyon’s Problem (c. 2005) that every integral curve in R3 is a boundary of a PL surface comprised of unit triangles (this paper, joint with Alexey Glazyrin).
On top of that, just two months ago in this paper (joint with Han Lyu), we showed that the remarkable independence heuristic by I. J. Good for the number of contingency tables fails badly even for nearly all uniform marginals. This is not exactly a disproof of a conjecture, but it's close, since the heuristic was introduced back in 1950 and continues to work well in practice.
In addition, I am currently working on disproving two more old conjectures which will remain unnamed until the time we actually resolve them (which might never happen, of course). In summary, I am deeply vested in disproving conjectures. The reasons why are somewhat complicated (see some of them below). But whatever my reasons, I demand and naively fully expect that my disproofs be treated on par with proofs, regardless of whether this expectation bears any relation to reality.
My favorite disproofs and counterexamples:
There are many. Here are just a few, some famous and some not-so-famous, in historical order:
Fermat‘s conjecture (letter to Pascal, 1640) on primality of Fermat numbers, disproved by Euler (1747)
Tait’s conjecture (1884) on hamiltonicity of graphs of simple 3-polytopes, disproved by W.T. Tutte (1946)
General Burnside Problem (1902) on finiteness of periodic groups, resolved negatively by E.S. Golod (1964)
Keller’s conjecture (1930) on tilings with unit hypercubes, disproved by Jeff Lagarias and Peter Shor (1992)
Borsuk’s Conjecture (1932) on partitions of convex sets into parts of smaller diameter, disproved by Jeff Kahn and Gil Kalai (1993)
Hirsch Conjecture (1957) on the diameter of graphs of convex polytopes, disproved by Paco Santos (2010)
Woods’s conjecture (1972) on the covering radius of certain lattices, disproved by Oded Regev, Uri Shapira and Barak Weiss (2017)
Connes embedding problem (1976), resolved negatively by Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright and Henry Yuen (2020)
In all these cases, the disproofs and counterexamples didn’t stop the research. On the contrary, they gave a push to further (sometimes numerous) developments in the area.
Why should you disprove conjectures?
There are three reasons, of different nature and importance.
First, disproving conjectures is opportunistic. As mentioned above, people seem to try proving much harder than they try disproving. This creates niches of opportunity for an open-minded mathematician.
Second, disproving conjectures is beautiful. Let me explain. Conjectures tend to be rigid, as in “objects of the type pqr satisfy property abc.” People like me believe in the idea of “universality“. Some might call it “completeness” or even “Murphy’s law“, but the general principle is always the same. Namely: it is not sufficient that one wishes that all pqr satisfy abc to actually believe in the implication; rather, there has to be a strong reason why abc should hold. Barring that, pqr can possibly be almost anything, so in particular non-abc. While some would argue that non-abc objects are “ugly” or at least “not as nice” as abc, the idea of universality means that your objects can be of every color of the rainbow — nice color, ugly color, startling color, quiet color, etc. That kind of palette has its own sense of beauty, but it’s an acquired taste I suppose.
Third, disproving conjectures is constructive. It depends on the nature of the conjecture, of course, but one is often faced with the necessity of constructing a counterexample. Think of this as an engineering problem of building some pqr which at the same time is not abc. Such a construction, if at all possible, might be difficult, time-consuming and computer-assisted. But so what? What would you rather do: build a mile-high skyscraper (none exist yet) or prove that this is impossible? Curiously, in CS Theory both algorithms and (many) complexity results are constructive (you need gadgets). Even the GCT is partially constructive, although explaining that would take us a while.
What should the institutions do?
If you are an institution which awards prizes, stop with the legal nonsense: “We award […] only for a publication of a proof in a top journal”. You need to set up a scientific committee anyway, since otherwise it’s hard to tell sometimes if someone deserves a prize. With mathematicians you can expect anything anyway. Some would post two arXiv preprints, give a few lectures and then stop answering emails. Others would publish only in a journal where they are Editor-in-Chief. It’s stranger than fiction, really.
What you should do is say in the official rules: “We have [this much money] and an independent scientific committee which will award any progress on [this problem] partially or in full as they see fit.” Then a disproof or an independence result will receive just as much as the proof (what's done is done, what else are you going to do with the money?). This would also allow some flexibility for partial solutions. Say, somebody proves Goldbach’s Conjecture for integers > exp(exp(10^100000)), way beyond the computational power needed to check the remaining integers. I would give this person at least 50% of the prize money, leaving the rest for future developments of possibly many people improving on the bound. However, under the old prize rules such a person gets bupkes for their breakthrough.
What should the journals do?
In short, become more open to results of a computational and experimental nature. If this sounds familiar, that's because it's a summary of Zeilberger’s Opinions, viewed charitably. He is correct on this. This includes publishing results of the type “Based on computational evidence we believe in the following UVW conjecture” or “We develop a new algorithm which confirms the UVW conjecture for n < 13”. These are still contributions to mathematics, and the journals should learn to recognize them as such.
To put this in the context of our theme, it is clear that a lot more effort has been placed on proofs than on finding counterexamples. However, in many areas of mathematics there are no small counterexamples, so a heavy computational effort is crucial for any hope of finding one. Such work is not as glamorous as traditional papers. But really, when it comes to standards, if a journal is willing to publish the study of something like the “null graphs”, the ship has sailed for you…
Let me give you a concrete example where a computational effort is indispensable. The curious Lovász conjecture states that every finite connected vertex-transitive graph contains a Hamiltonian path. This conjecture has got to be false. It hits every red flag — there is really no reason why pqr = “vertex transitive” should imply abc = “Hamiltonian”. The best lower bound for the length of the longest (self-avoiding) path is only about the square root of the number of vertices. In fact, even the original wording by Lovász shows he didn't believe the conjecture is true (also, I asked him and he confirmed).
Unfortunately, proving that some potential counterexample is not Hamiltonian is computationally difficult. I once had an idea of one (a nice cubic Cayley graph on “only” 3600 vertices), but Bill Cook quickly found a Hamiltonian cycle dashing my hopes (it was kind of him to look into this problem). Maybe someday, when the TSP solvers are fast enough on much larger graphs, it will be time to return to this problem and thoroughly test it on large Cayley graphs. But say, despite long odds, I succeed and find a counterexample. Would a top journal publish such a paper?
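To make “thoroughly test it” a bit more concrete, here is a minimal sketch of the kind of computation involved, at toy scale. The group (S_4), the generators (adjacent transpositions), and the naive backtracking search are my illustrative choices; they have nothing to do with the 3600-vertex candidate above, which is exactly the regime where you need real TSP machinery instead.

# Build a small Cayley graph and look for a Hamiltonian cycle by backtracking.
# Toy scale only (24 vertices); hopeless for graphs with thousands of vertices.
from itertools import permutations

def cayley_graph(elements, generators, mul):
    # Adjacency sets of the undirected Cayley graph; the generators here are
    # involutions, so the connection set is closed under inverses.
    adj = {g: set() for g in elements}
    for g in elements:
        for s in generators:
            h = mul(g, s)
            adj[g].add(h)
            adj[h].add(g)
    return adj

def has_hamiltonian_cycle(adj):
    # Naive depth-first backtracking over self-avoiding paths.
    start = next(iter(adj))
    n = len(adj)
    path, seen = [start], {start}

    def extend():
        if len(path) == n:
            return start in adj[path[-1]]    # can we close the cycle?
        for v in adj[path[-1]]:
            if v not in seen:
                path.append(v)
                seen.add(v)
                if extend():
                    return True
                seen.remove(path.pop())
        return False

    return extend()

# Example: S_4 generated by adjacent transpositions (the "bubble-sort" Cayley graph).
elements = list(permutations(range(4)))
def mul(g, s):                               # right-multiply permutation g by s
    return tuple(g[s[i]] for i in range(len(s)))
gens = [(1, 0, 2, 3), (0, 2, 1, 3), (0, 1, 3, 2)]
print(has_hamiltonian_cycle(cayley_graph(elements, gens, mul)))   # True

Even this toy version makes the asymmetry clear: a Hamiltonian cycle, once found, is trivial to verify, while certifying that none exists is the expensive direction.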
Editor’s dilemma
There are three real criteria for a journal evaluating the solution of an open problem:
Is this an old, famous, or well-studied problem?
Are the tools interesting or innovative enough to be helpful in future studies?
Are the implications of the solution to other problems important enough?
Now let's try a hypothetical experiment. Let's say a paper which solves a famous open problem in Combinatorics is submitted to a top math journal. Further, let's say somebody already proved it is equivalent to a major problem in TCS. This checks criteria 1 and 3. Until not long ago it would have been rejected regardless, so let's assume this is happening relatively recently.
Now imagine two parallel worlds, where in the first world the conjecture is proved on 2 pages using beautiful but elementary linear algebra, and in the second world the conjecture is disproved in a 2-page summary of a detailed computational search. So in neither world do we have much to satisfy criterion 2. Now, a quiz: in which world will the paper be published?
You may have recognized the first world as the story of Hao Huang's elegant proof of the induced subgraphs of hypercubes conjecture, which implies the sensitivity conjecture. The Annals published it, I am happy to learn, in a welcome break with the past. But unless we are talking about some 200-year-old famous conjecture, I can't imagine the Annals accepting a short computational paper in the second world. Indeed, it took a bit of a scandal to accept even the 400-year-old Kepler conjecture, which was proved in a remarkable computational work.
Now think about this. Is any of that fair? Shouldn’t we do better as a community on this issue?
What do other people do?
Over the years I asked a number of people about the uncertainty created by the conjectures and what they do about it. The answers surprised me. Here I am paraphrasing them:
Some were dumbfounded: “What do you mean this conjecture could be false? It has to be true, otherwise nothing I am doing makes much sense.”
Others were simplistic: “It’s an important conjecture. Famous people said it’s true. It’s my job to prove it.”
Third were defensive: “Do you really think this conjecture could be wrong? Why don’t you try to disprove it then? We’ll see who is right.”
Fourth were biblical: “I tend to work 6 days a week towards the proof and one day towards the disproof.”
Fifth were practical: “I work on the proof until I hit a wall. I use the idea of this obstacle to try constructing potential counterexamples. When I find an approach to discard such counterexamples, I try to generalize the approach to continue working on the proof. Continue until either side wins.”
If the last two seem sensible to you, that's because they are. However, I bet the fourth are just grandstanding — no way they actually do that. The fifth sound great when this is possible, but that's exceedingly rare, in my opinion. We live in a technical age when proving new results often requires a great deal of effort and technology. You likely have tools and intuition to work in only one direction. Why would you want to waste time working in another?
What should you do?
First, remember to make conjectures. Every time you write a paper, tell a story of what you proved. Then tell a story of what you wanted to prove but couldn't. State it in the form of a conjecture. Don't be afraid of being wrong, or of being right but oversharing your ideas. It's a downside, sure. But the upside is that your conjecture might prove very useful to others, especially young researchers. It might advance the area, or help you find a collaborator to resolve it.
Second, learn to check your conjectures computationally in many small cases; a toy example of what I mean appears after this list. It's important to give supporting evidence so that others take your conjectures seriously.
Third, learn to make experiments, explore the area computationally. That’s how you make new conjectures.
Fourth, understand yourself. Your skills, your tools. Your abilities like problem solving, absorbing information from the literature, or making bridges to other fields. Faced with a conjecture, use this knowledge to understand whether, at least in principle, you might be able to prove or disprove it.
Fifth, actively look for collaborators. Those who have skills, tools, or abilities you are missing. More importantly, they might have a different POV on the validity of the conjecture and how one might want to attack it. Argue with them and learn from them.
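As promised after the second point above, here is the kind of throwaway computation I have in mind for checking a conjecture in many small cases. The choice of conjecture (Goldbach), the bound, and the sieve are arbitrary for this sketch:

# Check Goldbach's conjecture ("every even n >= 4 is a sum of two primes")
# for all even n up to a small bound, as a toy example of gathering evidence.
def primes_up_to(limit):
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return sieve                                 # sieve[k] == 1 iff k is prime

def goldbach_holds(n, sieve):
    return any(sieve[p] and sieve[n - p] for p in range(2, n // 2 + 1))

LIMIT = 10_000
sieve = primes_up_to(LIMIT)
failures = [n for n in range(4, LIMIT + 1, 2) if not goldbach_holds(n, sieve)]
print(failures)                                  # expect [] at this scale

None of this proves anything, of course; it just tells you whether a conjecture deserves to be taken seriously.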
2453 | dbpedia | 3 | 17 | https://www.aip.org/history-programs/niels-bohr-library/oral-histories/47147 | en | American Institute of Physics | 2022-03-29T10:05:23-04:00
Zierler:
OK, this is David Zierler, Oral Historian for the American Institute of Physics. It is February 26, 2021. I am so happy to be here with Professor John Preskill. John, it's great to see you. Thank you so much for joining me.
Preskill:
Well, I'm glad to do it, David.
Zierler:
To start, would you please tell me your titles and institutional affiliations? And you'll notice I pluralize everything because I know you have more than one.
Preskill:
Oh, OK. Well, I am the Richard P. Feynman Professor of Theoretical Physics at the California Institute of Technology, and I'm the Director of the Institute for Quantum Information and Matter at Caltech. And we can leave it at that.
Zierler:
When were you named to the Feynman Chair?
Preskill:
Well, when Kip [Thorne] retired. Actually, the background on that is interesting because the donor endowed the chair around 1990. So there was a lot of discussion at Caltech about how we would make use of the leverage of being able to offer the Feynman Chair in Theoretical Physics to someone. And Kip and I actually went on a secret mission to Cambridge to offer it to Stephen Hawking in 1991.
Zierler:
Oh, wow. I did not know that.
Preskill:
Yeah. And so, he was very polite. Of course, he already was the Lucasian Professor obviously. So, the upshot of that was that Stephen didn't accept the Feynman Chair, but he did agree to make regular visits to Caltech. Which he did for many years. He would come for a variable amount of time, typically a month or six weeks in the depths of the Cambridge winter and enjoy the California sunshine. But then, Kip wound up being the Feynman Chair. And I guess I'd have to look up when he retired. It was probably about ten years ago. And I became the Feynman Chair at that point.
Zierler:
Is it known who the donor was back in 1990? Or is that anonymous?
Preskill:
I think I can say. Actually, it's an interesting fellow who's still with us. Mike Scott is his name. And he was, at one time in the early days, the CEO of Apple Computer. When Steve Jobs and Wozniak founded Apple Computer, and they wanted to produce the Apple II, they needed to raise capital, and their investors insisted that they bring in someone with business experience to help manage the company, and that was Mike Scott, who was at Apple for just a few years and had other interesting experiences in his career. But the connection with Feynman is that Mike Scott was a Caltech alum who was among the class that attended the Feynman lectures at ‘61-‘62, ‘62-‘63, the whole two-year sequence. And like many of the students who attended, he was profoundly impressed by that experience. And I think that's what induced him to endow a chair in Feynman's honor.
Zierler:
John, you knew Feynman, and Feynman being who he was, what did it mean for you when you were named with this honor?
Preskill:
Well, it gives you a sense of impostor syndrome, right? Who can live up to that title? But I guess I just shrug it off and carry on.
Zierler:
[laugh] Tell me about the origins of IQIM, the Institute.
Preskill:
Well, of course, I started out my career doing particle physics and with occasional forays into cosmology. So the backstory of IQIM is, I made a mid-career shift in research interests in the mid-1990s. And that happened around 1994, when I learned about Peter Shor's factoring algorithm. But I was primed, I think, to get interested in the subject of quantum information and quantum computing for a couple of reasons. One was that just the previous year, the SSC had been canceled. And for my generation of particle physicists, this was really a terrible blow because we had come along a little bit too late to participate in erecting the Standard Model. I started graduate school in 1975, and so, there was still controversy about what the right electroweak model was at that stage. But the J/psi had been discovered the year before, all the great stuff had been done. Not that there wasn't still a lot of interesting particle physics to do.
But our big hope for beyond the Standard Model physics, we were going to be the generation that would unravel the origin of electroweak symmetry breaking and all the new physics associated with it, and the SSC was going to be the source of the rich phenomenology that we thought we were going to mine as theorists to probe more deeply into nature's secrets. And when the SSC was canceled for complicated political reasons, even though they had already been digging the tunnel in Texas and had sunk a couple of billion dollars in it, one realized it was going to be quite a while before we were going to have the experimental input that we needed to really understand what was going on with physics beyond the Standard Model, which it was generally believed would be discovered when we got up to those energies. And so, I was sort of in a mood to think about different things.
And in fact, while I was sort of waiting, as many of us were, for the SSC to come along, I had already been doing things which were not very phenomenological, like thinking about black holes and how they process information. So, I had sort of become acquainted with principles of quantum information—which were not so widely known by physicists except for a small community—just because I thought that might be useful for understanding what's going on with black holes. And when Shor discovered this factoring algorithm, about a month later Artur Ekert, who was a pioneer of quantum cryptography, visited Caltech and gave a talk. And he mentioned this recent breakthrough that Shor had discovered that you could efficiently factor with quantum computers. And it's possible I've embellished the memory in hindsight, but I was quite amazed by this.
Feynman had been interested in quantum computing, and I knew that. And I wasn't very impressed by the whole subject. I didn't quite see the point of it. But I realized with the discovery of Shor's algorithm that it really meant there was a big difference between what problems we'd be able to solve and which ones we'd never be able to solve with computing technology because it's a quantum world instead of a classical one. Things became possible thanks to quantum mechanics that just wouldn't be possible in a different type of physical world. And I still think that's one of the most amazing things we've ever learned about quantum physics, the difference between quantum and classical. Sorry, I'm giving a rather longwinded answer to your question.
Zierler:
No, this is the intellectual origins of the Institute.
Preskill:
Correct. And so, I had a colleague at Caltech, Jeff Kimble, still at Caltech, and he was also quite excited about this surge of interest in quantum computing. He was a quantum optics guy. And he had made experimental advances in squeezing states of light and using them for metrology and stuff. But it's different now. Atomic, Molecular, and Optical physics is widely recognized by most physicists as an intellectually rich and exciting field. And that's happened largely, I think, in the last 25 years, because of the connections with quantum computing and because of the connections with condensed matter, the possibility of simulating interesting states of quantum matter using these AMO systems. But back then, there was sort of a feeling in the quantum optics community that they didn't get respect because, from the perspective of somebody like me, what was the point, you know? As a particle physicist, I was trying to understand new laws of nature. But what can you do with quantum optics in the lab that you couldn't just figure out with pencil and paper what was going to happen?
And quantum computing, at least in principle, kind of changed that. Because I think it drove home that you should be able to do experiments where you're learning something from the experiment that you couldn't just simulate or calculate. And so, Jeff would have to say himself what got him excited, but I think he realized he had experimental tools that were very relevant to exploring quantum information. And I got excited from the theory perspective and wanted to learn more about what was experimentally possible. So, we formed what we called the Quantum Computing Club at the time, and we started having joint group meetings. And so, I learned some things. I've never been deeply knowledgeable, really, about how experiments work, but I learned a lot more than I had known about what you could do with quantum optics tools. And meanwhile, I was trying to understand whether you could protect quantum computers from noise, which led to the development of the idea of quantum error correction.
But actually, we wound up getting a DARPA grant. This was kind of interesting. The Department of Defense agencies had an immediate interest in quantum computing after Shor's algorithm because of the applications to cryptology. And they were the early supporters of the research in the field. Including the development of experimental tools. And DARPA, in particular, put out a call, and we submitted a proposal, and we got funded for a project which we called QUIC. I guess it was Quantum Information and Computation, but QUIC for short. And there were five PIs, and that helped Jeff do his first teleportation experiment in the lab, and I worked on quantum error correction ideas, and stuff like that. It was a five-year award, and after two years, they cut it. There was a new program manager. This was, I learned, not unprecedented for DARPA --- a new program manager comes in, and what you think is a multi-year commitment turns out not to be.
But I had learned something under Jeff's tutelage, that with money, you can do things. As a theorist and particle physicist, I had it pretty easy as far as funding a group because at Caltech, we had this big DOE high energy physics umbrella grant, which was mostly for experimentalists, and the theorists were a little pimple on it. And that was enough for us to support post-docs and so on. And we also had Caltech funding for theoretical physics post-docs, which helped a lot. But when we had this DARPA funding, I was, for example, able to bring in Alexei Kitaev as a year-long visitor and pay him a salary. I'd never had the resources to do that sort of thing. So that was kind of an eye opener for me.
In my naivete, I'd never worried much about raising money, and applying for grants, and stuff because I sort of had it made with this DOE grant, which was always renewed time after time. But then, with Jeff's encouragement, we applied for a quantum computing center. Actually, NSF started to show an interest in quantum information in the late ‘90s, and they asked a group of us, including me, to organize a workshop because that's how they do things, which took place in the fall of 1999. These were the waning days of the Clinton Administration. And the conventional wisdom at the time was that, partly due to Al Gore's influence, NSF got a surge of funding for a program they called Information Technology Research, which included a lot of practical things, but also sort of a lunatic fringe of blue sky research. And that's what we were part of.
We applied to this ITR program, and we asked for a big center, which would encompass experiment and theory. And the NSF program manager involved, a guy named Mike Foster, said he wasn't interested in the experiment, only in the theory. So we wound up getting a million dollars a year just for a theory institute. This was in the fall of 2000, which was the Institute for Quantum Information. There was no Matter then, just a theory institute. But the timing was great because there were all these young people who were excited about the field, who were getting PhDs. We were able to build a group of really strong post-docs and attract Caltech students into research in that area. And we could pretty much get any outstanding post-doc we wanted because there wasn't so much competition then. There's a lot more now. So, we had an amazing group of young people in the early 2000s who came through, many of whom are leaders of research in quantum information now, like Patrick Hayden, and Guifré Vidal, and Frank Verstraete, and quite a few others.
Zierler:
Today, of course, there are several centers that have a similar research focus. But at the time, there were not, of course. You were really at the vanguard of all of this. So, the question is, what was your model? What other centers were out there that you might have used to base your ideas on…on where this ultimately would develop?
Preskill:
Well, actually, my model was the experience that I had with the particle theory group, which I didn't appreciate immediately was a bit culturally different than most research efforts in AMO physics and what was then the nascent interest in quantum information—which was I wanted to bring in the best young scientists and give them a lot of freedom, to create a community of people who had some common interests, but also complementary backgrounds. So, I deliberately would put a computer scientist in the same office with a physicist so those guys would talk. And I guess that was the model. Now, the Institute, the next one which had a big investment, was the Perimeter Institute. When it was founded, they saw quantum information as a core part of their mission. And then, later, there was another institute at Waterloo, The Institute for Quantum Computing, which had a lot of resources, all thanks to Mike Lazaridis, and the Canadian government, and government of Ontario.
But we got off to such a quick start, and we already had a track record of bringing in great people who did great research while they were at Caltech, and then went on to later career success. And we were able to continue to recruit the best young people very successfully. The first thing I did with the IQI funding is, we hired Alexei Kitaev. This is interesting, too. When I got that DARPA money, I thought, “Hey, I could bring in a long-term visitor with this funding. So, who should that be?” And so, I asked a few colleagues for suggestions. And indirectly, I heard from Richard Joza that he had met this amazing young Russian at a conference. The conference was in Japan. That was Kitaev. And I didn't know much about Kitaev, but he had a paper on the arXiv, which I then read and was blown away by. He had sort of reformulated Shor's algorithm in a more general and powerful way.
And so, I arranged to have him visit in 1997. Actually, the legend about that paper that he wrote is, in 1994 he heard about Shor having discovered that you could factor with a quantum computer. He was at the Landau Institute in Russia. And he wasn't in the in-group that had access to the preprint. It wasn't posted on the arXiv, although the arXiv existed at that time. But it was just kind of circulating around by email. And he wasn't able to get it. So he had to figure it out for himself. Now, it's a huge advantage to know that it's possible, so he had it on good authority that Peter Shor had discovered an algorithm for factoring large numbers efficiently on a quantum computer, and then he figured it out. His approach was different and more general than Shor's. That was the paper I read, his version of what we now call the Hidden Subgroup Problem. He called it the Abelian Stabilizer Problem, and Shor's algorithm fit into that framework.
So, this guy is clearly very interesting. And I arranged it for him to first come for a shorter visit. It was his first time in the US, I think. And the first day that we met and sat down for coffee, he started telling me about this idea he had to use non-Abelian anyons for quantum computing. And here's something funny. I was very interested in non-Abelian anyons. That was one of the things I was fooling around with waiting for the SSC to turn on. Non-Abelian anyons are particles in a two-dimensional medium which have exotic statistics, more general than bosons or fermions. And non-Abelian means that you can actually have a state of many of these particles be modified just by braiding them around one another.
And what Kitaev had realized is that this was an approach to quantum computing that would be resistant to noise because it was topological. The effect of exchanging a pair of these anyons, because the information is encoded in a very, very nonlocal way, the environment buffeting the system locally doesn't interfere with it. This was a very brilliant idea. And I understood it immediately after 15 minutes of taking about it over coffee because I knew about non-Abelian anyons, and I was very interested in quantum error correction. And it had never occurred to me that these two things that I was very interested in were related. And I guess that shows that I'm not Kitaev.
Zierler:
But you can spot a Kitaev when you see one.
Preskill:
Well, that's true, and I was ahead of my time in that regard. He was underappreciated for sure in 1997. And so, he came back the next year supported by this DARPA grant as a visiting professor. And we actually jointly taught a course on quantum computing. And then, when we got the NSF award in 2000, the first thing I tried to do was hire him. Of course, I couldn't hire him as a professor by myself. That had to go through the usual Caltech hiring process. But I could hire him, although it had to be approved by a committee, as what we called at the time a senior research associate. Now, we call it a research professor. It's a position that we have at Caltech for people who are world-leading researchers with stature comparable to a tenured professor, but it's a soft money position, and it's paid out of a grant. No teaching responsibilities.
Zierler:
This is what Sean Carroll has, for example.
Preskill:
That's what Sean has. And that's what John Schwarz had, actually, when I first came to Caltech. That's another story, speaking of someone who was underappreciated for a while. Yeah. And so, we had Kitaev, and we had this amazing group of young people. And then, a lot of students came through and trained. I think probably in terms of impact on science, leading the IQI and establishing it is the most impactful thing I've done when you look at all the people who came through and how they've become scientific leaders. But anyway, to come around to answering your question, for ten years, we were just the IQI, and we went through several cycles of renewal at NSF. And Jeff spearheaded this. I wouldn't have thought to request a grant to start a theory institute if Jeff Kimble hadn't been pushing me, so I'm grateful to him for that.
And in 2010, we applied for the Physics Frontier Center program at NSF, very competitive thing. There are ten of them in the country in different areas of physics. And that turned out to be successful, and as a result, what had been the IQI expanded to a larger center that did encompass both theory and experiment, pretty much as we had envisioned back when we originally proposed it in 2000. We had something like that in mind, but NSF at that time said, “We only want the theory.” But in 2011 we became the Institute for Quantum Information and Matter. And now, that's been around for almost ten years, since 2011, and has been very successful.
Zierler:
John, that's a great overview of your current titles and affiliations. So before we take it all the way back and develop your personal history, I'd like to ask a very in-the-moment question. As you say, of course, we're all working from our home offices now. As a theoretical physicist, I wonder if in some ways, these past 11 months have been more productive for you because the social and physical isolation perhaps has given you a bit more headspace or bandwidth to work on some equations or problems that you might otherwise not have. On the other hand, I wonder if your style as a scientist really depends on in-person, interpersonal interaction, and in many ways, your research agenda has suffered as a result.
Preskill:
Well, as the question suggests, it's a complicated issue with tradeoffs. One big change for me is I was traveling a lot. And I get, of course, as we all do, lots of invitations, most of which I turn down. But for opportunities to lecture, attend conferences, and things like that, there were a certain number of them which I really thought I had to accept. So the last couple of years, I had been making lots of trips. And it was really a bit of a relief to put a stop to that for a while and not be chasing around so much.
On the other hand, the kinds of interactions you have when you visit other places to attend a conference or give a talk and so on, the kinds of informal interactions, those are not very well simulated in the Zoom era, although there are various attempts to do that. And so, you do miss that kind of thing where you go to dinner or lunch and just chat. And sometimes, that's a good way of probing questions and coming up with ideas. So, I think we've all suffered a bit from missing that kind of interaction.
In my group, it hasn't been too bad. We have our group meetings on Zoom, and I'm able to keep up with what students and post-docs are doing, and so on. But I think it's hard for the new students. They can attend meetings and stuff like that, but it's hard to become sort of integrated into the community in our online existence compared to when we're able to hang around and chat in our offices or at a coffee break. But in response to your question, yeah, I think I have had a bit more time for reflection than was the case, say, in the previous couple of years, and that has been helpful. And it's also given me a little more time, maybe, for reading and catching up on things.
One thing that I had been increasingly feeling was missing from my education or knowledge base was the students are more and more interested in machine learning, and I really just didn't know much about it. And I still know only a limited amount about it. But I did take some time to read textbooks and papers, and I also am collaborating with some students who know a lot more about machine learning than I do. And so, that's been a plus over the last year.
It's interesting with the experimentalists. They seem to be much more challenged than we are as theorists. Some of the labs were closed down for a while. Now, they're operating under socially distanced protocols, and that slows things down. But I've also had several experimentalists tell me that they're getting the best data ever because the lab is so quiet. There's nobody walking around, people aren't opening and shutting doors. And a lot of experiments are operating remotely or with minimal physical presence in the lab of group members, and that's had some benefits. So, it's not all bad, even for the experimentalists.
Zierler:
The big question going forward, what are the best aspects of the current dynamic that you plan to continue using once we're out of the pandemic?
Preskill:
Well, I don't know. I think the model of doing seminars and conferences online will have a place going forward. Like I said, it's not really the same in terms of the personal interactions as a face-to-face conference. But it's still pretty effective. So, I've attended workshops, and, of course, things get recorded, so you can watch them later. That was happening anyway. Usually, when there was an event, people were making videos. But since it's just not feasible to travel to all the things that one wants to attend, having that option of participating in a meeting with people all over the world is something we'll probably take advantage of more than we have in the past, going forward.
Zierler:
Well, let's take it all the way back to the beginning. Tell me about your parents and where they're from.
Preskill:
I grew up in Chicago. My dad, Alfred Preskill, his parents were Eastern European Jews, his mom from Latvia, his dad from Lithuania. And like many Jews, they came to the United States in the 1880s or 1890s. In the case of my grandfather, he and his brothers would've been drafted into the Tsar's army if they had stuck around in Lithuania. That was one of the incentives for leaving. And they all came to Chicago. And that's where my grandfather met my grandmother.
Zierler:
I assume Preskill is an Anglicized name.
Preskill:
Well, according to family folklore, in Lithuania, it had a similar sound. And I've sometimes pondered whether it's related to names like Peskin and Peshkin. But we think in Lithuania, they were saying it more or less the way I do as Preskill. And there are several alternative spellings that were adopted when people immigrated. So, there are some Preskills still around the Chicago area, but there are also other spellings, like Preaskil. So anyway, my grandfather's business that he started was a harness shop. He would make the rig that you would use to attach your horse to your buggy. But when automobiles came in, he realized that wasn't a good business model, so he opened a hardware store. And when my dad was a kid, he used to work in the hardware store. So even in later life, he considered himself to be an expert on tools.
And my mom has a rather different origin story. She actually converted to Judaism when she married my father, and then later in life, she actually got bat mitzvahed. Much later in life. But she did not grow up Jewish. She grew up in Cleveland. Her father was a lawyer, and his family had been, for many generations, farmers. And my mom's mother also came from a family that had been farming in Pennsylvania and Ohio for many generations. We think they go back to before the American Revolution in Pennsylvania. But when my mom was a kid, she would work in my grandfather's law office. He was a probate lawyer, did wills. And very successful in the sense that he was very highly regarded in his profession. He wrote a textbook on Ohio probate law that was widely used and had some high-level connections.
One of the famous family stories is my mom, as a teenage girl, was working the switchboard in the office, and she cut off the Vice President of the United States by pulling the plug while he was talking to my grandfather. She wound up going to law school, and it was pretty unusual for women to attend law school. She was the only one in her class at what was then called Western Reserve, now Case Western Reserve in Cleveland. And I think she might've gone into practice with my grandfather. But then, World War II came.
And going back to my dad for a minute, he wound up going to the University of Chicago. He was a very good student. He graduated high school at 16, and he graduated law school at 20. At the University of Chicago, he was able to get a bachelor's degree and a law degree more or less concurrently. And that was 1932. It was the Depression. Nobody wanted to hang around in school. Everybody had to go out and earn a living if you could get a job. So, he was in a big hurry. Because he was 20, he couldn't take the bar exam because he was still a minor. He had to wait til he was 21. And he passed the bar exam and worked for a law firm for a while. But when the war came, he was 4F because of a medical condition. He was about 30 then, but he couldn't enlist.
So, the way he did public service was he became a federal employee. He moved to Baltimore and worked for what was then called the Federal Security Agency, which was setting the legal foundations for the Social Security system, which was still sort of being fleshed out. And that's where he met my mom. When the war came, she also thought she should work for the government, and they wound up in Baltimore. The reason they were in Baltimore is a lot of the federal agencies moved out of DC because that was being taken over for military purposes. And that's where they met. They were both lawyers in the same office, and they got married 1944 and moved back to Chicago.
My mom did give up the law when she had her first child, my older brother David, in 1947. But she was really a remarkably capable woman. And so, she volunteered for everything. She was the President of the PTA, the League of Women Voters, and a local philanthropic organization. And she learned Sign Language so she could work with the deaf, and she worked with kids with Cerebral Palsy, and she volunteered in the hospital, and as a tutor at the high school. She was really a dynamo and has a very different personality than me—I'm quite introverted, she was very extroverted.
Zierler:
I wonder, as a product of her generation, if her decision to leave law was because that was sort of externally expected of her. In other words, in latter generations, the same person would not have done that.
Preskill:
I think that's right.
Zierler:
Did she ever express regret or frustration with that?
Preskill:
Not to me. And like I said, she managed to have a lot of impact outside the home in quite a number of ways. She was pretty amazing in that respect. My dad started to think the law was boring, so he joined a business called Allied Radio, which my uncle was involved in. And he worked there for over 20 years in marketing, became the VP of marketing. So one of the ways that impacted me was he would bring home these kits. Allied Radio made what were called Knight Kits you could assemble yourself with a soldering iron—radios, and walkie-talkies, a photoelectric relay, and things like that. So that was my introduction to electronics, starting when I was around 10. I really enjoyed putting those things together. And I was surprisingly uncurious about how they worked, actually. I built radios, and I was very proud that I was able to break the iron curtain and hear a broadcast in Russia on a shortwave radio, but I didn't really understand what the tuning coil, and the capacitors, and the resistors were doing. I just thought it was fun to put them together.
Zierler:
Growing up, how Jewishly connected was your family? Particularly with your mom, was she more interested in doing stuff than your dad in certain regards?
Preskill:
My dad was the more interested one, and we belonged to a reform congregation on the North Shore of Chicago. We moved to Highland Park, one of the northern suburbs. Or they did, before the first child was born. And we belonged to this huge congregation called North Shore Congregation Israel with over 1,000 families. And he was involved in the temple one way or another at various times in his life. He was the Chair of the Board of Religious Education there for a while.
And later, actually, after he retired he was very interested in studying Torah, and Talmud, and stuff like that with classes that the rabbi would lead. My dad was quite scholarly—I think he might have been an academic if he hadn’t come of age during the Depression. And my brothers and I went to religious school. It was usually on Sunday, actually. Reform Judaism. And you could be bar mitzvahed. I chose not to be 'cause I wasn't too keen on the idea of having to go to Hebrew school after school from 3rd grade through 7th grade. And my parents said that was OK if I didn't want to.
But I did get involved. I became, actually, when I was in high school, the audio-visual supervisor at the Temple. And so, one of my responsibilities was to make sure that the rabbi's sermons were recorded at every Friday night service. I had a crew of volunteers who would sign up. And if something went wrong, and we failed to record the sermon, the rabbi was not pleased. So, there was a little pressure there. And then, at the religious school, as the supervisor of audio-visual activities, we used to show movies sometimes, so we had to thread the projector. And that was also a bit stressful because every once in a while, the film would break, and you'd have to do emergency film repair with some magic tape or something. But that was my most active role in my youth at the Temple.
Zierler:
John, you went to public schools throughout your childhood?
Preskill:
Highland Park High School. Highland Park public school all the way in the town we lived in. It was a good school system. And there were a lot of Jewish kids in the community. We had a tracking system, which was a pretty common practice back then, where for each discipline, they would put the kids in—I don't actually know how they decided this—level 1 English, level 2, and level 3. And the level 1 would feed into the AP classes. And so, even though it was a big school, there were over 2,000 students, if you were in those level 1 classes, it was the same kids you'd see in most of your classes year after year.
Zierler:
Was stuff like the space race, the moon landing, formative to your development as a kid?
Preskill:
Hugely, yes. I still remember vividly, or at least I think I do, my dad bringing home a newspaper in early 1961. It was the Daily News, the afternoon newspaper in Chicago, with this huge headline, a couple inches high: “Russian First Space Man.” Yuri Gagarin was orbiting the Earth. And it was a huge deal. Of course, the US had a space program, too. The Mercury astronauts had been chosen, and they were training, and the Russians kind of beat us to the punch with Yuri Gagarin's first flight. And Alan Shepard's first flight was a month later or so, I don't remember exactly. But those Mercury astronauts were heroes. Whenever there was a flight, Alan Shepard, Gus Grissom, John Glenn, Scott Carpenter, and so on, it was a huge national event. And it seemed like the world came to a standstill, and we were holding our breath while these guys were flying into space and managing to return to the Earth.
And, of course, in those days, there were three TV networks, and they all had news organizations. And they'd all stop regular programming so they could cover these missions. And so, I ate everything up. I read everything I could. So [in] 1961, I was 8 years old. But I could go to the library and get a book about rockets. And there'd be a feature story in TIME magazine or whatever, lots of newspaper coverage, and I'd read all that stuff. I wanted to know everything about how Mercury was going to lead to Gemini, which was going to lead to Apollo. And so, I very avidly followed all that, and I think it did have a significant role in awakening my interest in science.
Zierler:
John, were politics a topic of discussion at the dinner table as a kid? Would you have known if your parents were voting for Nixon or Kennedy?
Preskill:
I remember watching the Kennedy-Nixon debate, as a matter of fact. I don't remember my dad being there, but my mom was. And they were Democrats. Well, I shouldn't say that. My mom always identified as an independent. She always said she'd vote for the best candidate. Usually, it was a Democrat, but not always. And things were a little less polarized then than now. So the idea that you could, in a given year, prefer the candidate of one party, which was different than that of the previous election, did not seem wildly unlikely.
Zierler:
In middle school or high school, were there any standout math or science teachers who exerted a real influence on you?
Preskill:
Well, there was one in high school. His name was Donald Ens. He was a math teacher. He was a young guy. There was an English teacher, too, who I admired a lot. But the thing about Mr. Ens was he really loved math. And at that time—after a few years of being very interested in space, and rockets, and then chemistry—I had a chemistry lab, and building the radios and stuff, I decided really, the coolest thing was math. And the thing I loved the most in the reading I did was Gödel's Theorem. The idea that there were limits to what we could prove or what we could know is true in mathematics. That really impressed me. And Mr. Ens loved that kind of stuff, too. So, I had somebody I could talk to about those sorts of things.
And in fact, when I was thinking about where to go to college, I had some rather funny notions, and one was that if you wanted to do math, Princeton was the place to be. And I'm not sure what that was based on, maybe because Einstein had been there or something. But that was firmly implanted in my head. And another idea I had was that you shouldn't go to Harvard. Because I had a friend whose older brother went to Harvard and majored in biochemistry. And when he'd come back from college, he'd always complain that all the classes were taught by graduate students. And they had all these famous professors, but you never saw them. So I thought, “Well, that doesn't sound good.”
At least Princeton claimed to be a more undergraduate-focused institution. So that's where I decided I wanted to go, and indeed, where I went. And when I went to Princeton, I was thinking I'd major in math. I talked my way into a graduate-level course on Set Theory and Logic my freshman year taught by a guy named Dana Scott, who was a distinguished logician and philosopher. And I had to get permission from the guidance office, and I had to pass out of freshman English, and stuff. I was very insistent that I had to take this class because this was going to be my future, Set Theory and Logic. And I wasn't sure if Dana Scott would be teaching it again. And it was a fun class.
Zierler:
John, this was a pure math environment, not an applied math environment?
Preskill:
That's right. But I realized, by the end of my freshman year, several things. One of them, I think, I'd known all along. I'm just not cut out intellectually to be a mathematician. I'm just not good enough at that kind of thing. Meanwhile, I was taking freshman physics, and in the spring term, we used this book by Purcell, Electricity and Magnetism, which is a great book, and it's still used in some places. And that really impressed me because I was learning in my math class calculus on manifolds, and about differential forms, and things like that. It was all very abstract, and very beautiful and fun. But no hint of what it was good for.
Well, maybe I'm exaggerating. But certainly, the emphasis was not on what you do with this stuff. But then, in Electricity and Magnetism, learning Maxwell's equations, and why you would want to take the curl or divergence of a vector field for some useful purpose, the fact that I could piece those two things together, this very abstract math and then this physics class, which was making use of those concepts, that made me appreciate that maybe physics was a more natural home for me.
Zierler:
Did you sense, even as an undergraduate, the hierarchy of theory above experimentation in those days?
Preskill:
Yeah, and in fact, even back in high school, I had this very snooty attitude that theorists were somehow superior. I was terrible, looking back. I thought that the intellectual pinnacle was to do theory, and that experiment just didn't appeal to me personally, let's put it that way. And so, maybe I had a perspective, which, of course, is completely wrong, that experiment was not the best route to a deep understanding of the secrets of nature, that thinking would do that. It's completely wrong. But I really did have that attitude.
Zierler:
Can you either affirm or deny the famous quote attributed to Wightman that he referred to the experimentalists as “the help?”
Preskill:
Arthur Wightman said that?
Zierler:
Allegedly.
Preskill:
He was my senior thesis advisor.
Zierler:
That's right.
Preskill:
“The help.” Well, I'm not sure I knew that. He was a wonderful man, but I'm surprised he would say that.
Zierler:
It may be apocryphal, I don't know.
Preskill:
Although, of course, he was a mathematical physicist and proved theorems, when he was young he did more practical things. He worked out details like how ionizing radiation deposits energy in materials and things like that. So, he had some appreciation for that type of knowledge building. Actually, another college teacher who had a big impact on me was John Wheeler. My sophomore year, he taught a class that I took for the whole year, covered everything in physics. We called it Honors Physics. And we did classical mechanics, and E&M, and stat mech, and quantum physics, and waves all in one year. And it was a very idiosyncratic course, to put it mildly.
Of course, to us undergraduates, there was something kind of god-like about Wheeler. So, this was 1972. The thought that he had worked with Niels Bohr seemed unimaginable --- that anyone could be that old. He was 61 at the time. Here, I'm 68, so it doesn't seem so old now, but at the time, it sure did. And he always came to class in a suit and tie, and that also made him seem like a denizen from another generation. And, of course, he had this marvelous ability to use the blackboard to draw intricate illustrations on the spot. But the thing that was most memorable is–here's what he did on the first day of class, or at least how I remember it. We're going to do classical mechanics. We're going to use Goldstein. We're going to learn Lagrangian Mechanics. And we're going to learn Hamiltonian Mechanics from this book. And I'd already dipped into the book a little, I was excited.
And so, I figured he was going to tell us about the calculus of variations, and the Euler-Lagrange Equations, and stuff. I kind of had a hint what that was about. But he comes in, and he goes up to the board, and he draws A on the board and B. And then, he draws a line going from A to B. And he said, “An electron is going to travel from A to B. How does it know how to go? What path should it take? Well, of course, it takes all the paths. It adds them all together with an E to the iS …” “What?” He was trying to explain that what we were learning was the classical limit of quantum theory. Although Goldstein wasn't saying it that way, he thought it was important for us to know right from the start that that was the context, and that you could understand why this calculus of variations stuff was relevant by thinking about how the phase when it's stationary would add up constructively.
Of course, this is a wonderful insight coming from Feynman, who was Wheeler's student. And I thought this was great. I just was dazzled. And a lot of students, understandably, were a bit upset because then, we had to do the homework problems in Goldstein, which said, “Here's a couple of springs and a mass. Write down the Lagrangian.” What were we supposed to say? “Well, the mass is going to follow all the paths. Add them up with an E to the iS.” That didn't really help you do the homework. But Wheeler was inspiring.
Zierler:
This obviously planted a seed in you later on.
Preskill:
It did. And here's another thing he said, which I never forgot. And this was later in the year. He came into class, and he told everyone to take out a piece of paper. He said, “I want you to write down, on your piece of paper, all of the equations of physics. Everything that one needs to know in order to derive everything else in the world.” I don't know how much time we had, a few minutes. You could write down the Maxwell Equation and the Schrödinger Equation. Fluid mechanics. Maybe the definition of entropy, and so on. And then, he collected all the papers. And he put them on a table in the front of the room, and he said, “Here on the table are all the equations of physics.” And then, he said, “Fly.” And he's talking to the equations. “Fly.” Nothing happened. The papers just sat there. And he said, “What went wrong? Here are all the equations of physics, but they won't fly. Yet, the universe flies.” That was Wheeler. [laugh]
Zierler:
On the social side of things, you may have heard the quip that at Princeton, the 60s came in the early 1970s. It was a little later to the game than places like Harvard or Berkeley. Were you political at all? Were you involved in any of the anti-war protests or Civil Rights things that were going on at campus in those days?
Preskill:
I participated, but rather passively. I guess it was before I was in college, in 1970, that a lot of campuses shut down after the invasion of Cambodia. I was in high school then. When I was at Princeton, there were some anti-war protests, and I would attend, sometimes with my friends. But it was not something that I devoted much of my time or my mindfulness to. I was pretty focused.
Zierler:
Was the draft something you needed to contend with?
Preskill:
Well, yeah. So, by that time, there was a lottery. And there would be an event where they would, on national television, take balls out of an urn, and it was based on your birthday. So, you would get a number for each date of the calendar year, and if you had a high number attached to your birthdate, then you were unlikely to be called. And if you had a low number, there was a serious possibility of being called. And I had a high number, January 19. My number was over 300. So, I knew it was pretty clear I didn't have to worry about being drafted.
Zierler:
Was a senior thesis at Princeton standard? Or was that an above and beyond kind of thing for you?
Preskill:
Every Princeton student does it. So, it's a big deal. You spend a lot of your senior year doing it. Actually, there were junior papers as well that I think everyone had to do. In physics, we had to do one the first term and second term. And actually, looking back, maybe this was sort of formative as well. So, you're a junior, you don't have any idea what to do for a research project. You're supposed to knock on doors and talk to faculty, see if they have suggestions, say you're interested in working on something. “What do you propose?” And so, I don't know why, I guess maybe I was assigned to him, I went to see Marc Davis who's a cosmologist, he's been at Berkeley for many years now, but he was at Princeton then. And so, he asked me what I was interested in. And what I said was, I was interested in the interpretation of quantum theory. And he said, “Well, you know what --- you might be interested in is the EPR Paradox,” which I had never heard of.
And so, he explained a little about what it was. He didn't really know. But that piqued my curiosity, and it turned out that there was a new instructor who had just arrived at Princeton that year named Stuart Freedman, and he had just done an experiment with John Clauser to test the Bell Inequality. And so, I went to him and asked him to fill me in a little bit about that. And he said something that stuck, which I thought was really weird. He had done the experiment with Clauser, which seemed to confirm violation of the Bell Inequality. But there was a competing experiment that had found a different result, that the Bell Inequality was satisfied, so the idea of local realism seemed to be confirmed by that competing experiment, which was done by a Russian group. And I said, “Well, how do you account for the discrepancy?” And he didn't give a scientific answer, he gave a political one, which was, “Well, it has to do with dialectical materialism. So, there's a bias in favor of local realism.” I thought, “Boy, could that really be it?” Anyway, it just kind of shocked me that he said that.
So I wound up reading up on the Einstein-Podolsky-Rosen paper and other papers, and I wrote my JP on that. That's what we called junior paper, JP. And I didn't really think much more about that stuff for some time. But then, when I came back to quantum information, of course, a lot of it was about entanglement. So maybe having had that experience in my formative years helped make me receptive to those kinds of ideas, I don't know.
But in the case of the senior thesis, again, the onus is on the student to find an advisor. And I had had an experience I guess late in my junior year. I used to go to the bookstore, the Princeton U Store, where there were various physics books on display, and I'd browse through them. Every once in a while, I'd buy one. And I found this book by Streater and Wightman, which was called PCT, Spin and Statistics, and All That. And I thought that was a very charming title. And so, I started browsing through it, and having still a sort of mathematical predilection, it appealed to me that there was rigorous mathematics about Quantum Field Theory. And I thought, “Boy, if I really want to understand Quantum Field Theory, I should understand what all this is about.”
And I decided I would ask Arthur Wightman to be my thesis advisor. But then, a kind of really embarrassing thing happened. I won an award that fall at the beginning of my senior year because I had the highest academic standing in my class. And the President of Princeton in the opening ceremony presented this award, and we chatted a little. And he asked me, “Who are you going to do your thesis with?” And I said, “Oh, I'm planning to do it with Arthur Wightman.” But at this point, I'd never spoken to Wightman, he had no idea who I was. You know how it is with professors, they're hard to catch. So I went to his class, and I went up to talk to him after class. And I told him who I was, and he says, “Oh, yeah, I've heard about you from a surprising source.” He had talked to President Bowen who had said, “Oh, I talked to this guy Preskill who's going to do a thesis with you,” and Wightman had said, “What?” So that was pretty humiliating. But because Arthur Wightman was such a sweet man, I didn't stay embarrassed for long.
And looking back, he spent an extraordinary amount of time with me that year. And I had sort of a typical undergraduate’s sense of entitlement. Whenever I saw him in his office, I figured I could barge in and start asking questions. And he never turned me away that I recall. He had sort of a gift for making you feel at ease, like he was really enjoying talking to you. At least I always felt that way. You know how sometimes people wish you'd go away, you can tell, even if they don't come right out and say it. But he was never like that.
Zierler:
In many ways, a senior thesis is a tryout for real scholarship later on. And so, with that in mind, I'm curious how parochial your worldview was, or not, given the extraordinary excitement and advance in particle physics in the early 1970s. Were you aware of what Sam Ting was doing? Were you aware of Grand Unification with Glashow and Georgi? Were these things on your radar? Or was your world of physics really confined to Princeton?
Preskill:
Well, I guess it was a little insular. Of course, at Princeton, asymptotic freedom was discovered by Gross and Wilczek while I was there, and also by Politzer at Harvard. But I was not so aware of that. I do remember the J/psi, the so-called November Revolution. I was a senior, and we had a speaker from the SLAC experiment actually, from SPEAR, who, not long after the discovery was announced, described the event. So even the undergraduates, the excitement bubbled down to us about the discovery of the J/psi and a lot of discussion of what it could mean, what it could be. And so, I was aware of that excitement, but I wasn't clued into the latest developments the way you are when you're a graduate student. Not as an undergraduate. I did read, under the tutelage of Arthur Wightman, a paper by Sidney Coleman that I found very remarkable, and that was part of the reason I wanted to go to Harvard. It was the paper by Coleman and Erick Weinberg.
Erick Weinberg had been Sidney's student at Harvard. And this was the paper about spontaneous symmetry breaking driven by radiative corrections. Very beautiful paper, which I studied in detail as an undergraduate and made use of ideas from it in my senior thesis. So that was fairly current. I guess that paper came out in '73, and I was reading it the next year. And I went into that particular paper in some depth. But I don't think I was aware of Grand Unification until I got to Harvard. Although, the original papers appeared when I was a senior in college.
Zierler:
In terms of Wightman's mentorship, did he essentially hand you a thesis problem to work on? Or you more or less came up with it on your own?
Preskill:
He handed it to me, and it was way too hard. Way too hard. It was to prove that spontaneous symmetry breaking occurs in the Yukawa Theory. And I mean prove it in the sense of rigorous mathematics. That’s a problem about one plus one dimensional field theory, but the tools that he wanted me to use had just been developed that year, the Osterwalder-Schrader Axioms for Euclidian Quantum Field Theory. And he believed that those tools would enable one to show that this theory of fermions and scalars would have a phase in which a discrete symmetry was spontaneously broken. And I tried to do that, and I kind of nibbled around the edges but didn't really make much headway towards a proof. The problem wasn't solved for quite some time. Maybe it took another 15 years before it was solved by real mathematicians. So, I really was not very well equipped for it either intellectually or by background, but I learned a lot. And I think most senior theses turned out that way.
Zierler:
What kind of advice did you get, or not, in terms of choosing graduate programs, particular professors to work with?
Preskill:
Well, I did talk to Wightman about that, I recall. Actually, here's something else, though, which maybe is worth mentioning. At that time, the attitude was widespread that if you tried to get a PhD in physics, you'd never get a job. In the ‘60s, there was a surge of hiring sort of in the post-Sputnik building of science. And so, all these young people got hired as professors in theoretical physics in particular, and all the jobs were filled. And it wouldn't be until the late 1990s that they'd start to retire and there'd be an opportunity to get a faculty job again. I heard this all the time, including from faculty when I was an undergraduate at Princeton. “If your goal is to get a PhD in physics and go on in academia, think again. Because there aren't any jobs.” But somehow, although that should've been very discouraging, it wasn't. I'm not really that conceited of a person, I'm well-aware of my limitations, but somehow, I thought, “Well, for me, it'll be different. And if you don't try, how are you going to know?” Didn't bother me so much.
But anyway, Wightman was quite positive about Sidney Coleman in particular as a potential mentor. The other thing, which I got more from talking to other students, was there was this kind of cultural divide at the time between so-called East Coast and West Coast physics. And at least the buzz with the students was, “The exciting stuff is happening at Princeton and Harvard, and Caltech and Berkeley are still doing what they were doing in the ‘60s, and they haven't caught up. What's exciting is gauge theories. And they're still doing S-matrix theory at Berkeley, so you better not go there.”
Zierler:
What about the theory group at SLAC? Was that something you considered?
Preskill:
No, not so much. Who would I have been aware of? Of course, I knew Sid Drell because I had read his textbook.
Zierler:
Bjorken, for example?
Preskill:
Yeah. I don't know. I wasn't too excited about Stanford or SLAC. And at Caltech, the feeling was the glory days were behind. That's what the students were saying in the mid-70s, that Gell-Mann and Feynman were in their declining years. I had one friend who graduated a year ahead of me, good friend, Orlando Alvarez, who had been a Princeton undergrad. And he went to Harvard. I talked to him a lot, and I figured, “Well, if it's good enough for Orlando, that's probably where I should go.” And I was aware of Steve Weinberg. He had been hired at Harvard relatively recently, I think in '73 he had moved from MIT. He was sort of supposed to fill Schwinger's shoes. And I was aware of the Weinberg-Salam Model, so I knew he was supposed to be a big deal.
But to the extent that I did read papers to get an idea of what people were doing, or of what faculty members at Harvard, for example, were working on, I was very impressed by Coleman's papers at the time, less so by Weinberg's. A lot of that had to do with Coleman’s style, which was extremely clear and clever. He would use methods that he would explain very brilliantly, and which you might not have thought of yourself.
Zierler:
So Coleman was really the primary motivation for you wanting to go to Harvard?
Preskill:
That's how I recall it. It didn't turn out quite that way because I became Steve Weinberg's student.
Zierler:
Did you have any interactions with Coleman before you got to Harvard? Did he ever come to Princeton? Did you know him personally?
Preskill:
It's funny because he was at Princeton on leave at the time asymptotic freedom was being discovered. But at that time, I guess I was a sophomore. I don't remember interacting with him at all or even being aware he was there. So, I had not met him. I knew him by reputation and by reading his papers. And getting an assurance from Arthur Wightman that he was doing extremely interesting things. And like I said, I was aware that Weinberg was a big shot. I don't know if I was so aware of Glashow when I was an undergrad. But yeah, I decided I wanted to go Harvard, and that's how it turned out. And as a first-year graduate student, I took Sidney's Field Theory course, which was very popular, and the room would be filled to the rafters. In fact, that was the year that videos were made of all the lectures. And those videos were later used as part of the basis for a version of Coleman's Field Theory lectures that were recently published.
And so, those lectures were beautiful. Coleman was a legendary lecturer, always extremely clear and entertaining. But it was the kind of thing where while you were listening, you thought everything was perfectly clear, but then afterward, it would be very hard to remember why it had been so clear. So, I would go over my notes at great length afterwards and try to re-derive everything. Sometimes I would go and watch the videos, actually, with another friend from the class. And I was determined to master whatever he talked about.
So, that was a very memorable experience. There wasn't any other course I took at Harvard that was taught nearly as well, even though they were taught by distinguished people. That same year, I took Weinberg's gravity course based on his book, Gravitation and Cosmology. General Relativity course. I liked the book, but his lectures weren’t very good. It seemed like he would come in unprepared, and then he'd open the book and start copying equations out of it. It was very uninspiring. And Shelly Glashow was not a very good lecturer, either. And you kind of got the impression he was winging it. I remember taking group theory from Shelly. It was fun, but it always seemed like very little preparation had gone into the lecture.
Zierler:
It's been said that pedagogy is much more prized at Princeton than it is at Harvard. I wonder if you had that experience, even though those are very different perspectives as an undergraduate to a graduate student.
Preskill:
Well, no, I've never really thought about it that way. But I guess that does align with my experience. I thought there were some very well-taught classes when I was an undergrad. Actually, I'd mentioned taking that freshman course on electricity and magnetism. I didn't mention the instructor. It was Val Fitch. And, of course, he won the Nobel Prize for the discovery of CP-violation, which I didn't know at the time. But he was an inspiring teacher. And I took another course from him on more advanced so-called modern physics, in which he discussed at great length the K-Kbar system, and flavor mixing, and CP-violation. And I wasn't aware until told by another one of the students that that was his research bread and butter. But he sure seemed to know a lot about it. [laugh] And so, that was another very memorable class.
Zierler:
If Steve Weinberg didn't give you a great impression as a student in his class, how did you end up becoming his student?
Preskill:
Well, everybody wanted to work with Sidney because he could explain things so clearly, and he was receptive to a certain degree to supervising students. But in a way which was only half-joking, I rather vividly remember him saying, “I have graduate students like a dog has fleas.” And, of course, he meant it as a joke, and I actually thought it was funny. But he really had a lot of students, and his personal habits were different than they were in later years. This was before he was married and before he'd been diagnosed with adult-onset diabetes. And he was a very heavy smoker and kept unconventional hours. He'd stay up all night and then go to sleep at dawn, and he'd come in in the afternoon. He always insisted that his lectures be scheduled for the afternoon because he would be sleeping in the morning. And when he would arrive in mid-afternoon, sometimes late afternoon, students would be queued up outside his office because they wanted a moment with Sidney. And I just thought, “Who needs it? I've got to stand in line to get a few minutes with my advisor?” So that was part of it.
But meanwhile, I guess I became more acquainted with some of the things Weinberg was doing, and I realized, although I wasn't that impressed by the quality of the instruction in his cosmology class, I thought cosmology was really interesting, and the idea of someone who was pursuing research that was relevant to both particle physics and cosmology appealed to me. And even more so, when the idea started to bubble up that we could learn things about particle physics by studying cosmology. Grand Unification had a lot to do with that. Of course, I did learn about Grand Unification. I would say that was one of the obvious exciting things going on in my early years in graduate school. And it became more exciting when Georgi, Quinn, and Weinberg computed, from the running of the couplings, the Grand Unification scale. Originally, Georgi and Glashow had just noted the scale had to be high, or else the proton would be too short-lived.
But by actually calculating the coupling unification scale, that seemed to indicate at first that proton decay might be right on the edge of observability, and that helped to stimulate the early experiments to detect proton decay, which wound up detecting neutrinos from Supernova 1987A and all that. But the idea that you could observationally learn something about these incredibly high-energy scales by doing the right kinds of observations was exciting to me. And then, the idea came along that baryogenesis, the origin of the excess of matter over antimatter in the universe, had an explanation coming from Grand Unification, where there would be baryon-number-violating interactions, and the history of the very early universe, I thought that was very exciting.
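[For orientation, the standard one-loop form of the calculation being described, added here as a textbook summary rather than Preskill's words: each inverse gauge coupling runs logarithmically with the energy scale,

\alpha_i^{-1}(\mu) = \alpha_i^{-1}(M_Z) - \frac{b_i}{2\pi} \ln\frac{\mu}{M_Z}, \qquad i = 1, 2, 3,

where the b_i are the one-loop beta-function coefficients. Extrapolated upward, the three Standard Model couplings approximately meet at a unification scale M_X of order 10^{14}-10^{16} GeV, and since gauge-boson-mediated proton decay gives a lifetime scaling roughly as \tau_p \sim M_X^4 / (\alpha_{GUT}^2 m_p^5), pinning down M_X is what turned proton decay into an experimentally addressable question.]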
And Steve jumped on that, too. The first paper I remember about that was by a guy named Yoshimura. And I thought that was a really cool idea right away. Actually, around the same time, Dimopoulos and Susskind were working on this, though I wasn't as keenly aware of what they were doing, but the idea that you could understand the excess of matter over antimatter using Grand Unification and early universe cosmology, I thought that was really exciting. There was a bit of a courtship in getting to know Steve. Steve was really only interested in talking about what he was interested in generally. And I didn't spend all that much time talking to him. And when I did, he was usually pumping me for information. But the way I managed to get his attention is I thought at the time that the other really exciting thing going on in theoretical particle physics was the connection of topology with particle physics.
And the two main aspects of that that were intriguing were that 't Hooft and Polyakov had pointed out that in unified theories, there could be magnetic monopoles. And also the idea of instantons, which came around the same time—again, with Polyakov and 't Hooft having a key role. These were quantum tunneling events that occur in Yang-Mills Theory, and they had consequences for QCD, in particular, providing a way of solving what people were calling at the time the U1 problem. There seemed to be a symmetry that QCD should have, which wasn't really a good symmetry.
And it turned out that was due to a so-called anomaly, that the symmetry was good at the classical level but broken by quantum effects. And to understand how that worked, you had to use these topological ideas, or at least that's how people understood it at the time, having to do with instantons. And Steve got interested in instantons at some point, and I had been actually working on a problem relating to instantons, so I knew a lot about it. And so, I was always able to answer his questions. He would ask me technical questions about instantons, which he was trying to learn, and I actually knew.
So that's how he learned my name. But yeah, he was certainly inspiring in many ways, but he never gave me much guidance, and I didn't really mind that so much. But I got guidance from other people, most of all, from the post-docs. And, of course, you learn a lot from your other students. But there were remarkable post-docs at Harvard at the time. The two who I was most inspired by were Ed Witten and Michael Peskin. And Peskin, really, was the closest thing I had to a mentor in graduate school.
Zierler:
What was Peskin working on at that point?
Preskill:
Well, we worked together on a project, actually, which had to do with instantons. And also, with my friend Orlando Alvarez, who I'd mentioned had come to Harvard from Princeton a year ahead of me. We were trying to use these instanton ideas to compute contributions to electroproduction, high-energy inelastic scattering. And we did, we worked on that a lot. It was the first serious research project I worked on. Orlando and I did most of the calculations, but Michael kind of got us started. And we had some pretty interesting results. So that was sort of how I came up to speed on these instanton methods. And my first seminars were about that work, which I'm ashamed to say, and I find a little bit inexplicable, we never published, never wrote it up, although we really did have some good results. We all sort of got distracted by other things. Not long after, Misha Shifman, and Vainshtein, and Zakharov covered some similar ideas in their papers.
The key thing that we realized is that when you do these instanton calculations, they have infrared divergences, and they show up because the instanton has a size, and you have to integrate over all the possible sizes. And if nothing cuts off that integral, it looks like you get infinite results. But what we realized is those infrared-sensitive pieces could be factored into matrix elements, and so there were other short-distance pieces that you really could compute. It was really pretty nice work. We should've written a paper. We didn't. But it still helped me get going because it gave me some confidence that I could do research that people were interested in. Howard Georgi was interested in what we were doing.
And also, Ed Witten seemed to be interested. And my first talk at a conference was actually at Caltech. That was in early 1979, there was a meeting where students were encouraged to attend, and some of us went from Harvard, and I met other students there for the first time from Princeton and other places. But there was a session in which students could volunteer to give 20-minute talks, so I signed up for that. And I was quite excited because it was an evening session, but Feynman came, and he was in the audience. He was sort of listening to the talks. Every once in a while, he'd go out in the hallway and just have informal conversations with people.
But anyway, this session went on, and on, and on. It started at 7:00, and I didn't get to talk until 10 pm. Feynman was long gone. I had a terrible cold. I could barely speak audibly because I was so hoarse. But I gave the talk, and it went well. And again, that also helped to build confidence. And I gave similar talks at Harvard. And so, then I started to feel like I was ready to do serious research.
Zierler:
In what ways did this work feed into what ultimately would be your thesis research?
Preskill:
Well, what you might be surprised to hear is that the work I did in graduate school, which became well-known, which was about magnetic monopoles produced in the early universe, was not in my thesis at all. I wrote my thesis on something different. Actually, I think it's interesting that I drew on what I learned from Sidney Coleman and from Steve Weinberg to find my problem having to do with monopoles in the early universe. I was very interested in this idea that magnetic monopoles could be understood using topological ideas applied to unified gauge theories, and that Grand Unified theories should have these magnetic monopoles. But the question I remember discussing with some of the other students, Steve Parke was one of them, was, “Who cares? Because these things are so heavy, you'll never see them experimentally. They're completely irrelevant to any physics we'll be able to do in our lifetimes. So why are you even bothering to learn about these magnetic monopoles?”
Zierler:
This is the very early beginnings of Henry Tye’s and Alan Guth's collaboration. Were you aware of what they were doing? Did you know either of them?
Preskill:
Well, I knew Alan Guth, but I wasn't aware what they were doing until later. Alan Guth was an instructor at Princeton when I was an undergrad. And speaking of great instruction at Princeton, he taught a beautiful class on classical mechanics, which I took as a junior, Goldstein Classical Mechanics. And, really, he's one of the best lecturers. He was Coleman-caliber. And he was clearly working very, very hard on that class. He told me later he was putting an enormous amount of time into it, as I'm sure he must've been. And so, I knew him for that reason. He remembered me later as a student in that class.
But no, I didn't know that Guth and Tye were interested in the issue of production of magnetic monopoles in the early universe. And there were other things that I didn't know and found out later, which preceded my work. One was Kibble had written this paper in 1976 on topological defects that could be produced in a cosmological setting. His focus was mostly on cosmic strings. I didn't know about that at the time.
And there was also a paper by Zeldovich and Khlopov about magnetic monopoles produced in the early universe. I didn't know about their work, either. But I started working on it myself. And Steve was not interested. I tried to explain it to him. “Look, there's something really interesting here,” speaking to Steve Weinberg. Grand Unified theories, we have reasons to believe they're the truth. They make this prediction, these very heavy magnetic monopoles. If there's a phase transition in the early universe, these could be created in such a phase transition, and they should still be around. In fact, there should be so many of them around that the universe would've been closed by monopoles, and it wouldn't look anything like the universe we inhabit. And first of all, he said, “Well, I'm not really sure about the magnetic monopoles existing, and I'm not really sure why I should believe you that they were produced in the early universe. This is all so speculative.”
And it was a little discouraging that my PhD advisor thought I was barking up the wrong tree. But there were other people who did encourage me. Michael Peskin was one. And some people, I actually got some technical advice from. One was Bert Halperin, a condensed matter physicist, but he knew a lot about topological defects in the condensed matter setting. And he helped me to set up a calculation of how many of these monopoles would be created in a phase transition. And another was Ed Purcell, who was, of course, a wonderful man. And I knew Ed because I was TA for his quantum mechanics class. And he was very interested in magnetic monopoles, and in fact, had been involved in searches for them some years earlier, and followed subsequent efforts to detect magnetic monopoles or put limits on their abundance. And actually, there had been a little bit of a false alarm around the time I was entering graduate school. Price claimed to have detected a magnetic monopole in a cosmic ray event, which was later debunked by Luis Alvarez and others as just a misinterpretation of something that could be explained by more conventional phenomena.
Zierler:
It's almost too delicious to think of how, in some ways, Bert Halperin, as a condensed matter theorist, was more helpful in developing your dissertation idea than Steve Weinberg. Can you explain a little bit the science for how Bert's background might've actually been useful? Because at first glance, it's hard to see how condensed matter theory would be relevant for this line of inquiry.
Preskill:
Well, as I mentioned, what I thought was the most exciting thing happening in my first few years of graduate school was these topological ideas coming into particle physics, particularly in the theory of magnetic monopoles and instantons. But topological ideas were also becoming increasingly useful in condensed matter, where in different materials, there can be topological defects associated with spontaneous breaking of symmetries. Not usually gauge symmetries, as in the case of the magnetic monopoles. Well, actually, in the case of a superconductor, a vortex in a superconductor is an example of a topological defect in a gauge theory. That is sort of a prototypical example of such a topological defect. Bert knew everything about superconductors. He knew all about vortices. But also, there were point-like defects that occurred in three-dimensional materials like liquid crystals. And Bert knew about that, too.
Furthermore, what I was interested in is what would happen if there was a phase transition in the early universe. If it's SU5, for example—which is the gauge group—it had been understood in the previous few years that at high temperature, even if the gauge symmetry is spontaneously broken—if the Higgs phenomenon occurs at low temperature, at high temperature that symmetry would be restored. So, you would expect very early in the universe that the SU5 symmetry was still intact, but as the universe cooled, the symmetry breaking would occur. There might be a sequence of phase transitions, but there should at least at some point be a transition to the phase in which SU3 cross SU2 cross U1, the symmetry of the Standard Model, is the unbroken remaining gauge symmetry. And one could show that the breakdown of SU5 to the Standard Model would give rise to stable magnetic monopoles. The question was, how abundantly would they be created?
And there were a couple of ways of looking at that, one of which was really the idea that Kibble had discussed, although I didn't know his paper at the time, which is that there's an order parameter, which is fluctuating around, you're in the symmetric phase, but then it freezes out. Like, for example, if you're cooling a material, and it goes from paramagnetic phase to a ferromagnetic phase, the magnetization locks in, and all the spins line up. And that is the same kind of phenomenon where the symmetry is restored at high temperature and then becomes spontaneously broken at a critical point.
And I thought the same thing would happen in a unified gauge theory. And that that would give rise to the possibility of magnetic monopoles, for one thing, just because of relativistic causality. When the magnetization turns on, the magnetization at one point in space has no way of knowing to line up with the magnetization of another point in space because there hasn't been time for a light signal to travel between this domain and that domain. And so, as the spins start to line up, there will be knots that get locked in. Those are the topological defects. And that's how magnetic monopoles can form.
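[In the notation that later became standard, a compact restatement of this argument, not part of the transcript itself: breaking a gauge group G down to a subgroup H leaves stable monopoles whenever the vacuum manifold has nontrivial second homotopy,

\pi_2(G/H) \cong \pi_1(H) \neq 0 \quad \text{for } G \text{ simply connected},

and for SU(5) \to SU(3) \times SU(2) \times U(1) this group contains \mathbb{Z}, so monopoles are unavoidable once the transition occurs. The causality argument then bounds the correlation length \xi of the order parameter by the horizon size at the transition, leaving of order one defect per correlation volume, n \gtrsim \xi^{-3}.]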
I had that idea, but Bert told me a different idea, which was that even if the phase transition were a smooth phase transition, if it were second order, that because the order parameters would be fluctuating, you would expect to get a lot of defects, even not taking into account the effects of relativistic causality, and that's what he showed me how to calculate. Which, because I wound up writing a very short letter-length article, got squeezed down to, like, a paragraph or something without many details. That was another thing. Steve did give me one piece of advice. I asked him where I should publish the result that he didn't find very interesting, and he made a suggestion which really surprised me. He said Nature. The particle physics students didn't read Nature. That was where there were papers about astronomy and stuff like that, or biology. But Steve at least had the notion that there was something of broad interest about what I was doing because it related to cosmology, to particle physics, and even to condensed matter ideas.
Zierler:
It was of broad interest, but not particularly interesting to him.
Preskill:
Not interesting enough, but broad. [laugh] But Bert said, “No, Physical Review Letters would be better.” So that's where I submitted it, and it was rejected because first of all, it wasn't novel enough. It'd actually be interesting for me to dig up that referee report. I think I still have it. But also, that it didn't seem to be right because I had overestimated the abundance of the monopoles for some reason which the reviewer didn't explain. But the editors, to their credit, said, “Well, we'll give you a chance to respond and resubmit.” But it was a very bad day because I was already in my fourth year of graduate school. I had no papers. And this was my first one. And it got rejected. And I was pretty depressed. So, I remember my wife thought this would cheer me up—we went shopping, and we bought a color TV set. Up until then, our TV set was this little black and white TV set. And it did cheer me up a little to have a color TV. But anyway, I resubmitted the single-author paper, and it did get accepted.
But what really seems funny and odd to me, looking back, is that when I was applying for post-docs, I had no publications. I had this one preprint, the one about magnetic monopole production in the early universe. And nothing had been published, and that was it. And yet, that didn't seem to be too big an impediment to getting good post-doc offers because that one paper was getting a fair amount of attention. Now, coming back to Henry Tye and Alan Guth, after my preprint came out, Alan invited me to visit Cornell. I don't think Henry was there. I think he was traveling. It was during the summer.
Zierler:
He was probably in China at that point.
Preskill:
Yeah, I think that's probably right. But Alan was there, and, of course, like I said, I knew him from my undergraduate days. And by that time, Michael Peskin was at Cornell. And I think Steve Shenker was there; I think he was still a student then. And Ken Wilson. Ken Wilson is one of my heroes, and although I had met him during his visit to Harvard, I'm sure he didn't know me. But during that visit, I got a chance to sit down with Ken and chat. I think that may be really the only time that we ever had a serious talk about physics, so that was memorable.
Zierler:
Do you remember what you talked about?
Preskill:
Yeah, magnetic monopoles. And actually, he thought my paper was wrong. And he thought it was wrong because he thought the magnetic monopoles would be confined. And he was wrong. And I'm not sure I convinced him.
Zierler:
You were confident at this point though.
Preskill:
I knew a lot about magnetic monopoles. Yeah. But anyway, I talked to Alan a lot. I don't remember, was their paper out yet? Not sure. But they had some of the same ideas, and I guess you've already talked to Alan. But I'd been pretty careful in analyzing if a significant number of monopoles were produced, how many of them would survive. And I had an argument, which I thought was pretty convincing, that unless something nonstandard happened in the cosmology, it just couldn't work, that the production of the monopoles was unavoidable. It would be copious, they wouldn't annihilate fast enough, and the universe would be closed by them many times over, and that couldn't be our cosmology, so there had to be some way out.
And during the following fall, I probably should've thought about that more. Because I figured there had to be something about the phase transition that was unconventional. But by that time, I had gotten interested in a different topic, which is what I did end up writing my thesis on, which was technicolor, as we called it at the time. The breaking of electroweak symmetry by strong interaction. So, of course, Alan famously continued thinking about it and had the insight that inflation could blow the monopoles away, but he also, to his credit, realized that that could explain the flatness and isotropy of the universe. And, of course, that idea was very explosive when it came out. That paper had a lot of impact right away. I remember him coming to Harvard and giving a talk about it, which was received with a lot of excitement.
Zierler:
Did you see the transition to technicolor as switching gears? Was it related?
Preskill:
It wasn't that closely related, but I thought the ideas were quite exciting. I was particularly inspired by a paper by Lenny Susskind, which is actually a little ironic because Steve Weinberg had written a related paper. He didn't call it technicolor, but he did call it dynamical breaking of electroweak symmetry, which is what it is. And his paper was, in a way, sort of typical Weinberg style. He calculated everything, and he correctly discussed all the issues. It was a little dry.
And Lenny Susskind is also one of my heroes because of his creativity as a scientist, but also he's a very charismatic communicator in writing and in person. And this was an inspiring paper. And what he realized, which Weinberg had not, was something quite simple, which was, in the Weinberg-Salam Model, the so-called rho parameter, which basically says that the ratio of the W to Z mass is determined by the Weinberg mixing angle theta-W. Steve, by the way, always claimed the W in theta-W stood for weak, and he wouldn't call it the Weinberg angle. He called it the weak mixing angle, in a burst of modesty.
But at any rate, Murray Gell-Mann always liked to say in his snide way, “Oh, we call this angle theta-W because W stands for the last letter in the word Glashow.” [laugh] Anyway, Murray and Steve were not fans of one another. So what Lenny said in his paper was that you could understand how the Weinberg angle was related to the W and Z masses just from a symmetry consideration, and that in the dynamical symmetry breaking scenario, that symmetry would naturally be present, and that the dynamics that you needed was dynamics that we already understood fairly well from QCD—the breaking of chiral symmetry in QCD, which is responsible for the pion being much lighter than other hadrons. That could occur with this new strong interaction with a similar structure to QCD, but which becomes strong at a higher scale, at the weak scale like a TeV, or a few hundred GeV rather than a few hundred MeV, as in QCD. That could account for how the electroweak symmetries get broken. And Lenny called this new strong interaction “technicolor.”
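[The symmetry consideration in question can be stated in one standard line, added here for reference: at tree level

\rho \equiv \frac{m_W^2}{m_Z^2 \cos^2\theta_W} = 1,

a relation protected by an approximate global custodial SU(2) symmetry. Any electroweak-breaking dynamics respecting that symmetry, as QCD-like technicolor does, reproduces the observed W/Z mass ratio automatically.]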
And what I found so appealing about this was that because it was dynamical, it should be highly constrained. One thing I found very curious and was very interested in, for some years, starting when I was in graduate school is, where do the quark and lepton masses come from? In the Standard Model, they're just free parameters. You write down Yukawa couplings, they can be anything, and those determine the mixing angles like the Cabibbo angle and the Kobayashi-Maskawa matrix, it's all free parameters. Same thing for all the masses. And what fun is that? You'd like to be able to explain where those masses come from. And I thought in a dynamical scenario, we'd be able to do that much better.
I was very interested in those ideas for a couple of years, and it turned out that these dynamical scenarios are so constrained that it was very hard to come up with a viable phenomenology. Because you didn't have the same kind of freedom you do when you have Higgs fields, where you can choose Yukawa couplings to be whatever you want. Explaining the masses of the quarks, and leptons, and all that became very challenging. Actually, I'll tell you something funny. When I first came to Caltech, that was in 1983, I thought the important problem in particle physics was to explain those quark and lepton masses. I thought, “If we could do that, if we could understand what that hint was telling us, that would be a good path to understanding what's beyond the Standard Model.”
And so, to remind myself that was important, I made a chart which showed all the masses, the spectrum of quarks and leptons, and I posted it in my office on the bulletin board so I'd see it every day to remind me, “This is the important thing to think about.” And then, a couple of years went by, and one day, I was talking to Mark Wise in my office, who occupied the office next door, and we looked at that chart, which had been on the bulletin board, roasting in the sun every afternoon, and the masses had all faded away. They'd been bleached by the sun. And we took that to be some kind of metaphor for how this problem somehow was too elusive to admit an easy solution. And by that time, I wasn't thinking about it anymore.
Zierler:
I want to ask, at this point, when you're really starting to solidify your identity in theoretical physics, going from magnetic monopoles to technicolor, did you feel at the time that you dipped a toe into cosmology, and then went sort of back to your home intellectual environment of particle theory?
Preskill:
Well, yeah. I don't know if I looked at it that way. But because I got excited about technicolor, I sort of dropped the cosmology ball for a while and focused on technicolor. My interest in cosmology got reactivated partly because of another experimental false alarm. In 1982, Blas Cabrera thought he saw a magnetic monopole. It was on Valentine's Day, 1982. I was still at Harvard. By then, I was on the faculty. And that seemed incredible and really exciting. And hard to explain. He had this little loop of superconducting wire and saw the flux jump, which he interpreted as evidence that a magnetic monopole had passed through the loop. And so, one needed to understand why magnetic monopoles would be plentiful enough for Blas Cabrera to detect one, and at the same time, not do other things, which the astrophysicists told us would be bad. Parker, in particular, had gotten a bound on the abundance of monopoles from observing that if there's a magnetic monopole plasma in the galaxy, it'll short out the galactic magnetic field on some time scale short compared to the galactic rotation time, which cranks up the dynamo.
And so, was there something wrong with that argument? Guess that wasn't really cosmology. But at any rate, we did realize that if the monopoles were very heavy, the story was changed because Parker had assumed they got relativistic velocities, which for the types of monopoles predicted by grand unified theories, needn't be the case. They'd more likely have typical virial velocities in the galaxy like 10 to the minus 3 c. Of course, it turned out Blas Cabrera never saw another magnetic monopole. But it was exciting for a while. And actually, that helped to elevate my star a little bit maybe because now everybody was excited about magnetic monopoles and where they came from. And I was asked to give talks about that and things like that.
Zierler:
To clarify, when you say that Cabrera never saw another one, is that to suggest it's possible that what he saw was a magnetic monopole?
Preskill:
Well, it seems extremely unlikely, right? Because he would've had to be incredibly lucky to see that one and be consistent with other bounds we have on the flux. So no, I don't think he ever explained, or at least never publicly explained, what went wrong or what the right interpretation was of the event he saw. But no, it wasn't a magnetic monopole, sad to say.
Yeah, so then, the next foray into cosmology which had some impact concerned axions and predicting that they could potentially be the dark matter. And probably Alan Guth told you about this workshop, the Nuffield Workshop in 1982 in Cambridge. I was there. It was organized by Stephen Hawking and Gary Gibbons. It was a pretty exciting event. And the big topic there was whether inflation could explain the origin of galaxies by seeding the density perturbations from which galaxies grew. And there was a lot of disagreement at the beginning of that three-week workshop about what inflation predicted.
And I'm sure you discussed this with Alan, but after the idea of inflation, which seemed very exciting, trouble was brewing because how inflation ended was unclear. And Alan and others had done computations of whether, as bubbles of true vacuum appeared in the false vacuum in a phase transition, those bubbles would succeed in filling up the universe and giving rise to a reheated universe that would then be described by Big Bang cosmology, and he couldn't get this to work. But then, around the end of 1981 I guess it must've been, the idea came from Andrei Linde, and from Albrecht and Steinhardt, that instead of having to go through a barrier, the universe could sort of roll off the table to end inflation. The energy density would be high because you'd be on a plateau of a potential function, but rolling along, and then you'd start to oscillate in the potential after you roll off this flat part. And that would give rise to reheating.
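[In later language, "rolling off the table" is slow-roll inflation, summarized here in textbook form: a scalar field \phi obeying

\ddot{\phi} + 3H\dot{\phi} + V'(\phi) = 0

inflates while the potential is nearly flat, i.e. while the slow-roll parameters \epsilon = (M_{Pl}^2/2)(V'/V)^2 and |\eta| = M_{Pl}^2 |V''|/V remain small, and reheating begins once the field reaches the steep part of V and oscillates about its minimum.]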
So, what everybody was interested in was what kind of perturbations of density would be produced in that transition from the inflationary phase to the more standard radiation-dominated phase. And so, Alan, and Starobinsky, and Turner, Steinhardt, and Bardeen, and Hawking, they were all trying to calculate those things. So that was sort of the focal point of excitement. But I went there to talk about magnetic monopoles and to think about what axions might have to do with cosmology. And Frank Wilczek was there, too, who had an interest in axions, as the founder of them—
Actually, just to backpedal for a second, this is sort of a funny story. Or maybe, I don't know, sort of a typical experience of a graduate student. In the fall of 1977, I crank up my courage, and I go to see Steve Weinberg. I'd like suggestions for a research problem to work on. And so, he responded immediately with the thing that he was thinking about that day. What was it? Well, he had just read this paper by Peccei and Quinn that potentially offered a solution to the strong CP problem, why CP is a very good symmetry of the strong interactions. And their idea has something to do with the Higgs sector, and how you can introduce another Higgs field, and that can help. “So what might be interesting to work out is, what's the phenomenology of this type of model with more than one Higgs field?”
So, I thought that sounded interesting. So, like any graduate student would, I spent the next couple of weeks reading everything I could find on Higgs phenomenology. But then, Steve, after a few weeks, announced he was giving a seminar, and he explained the idea, which we now called the axion. He actually called it the Higglet, because it was a little Higgs, a light Higgs, at the time.
Zierler:
The Higglet never caught on.
Preskill:
[laugh] Higglet didn't catch on. And Frank's good at names, isn't he?
Zierler:
Yeah, yeah.
Preskill:
And so, Steve was trying to figure out at that point whether the Higglet was ruled out by experiments that had already been done. But I was a little miffed because I thought, “Boy, Steve suggested this problem. Why didn't he tell me that he was making progress? And here I am, spending every waking hour learning about Higgs phenomenology, so I'll be ready to dive in.” But, of course, I'm sure Steve didn't give it another thought. I doubt he remembered that he had even mentioned it to me. I just happened to walk into his office at the time he was looking at the paper or something. Anyway, I was reminded of that. [laugh]
Zierler:
While we're still on graduate school, who was on your committee?
Preskill:
Weinberg, Coleman, Georgi, and Estia Eichten, who was junior faculty at Harvard at that time. I do have a very disturbing memory of my exam, actually. You're not going to believe it when I tell you this, probably. Well, here's the thing. I didn't understand what a PhD defense was. Somehow, I didn't realize I was expected to give a presentation. How could I have not known this? All the other students seemed to know it. So, I thought, “Well, they've all read my thesis, and they'll come in, and they'll ask me questions about it.” I had nothing prepared. My thesis was related to technicolor. Didn't have anything to do with cosmology and magnetic monopoles. But actually, it was something Steve was very interested in.
I'll tell you something funny about that, too. It was very Weinbergian, what I did. I studied what's called the vacuum alignment problem. And what that means is, you have spontaneous symmetry breaking, but you also have some explicit breaking of the symmetry. And the explicit breaking of the symmetry determines which of the degenerate vacua will actually get preferred. If you have a ferromagnet, and you turn on a small magnetic field, then the lower energy vacuum will be the true vacuum. And so, in this case, I had some global symmetries, but then because I also introduced gauge interactions, that explicitly broke some of those symmetries. And the interesting thing was that the way that the symmetry breaking aligned with the gauge symmetry gave rise to some phenomenological predictions, that there would be light mesons coming from the technicolor sector that you might be able to see in collider experiments and stuff like that.
And I gave a talk about this in early 1980 at Harvard. And Steve was there, and he seemed enthusiastic about it. And then, maybe a month or two later. Now, I mentioned Michael Peskin earlier. Michael, that year, the 1979-1980 academic year, was spending the year in France at Saclay as a visitor, and he had written a paper on a very similar topic with very similar conclusions while he was in France, and I hadn't been communicating with him. And he sent it to Weinberg. And so, I don't remember exactly why, but I came into Steve's office, and he said, “I have this paper from Peskin. It's very interesting, and he does blah, blah, blah.” And I said, “But, Steve, that's what I talked about at that seminar two months ago.” He didn't remember that at all. Later, maybe he recollected, he was apologetic about expressing that enthusiasm about Peskin's paper without realizing that much of it overlapped with the content of my thesis.
So anyway, that's what was in the thesis, so I figured I had a receptive audience because I knew Howard was also quite interested, and Estia, too. But I didn't prepare anything. And Arthur Jaffe also came, and he brought Cliff Taubes, who was his graduate student and was actually my officemate. And Cliff became a famous topologist. He’s won many awards, and he's a great mathematician now. They thought I was going to talk about magnetic monopoles, which they were both interested in, so they came in to hear my talk. And I just got up there, and Steve said, “OK, now you can begin.” And I thought, “What?” I had nothing prepared. So I just started mumbling about what was in my thesis very stream-of-consciousness. It must've been excruciating to listen to. And that was my PhD exam.
Zierler:
But you survived. You lived to tell the tale.
Preskill:
I lived, yeah. But I try not to think about it. But that's what really happened.
Zierler:
Was the game plan to stay at Harvard already buttoned up before your defense?
Preskill:
Yes. So, I became a junior fellow after my PhD in the Harvard Society of Fellows. The Society of Fellows, at least in those days, would appoint eight new fellows every year, and they were in all fields. Not just science, in fact, humanities as well. But it was kind of typical to have one or two theoretical physicists in a class, and pretty often, they were Harvard graduate students, not always, who became junior fellows. Some of my predecessors the previous year or two were Paul Steinhardt, who got a Harvard PhD and became a junior fellow, and also Ian Affleck, who later became a very distinguished condensed matter theorist, though he was doing particle theory at the time. So, in my year, I became a junior fellow, and also in that same year was Mark Wise, who became a good friend. He had been a graduate student with Fred Gilman at Stanford. And Cliff Taubes, who was doing topology. We were all junior fellows together.
Zierler:
Was your sense that the Society was essentially finishing school to see if you could elevate to become a Harvard professor?
Preskill:
I didn't really look at it that way because it was so rare for junior fellows, or even Harvard junior faculty, to become tenured professors.
Zierler:
So as naive as you were about what a thesis defense was, you clearly understood the culture of not promoting from within at Harvard.
Preskill:
Oh, that was well-known. Although, actually, we used to joke about it, the students, because we were aware that there had been, in recent years, strong assistant professors doing excellent research who had not gotten tenure at Harvard. Tom Applequist was one who was a couple years ahead of when I arrived. And actually, I had two collaborators who were junior faculty while I was in graduate school, Estia Eichten and Ken Lane. And there was not any serious expectation that they would become tenured professors at Harvard, and they didn't. But, of course, they both went on to good careers. And that was the typical pattern with the Harvard junior faculty, and with the junior fellows, that they would usually go elsewhere and be successful. Now, I did something unusual. I was a junior fellow for only one year, even though it was a three-year appointment. I became an assistant professor and then an associate professor in the following two years.
Zierler:
And, of course, the associate professor is not tenured.
Preskill:
No, and I didn't really think it was likely that I would get tenure, and I wound up going to Caltech.
Zierler:
But to be promoted to associate is an indication that it's a step in the right direction.
Preskill:
Well, maybe so. But actually, what happened was this. My wife had just gotten her business degree at MIT at the Sloan School, what everybody else calls an MBA, but they call a Master of Science in Management, and she was working at what seemed like a real up-and-coming company, Digital Equipment Corporation, which made the VAX minicomputer and other products. And it looked like she was off to a great start in her career, and we wanted to have the flexibility of staying in the Boston area longer. And I thought if I transitioned into the junior faculty slot, although it would mean I'd have to teach and other stuff, we would at least have the flexibility to stick around longer. As it turned out, I didn't do that. I was only at Harvard for three years.
Actually, I remember I was visiting Santa Barbara. This was at the very beginnings of what was then the Institute for Theoretical Physics…
2004 Gödel Prize
(Source: https://www.math.utu.fi/icalp04/godel2004.html)
Maurice Herlihy and Nir Shavit; Michael Saks and Fotios Zaharoglou
The 2004 Gödel Prize for outstanding journal articles in theoretical computer science is shared between the papers:
"The Topological Structure of Asynchronous Computation"
by Maurice Herlihy and Nir Shavit,
Journal of the ACM, Vol. 46 (1999), 858-923,
and
"Wait-Free k-Set Agreement Is Impossible: The Topology of Public Knowledge"
by Michael Saks and Fotios Zaharoglou,
SIAM J. on Computing, Vol. 29 (2000), 1449-1483.
The two papers recognized by the 2004 Gödel Prize offer one of the most important breakthroughs in the theory of distributed computing.
The problem attacked is the complete understanding of asynchronous wait-free deterministic computation in the basic shared memory model. These papers demonstrate that one can avoid the inherent difficulty of analyzing a dynamic model, transforming it into a static one by associating computational tasks with simplicial complexes and translating the question of existence of a wait-free protocol into (distinct but related) topological questions about the complexes. This reformulation allows the introduction of powerful topological invariants, such as homologies, to show the impossibility of numerous tasks, including set-agreement and renaming.
The discovery of the topological nature of distributed computing provides a new perspective on the area and represents one of the most striking examples, possibly in all of applied mathematics, of the use of topological structures to quantify natural computational phenomena.
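To make the simplicial reformulation concrete, the following toy sketch in Python is an illustration written for this page, not code from the awarded papers; it assumes the standard two-process, one-round immediate-snapshot model. The one-dimensional shadow of the impossibility arguments is already visible at this scale: the protocol complex of a wait-free read/write protocol is connected, the output complex of binary consensus is not, and a simplicial decision map cannot carry a connected complex onto a disconnected one.

from collections import deque

def connected(facets):
    # A complex is described by its maximal simplexes (facets); two vertices
    # are adjacent in the 1-skeleton when they appear in a common facet.
    verts = set().union(*facets)
    adj = {v: set() for v in verts}
    for f in facets:
        for a in f:
            adj[a] |= f - {a}
    seen, queue = set(), deque([next(iter(verts))])
    while queue:
        v = queue.popleft()
        if v not in seen:
            seen.add(v)
            queue.extend(adj[v] - seen)
    return seen == verts

# Protocol complex after one immediate-snapshot round of processes P and Q.
# A vertex is (process, view); each facet is one compatible execution.
protocol = [
    frozenset({("P", "P"), ("Q", "PQ")}),    # P ran alone, then Q saw both
    frozenset({("P", "PQ"), ("Q", "PQ")}),   # P and Q ran concurrently
    frozenset({("P", "PQ"), ("Q", "Q")}),    # Q ran alone, then P saw both
]

# Output complex of binary consensus: both processes decide the same bit.
consensus = [
    frozenset({("P", 0), ("Q", 0)}),
    frozenset({("P", 1), ("Q", 1)}),
]

print(connected(protocol))   # True: the protocol complex is connected
print(connected(consensus))  # False: the output complex has two components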
Call for Nominations
The Gödel Prize for outstanding papers in the area of theoretical computer science is sponsored jointly by the European Association for Theoretical Computer Science (EATCS) and the Special Interest Group on Algorithms and Computing Theory of the Association for Computing Machinery (ACM-SIGACT). This award is presented annually, with the presentation taking place alternately at the International Colloquium on Automata, Languages, and Programming (ICALP) and the ACM Symposium on the Theory of Computing (STOC). The twelfth presentation will take place during the 2004 ICALP, July 2004 in Turku, Finland. The Prize is named in honor of Kurt Gödel in recognition of his major contributions to mathematical logic and of his recently discovered interest in what has become the famous "P versus NP" question. The Prize includes an award of $5000 (US).
AWARD COMMITTEE: The winner of the Prize is selected by a committee of six members. The EATCS President and the SIGACT Chair each appoint three members to the committee, to serve staggered three-year terms. The committee is chaired alternately by representatives of EATCS and SIGACT, with the 2004 Chair being an EATCS representative. The 2004 Award Committee consists of Giorgio Ausiello (University of Rome "La Sapienza"), László Babai (University of Chicago), Pierre-Louis Curien (CNRS, Paris 7), Zvi Galil (Columbia University), Juhani Karhumäki (Chair, University of Turku) and Jeff Ullman (Stanford University).
ELIGIBILITY: Any research paper or a series of papers published (not reprinted) in a recognized refereed journal by a single author or a team of authors in the period 1997-2003 is eligible. This extended period is in recognition of the fact that the value of fundamental work cannot always be immediately assessed. The research nominated for the award should be in the area of theoretical computer science. The term "theoretical computer science" is meant in a broad sense, and encompasses, but is not restricted to, those areas covered by ICALP and STOC. The Award Committee shall have the ultimate authority to decide whether a particular paper is eligible for the Prize.
NOMINATIONS: Nominations for the award should be submitted to the Award Committee Chair at the following address:
Professor Juhani Karhumäki
Department of Mathematics & Turku Centre for Computer Science
University of Turku
20014 University of Turku, FINLAND
email: karhumak@cs.utu.fi
tel.: 358-2-333 5613
fax: 358-2-333 6595
To be considered, nominations for the 2004 prize must be received by January 10, 2004. Nominations may be made by any member of the scientific community. A nomination should contain a brief summary of the technical content of the paper and a brief explanation of its significance. A copy of the research paper or papers should accompany the nomination. The work may be in any language. However, if it is not in English, a more extended summary written in English should be enclosed. Additional recommendations in favor of the nominated work may also be enclosed. To be considered for the award, the paper or series of papers must be recommended by at least two individuals, either in the form of two distinct nominations or one nomination including recommendations from two different people.
It is the duty of the Award Committee to actively solicit nominations from as broad a spectrum of the theoretical computer science community as possible, so as to ensure that potential award-winning papers are not overlooked. To this end, the Award Committee will accept informal proposals of potential nominees, as well as tentative offers to prepare formal nominations, should they be needed to fulfill the requirements that the paper have two separate recommendations.
SELECTION PROCESS: Although the Award Committee is encouraged to consult with the theoretical computer science community at large, the Award Committee is solely responsible for the selection of the winner of the award. In the case that the Award Committee cannot agree on a recipient, the prize may be shared by more than one paper or series of papers, and the Award Committee reserves the right to declare no winner at all. All matters relating to the selection process that are not specified here are left to the discretion of the Award Committee.
PAST WINNERS:
1993:
László Babai and Shlomo Moran, "Arthur-Merlin games: a randomized proof system and a hierarchy of complexity classes," Journal of Computer and System Sciences 36 (1988), 254-276.
Shafi Goldwasser, Silvio Micali and Charles Rackoff, "The knowledge complexity of interactive proof systems," SIAM Journal on Computing 18 (1989), 186-208.
1994:
Johan Håstad, "Almost optimal lower bounds for small depth circuits," Advances in Computing Research 5 (1989), 143-170.
1995:
Neil Immerman, "Nondeterministic space is closed under complementation," SIAM Journal on Computing 17 (1988), 935-938.
Róbert Szelepcsényi, "The method of forced enumeration for nondeterministic automata," Acta Informatica 26 (1988), 279-284.
1996:
Alistair Sinclair and Mark Jerrum, "Approximate counting, uniform generation and rapidly mixing Markov chains," Information and Computation 82 (1989), 93-133.
Mark Jerrum and Alistair Sinclair, "Approximating the permanent," SIAM Journal on Computing 18 (1989), 1149-1178.
1997:
Joseph Halpern and Yoram Moses, "Knowledge and common knowledge in a distributed environment," Journal of the ACM 37 (1990), 549-587.
1998:
Seinosuke Toda, "PP is as hard as the polynomial-time hierarchy," SIAM Journal on Computing 20 (1991), 865-877.
1999:
Peter W. Shor, "Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer," SIAM Journal on Computing 26 (1997), 1484-1509.
2000:
Moshe Y. Vardi and Pierre Wolper, "Reasoning about infinite computations," Information and Computation 115 (1994), 1-37.
2001:
Uriel Feige, Shafi Goldwasser, László Lovász, Shmuel Safra, and Mario Szegedy, "Interactive proofs and the hardness of approximating cliques," Journal of the ACM 43 (1996), 268-292.
Sanjeev Arora and Shmuel Safra, "Probabilistic checking of proofs: a new characterization of NP," Journal of the ACM 45 (1998), 70-122.
Sanjeev Arora, Carsten Lund, Rajeev Motwani, Madhu Sudan, and Mario Szegedy, "Proof verification and the hardness of approximation problems," Journal of the ACM 45 (1998), 501-555.
2002:
Géraud Sénizergues, "L(A)=L(B)? Decidability results from complete formal systems," Theoretical Computer Science 251 (2001), 1-166.
2003:
Yoav Freund and Robert E. Schapire, "A decision-theoretic generalization of on-line learning and an application to boosting," Journal of Computer and System Sciences 55 (1997), 119-139.
[Source: NBC News, "A quantum leap in computing," by Alan Boyle, July 19, 2000. https://www.nbcnews.com/id/wbna3077363]

The world of quantum mechanics goes against the grain of everyday experience. It's an "Alice in Wonderland" realm beyond the ones and zeroes of classical computing. But if we can figure out how to put this world to work, it would lead to a technological quantum leap, allowing us to solve problems that would take millions of years to figure out using present-day computers. And that has huge implications for the Internet — indeed, for any means of communicating data.
Present-day computing rests on a foundation of bits, with information encoded within electronic circuitry as a series of ones and zeroes. But as circuits become more and more miniaturized, computers come closer to the fuzzy threshold of quantum physics: Quantum objects, such as electrons and other subatomic particles, can be thought of as existing in multiple states simultaneously: “up” as well as “down” … “1” as well as “0.” When you observe a quantum object, you take a snapshot of one of those states — but you also destroy quantum information.
This quantum realm serves as the lower limit for classical computing. The “one-or-zero” concept won’t work in a world of fuzzy “one-and-zero” bits.
But this property, known as “superposition,” opens the way to a completely different approach to computing. In this approach, one quantum bit — or qubit — enables you to manipulate two values at the same time. As you string together more and more qubits, the power grows exponentially. If you link two qubits together, you can work with four values at the same time. Three qubits can work with eight values, and so on. If you can get up to 40 qubits, you could work with more than a trillion values simultaneously.
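To get a feel for that growth, here is the arithmetic as a plain classical calculation (the chosen qubit counts are our own):

```python
# Number of values an n-qubit register can hold in superposition: 2^n.
for n in [1, 2, 3, 10, 40]:
    print(f"{n:2d} qubits -> {2 ** n:,} values at once")

# 40 qubits -> 1,099,511,627,776 values: "more than a trillion," as the
# article says. Each extra qubit doubles the count.
```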
Code-breaking
What could such computers be used for? One important application would be to find the prime factors of very large numbers.
This isn’t just an empty mathematical exercise. Prime factorization happens to be the foundation for secure data communications. It’s relatively easy to multiply two prime numbers together (7,817 and 7,333, for example), but no one has found an easy way to do the calculation in reverse — that is, figure out which two prime numbers can be multiplied together to equal 57,322,061.
This is what makes public-key cryptography possible. Other people can send you messages that are coded using the product of two primes, but that secret message can be deciphered only by someone who knows the two prime factors.
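Here is a sketch of that asymmetry using the article's own numbers. The `trial_factor` helper is our illustrative invention, and real public keys use numbers hundreds of digits long, far beyond the reach of any such loop:

```python
def trial_factor(n):
    """Return the smallest factor pair of n by brute-force trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n is prime

# Multiplying the two primes is a single cheap operation...
print(7817 * 7333)             # 57322061

# ...while factoring means searching: thousands of trial divisions here,
# and astronomically many for numbers of cryptographic size.
print(trial_factor(57322061))  # (7333, 7817)
```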
Your computer automatically handles all this coding and decoding in a secure electronic transaction. That’s what protects your credit card information from electronic eavesdroppers when you buy something over the Internet. But suppose the eavesdroppers had quantum computers: With all that computing power, they could figure out the prime factors of even incredibly large numbers — and crack the code.
Thus, the development of quantum computers would require a complete change in the methods used to protect information transmitted over the Internet and other “secure” communications links.
Code-making
Fortunately for code-makers, quantum computing techniques could be used as well to guarantee security (at least up to a negligibly small probability of failure). Quantum cryptography rests on the fact that quantum information cannot be measured without disrupting it. The secret-message software could be built so that attempts to eavesdrop on a message would set off an alarm — and automatically shut down transmission.
Another feature useful for quantum cryptography — and essential for quantum computing — is a bizarre characteristic called entanglement. Two quantum objects can be linked together so that if you observe the result of an interaction with one of the objects, you can figure out what the state of the other object is as well. The entanglement holds even if the two objects are widely separated.
This makes possible an “action-at-a-distance” phenomenon often called quantum teleportation — a term that often leads people to think of “Star Trek” transporters. In reality, what’s being teleported is information about a quantum object, not the object itself.
Two people could encode information, trade it back and forth, and reconstruct the information using entangled quantum systems. Even if eavesdroppers intercept the coded information, they couldn’t read the message because they wouldn’t be part of the entangled system.
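The best-known scheme built on these ideas is the BB84 key-distribution protocol. The article describes the general principle rather than any particular protocol, so what follows is only a toy classical simulation of the "eavesdropping sets off an alarm" effect, with every name our own invention:

```python
import random

def measure(bit, prep_basis, meas_basis):
    # Measuring in the preparation basis returns the bit; measuring in
    # the wrong basis gives a 50/50 coin flip (the rule this toy mimics).
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def key_exchange(n, eavesdrop):
    bits    = [random.randint(0, 1) for _ in range(n)]
    a_bases = [random.choice("+x") for _ in range(n)]
    photons = list(zip(bits, a_bases))

    if eavesdrop:  # Eve measures each photon in a random basis and resends it
        e_bases = [random.choice("+x") for _ in range(n)]
        photons = [(measure(b, pb, eb), eb)
                   for (b, pb), eb in zip(photons, e_bases)]

    b_bases = [random.choice("+x") for _ in range(n)]
    results = [measure(b, pb, mb) for (b, pb), mb in zip(photons, b_bases)]

    # Keep positions where sender and receiver used the same basis; any
    # disagreement there exposes the eavesdropper.
    keep = [i for i in range(n) if a_bases[i] == b_bases[i]]
    errors = sum(bits[i] != results[i] for i in keep)
    return errors, len(keep)

random.seed(1)
print(key_exchange(1000, eavesdrop=False))  # (0, ~500): clean channel
print(key_exchange(1000, eavesdrop=True))   # roughly 25% errors: alarm
```

In this toy model the interceptor guesses the wrong basis half the time, and each wrong guess randomizes the receiver's result half the time, so about a quarter of the checked bits disagree, which is what trips the alarm.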
Making it real
What forms do these quantum systems take? Photons, ions and atomic nuclei already are being put to work, with the spin of those particles representing ones and zeroes simultaneously.
Researchers at the Los Alamos National Laboratory have demonstrated a quantum cryptography scheme that works over 30 miles (48 kilometers) of optical fiber. At the National Institute of Standards and Technology, two trapped beryllium ions have been wired together through entanglement, potentially representing the world’s first two-qubit computational device.
In addition to ion traps, nuclear magnetic resonance devices are helping scientists use the spin of atomic nuclei in quantum computing experiments. There are even proposals to make quantum computing devices out of good old silicon.
Peter Shor, an award-winning mathematician at AT&T Labs, says it may be possible to develop a 30-qubit computer within the next decade — but that would be just the start. It would take hundreds or thousands of networked qubits to solve problems beyond the capability of classical computers. No one knows when we’ll be able to reach that point. In fact, some researchers worry that the technical hurdles are too great to overcome.
Problems and solutions
Getting the information out: Since measurement destroys quantum information, how do you actually get the results of your calculations? The output from a quantum computer might well be analogous to an interference pattern, Shor says: The correct answer would be built up through constructive interference, while incorrect answers would be canceled out through destructive interference.
Scaling up the system: The NIST experiment shows that qubits can be linked together through entanglement, but can such networks be scaled up in size? Quantum information has a tendency to “leak” into the outside environment, in a process known as decoherence. Thus, the quantum system has to be isolated from outside influence as much as possible.
Compensating for errors: No matter what you do, quantum operations are inherently “noisy.” How do you correct for errors? It turns out that you can adapt classical error-correcting techniques to quantum systems to make them fault-tolerant. If the error rate is less than one part per 10,000, you can make quantum computers work even though the individual operations you’re applying to your qubits aren’t perfectly accurate, Shor says.
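To see the flavor of the classical technique being adapted, here is a three-bit repetition code in miniature. This is our own toy model: real quantum codes must also handle phase errors and cannot simply copy or read the data, but the majority-vote idea carries over:

```python
import random

def noisy(bit, p):
    """Flip the bit with probability p (one imperfect operation)."""
    return bit ^ 1 if random.random() < p else bit

def send_encoded(bit, p):
    """Send three noisy copies and decode by majority vote."""
    copies = [noisy(bit, p) for _ in range(3)]
    return int(sum(copies) >= 2)

random.seed(0)
p, trials = 0.01, 100_000
raw     = sum(noisy(1, p) != 1 for _ in range(trials)) / trials
encoded = sum(send_encoded(1, p) != 1 for _ in range(trials)) / trials
print(raw)      # about 0.01   (= p)
print(encoded)  # about 0.0003 (= 3p^2): errors are sharply suppressed
```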
If we do develop workable quantum computers, they would come in handy for much more than code-breaking and code-making. They could make it easier to find solutions to other “needle-in-a-haystack” problems — problems for which no better approach is known than exhaustively searching a large set of possible solutions for the correct one. We could gain new insights into how molecules, atoms and subatomic particles behave — unlocking secrets of the quantum world itself.
But in truth, we can’t imagine all the potential uses for quantum computing today — any more than the creators of the first digital computers, a half-century ago, could have imagined where their pioneering work would eventually lead.
This article is based on an interview with Peter Shor, senior researcher at AT&T Labs. Dr. Shor won the 1999 Gödel Prize and the 1998 Nevanlinna Award for his work in quantum computing and quantum physics and has been with AT&T Labs since 1986. Dan Simon of Microsoft Research also contributed to this report.
[Source: Edge, "The Edge of Computation Science Prize." https://www.edge.org/conversation/david_deutsch-the-edge-of-computation-science-prize]

THE $100,000 EDGE OF COMPUTATION SCIENCE PRIZE
For individual scientific work extending the computational idea, performed, published, or newly applied within the past ten years.
David Deutsch
Recipient of the 2005
$100,000 Edge of Computation Science Prize
DAVID DEUTSCH is the founder of the field of quantum computation. Paul Benioff, Richard Feynman, and others had written about the possibility of quantum computation earlier, but Deutsch's 1985 paper on Quantum Turing Machines was the first full treatment of the subject, and the Deutsch-Jozsa algorithm is the first quantum algorithm.
When he first proposed it, quantum computation seemed practically impossible. But the last decade has seen an explosion in the construction of simple quantum computers and quantum communication systems. None of this would have taken place without Deutsch's work.
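To make the idea concrete, here is a from-scratch statevector simulation of the single-qubit Deutsch problem, the simplest special case of the Deutsch-Jozsa algorithm: one quantum query decides whether a one-bit function f is constant or balanced. All names and helpers below are our own sketch, not Deutsch's formulation:

```python
from math import sqrt

H = [[1 / sqrt(2),  1 / sqrt(2)],
     [1 / sqrt(2), -1 / sqrt(2)]]   # Hadamard gate

def apply(gate, q, state, n=2):
    """Apply a one-qubit gate to qubit q of an n-qubit statevector."""
    pos, out = n - 1 - q, [0.0] * len(state)
    for i, amp in enumerate(state):
        b = (i >> pos) & 1
        for b2 in (0, 1):
            out[(i & ~(1 << pos)) | (b2 << pos)] += gate[b2][b] * amp
    return out

def oracle(f, state):
    """The standard quantum oracle |x, y> -> |x, y XOR f(x)>."""
    out = [0.0] * len(state)
    for i, amp in enumerate(state):
        x, y = i >> 1, i & 1
        out[(x << 1) | (y ^ f(x))] += amp
    return out

def deutsch(f):
    state = [0.0] * 4
    state[0b01] = 1.0                 # start in |x=0, y=1>
    state = apply(H, 0, state)
    state = apply(H, 1, state)
    state = oracle(f, state)
    state = apply(H, 0, state)
    p1 = state[0b10] ** 2 + state[0b11] ** 2  # Pr[first qubit reads 1]
    return "balanced" if p1 > 0.5 else "constant"

print(deutsch(lambda x: 0))      # constant
print(deutsch(lambda x: x))      # balanced
print(deutsch(lambda x: 1 - x))  # balanced
```

A classical algorithm must evaluate f twice to tell the two cases apart; the quantum circuit needs a single query, the seed of the separation that the full Deutsch-Jozsa algorithm makes exponential.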
The nominating essay is reproduced in part below.
Although the general idea of a quantum computer had been proposed earlier by Richard Feynman, in 1985 David Deutsch wrote the key paper which proposed the idea of a quantum computer and initiated the study of how to make one. Since then he has continued to be a pioneer and a leader in a rapidly growing field that is now called quantum information science.
Presently, small quantum computers are operating in laboratories around the world, and the race is on to find a scalable implementation that, if successful, will revolutionize the technologies of computation and communications. It is fair to say that no one deserves recognition for the growing success of this field more than Deutsch, for his ongoing work as well as for his founding paper. Among his key contributions in the last ten years are a paper with Ekert and Jozsa on quantum logic gates, and a proof of universality in quantum computation, with Barenco and Ekert (both in 1995).
One reason to nominate Deutsch for this prize is that he has always aimed to expand our understanding of the notion of computation in the context of the deepest questions in the foundations of mathematics and physics. Thus, his pioneering work in 1985 was motivated by interest in the Church-Turing thesis. Much of his recent work is motivated by his interest in the foundations of quantum mechanics, as we see from his 1997 book.
ABOUT DAVID DEUTSCH
The main papers written by Deutsch that contained "achievement in scientific work that embodies extensions of the computational idea" were in 1985 ("Quantum theory, the Church-Turing principle, and the universal quantum computer") and 1989 ("Quantum computational networks").
His 1995 paper, "Conditional quantum dynamics and logic gates" (with A. Barenco, A. Ekert and R. Jozsa) was an important step in clarifying what sort of physical processes would be needed to implement quantum computation in the laboratory, and what sort of things the experimentalists should be trying to get to work.
"Universality in quantum computation," also written in 1995 (with A. Barenco and A. Ekert) proved the universality of almost all 2-qubit quantum gates, thus verifying his conjecture made in 1989 and showing that quantum computation and quantum gate operations are "built in" to quantum physics far more deeply than classical physics. In 1996, in "Quantum privacy amplification and the security of quantum cryptography over noisy channels" (with A. Ekert, R. Jozsa, C. Macchiavello, S. Popescu and A. Sanpera), he brought quantum cryptography a little bit closer to being practical as opposed to just a laboratory curiosity.
His recent work as seen in the following three papers can be seen as new "applications" of the computational idea, rather than extensions of it.
In 2000, "Information Flow in Entangled Quantum Systems" (with P. Hayden) refutes the long-held belief that quantum systems contain 'non-local' effects, and it does it by appealing to the universality of quantum computational networks, and analysing information flow in those.
Also in 2000, in "Machines, Logic and Quantum Physics" (with A. Ekert and R. Lupacchini), a philosophic paper, not a scientific one, he appealed to the existence of a distinctive quantum theory of computation to argue that our knowledge of mathematics is derived from, and is subordinate to, our knowledge of physics (even though mathematical truth is independent of physics).
In 2002, he answered several long-standing questions about the multiverse interpretation of quantum theory in "The Structure of the Multiverse" — in particular, what sort of structure a "universe" is, within the multiverse. It does this by using the methods of the quantum theory of computation to analyse information flow in the multiverse.
His two main lines of research at the moment, qubit field theory and quantum constructor theory, may well yield important extensions of the computational idea eventually, but at the moment neither of them has yielded any results at all, to speak of, only promising avenues of research.
Born in Haifa, Israel, David Deutsch was educated at Cambridge and Oxford universities. After several years at the University of Texas at Austin, he returned to Oxford, where he now lives and works. Since 1999, he has been a non-stipendiary Visiting Professor of Physics at the University of Oxford, where he is a member of the Centre for Quantum Computation at the Clarendon Laboratory, Oxford University.
In 1998 he was awarded the Institute of Physics' Paul Dirac Prize and Medal. This is the Premier Award for theoretical physics within the gift of the Council of the Institute of Physics. It is made for "outstanding contributions to theoretical (including mathematical and computational) physics." In 2002 he received the Fourth International Award on Quantum Communication for "theoretical work on Quantum Computer Science."
He is the author of The Fabric of Reality [1997].
References:
"Quantum Theory, The Church-Turing Principle, and the Universal Quantum Computer," Proc. Roy. Soc. London A400, 97-117 (1985)
" Quantum computational networks" Proceedings of the Royal Society of London A425:73-90. (1989)
"Conditional quantum dynamics and logic gates" (with A. Barenco, A. Ekert and R. Jozsa) Phys. Rev. Lett. 74 4083-6 (1995)
"Universality in quantum computation" (with A. Barenco and A. Ekert) Proc. R. Soc. Lond. A449 669-77 (1995)
"Quantum privacy amplification and the security of quantum cryptography over noisy channels" (with A. Ekert, R. Jozsa, C. Macchiavello, S. Popescu and A. Sanpera) Phys. Rev. Lett. 77 2818-21 (1996)
"Information Flow in Entangled Quantum Systems" (with P. Hayden) Proc. R. Soc. Lond. A456 1759-1774 (2000)
"Machines, Logic and Quantum Physics" (with A. Ekert and R. Lupacchini) Bulletin of Symbolic Logic 3 3 (2000)
"The Structure of the Multiverse" Proc. R. Soc. Lond.A458 2028 2911-23 (2002) | |||||||
[Source: Combinatorics and more (Gil Kalai), "A Few Slides and a Few Comments From My MIT Lecture on Quantum Computers," March 13, 2013. https://gilkalai.wordpress.com/2013/03/13/a-few-slides-and-a-few-comments-from-my-mit-lecture-on-quantum-computers/]

I gathered a few of the comments made by participants of my lecture "Why quantum computers cannot work and how", and a few of my answers. Here they are along with some of the lecture's slides. Here is the link for the full presentation. 1) Getting started. Aram Harrow: Introduces me, mentions our Internet…
[Source: LinkedIn, CSE at IIT Bombay, with related posts, March 1, 2024. https://www.linkedin.com/posts/department-of-computer-science-and-engineering-at-iit-bombay_risc-2024-keynote-speaker-dr-shweta-activity-7169274981934096385-e3TT]

💡 RISC 2024 Keynote Speaker: Dr. Shweta Agrawal
Know more: https://lnkd.in/dGgq-CSf
Register for talk: https://lnkd.in/dqUsKrWt

🏆 Proud to Receive the Certificate of Excellence in Quantum Cryptography! 🏆 I'm absolutely delighted to share a significant milestone in my learning journey - the Certificate of Excellence for successfully completing the Quantum30 Learning Challenge in Quantum Cryptography! 🚀 Over the past 30 days, I've had the privilege of diving deep into the captivating world of quantum mysteries and cryptographic marvels. From grasping the fundamentals of quantum mechanics to exploring advanced cryptographic algorithms, this journey has been nothing short of transformative. I want to express my heartfelt gratitude to Quantum Computing India for curating this exceptional learning experience. The seamless blend of theory, practical application, and the support of a vibrant community made this challenge an unforgettable exploration of the quantum realm. This certificate is not just a recognition of my efforts but also a testament to the power of continuous learning and the boundless opportunities that await in the fields of quantum computing and cryptography. Here's to the pursuit of knowledge, the thrill of discovery, and the incredible potential of quantum cryptography! 🌟 #CertificateOfExcellence #QuantumCryptography #Quantum30 #LifelongLearning #AchievementUnlocked #QuantumComputingIndia #LinkedIn
In the mid-1990s, Peter Shor's groundbreaking algorithm opened the door to quantum computing's immense potential. It demonstrated how a hypothetical quantum computer could factor large numbers into their prime components far more efficiently than classical machines, challenging the foundations of internet security relying on public-key cryptography. Fast forward to today, Oded Regev, a computer scientist at New York University, has unveiled a groundbreaking variant of Shor's algorithm, altering the relationship between the size of the number being factored and the required quantum operations. This development marks a significant leap forward in quantum computing and cryptography. It's an exciting breakthrough that has the potential to revolutionize the industry and reshape our understanding of computational complexity. While further optimization is needed to apply this algorithm practically, it represents a remarkable advancement in the field. The broader message here is clear: quantum computing researchers must remain open to unexpected surprises and continue exploring new frontiers in this ever-evolving landscape. Don't miss this enlightening article that delves into the future of quantum computing and its potential impacts on our digital world. https://lnkd.in/gKthh8We #QuantumComputing #Cryptography #Innovation #Tech #IP #VC #Patents #DeepTech #Quantum
Quantum computing has a hype problem "Established applications for quantum computers do exist. The best known is Peter Shor’s 1994 theoretical demonstration that a quantum computer can solve the hard problem of finding the prime factors of large numbers exponentially faster than all classical schemes. Prime factorization is at the heart of breaking the universally used RSA-based cryptography, so Shor’s factorization scheme immediately attracted the attention of national governments everywhere, leading to considerable quantum-computing research funding. " "There are proposals to use small-scale quantum computers for drug design, as a way to quickly calculate molecular structure, which is a baffling application given that quantum chemistry is a minuscule part of the whole process. Equally perplexing are claims that near-term quantum computers will help in finance. No technical papers convincingly demonstrate that small quantum computers, let alone NISQ machines, can lead to significant optimization in algorithmic trading or risk evaluation or arbitrage or hedging or targeting and prediction or asset trading or risk profiling. This however has not prevented several investment banks from jumping on the quantum-computing bandwagon." By Sankar Das Sarma at MIT Technology Review Link https://lnkd.in/ddjbKVws
Quantum leaps in learning! I had an incredible time at the "Introduction to Quantum Computing" live workshop with UCL Quantum Science and Technology Institute last week! The highlight? Getting our hands dirty with IBM's Quantum Platform for the first practice lessons. It was just pure, fascinating fun! Huge thanks to the UCLQ team for this eye-opening experience. It's amazing to see how quantum theory comes to life in practical applications. https://lnkd.in/eH4-fqqP

Quantum computers are coming, and they're not here for the office ping-pong tournament. These subatomic smarty-pants are threatening to crash our crypto party faster than you can say "Schrödinger's cat." Imagine a nerd with superpowers who can read your diary, steal your lunch money, and solve your Rubik's cube – all before you've had your morning coffee. That's quantum computing for ya! So, unless you want your firewalls to become as effective as a chocolate teapot, it's time to quantum-proof your cyber defenses. Remember, in the quantum world, your bits and bytes are like an all-you-can-eat buffet for these qubit-crunching monsters. Don't let your data become quantum toast! Upgrade your crypto game now, or risk becoming the punchline in the cosmic joke of cybersecurity.
Day 16 of the #quantum30 challenge with Quantum Computing India: The beginner's path to quantum computing 🖥⚙ Today was the sixteenth day (delayed) of the Quantum30 challenge, where we learned about Quantum Cryptography. Here are some of my valuable takeaways from today's learning:

⭐ The current cryptographic codes that are being used online are mostly called public key systems.
⭐ It's an algorithm that encrypts a message into a mess that is not understandable but predictable by the receiver. Since the key is public, anyone can encrypt the message but only the receiver can decrypt it.
⭐ The method is easy to do but hard to undo. One of the most common forms of this is the division of a very large number into primes. The idea is that it is easy to multiply the numbers but hard to find the factors when only the product is given.
⭐ These keys are widely used as they are practically very safe; for the large numbers that are being used, it will take millions of years to find the prime factors on a standard computer. However, a quantum computer could speed up the process incredibly fast and we will no longer have the time advantage of this encryption.
⭐ Scientists are currently working on encryption methods where a quantum computer does not bring any advantage, called post-quantum cryptography.
⭐ Quantum key distribution - the use of quantum particles, along with their properties like superposition and their dual wave nature, in order to securely exchange encryption keys.
⭐ QKD works because if an intruder tries to measure the spin of the photons, the act of measuring changes the key itself. This means that the sender and the receiver will find out if someone has tried to listen in to the message due to the mismatch in the received key.
#quantumcomputing #quantum30 #quantumcomputingindia #quantumscience
[Source: Backreaction (Sabine Hossenfelder), "The five most promising ways to quantize gravity," September 2019. http://backreaction.blogspot.com/2019/09/the-five-most-promising-ways-to.html]
[Source: Robert Sedgewick and Kevin Wayne, Algorithms, 4th edition booksite, Section 6.6. https://algs4.cs.princeton.edu/66intractability/]

6.6 Intractability
This section is under construction. The goal of complexity theory is to understand the nature of efficient computation. We have learned about analysis of algorithms, which enables us to classify algorithms according to the amount of resources they will consume. In this section, we will learn about a rich class of problems for which nobody has been able to devise an efficient algorithm.
Computational complexity.
As digital computers were developed in the 1940s and 1950s, the Turing machine served as the theoretical model of computation. In the 1960s Hartmanis and Stearns proposed measuring the time and memory needed by a computer as a function of the input size. They defined complexity classes in terms of Turing machines and proved that some problems have "an inherent complexity that cannot be circumvented by clever programming." They also proved a formal version (time hierarchy theorem) of the intuitive idea that if given more time or space, Turing machines can compute more things. In other words, no matter how hard a problem is (time and space requirements), there are always harder problems.
Computational complexity is the art and science of determining resource requirements for different problems. Computational complexity deals with assertions about any conceivable algorithm for a problem. Making such statements is significantly more challenging than understanding the running time of one particular algorithm for the problem since we must reason about all possible algorithms (even those not yet discovered). This makes computational complexity an exciting, yet daunting, field of study. We will survey some of its most important ideas and practical outgrowths.
Polynomial time.
We have analyzed the running time of an algorithm as a function of its input size. When solving a given problem, we prefer an algorithm that takes 8 N log N steps to one that takes 3 N^2 steps, since when N is large, the first algorithm is significantly faster than the second. The second algorithm will ultimately solve the same problem (but it might take hours instead of seconds). In contrast, an exponential-time algorithm has a different qualitative behavior. For example, a brute-force algorithm for the TSP might take N! steps. Even if each electron in the universe (10^79) had the power of today's fastest supercomputer (10^12 instructions per second), and each worked for the life of the universe (10^17 seconds) on solving the problem, it would barely make a dent in solving a problem with N = 1,000 since 1000! >> 10^1000 >> 10^79 · 10^12 · 10^17 = 10^108. Exponential growth dwarfs technological change. We refer to any algorithm whose running time is bounded by a polynomial in the input size (e.g., N log N or N^2) as a polynomial-time algorithm. We say that a problem is intractable if there is no polynomial-time algorithm for the problem.
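Python's arbitrary-precision integers let us check that arithmetic exactly (the variable names are ours):

```python
import math

electrons   = 10 ** 79    # electrons in the universe
ops_per_sec = 10 ** 12    # instructions per second, per electron
seconds     = 10 ** 17    # lifetime of the universe

total_ops = electrons * ops_per_sec * seconds   # 10^108 operations
steps     = math.factorial(1000)                # brute-force TSP, N = 1,000

print(total_ops < steps)   # True: barely a dent
print(len(str(steps)))     # 2568 digits, versus 109 for total_ops
```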
Create log-log scale plot of N, N^3, N^5, N^10, 1.1^N, 2^N, N! as in Harel p. 74.
As programmers gained more experience with computation, it became evident that polynomial-time algorithms were useful and exponential-time algorithms were not. In a very influential paper, Jack Edmonds referred to polynomial algorithms as "good algorithms" and argued that polynomial time is a good surrogate for efficient computation. Kurt Gödel wrote a letter to von Neumann (p. 9) in 1956 that contains the (implicit) notion that polynomiality is a desirable feature. Earlier (1953), von Neumann recognized the qualitative difference between polynomial and exponential algorithms. The idea of classifying problems according to polynomial and exponential time profoundly changed the way people thought about computational problems.
NP.
Informally we define a search problem as a computational problem where we are looking for a solution among a (potentially huge) number of possibilities, but such that when we find a solution, we can easily check that it solves our problem. Given an instance I of a search problem (some input data specifying the problem), our goal is to find a solution S (an entity that meets some pre-specified criterion) or report that no such solution exists. To be a search problem, we require that it be easy to check that S is indeed a solution. By easy, we mean polynomial-time in the size of the input I. The complexity class NP is the set of all search problems. Here are a few examples.
Linear systems of equations. Given a system of linear equations Ax = b, find a solution x that satisfies the equations (if one exists). The problem is in NP because if we are given a purported solution x, we can check that Ax = b by plugging in x and verifying each equation.
Linear programming. Given a system of linear inequalities Ax ≤ b, find a solution x that satisfies the inequalities (if one exists). The problem is in NP because if we are given a purported solution x, we can check that Ax ≤ b by plugging in x and verifying each inequality.
Integer linear programming. Given a system of linear inequalities Ax ≤ b, find a binary (0/1) solution x that satisfies the inequalities (if one exists). The problem is in NP because if we are given a purported solution x, we can check that Ax ≤ b by plugging in x and verifying each inequality.
While it is easy to check a proposed solution to all three problems, how difficult is it to find a solution from scratch?
Remark: our definition of NP is slightly non-standard. Historically, complexity classes were defined in terms of decision problems (yes-no problems). For example, given a matrix A and a vector b, does there exist a solution x such that Ax = b?
P.
The complexity class P is the set of all search problems solvable in polynomial-time (on a deterministic Turing machine). As before, we define P in terms of search problems (instead of decision problems). It captures most of the problems that we can solve in practice on real machines. We list a few examples below:
GCD. Find the greatest common divisor of two integers x and y. Algorithm: Euclid's algorithm (Euclid, 300 BCE). Instance: 34, 51. Solution: 17.

STCONN. Given a graph G and two vertices s and t, find a path from s to t. Algorithm: BFS or DFS (Theseus).

SORT. Find the permutation that puts the elements in ascending order. Algorithm: mergesort (von Neumann, 1945). Instance: 2.3 8.5 1.2 9.1 2.2 0.3. Solution: 5 2 4 0 1 3.

PLANARITY. Given a graph G, draw it in the plane so that no two edges cross. Algorithm: Hopcroft-Tarjan (1974).

LSOLVE. Given a matrix A and a vector b, find a vector x such that Ax = b. Algorithm: Gaussian elimination (Edmonds, 1967). Instance: x + y = 1, 2x + 4y = 3. Solution: x = 1/2, y = 1/2.

LP. Given a matrix A and a vector b, find a vector x such that Ax ≤ b. Algorithm: ellipsoid algorithm (Khachiyan, 1979). Instance: x + y ≤ 1, 2x + 4y ≤ 3. Solution: x = 0, y = 0.

DIOPHANTINE. Given a (sparse) polynomial of one variable with integer coefficients, find an integral root. Algorithm: Smale et al. (1999). Instance: x^5 - 32. Solution: x = 2.
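For the first entry in the list, here is the complete polynomial-time algorithm (a direct transcription with our own variable names):

```python
def gcd(x, y):
    """Euclid's algorithm: replace (x, y) by (y, x mod y) until y is 0.
    The number of iterations is polynomial in the number of input digits."""
    while y != 0:
        x, y = y, x % y
    return x

print(gcd(34, 51))  # 17, matching the GCD instance above
```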
Extended Church-Turing Thesis.
In the mid 1960s Cobham and Edmonds independently observed that the set of problems solvable in a polynomial number of steps remains invariant over a very wide range of computational models, from deterministic Turing machines to RAM machines. The extended Church-Turing thesis asserts that the Turing machine is as efficient as any physical computing device. That is, P is the set of search problems solvable in polynomial-time in this universe. If some piece of hardware solves a problem of size N in time T(N), the extended Church-Turing thesis asserts that a deterministic Turing machine can do it in time T(N)k for some fixed constant k, where k depends on the particular problem. Andy Yao expresses the broad implications of this thesis:
They imply that at least in principle, to make future computers more efficient, one only needs to focus on improving the implementation technology of present-day computer designs.
In other words, any reasonable model of computation can be efficiently simulated on a (probabilistic) Turing machine. The extended Church-Turing thesis is true for all known physical general purpose computers. For random access machines (e.g., your PC or Mac) the constant k = 2. So, for example, if a random access machine can perform a computation in time N3/2, then a Turing machine can do the same computation in time N3.
Does P = NP?
One of the most profound scientific questions of our time is whether P = NP. That is, can all search problems be solved in polynomial time? The Clay Mathematics Institute offers a $1 million Millennium Prize for solving it. There are various speculations on when the question will be resolved. The overwhelming consensus is that P != NP, but nobody has been able to prove it.
Video of Homer Simpson pontificating over P = NP, with accompanying music Erased by Paradise Lost.
Gödel's letter to von Neumann anticipated the P = NP question. He recognized that if P = NP (satisfiability is in P), it "would have consequences of the greatest importance," since then "the mental work of a mathematician concerning Yes-or-No questions could be completely replaced by a machine." He asked for which combinatorial problems there was a more efficient alternative to exhaustive search.
NP-completeness.
Informally, NP-complete problems are the "hardest" problems in NP; they are the ones most likely not to be in P. Define: a problem is NP-complete if (i) it is in NP and (ii) every problem in NP polynomial-time reduces to it. Defining the concept of NP-completeness does not mean that such problems exist. In fact, the existence of NP-complete problems is an amazing thing. We cannot prove a problem is NP-complete by presenting a reduction from each NP problem since there are infinitely many of them. In the early 1970s, Cook and Levin proved that SAT is NP-complete.
This is an example of universality: if we can solve any one NP-complete problem, then we can solve every problem in NP. It is a unique scientific discovery, giving a common explanation to all sorts of problems. It is even more amazing that there exist "natural" problems that are NP-complete.
The impact of NP-completeness on the natural sciences has been undeniable. Once the first NP-complete problems were discovered, intractability "spread like a shockwave through the space of problems," first in computer science, and then to other scientific disciplines. Papadimitriou lists 20 diverse scientific disciplines that were coping with internal questions; ultimately, scientists discovered the inherent complexity of those questions after realizing that their core problems were NP-complete. NP-completeness is mentioned as a keyword in 6,000 scientific papers per year. It "captures vast domains of computational, scientific, mathematical endeavors, and seems to roughly delimit what mathematicians and scientists had been aspiring to compute feasibly" [Papadimitriou]. Few scientific theories have had such a breadth and depth of influence.
Some NP-complete problems. Since the discovery that SAT is NP-complete, tens of thousands of problems have been identified as NP-complete. In 1972, Karp showed that 21 of the most infamous problems in discrete mathematics were NP-complete, including TSP, Knapsack, 3-Color, and Clique. The failure of scientists to find an efficient algorithm for these 21 problems, despite being unaware that they were NP-complete, was among the first evidence suggesting that P != NP. Below we list a sampling of NP-complete problems. This is only meant to illustrate their diversity and pervasiveness.
Bin Packing. You have n items and m bins. Item i weighs w[i] pounds. Each bin can hold at most W pounds. Can you pack all n items into the m bins without violating the given weight limit?
This problem has many industrial applications. For example, UPS may need to ship a large number of packages (items) from one distribution center to another. It wants to put them into trucks (bins), and use as few trucks as possible. Other NP-complete variants allow volume requirements: each 3-dimensional package takes up space and you also have to worry about arranging the packages within the truck.
Knapsack. You have a set of n items. Item i weighs w[i] pounds and has benefit b[i]. Can you select a subset of the items that have total weight less than or equal to W and total benefit greater than or equal to B? For example, when you go camping, you must select items to bring based on their weight and utility. Or, suppose you are burglarizing a home and can only carry W pounds of loot in your knapsack. Each item i weighs w[i] pounds has a street value of b[i] dollars. Which items should you steal?
Subset Sum. Given n integers does there exists a subset of them that sum exactly to B? For example, suppose the integers are {4, 5, 8, 13, 15, 24, 33}. If B = 36 then the answer is yes (and 4, 8, 24 is a certificate). If B = 14 the answer is no.
Partition. Given n integers, can you divide them into two subsets so that each subset sums to the same number? For example, suppose the integers are {4, 5, 8, 13, 15, 24, 33}. Then the answer is yes, and {5, 13, 33} is a certificate. Load balancing for dual processors.
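For the subset-sum and partition examples above, here is a brute-force checker. It tries all 2^n subsets, which is exactly the exponential behavior nobody knows how to beat in general (the helper name is ours):

```python
from itertools import combinations

def subset_sum(nums, target):
    """Try every subset; return a certificate or None."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return combo
    return None

nums = [4, 5, 8, 13, 15, 24, 33]
print(subset_sum(nums, 36))              # (4, 8, 24): yes
print(subset_sum(nums, 14))              # None: no
# Partition is subset sum with target = half the total (102 / 2 = 51):
print(subset_sum(nums, sum(nums) // 2))  # (5, 13, 33), the text's certificate
```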
Integer linear programming. Given an integer matrix A and an integer vector b, does there exist an integer vector x such that Ax ≤ b? This is a central problem in operations research since many optimization problems can be formulated in this way. Note the contrast to the linear programming problem presented above where we are looking for a rational vector instead of an integer vector. The line between problems which are tractable and problems which are intractable can be very subtle.
SAT. Given n Boolean variables x1, x2, ..., xN and a logical formula, is there an assignment of truth variables that makes the formula satisfiable, i.e., true? For example, suppose the formula is
(x1' + x2 + x3) (x1 + x2' + x3) (x2 + x3) (x1' + x2' + x3')
Then, the answer is yes and (x1, x2, x3) = (true, true, false) is a certificate. Many applications to electronic design automation (EDA), including testing and verification, logic synthesis, FPGA routing, and path delay analysis. Application to AI, including knowledge base deduction and automatic theorem proving.
Exercise: given two circuits C1 and C2, design a new circuit C such that some setting of input values makes C output true if and only if C1 and C2 are not equivalent.
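A brute-force satisfiability checker for the example formula; the clause encoding below (variable index, negated?) is our own:

```python
from itertools import product

# (x1' + x2 + x3)(x1 + x2' + x3)(x2 + x3)(x1' + x2' + x3')
clauses = [[(0, True), (1, False), (2, False)],
           [(0, False), (1, True), (2, False)],
           [(1, False), (2, False)],
           [(0, True), (1, True), (2, True)]]

def satisfied(assignment):
    # A literal (v, neg) is true when the variable's value differs from neg.
    return all(any(assignment[v] != neg for v, neg in clause)
               for clause in clauses)

solutions = [a for a in product([False, True], repeat=3) if satisfied(a)]
print((True, True, False) in solutions)  # True: the certificate checks out
print(len(solutions))                    # 4 satisfying assignments in all
```

Checking a proposed certificate is linear in the formula size; it is only the search over all 2^n assignments that blows up.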
3-SAT. Given n Boolean variables x1, x2, ..., xN and a logical formula in conjunction normal form (product-of-sums) with exactly 3 distinct literals per clause, is there an assignment of truth variables that makes the formula satisfiable?
Clique. Given n people and a list of pairwise friendships. Is there a group or clique of k people such that every possible pair of people within the group are friends? It is convenient to draw the friendship graph, where we include a node for each person and an edge connecting each pair of friends. In the following example with n = 11 and k = 4, the answer is yes, and {2, 4, 8, 9} is a certificate.
Longest path. Given a set of nodes and pairwise distances between nodes, does there exists a simple path of length at least L connecting some pair of nodes?
Machine Scheduling. Your goal is to process n jobs on m machines. For simplicity, assume each machine can process any one job in 1 time unit. Also, there can be precedence constraints: perhaps job j must finish before job k can start. Can you schedule all of the jobs to finish in L time units?
Scheduling problems have a huge number of applications. Jobs and machines can be quite abstract: to graduate Princeton you need to take n different courses, but are unwilling to take more than m courses in any one semester. Also, many courses have prerequisites (you can't take COS 226 or 217 before taking 126, but it is fine to take 226 and 217 at the same time). Can you graduate in L semesters?
Shortest Common Superstring. Given the genetic alphabet { a, t, g, c } and N DNA fragments (e.g., ttt, atggtg, gatgg, tgat, atttg) is there a DNA sequence with K or fewer characters that contains every DNA fragment? Suppose K = 11 in the above example; then the answer is yes and atttgatggtg is a certificate. Applications to computational biology.
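Verifying the certificate is easy, which is precisely what places the problem in NP:

```python
fragments = ["ttt", "atggtg", "gatgg", "tgat", "atttg"]
candidate = "atttgatggtg"

print(len(candidate))                          # 11 characters, as required
print(all(f in candidate for f in fragments))  # True: every fragment appears
```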
Protein folding. Proteins in an organism fold in three-dimensional space in a very specific way, to their native state. This geometric pattern determines the behavior and function of a protein. One of the most widely used folding models is the two-dimensional hydrophilic-hydrophobic (H-P) model. In this model, a protein is a sequence of 0s and 1s, and the problem is to embed it into a 2-d lattice so as to maximize the number of pairs of 1s that are adjacent in the lattice but not in the sequence (each such pair lowers the energy). For example, the sequence 011001001110010 can be embedded in such a way that there are 5 new adjacent pairs of 1s.
Minimizing the H-P energy of a protein is NP-hard (Papadimitriou et al.). It is well accepted by biologists that proteins fold to minimize their energies. A version of Levinthal's paradox asks how it is possible that proteins are able to efficiently solve apparently intractable problems.
Integration. Given integers a1, a2, ..., aN, does the integral of cos(a1 x) cos(a2 x) ··· cos(aN x) over the interval from 0 to 2π equal 0? If you see this integral in your next physics course, you should not expect to be able to solve it. This should not come as a big surprise because in Section 7.4 we consider a version of integration that is undecidable.
Crossword puzzle. Given an integer N, and a list of valid words, is it possible to assign letters to the cells of an N-by-N grid so that all horizontal and vertical words are valid? No easier if some of the squares are black as in a crossword puzzle.
Theorem. Given a purported theorem (such as one for the Riemann Hypothesis), can you prove it is true using at most n symbols in some formal system such as Zermelo-Fraenkel set theory?
Tetris.
Minesweeper.
Regular expressions. Given two regular expressions over the unary alphabet { 1 }, do they represent different languages? Given two NFAs, do they represent different languages? It may not be apparent that either problem is even decidable since we don't have an obvious bound on the size of the smallest string that is in one language but not the other. [Note that the corresponding inequivalence problem for DFAs is solvable in polynomial time.] The reason why we phrase the problem as inequivalence instead of equivalence is that it is easy to check that the two entities are non-equivalent by demonstrating a string s. In fact, if the two languages are different, then the smallest such string is polynomial in the size of the input. Thus, we can use the efficient algorithms from Section 7.xyz to check whether s is recognized by an RE or accepted by an NFA. However, to argue that two REs are equivalent, we would need an argument that guarantees that all strings in one are in the other, and vice versa. [It is possible to devise an (exponential) algorithm to test whether two REs or NFAs are equivalent, although this should not be obvious.]
Lemmings. Is it possible to guide a tribe of green-haired lemming creatures to safety in a level of the game Lemmings?
Multinomial minimization over the unit hypercube. Given a multinomial of N variables, is the minimum ≤ C, assuming all variables are bounded between 0 and 1? Classic calculus problem: min f(x) = ax^2 + bx + c over [0, 1]. The derivative is 0 at x = -b/(2a), but the minimum can occur at a boundary.
Quadratic Diophantine equations. Given positive integers a, b, and c, are there positive integers x and y such that ax2 + by = c?
Knot theory. Which knots on a 3-dimensional manifold bound a surface of genus ≤ g?
Bounded Post Correspondence Problem. Given a Post correspondence problem with N cards and an integer K ≤ N, is there a solution that uses at most K cards? Recall it is undecidable if there is no limit on K.
Nash equilibria. Non-cooperative game theory. Given a 2-player game, find a Nash equilibrium that maximizes the payoff to player 1. Does more than one NE exist? Is there an NE that is Pareto optimal? Is there an NE that maximizes social welfare?
Quadratic congruence. Given positive integers a, b, and c, is there a positive integer x < c such that x2 = a (mod b)?
Ising model in 3d. Simple mathematical model of phase transitions, e.g., when water freezes or when cooling iron becomes magnetic. Computing lowest energy state is NP-hard. Solvable in polynomial time if graph is planar, but 3d lattice is nonplanar. Holy grail of statistical mechanics for 75 years before proved NP-hard. Establishing NP-completeness means that physicists won't spend another 75 years attempting to solve the unsolvable.
Bandwidth minimization. Given an N-by-N matrix A and an integer B, is it possible to permute the rows and columns of A such that Aij = 0 if |i - j| > B. Useful for numerical linear algebra.
Voting and social choice. It is NP-hard for an individual to manipulate a voting scheme known as single transferable vote. It is NP-hard to determine who has won an election in a scheme seriously proposed by Lewis Carroll (Charles Dodgson) in 1876. In Carroll's scheme, the winner is the candidate who, with the fewest pairwise adjacent changes in voters' preference rankings, becomes the Condorcet winner (a candidate who would beat all other candidates in a pairwise election). Other hard problems: computing Shapley-Shubik voting power and computing the Kemeny optimal aggregation.
Coping with intractability.
The theory of NP-completeness says that unless P = NP, there are some important problems for which we can't create an algorithm that simultaneously achieves the following three properties:
Guarantee to solve the problem in polynomial-time.
Guarantee to solve the problem to optimality.
Guarantee to solve arbitrary instances of the problem.
When we encounter an NP-complete problem, we must relax one of the three requirements. We will consider solutions to the TSP problem that relax one of the three goals.
Complexity theory deals with worst-case behavior. This leaves open the possibility of designing algorithms that run quickly on some instances, but take a prohibitive amount of time on others. For example, Chaff is a program that can solve many real-world SAT instances with 10,000 variables. Remarkably, it was developed by two undergraduates at Princeton. The algorithm does not guarantee to run in polynomial time, but the instances we're interested in may be "easy."
Sometimes we may be willing to sacrifice the guarantee of finding the optimal solution. Many heuristic techniques (simulated annealing, genetic algorithms, the Metropolis algorithm) have been designed to find "nearly optimal" solutions to the TSP problem. Sometimes it is even possible to prove how good the resulting solution will be. For example, Sanjeev Arora designed an approximation algorithm for the Euclidean TSP problem that guarantees to find a solution that costs at most, say, 1% above the optimum. Designing approximation algorithms is an active area of research. Unfortunately, there are also non-approximability results of the form: if you can find an approximation algorithm for problem X that guarantees to get within a factor of 2 of the optimum, then P = NP. Thus, designing approximation algorithms for some NP-complete problems is not possible.
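As a minimal illustration of trading optimality for speed, here is the nearest-neighbor heuristic for Euclidean TSP. Unlike Arora's scheme it comes with no approximation guarantee, and the point set is an arbitrary choice of ours:

```python
from math import dist

def nearest_neighbor_tour(points):
    """Greedy TSP heuristic: always hop to the closest unvisited point.
    Runs in quadratic time but can return tours far from optimal."""
    unvisited = list(range(1, len(points)))
    tour = [0]
    while unvisited:
        here = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: dist(here, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

points = [(0, 0), (2, 0), (2, 1), (0, 1), (1, 3)]
tour = nearest_neighbor_tour(points)
length = sum(dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
             for i in range(len(tour)))
print(tour, round(length, 2))  # e.g. [0, 3, 2, 1, 4] 10.32
```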
If we are trying to solve a special class of TSP problems, e.g., where the points lie on the boundary of a circle or the vertices of an M-by-N lattice, then we can design efficient (and trivial) algorithms to solve the problem.
Exploiting intractability. Having intractability problems is occasionally a good thing. In Section XYZ, we will exploit intractable problems to design cryptographic systems.
Between P and NP-complete. Most natural problems in NP are now known to be in P or NP-complete. If P != NP, then there are provably some NP problems that are neither in P nor NP-complete. They are like "dark matter" that we have not developed means of observing. A few notable unclassified problems in the netherworld: factoring and graph isomorphism.
Factoring. Best known algorithm is 2^O(n^1/3 polylog(n)) - number field sieve. Believed by experts not to be in P.
Precedence constrained 3-processor scheduling. Given a set of unit length tasks, and a precedence order, find the shortest schedule on 3 parallel machines.
Turnpike problem. Given N(N-1)/2 positive numbers (not necessarily distinct), does there exist a set of N points on the line such that these numbers are the pairwise distances of the N points. Intuition: points are exits on I-95. Problem first arose in 1930s in the context of x-ray crystallography. Also known as the partial digest problem in molecular biology.
Boolean formula dualization. Given a monotone CNF formula and a monotone DNF formula, are they equivalent? (a + b)(c + d) = ac + ad + bc + bd. Naively applying De Morgan's law leads to exponential algorithm because of redundancy. Best algorithm O(n^(log n / log log n)).
Stochastic games. White, Black and Nature alternate moving a token on the edges of a directed graph, starting at the start state s. White's goal is to move the token to a goal state t. Black's goal is to prevent the token from ever reaching t. Nature moves the tokens at random. Given a digraph, a start state s, and a goal state t,, does White have a strategy which will make the token reach t with probability ≥ 1/2? Problem is in NP intersect co-NP, but not known to be in P. Believed to be in P, we just haven't found a polynomial-time algorithm.
Other complexity classes.
The complexity classes P, NP, and NP-complete are the three most famous complexity classes. Scott Aaronson's website The Complexity Zoo contains a comprehensive list of other complexity classes that are useful in classifying problems according to their computational resources (time, space, parallelizability, use of randomness, quantum computing). We describe a few of the most important ones below.
PSPACE. The complexity class PSPACE = problems solvable by a Turing machine using polynomial space. PSPACE-complete = in PSPACE and every other problem in PSPACE can be reduced to it in polynomial time.
Here is a complexity version of the halting problem. Given a Turing machine that is limited to n tape cells, does it halt in at most k steps? The problem is PSPACE-complete, where n is encoded in unary. This means that unless P = PSPACE, we are unlikely to be able to tell whether a given program, running on a computer with n units of memory, will terminate before k steps substantially faster than the trivial method of running it for k steps and seeing what happens.
Bodlaender: given a graph with vertices 1, ..., N, two players alternate in labeling the vertices red, green, or blue. The first player to label a vertex the same color as one of its neighbors loses. Determining whether there is a winning strategy for the first player is PSPACE-complete.
Versions of many conventional games are provably intractable; this partially explains their appeal. Also natural generalizations of Othello, Hex, Geography, Shanghai, Rush Hour, go-moku, Instant Insanity, and Sokoban are PSPACE-complete.
Eppstein's list of hard games.
Is a given string generated by a given context-sensitive grammar?
Do two regular expressions describe different languages? PSPACE-complete even over the binary alphabet, and even if one of the regular expressions is .* (i.e., it matches every string).
Another example that can be made rigorous is the problem of moving a complicated object (e.g., furniture) with attachments that can move and rotate through an irregularly shaped corridor.
Another example arises in parallel computing when the challenge is to determine whether a deadlock state is possible within a system of communicating processors.
Note PSPACE = NPSPACE (Savitch's theorem).
EXPTIME. The complexity class EXPTIME = all decision problems solvable in exponential time on a deterministic Turing machine. Note that P ⊆ NP ⊆ PSPACE ⊆ EXPTIME and, by the time hierarchy theorem, at least one of these inclusions is strict, but it is unknown which one (or more). It is conjectured that all of the inclusions are strict.
Roadblock from Harel p. 85.
Natural generalizations of chess, checkers, Go (with the Japanese-style ko termination rule), and shogi are EXPTIME-complete. Given a board position, can the first player force a win? Here N is the size of the board, and the running time is exponential in N. One reason these problems are harder from a theoretical standpoint than Othello (and other PSPACE-complete games) is that a game can last an exponential number of moves. Checkers (aka English draughts, on an N-by-N board): a player can have an exponential number of possible moves at a given turn because of jump sequences. Note: depending on the termination rules, checkers can be either PSPACE-complete or EXPTIME-complete. For EXPTIME-completeness, we assume the "forced capture rule": a player must make a jump (or sequence of jumps) if one is available.
Here is a complexity version of the halting problem. Given a Turing machine, does it halt in at most k steps? Alternatively, given a fixed Java program and a fixed input, does it terminate in at most k steps? The problem is EXPTIME-complete. Here the running time is exponential in the length of the binary representation of k. In fact, no Turing machine can guarantee to solve it in, say, O(k / log k) steps. Thus, brute-force simulation is essentially the best possible: provably, the problem cannot be solved substantially faster than the trivial method of running the Turing machine for the first k steps and seeing what happens.
An EXPTIME-complete problem cannot be solved in polynomial time on a deterministic Turing machine; this fact does not depend on the P ≠ NP conjecture.
EXPSPACE. EXPSPACE-complete: given two "extended" regular expressions, do they represent different languages? By extended, we allow a squaring operation (which denotes two copies of an expression). Stockmeyer and Meyer (1973). Or, more simply, an intersection operation (Hunt, 1973). Also: the word problem for commutative semigroups and reachability in vector addition systems (Cardoza, Lipton, Meyer, 1976).
Reachability in vector addition systems is EXPSPACE-hard: given a nonnegative starting vector s and a set of vectors v1, v2, ..., vn, a nonnegative vector x is reachable from s if it is either (i) the vector s itself or (ii) of the form y + vi, where y is reachable and y + vi is nonnegative. The problem is to determine whether a given vector x is reachable.
DOUBLE-EXPTIME. The class DOUBLE-EXPTIME is the set of all decision problems solvable in doubly exponential time. A remarkable example is determining whether a formula in first-order Presburger arithmetic is true. Presburger arithmetic consists of statements involving integers with + as the only operation (no multiplication or division). It can model statements like the following: if x and y are integers such that x ≤ y + 2, then y + 3 > x. In 1929 Presburger proved that his system is consistent (one cannot prove a contradiction like 1 > 2) and complete (every statement can be proven true or false). In 1974, Fischer and Rabin proved that any algorithm that decides the truth of a Presburger formula requires at least 2^(2^(cN)) time for some constant c, where N is the length of the formula.
Non-elementary. More than 2^2^2^...^2^N for any finite tower. Given two regular expressions that allow squaring and complementation, do they describe different languages?
Other types of computational problems.
We focus on search problems since this is a very rich and important class of problems for scientists and engineers.
Search problems. This is the version we have considered in detail. Technically, FP = the class of function (search) problems solvable in polynomial time, and FNP = the class of function problems solvable in polynomial time on a nondeterministic Turing machine (equivalently, search problems whose candidate solutions can be checked in polynomial time). FP problems can have any output that can be computed in polynomial time (e.g., multiplying two numbers or finding the solution to Ax = b).
Decision problems. Traditionally, complexity theory is defined in terms of yes/no problems, e.g., does there exist a solution to Ax ≤ b? The definition of reduction is cleaner (no need to deal with output). The classes P and NP are traditionally defined in terms of decision problems. Typically the search problem reduces to the decision problem (and this is known to be true for all NP-complete problems); such search problems are referred to as self-reducible. The P = NP question is equivalent to the FP = FNP question.
Total functions. Occasionally, a decision problem is easy, while the corresponding search problem is (believed to be) hard. For example, there may be a theorem asserting that a solution is guaranteed to exist, but the theorem does not provide any hint as to how to find one efficiently.
Subset sum example. Given N numbers, find two (disjoint) subsets of these N numbers that sum to exactly the same value. If N = 77 and all the numbers are at most twenty-one decimal digits long, then by the pigeonhole principle, at least two subsets must sum to the same value: there are 2^77 subsets but at most 1 + 77 × 10^21 < 2^77 possible sums. Another example: decision = COMPOSITE, search = FACTOR.
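A quick sanity check of the counting argument (a throwaway sketch of ours; the class name is hypothetical):

    import java.math.BigInteger;

    public class PigeonholeCheck {
        public static void main(String[] args) {
            BigInteger subsets = BigInteger.valueOf(2).pow(77);      // number of subsets of 77 numbers
            BigInteger sums = BigInteger.ONE.add(                    // possible sums: 0 to 77 * (10^21 - 1)
                BigInteger.valueOf(77).multiply(BigInteger.TEN.pow(21)));
            System.out.println(sums.compareTo(subsets) < 0);         // prints true
        }
    }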
John Nash proved that Nash equilibria always exist in a normal-form game of two or more players with specified utilities. The proof was nonconstructive, so it was unclear how to find such an equilibrium. Finding one was later proved to be PPAD-complete, the analog of NP-complete for search problems whose solutions are guaranteed to exist.
General equilibrium theory is the foundation of microeconomics. Given an economy with k commodities, each of N agents has an initial endowment of the commodities. Each agent also has a utility function for each commodity. The Arrow-Debreu theorem asserts that under suitable technical conditions (e.g., utility functions are continuous, monotonic, and strictly concave) there exists a set of market prices such that each agent sells all their goods and buys the optimal bundle with the proceeds (i.e., supply equals demand for every commodity). But how does the market compute these prices? The proof relies on a deep theorem from topology (Kakutani's fixed point theorem), and no efficient algorithm is currently known. Economists assume that the market finds the equilibrium prices; Adam Smith used the metaphor of the invisible hand to describe this social mechanism.
Generalization of the 15-slider puzzle. Testing whether a solution exists is in P, but finding a shortest solution is intractable. [Ratner-Warmuth, 1990]
Optimization problems. Sometimes we have optimization problems, e.g., TSP. Given an NP problem and a cost function on solutions, the goal for a given instance is to find the best solution (e.g., the shortest TSP tour, the minimum-energy configuration, etc.). It is sometimes hard to formulate this as a search problem (find the shortest TSP tour), since it is not clear how to efficiently check that a tour is optimal. Instead, we rephrase: given a length L, find a tour of length at most L. Then binary search for the optimal L.
Counting problems. Given an NP problem, find the number of solutions for it. For example, given a CNF formula, how many satisfying assignments does it have? Includes many problems in statistical physics and combinatorics. Formally, the class of problems is known as #P.
Strategic problems. Given a game, find an optimal strategy (or best move) for a player. Includes many problems in economics and board games (e.g., chess, go).
Output polynomial time.
Some problems involve more output than a single bit of information. For example, outputting a solution to the towers of Hanoi problem requires at least 2^N steps. This requirement is not because the solution is inherently hard to compute, but rather because there are 2^N symbols of output, and it takes one unit of time to write each output symbol. Perhaps a more natural way to measure efficiency is as a function of both the input size and the output size. A classic electrical engineering problem with DFAs is to build a DFA from an RE using the minimum number of states. We would like an algorithm that is polynomial in the size of the input RE (number of symbols) and also in the size of the output DFA (number of states). Unless P = NP, designing such an algorithm is impossible. In fact, it is not even possible to design a polynomial algorithm that gets within a constant (or even polynomial) number of states of the minimum! Without the theory of NP-completeness, researchers would waste time following such unpromising research directions.
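The towers of Hanoi case can be made concrete with a short program (a standard recursive sketch; class and method names are ours). It prints one line per move, so its running time is at least 2^N - 1 simply because that is the number of lines of output:

    public class Hanoi {
        // Print the moves that transfer n discs from peg 'from' to peg 'to'.
        // The recurrence T(n) = 2 T(n-1) + 1 gives exactly 2^n - 1 moves.
        public static void moves(int n, char from, char to, char spare) {
            if (n == 0) return;
            moves(n - 1, from, spare, to);
            System.out.println("move disc " + n + " from " + from + " to " + to);
            moves(n - 1, spare, to, from);
        }

        public static void main(String[] args) {
            int n = Integer.parseInt(args[0]);
            moves(n, 'A', 'C', 'B');
        }
    }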
Other lower bounds.
Information theoretic. In Section X.Y we saw that insertion sort uses at most N^2 compares to sort N items, and mergesort uses at most N log N compares. A natural question is whether we can do better, perhaps with an algorithm that uses at most 5N compares or even 1/2 N log N compares. To make the question precise, we must explicitly state our computational model (the decision tree): here, we assume that we only access the data through the less() function. A remarkable theorem due to X says that no (comparison-based) sorting algorithm can guarantee to sort every input of N distinct elements in fewer than ~ N log N compares. To see why, observe that each compare (call to less()) provides one bit of information. In order to identify the correct permutation, you need log N! bits, and log N! ~ N log N. This tells us that mergesort is (asymptotically) the best possible comparison-based sorting algorithm. No such sorting algorithm in existence (or even one not yet imagined) will use substantially fewer compares.
3-SUM hard. Given a set of N integers, do any three of them sum to 0? A quadratic algorithm exists (see exercise xyz), but no substantially subquadratic algorithm is known. 3-SUM linear-time reduces to many problems in computational geometry: deciding whether a set of points in the plane contains 3 collinear points, deciding whether a set of line segments in the plane can be separated into two subsets by a line, deciding whether a set of triangles covers the unit square, deciding whether a polygon P can be translated to fit completely inside another polygon Q, robot motion planning.
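For reference, here is one standard quadratic approach (a sketch of ours, not necessarily the one in the exercise cited above): sort the array, then for each element scan inward from both ends with two pointers.

    import java.util.Arrays;

    public class ThreeSum {
        // Return true if some three entries of a[] (at distinct indices) sum to 0.
        // Sorting takes N log N; the two-pointer scan costs N per element, so N^2 overall.
        public static boolean hasZeroTriple(long[] a) {
            long[] b = a.clone();
            Arrays.sort(b);
            for (int i = 0; i < b.length - 2; i++) {
                int lo = i + 1, hi = b.length - 1;
                while (lo < hi) {
                    long sum = b[i] + b[lo] + b[hi];
                    if      (sum == 0) return true;
                    else if (sum < 0)  lo++;
                    else               hi--;
                }
            }
            return false;
        }
    }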
Brute-force TSP takes N! steps. Using dynamic programming, we can get it down to about 2^N. The best known lower bound is linear in N. The essence of computational complexity is trying to find matching upper and lower bounds.
Circuit complexity.
There are other ways to define and measure computational complexity. A Boolean circuit with n inputs can compute any Boolean function of n variables. We take the language decided by the circuit to be the set of binary strings of length n for which the circuit outputs 1; we need one circuit for each input size n. Shannon (1949) proposed the size of the circuit as a measure of complexity. It is known that a language has uniformly polynomial-size circuits if and only if the language is in P.
Physical and analog computation.
The P = NP question is a mathematical question regarding the capabilities of Turing machines and classical digital computers. We might also wonder whether the same limits apply to analog computers. By analog, we mean any "deterministic physical device that uses a fixed number of physical variables to represent each problem variable." The internal state is represented by continuous variables instead of discrete ones. Examples: soap bubbles, protein folding, quantum computing, gears, time travel, black holes, etc.
Vergis, Steiglitz, and Dickinson proposed an analog form of the Strong Church-Turing thesis:
Any finite analog computer can be simulated efficiently by a digital computer, in the sense that the time required by the digital computer to simulate the analog computer is bounded by a polynomial function of the resources used by the analog computer.
The resources of the analog computer could be time, volume, mass, energy, torque, or angular momentum. Reference: The Physics of Analog Computation
Any reasonable model of computation (e.g., not involving exponential parallelism) can be simulated in polynomial time by a Turing machine (supplemented by a hardware random number generator).
Reference: Scott Aaronson. Can yield new insights into physics. One day "the presumed intractability of NP-complete problems might be taken as a useful constraint in the search for new physical theories" just like the second law of thermodynamics. Still can be falsified by experiment, but don't waste time looking...
Soap bubbles. Folklore says that soap films can solve the Steiner tree problem. In reality, a soap film only finds a local minimum, and it may take a while to do even that.
Quantum computing. One speculative model of computation, the quantum computer, might be capable of solving some problems in polynomial time that a deterministic Turing machine cannot. Peter Shor discovered a roughly N^3 algorithm for factoring N-digit integers, whereas the best known algorithm on a classical computer takes time exponential in roughly N^(1/3). The same ideas could lead to a comparable speedup in simulating quantum-mechanical systems. This explains the recent excitement in quantum computation, as it could result in a paradigm shift for computing. However, quantum computers do not yet violate the extended Church-Turing thesis since we don't yet know how to build them. (They are difficult to harness because much of the quantum information seems to be easily destroyed by its interactions with the outside world, i.e., decoherence.) Moreover, it is still possible that someone might discover a polynomial-time algorithm for factoring on a classical computer, although most experts suspect that this is not possible. Grover's algorithm: search among N items in about sqrt(N) steps instead of N.
Richard Feynman argued in 1982 that classical computers cannot simulate quantum-mechanical systems without slowing down exponentially (the crux of the argument is that Turing machines have locality of reference, whereas quantum mechanics exhibits "spooky action at a distance"). A quantum computer might be able to get around this problem. Feynman's quote with respect to building a computer to simulate physics:
"The rule of simulation that I would like to have is that the number of computer elements required to simulate a large physical system is only to be proportional to the space-time volume of the physical system. I don't want to have an explosion."
Rephrase in terms of modern complexity theory by replacing "proportional to" by "bounded by a polynomial function of".
Deutsch and Jozsa give an algorithm that is provably exponentially faster on a quantum computer than on a deterministic Turing machine. (Though the exponential gap disappears if the Turing machine has access to a hardware random number generator and is allowed to be wrong with negligible probability. Quantum computers can also generate true randomness.)
PRIMES and COMPOSITE.
It is easy to convince someone that a number is composite by producing a factor. Then, the person just has to check (by long division) that you did not lie to them. Marin Mersenne conjectured that numbers of the form 2^p - 1 are prime for p = 2, 3, 5, 7, 13, 17, 19, 31, 67, 127, and 257. His conjecture for p = 67 was disproved by F. N. Cole over two hundred and fifty years later in 1903. According to E. T. Bell's book Mathematics: Queen and Servant of Science:
In the October meeting of the AMS, Cole announced a talk "On the Factorisation of Large Numbers". He walked up to the blackboard without saying a word, calculated by hand the value of 2^67, and carefully subtracted 1. Then he multiplied two numbers (which were 193707721 and 761838257287). Both results written on the blackboard were equal. Cole silently walked back to his seat, and this is said to be the first and only talk held during an AMS meeting where the audience applauded. There were no questions. It took Cole about 3 years, each Sunday, to find this factorization, according to what he said.
For the record, 2^67 - 1 = 147573952589676412927 = 193707721 × 761838257287.
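Today the arithmetic takes a fraction of a second to verify (a throwaway sketch; the class name is ours):

    import java.math.BigInteger;

    public class ColeCheck {
        public static void main(String[] args) {
            BigInteger mersenne = BigInteger.valueOf(2).pow(67).subtract(BigInteger.ONE);
            BigInteger product  = new BigInteger("193707721").multiply(new BigInteger("761838257287"));
            System.out.println(mersenne.equals(product));   // prints true
        }
    }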
Q + A
Q. Are polynomial algorithms always useful?
A. No; algorithms that take N^100 or 10^100 N^2 steps are as useless in practice as exponential ones. However, the exponents and constants that arise in practice are usually sufficiently small that polynomial-time algorithms scale to huge problems, so polynomiality often serves as a surrogate for "useful in practice."
Q. Why is the class of all search problems named NP?
A. The original definition of NP was in terms of nondeterministic Turing machines: NP is the set of all decision problems that can be solved in polynomial time on a nondeterministic Turing machine. Roughly speaking, the difference between a deterministic and a nondeterministic Turing machine is that the former operates like a conventional computer, performing each instruction in sequence and forming a computational path, whereas a nondeterministic Turing machine can "branch off," with each branch executing a different statement in parallel, forming a computational tree. (If any path in the tree leads to a YES, then we accept; if all paths lead to NO, we reject.) This is where the N in NP comes from. It turns out that the two definitions are equivalent, but the certificate-based one is now more widely used. (Also, Karp's 1972 paper uses the polynomial-time verifiability definition.)
Q. What is the complexity class NP-hard?
A. There are several competing definitions. We define a (decision, search, or optimization) problem to be NP-hard if solving it in polynomial time would imply P = NP. This definition implicitly uses Turing reductions (extended to search problems).
Q. What's so hard about factoring an integer N in polynomial time? Can't I just divide all potential factors less than N (or √N) into N and see if any leave a remainder of zero?
A. The algorithm is correct, but remember it takes only lg N bits to represent the integer N. Thus, for an algorithm to be polynomial in the input size, it must be polynomial in lg N, and not N.
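Here is the trial-division idea as code (a sketch of ours). It is correct, but the loop runs about √N times, which is roughly 2^(b/2) iterations for a b-bit input, i.e., exponential in the input size:

    import java.math.BigInteger;

    public class TrialDivision {
        // Return a nontrivial factor of n, or null if n has none (i.e., n is prime).
        // Runs in roughly sqrt(n) steps: exponential in the number of bits of n.
        public static BigInteger factor(BigInteger n) {
            for (BigInteger d = BigInteger.valueOf(2);
                 d.multiply(d).compareTo(n) <= 0;
                 d = d.add(BigInteger.ONE)) {
                if (n.mod(d).signum() == 0) return d;
            }
            return null;
        }
    }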
Q. How is it possible that checking whether an integer is composite is solvable in polynomial time, yet finding its factors is not known (or believed) to be?
A. There are ways to prove a number is composite without getting your hands on any of its factors. A famous theorem from number theory (Fermat's little theorem) implies that if you have two integers a and p such that (i) a is not a multiple of p and (ii) a^(p-1) != 1 (mod p), then p is not prime.
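In code, a single Fermat test looks like this (a sketch of ours; any convenient base a can be tried):

    import java.math.BigInteger;

    public class FermatWitness {
        // Return true if the base a proves p composite: a is not a multiple of p
        // and a^(p-1) != 1 (mod p). Returning false proves nothing by itself.
        public static boolean provesComposite(BigInteger p, BigInteger a) {
            if (a.mod(p).signum() == 0) return false;    // a is a multiple of p, so no conclusion
            return !a.modPow(p.subtract(BigInteger.ONE), p).equals(BigInteger.ONE);
        }
    }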
Q. Is there a decision problem that is solvable in polynomial time on a quantum computer, but provably not in P?
A. This is an open research problem. FACTOR is a candidate: there is no proof that FACTOR is not in P, although it is widely believed to be outside P.
Q. Does NP = EXPTIME?
A. The experts believe no, but have been unable to prove it.
Q. Suppose someone proves P = NP. What would be the practical consequences?
A. It depends on how the question is resolved. Obviously, it would be a remarkable theoretical breakthrough. In practice, it might have dramatic significance if the proof of P = NP established a fast algorithm for an important NP-complete problem. But if the proof only yielded a 2^100 N^117 algorithm for the TSP (and the constant and exponent could not be reduced), it would have little practical impact. It could also be that someone proves P = NP by indirect means, thereby yielding no algorithm at all!
Q. Suppose someone proves P != NP. What would be the practical consequences?
A. It would be a remarkable theoretical breakthrough and solidify the foundation of much of computational complexity.
Q. Suppose P = NP. Does that mean deterministic TM are the same as non-deterministic TM?
A. Not quite. For example, even if P = NP, a nondeterministic TM may be able to solve a problem in time proportional to N^2 where the best deterministic one takes N^3. If P = NP, it just means that the two types of machines solve the same set of decision problems in polynomial time; it says nothing about the degree of the polynomial.
Q. Where can I learn more about NP-completeness?
A. The authoritative reference remains Garey and Johnson Computers and Intractability: A Guide to the Theory of NP-completeness. Many of the most important subsequent discoveries are documented in David Johnson's NP-completeness column.
Exercises
Suppose that X is NP-complete, X poly-time reduces to Y, and Y poly-time reduces to X. Is Y necessarily NP-complete?
Answer: No, since Y may not be in NP. For example, if X = CIRCUIT-SAT and Y = CO-CIRCUIT-SAT, then X and Y satisfy the conditions, but it is unknown whether Y is in NP. Note that the answer depends on our definition of poly-time reduction (here, Turing reductions rather than Karp reductions).
Explain why the optimization version of the vertex cover problem is not necessarily a search problem.
Answer: There does not appear to be an efficient way to certify that a purported solution is the best possible (even though we could use binary search on the search version of the problem to find the best solution).
Web Exercises
Subset sum. Given N positive integers and a target value V, determine whether there is a subset whose sum is exactly V. Divide the integers into 4 equal groups. Enumerate and store all of the subset sums of each group by brute force. Let A, B, C, and D denote the sets of subset sums of the four groups. The goal is to find integers a, b, c, and d such that a + b + c + d = V, where a is in A, b is in B, c is in C, and d is in D. Use a heap to enumerate the sums a + b in increasing order, where a is in A and b is in B. Simultaneously, use another heap to enumerate the sums c + d in decreasing order, where c is in C and d is in D.
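The exercise asks for the four-group, heap-based enumeration; as a warm-up, here is the simpler two-half "meet in the middle" version (our own sketch), which already improves on brute force by storing the subset sums of one half in a hash set:

    import java.util.HashSet;

    public class SubsetSumMeetInMiddle {
        // Return true if some subset of a[] sums to target.
        // Runs in about 2^(N/2) time; assumes N is small enough that 2^(N/2) fits in an int.
        public static boolean hasSubset(long[] a, long target) {
            int half = a.length / 2, rest = a.length - half;
            HashSet<Long> left = new HashSet<>();
            for (int mask = 0; mask < (1 << half); mask++) {         // all subsets of the first half
                long sum = 0;
                for (int i = 0; i < half; i++)
                    if ((mask & (1 << i)) != 0) sum += a[i];
                left.add(sum);
            }
            for (int mask = 0; mask < (1 << rest); mask++) {         // all subsets of the second half
                long sum = 0;
                for (int i = 0; i < rest; i++)
                    if ((mask & (1 << i)) != 0) sum += a[half + i];
                if (left.contains(target - sum)) return true;        // the two halves meet in the middle
            }
            return false;
        }
    }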
Sum of square roots. What is the minimum nonzero difference between two sums of square roots of integers? Given n and k, find the minimum positive value of r(n, k) = (√a1 + √a2 + ... + √ak) - (√b1 + √b2 + ... + √bk), where the ai and bi are integers between 0 and n. For example, r(20, 2) = √10 + √11 - √5 - √18 and r(20, 3) = √5 + √6 + √18 - √4 - √12 - √12. Hint: enumerate all 2^(n/2) sums of square roots of the first n/2 integers and let that set be A; enumerate all 2^(n/2) sums of square roots of the last n/2 integers and let that set be B. Now enumerate the sums a + b in sorted order, where a is in A and b is in B, and look for sums whose difference is very tiny.
Dividing diamonds. Given N (around 36) class D diamonds, divide them into two groups so that they are as close in total weight to each other as possible. Assume the weights are real numbers (measured in carats).
Hamiltonian path in a DAG. Given a directed acyclic graph G, give an O(n + m)-time algorithm to test whether or not it has a Hamiltonian path. Hint: topological sort.
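One way to realize the hint (our own sketch, assuming an adjacency-list representation): compute a topological order with Kahn's algorithm, then check that each consecutive pair of vertices in the order is joined by an edge. A DAG has a Hamiltonian path if and only if this check succeeds.

    import java.util.*;

    public class HamiltonianPathDAG {
        // adj[v] lists the vertices w with an edge v -> w; the graph is assumed to be a DAG.
        public static boolean hasHamiltonianPath(List<Integer>[] adj) {
            int n = adj.length;
            int[] indegree = new int[n];
            for (int v = 0; v < n; v++)
                for (int w : adj[v]) indegree[w]++;

            // Kahn's algorithm: repeatedly remove a vertex of indegree 0.
            Deque<Integer> queue = new ArrayDeque<>();
            for (int v = 0; v < n; v++) if (indegree[v] == 0) queue.add(v);
            int[] order = new int[n];
            int count = 0;
            while (!queue.isEmpty()) {
                int v = queue.remove();
                order[count++] = v;
                for (int w : adj[v]) if (--indegree[w] == 0) queue.add(w);
            }

            // A Hamiltonian path exists iff consecutive vertices in the order are adjacent.
            for (int i = 0; i + 1 < n; i++)
                if (!adj[order[i]].contains(order[i + 1])) return false;
            return true;
        }
    }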
Which of the following can we infer from the fact that the traveling salesperson problem is NP-complete, if we assume that P is not equal to NP?
(a) There does not exist an algorithm that solves arbitrary instances of the TSP problem.
(b) There does not exist an algorithm that efficiently solves arbitrary instances of the TSP problem.
(c) There exists an algorithm that efficiently solves arbitrary instances of the TSP problem, but no one has been able to find it.
(d) The TSP is not in P.
(e) All algorithms that are guaranteed to solve the TSP run in polynomial time for some family of input points.
(f) All algorithms that are guaranteed to solve the TSP run in exponential time for all families of input points.
Answer: (b) and (d) only.
Which of the following can we infer from the fact that PRIMALITY is in NP but not known to be NP-complete, if we assume that P is not equal to NP?
(a) There exists an algorithm that solves arbitrary instances of PRIMALITY.
(b) There exists an algorithm that efficiently solves arbitrary instances of PRIMALITY.
(c) If we found an efficient algorithm for PRIMALITY, we could immediately use it as a black box to solve TSP.
Answer: We can infer only (a), since all problems in NP are decidable. If P != NP, then there are problems in NP that are neither in P nor NP-complete; PRIMALITY could be one of them (although this possibility was later ruled out when PRIMES was shown to be in P). Part (c) cannot be inferred since we don't know whether PRIMALITY is NP-complete.
Which of the following are NP-complete?
The brute force TSP algorithm.
The quicksort algorithm for sorting.
The Halting problem.
Hilbert's 10th problem.
Answer: None. NP-completeness deals with *problems*, not specific algorithms for problems. The halting problem and Hilbert's 10th problem are undecidable, so they are not in NP (and all NP-complete problems are in NP).
Let X and Y be two decision problems. Suppose we know that X reduces to Y. Which of the following can we infer?
(a) If Y is NP-complete then so is X.
(b) If X is NP-complete then so is Y.
(c) If Y is NP-complete and X is in NP then X is NP-complete.
(d) If X is NP-complete and Y is in NP then Y is NP-complete.
(e) X and Y can't both be NP-complete.
(f) If X is in P, then Y is in P.
(g) If Y is in P, then X is in P.
Answer: (d) and (g) only. X reduces to Y means that if you had a black box to solve Y efficiently, you could use it to solve X efficiently. X is no harder than Y.
Show that CIRCUIT-SAT reduces to CIRCUIT-DIFF. Hint: create a circuit with N inputs that always outputs 0.
Show that CIRCUIT-DIFF reduces to CIRCUIT-SAT.
Show that DETERMINANT is in NP: given an N-by-N integer matrix A, is det(A) = 0?
Solution: certificate is a nonzero vector x such that Ax = 0.
Show that FULL-RANK is in NP: given an N-by-N integer matrix A, is det(A) ≠ 0?
Solution: certificate is an N-by-N inverse matrix B such that AB = I.
Search problems vs. decision problems. We can formulate a search problem using a corresponding decision problem. For example, the problem of finding the prime factorization of an integer N can be formulated using the decision problem: given two integers N and L, does N have a nontrivial factor strictly less than L? The search problem is solvable in polynomial time if and only if the corresponding decision problem is. To see why, note that we can efficiently find the smallest factor p of N by using different values of L along with binary search. Once we have the factor p, we can repeat the process on N/p.
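Here is that reduction as code (a sketch; the oracle hasFactorLessThan is hypothetical and stands in for the assumed polynomial-time decision procedure):

    import java.math.BigInteger;

    public class FactorViaDecision {
        // Hypothetical decision oracle: does n have a nontrivial factor strictly less than limit?
        static boolean hasFactorLessThan(BigInteger n, BigInteger limit) {
            throw new UnsupportedOperationException("assumed polynomial-time oracle");
        }

        // Binary search for the smallest nontrivial factor of n, using O(log n) oracle calls.
        static BigInteger smallestFactor(BigInteger n) {
            if (!hasFactorLessThan(n, n)) return n;                  // no nontrivial factor: n is prime
            BigInteger lo = BigInteger.valueOf(2), hi = n;
            while (lo.compareTo(hi) < 0) {
                BigInteger mid = lo.add(hi).shiftRight(1);
                if (hasFactorLessThan(n, mid.add(BigInteger.ONE))) hi = mid;   // smallest factor <= mid
                else lo = mid.add(BigInteger.ONE);                             // smallest factor > mid
            }
            return lo;   // the smallest factor p; repeat on n / p for the full factorization
        }
    }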
Usually we can show that the search problem and the decision problem are equivalent up to polynomial factors in running time. Papadimitriou (Example 10.8) gives an interesting counterexample to the rule. Given N positive integers such that their sum is less than 2^N - 1, find two subsets whose sum is equal. For example, the 10 numbers below sum to 1014 < 1023.
23 47 59 88 91 100 111 133 157 205
Since there are more subsets of the 10 integers (2^10 = 1024) than possible sums (each between 0 and 1014, so at most 1015 values), there must be two different subsets with the same sum. But nobody knows a polynomial-time algorithm for finding such a pair of subsets. On the other hand, the natural decision problem is trivially solvable in constant time: are there two subsets of the numbers that sum to the same value? The answer is always yes.
Pratt's primality certificate. Show that PRIMES is in NP. Use Lehmer's theorem (the converse of Fermat's little theorem), which asserts that an integer p > 1 is prime if and only if there exists an integer x such that x^(p-1) = 1 (mod p) and x^((p-1)/d) ≠ 1 (mod p) for all prime divisors d of p-1. For example, if p = 7919, then the prime factorization of p-1 is 7918 = 2 × 37 × 107. Now x = 7 satisfies 7^7918 = 1 (mod 7919), but 7^(7918/2) ≠ 1 (mod 7919), 7^(7918/37) ≠ 1 (mod 7919), and 7^(7918/107) ≠ 1 (mod 7919). This proves that 7919 is prime (assuming that you recursively certify that 2, 37, and 107 are prime).
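The certificate for 7919 can be checked mechanically (a small sketch of ours; the recursive certification of 2, 37, and 107 is omitted):

    import java.math.BigInteger;

    public class PrattCheck {
        public static void main(String[] args) {
            BigInteger p = BigInteger.valueOf(7919), x = BigInteger.valueOf(7);
            BigInteger pm1 = p.subtract(BigInteger.ONE);                      // 7918 = 2 * 37 * 107
            System.out.println(x.modPow(pm1, p).equals(BigInteger.ONE));      // true
            for (long d : new long[] { 2, 37, 107 })                          // prime divisors of p-1
                System.out.println(!x.modPow(pm1.divide(BigInteger.valueOf(d)), p)
                                     .equals(BigInteger.ONE));                // true for each d
        }
    }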
Pell's equation. Find all positive integer solutions to Pell's equation: x^2 - 92y^2 = 1. Solution: (1151, 120), (2649601, 276240), etc. There are infinitely many solutions, but each successive one is about 2300 times the previous one.
Pell's equation. In 1657, Pierre de Fermat challenged his colleagues with the following problem: given a positive integer c, find a positive integer y such that cy^2 + 1 is a perfect square. Fermat used c = 109. It turns out the smallest solution is (x, y) = (158,070,671,986,249, 15,140,424,455,100). Write a program Pell.java that reads in an integer c and finds the smallest solution to Pell's equation x^2 - cy^2 = 1. Try c = 61; the smallest solution is (1,766,319,049, 226,153,980). For c = 313, the smallest solution is (32,188,120,829,134,849, 1,819,380,158,564,160). The problem is provably unsolvable in a polynomial number of steps (as a function of the number of bits in the input c) because the output may require exponentially many bits!
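A naive search makes the exercise concrete (our own sketch; it finds the answer quickly for small cases such as c = 92, but the challenge instances above require the continued-fraction expansion of √c):

    import java.math.BigInteger;

    public class Pell {
        // Smallest positive (x, y) with x^2 - c*y^2 = 1; c must not be a perfect square.
        // BigInteger.sqrt() requires Java 9 or later.
        public static BigInteger[] smallest(long c) {
            BigInteger C = BigInteger.valueOf(c);
            for (BigInteger y = BigInteger.ONE; ; y = y.add(BigInteger.ONE)) {
                BigInteger x2 = C.multiply(y).multiply(y).add(BigInteger.ONE);
                BigInteger x = x2.sqrt();
                if (x.multiply(x).equals(x2)) return new BigInteger[] { x, y };
            }
        }
    }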
3-COLOR reduced to 4-COLOR. Show that 3-COLOR polynomial reduces to 4-COLOR. Hint: given an instance G of 3-COLOR, create an instance G' of 4-COLOR by adding a special vertex x to G and connecting it to all of the vertices in G.
3-SAT is self-reducible. Show that 3-SAT is self-reducible. That is, given an oracle that answers whether or not any 3-SAT formula is satisfiable, design an algorithm that can find a satisfying assignment to a 3-SAT formula (assuming it is satisfiable). Your algorithm should run in polynomial time plus a polynomial number of calls to the oracle.
3-COLOR is self-reducible. Show that 3-COLOR is self-reducible. That is, given an oracle that answers whether or not any graph G is 3-colorable, design an algorithm that can 3-color a graph (assuming it is 3-colorable). Your algorithm should run in polynomial time plus a polynomial number of calls to the oracle. | |||||||
The Navy has appealed its loss-a move NRDC has vowed to fight. The successful lawsuit is only a first step, Reynolds explains, as it only applies to the U.S. Navy-not to other countries who are developing similar systems-and only pertains to low-frequency sonar. Since the problem of undersea noise is international in scope, NRDC has recently launched an international program to combat it-a measure that may offer some relief to whales and dolphins whose movements are not constrained by political boundaries. Protecting Fisheries: Conservation Law Foundation The waters off New England once teemed with millions of cod-a term that encompasses 10 families and more than 200 species of fish worldwide. The abundance of cod lured Pilgrims to New England in the early 1600s and kept those settlers from starving, helping to convert the region into an economic powerhouse. For centuries, the sea’s bounty appeared limitless, but all that changed in the last century with the advent of technology that enabled people to remove fish from the seas faster than they could reproduce. By the 1990s, populations of cod, haddock, flounder and other “groundfish” dropped to the lowest levels ever recorded, signaling the collapse of famed fisheries in the Gulf of Maine and Georges Bank. A Boston-based advocacy group, Conservation Law Foundation (CLF), is working to mitigate this environmental disaster. CLF, a group founded by lawyers, is not afraid to use the arm of the law to achieve its ends. The organization began investigating the state of fisheries in the late 1980s in response to pleas from fishermen for help in stemming the decline in fish stocks. CLF filed a lawsuit in 1991 after concluding that a management plan endorsed by the National Marine Fisheries Service (NMFS) did little to address overfishing. The settlement led to a more stringent policy and the unprecedented closures of fisheries. In 2000, CLF was the lead agency in another suit against NMFS for failing to prevent overfishing and for sanctioning inadequate fishery rebuilding plans. Two fishing groups intervened on behalf of the conservation coalition. CLF scored a big victory, as did the cause of marine conservation, when the federal court ruled in its favor a year later. CLF is now waiting to see whether NMFS’s new management plan complies with the Sustainable Fisheries Act. “If the right plan is approved, the outlook is very good for many, though not all, groundfish species,” says Priscilla Brooks, who heads CLF’s Marine Resources Project. Populations of haddock and yellowtail flounder have almost fully rebounded, although cod stocks remain low as overfishing persists. “Our goal is to reduce pressure on cod and hope it responds positively,” says Brooks. “Once we can get stocks to come back up, the trick then will be to establish sustainable fishing practices so we don’t drive them back down again.” Her group vows to closely monitor the status of overtaxed groundfish, which constitute a New England treasure. CLF realizes that vigilance alone will not save the day, but without a watchful eye, we may soon bid farewell to cod and other prized species. An Urban Success Story: Charles River Watershed Association Once regarded as a glorified sewer separating Boston from Cambridge, the Charles River is on the mend. According to the Charles River Watershed Association (CRWA), which monitors the river daily, the waterway is clean enough for swimming, except after heavy downpours when some sewage seeps in through storm drains. 
Within a few years, the Charles could be “fishable and swimmable” every summer day, regardless of storms-at which point the river of “Dirty Water” fame will become a thing of the past. “This is an urban success story,” claims Kari Dolan, a researcher with the National Wildlife Federation. Much of the credit for this remarkable turnaround goes to CRWA, one of the nation’s first watershed organizations, which was formed in 1965 to address concerns over the river’s declining state. In 1995, the Environmental Protection Agency (EPA) gave the river a “D” for water quality, but since 1998, the Charles has received “B’s”-largely because CRWA has identified pollution sources by measuring bacteria and chemical toxins at various sites, while the EPA has shut down those sources. It has been a fruitful partnership, with CRWA providing the research that guides the cleanup process and EPA supplying the enforcement muscle. Although the Charles has come a long way, there’s still much to be done. Going from a “B” to an “A” in EPA ratings will be harder than going from “D” to “B”, mainly due to all the contaminants lodged in river sediments. Removing them would require dredging-a costly job. For CRWA, the cleanup itself is just half the story. The group is equally focused on preserving the river flow through strategies that keep rainwater in the Charles River watershed, a 300-square-mile basin, rather than discharging it through sewers into Boston Harbor. Their agenda goes beyond fishing and swimming, setting the broader goal of restoring the watershed through innovative water conservation and recovery techniques. CRWA’s efforts are clearly paying off. There are other measures of success than EPA report cards, one being the increased presence of blue herons, osprey, marsh hawks, turtles and other forms of wildlife along the river and its banks. “The Charles was written off for 50 years, but most Bostonians don’t write it off anymore,” says the group’s executive director Robert Zimmerman. “It’s important to put nature on display in urban areas so people realize the environment is not confined to places like Yellowstone.” Restoring Wetlands: The Wetlands Institute, The Trustees of Reservations Since the 1800s, the 273-mile-long Illinois River has been dammed, channelized and otherwise transformed, serving at various times as a canal between the Mississippi River and the Great Lakes, as well as a sewer for Chicago. To redress more than a century of abuse, The Wetlands Initiative (TWI), a Chicago-based non-profit group, recently began restoring 2,600 acres of Illinois River floodplain near Hennepin, Illinois-the first step in TWI's plan to revive broad stretches of wetlands alongside the river and its tributaries. If TWI realizes its goals, the project will be the largest wetlands restoration in the history of Illinois, a state that has already lost 90 percent of its wetlands. Work at Hennepin got started in 2001 after TWI purchased the floodplain site that once supported two lakes (Hennepin and Hopper) as well as wetland, prairie and fen communities. Since the 1920s, however, the land had been pumped dry to permit corn and soybean cultivation. TWI turned off the pumps, allowing precipitation and groundwater to refill the lake beds. In response, the adjacent wetlands and marshes quickly sprang back to life. Frogs, birds and plants returned after nearly a century’s absence. The sounds of Western chorus frogs, American toads and spring peepers were also heard. 
Muskrats and beavers began reshaping the landscape, and state-threatened bird species, such as the pied-bill grebe and black tern, magically appeared. Meanwhile, a more modest effort is underway in Hingham, Massachusetts to resuscitate a 15-acre salt marsh that spans a narrow peninsula called World’s End. It’s not a big undertaking as restoration ventures go, but salt marshes are endangered ecosystems, and every marsh that can be revived can yield environmental benefits. The Trustees of Reservations (TTOR), the non-profit land conservation organization which owns the land, hope to inspire similar projects along the East Coast. A once-thriving salt marsh was drained for agriculture in the early 1600s by the first European settlers, who installed two dikes to keep saltwater out of the area. In 2003, TTOR took steps to reverse some of the centuries-long damage inflicted by humans. The first priority, restoring tidal flows into and out of the former marsh, was accomplished by installing four-foot by eight-foot concrete culverts in the middle of each dike. Since then, dense stands of phragmites, an invasive reed, have retreated, making way for native salt marsh grasses. A popular destination for hikers and picnickers, World’s End now offers unique opportunities for witnessing a wetland restoration in progress, says TTOR ecologist Andy Walsh. “People can come here to see a coastal zone in transition-a salt marsh coming back to life.” Defending Bats and Insects: Bat Conservation International, Xerces Society In 1992, a spelunker named Steve Smith searched for artifacts in an abandoned mine in Iron Mountain, Michigan, just before the mine was scheduled to be sealed from the public for good. Instead of uncovering artifacts, Smith found thousands of bats and realized they would soon be entombed. Smith alerted Bat Conservation International (BCI) in Austin, Texas. Merlin Tuttle, who started the group in 1982, immediately flew to Michigan where he saw signs of a million brown bats hibernating in the mine. Tuttle persuaded officials from the town and mining company to jettison their plans for covering the mine’s entrances with backfill. Instead, steel gates-with bars spaced widely enough to let bats fly through, while keeping people out-were installed at the entrances. That decision saved a million bats from certain death and protected their home, prompting BCI to look at other mines throughout the country. In 1993, BCI and the U.S. Bureau of Land Management started the North American Bats and Mines Project to prevent the loss of bats due to the closure of abandoned mines. The program is important because more than half of North America’s 46 bat species find sanctuary in mines, after having been driven from traditional roosts in caves and forests. Nearly 2,000 bat-friendly gates have been installed to date, at a cost of roughly $5,000 apiece. BCI has been successful in getting mining companies to share the expenses. The Xerces Society, like BCI, is focused on preserving noncharismatic creatures like insects. “We’re equal opportunity, so long as it doesn’t have a backbone,” says the group’s executive director Scott Hoffman Black. Xerces petitioned the U.S. Fish and Wildlife Service a few years ago when it learned that fewer than 200 butterflies called Carson wandering skippers were left in this country, confined to two sites in California and Nevada. In 2000, the agency protected the skipper under the Endangered Species Act, which provided a legal mandate for preserving its habitat. 
Xerces, which has a staff of just six, is now trying to save an orange, black, and white checkered butterfly called the Taylor’s checkerspot, whose prairie habitat in the Pacific Northwest has shrunk by more than 99 percent and whose population now numbers in the hundreds. The group is conducting surveys to see where the butterfly lives, while also working with land trusts-the Nature Conservancy as well as smaller, local organizations-to safeguard critical sites. “It’s a combination of advocacy, land management, and science,” says Xerces staffer Matthew Shepherd, “wrapped up in one small butterfly with a two-inch wingspan.” Trees, Water & People Trees, Water & People (TWP) is dedicated to improving people’s lives by helping communities to protect, conserve and manage the natural resources upon which their long-term well-being depends. To that end, TWP has developed ongoing watershed protection, sustainable agriculture, reforestation and appropriate technology programs in cooperation with communities throughout Central America. Throughout Central America, native forests are being felled at an alarming rate. If current deforestation trends are not slowed, many scientists agree that the region could be completely void of native forests in the next century. Such massive deforestation causes severe soil erosion, water degradation, loss of wildlife habitat and therefore precious biodiversity. While it is essential to protect natural resources and prevent deforestation, the forest is a resource essential to the survival of human populations. Millions of families throughout Central America rely on forests and land to provide fuel to cook meals, wood to build homes, water to drink and space to live and grow crops. Programs designed to help these communities meet their current needs without endangering long-term ecosystem health are essential. Since its establishment in 1998, Trees, Water & People has helped hundreds of low-income communities to balance the needs of human populations with the long-term health of the ecosystem that supports them. In less than six years, TWP and community volunteers have planted nearly 1,000,000 trees, helping to prevent soil erosion, protect native forests and ensure the future of regional biodiversity. TWP staff and local volunteers have also provided sustainable agriculture and watershed protection training to more than 50,000 community members. Finally, in order to complement their reforestation and watershed protection programs, TWP also works with families to introduce improved cooking stoves that reduce the demand for fuel wood by approximately 70%. These stoves are a huge improvement over traditional open fire cooking stoves still used by most of Central America’s rural population. Not only do they save forests and the life they support, but by removing all smoke from the home via a chimney, they protect families from life-endangering respiratory ailments caused by indoor air pollution. If you are interested in learning more about Trees, Water & People’s work in Central America, please contact: Trees, Water & People, 633 Remington Street, http://www.treeswaterpeople.org, twp@treeswaterpeople.org
Vermilion Cliffs National Monument - Paria Canyon/Vermilion Cliffs Wilderness - Paria Canyon Permit Area - Prehistory & History
People of Paria Canyon - Those Who Came Before
The Paria River flows intermittently from its headwaters in Bryce Canyon National Park through Utah and Arizona to deposit silt and sediment in the Colorado River. The river basin has been used by various cultures for thousands of years. Some archaeologists believe Paria Canyon was inhabited for at least 10,000 years before the first Europeans arrived in the 1770s. The nature of its geography probably made Paria Canyon a travel route between what is now southern Utah and northern Arizona. From about A.D. 200 to A.D. 1200, the Anasazi occupied the region and may have had small farms and granaries in some stretches of the canyon. Paiute people later occupied and traversed much of the area 600 years before the first Europeans entered the mouth of the canyon. The word "Paria" is Paiute, and may mean water that tastes salty. Most archaeological sites located in Paria Canyon are petroglyphs or rock art sites. Petroglyphs are images or symbols carved, incised, or pecked into the rocks while pictographs are painted on the rock. No habitations or large villages have been found in the canyon, leading researchers to conclude that the canyon was primarily used as a travel route in prehistoric times.
More Recent Residents
Two missionaries, Fathers Dominguez and Escalante, were the first documented European visitors in the region in 1776. They attempted to establish a route from Santa Fe, New Mexico, to Monterey, California. On their return route to Santa Fe, after failing to reach California, the expedition camped several days at the mouth of Paria Canyon, which would become the site of Lees Ferry 100 years later. They succeeded in exiting the canyon through what is now Dominguez Pass, which is located high on the northeast rim of the lower canyon. (Dominguez-Escalante Expedition Site) Lees Ferry was established in 1871 by John D. Lee, who was the first settler and operator of the ferry. Lee's diary mentioned how rugged Paria Canyon was in the late 1800s. Crampton and Rusho write that Lee, during his eight-day trip through Paria Canyon, struggled for two days and one night without stopping because a safe place to camp out of flash flood danger could not be found. "We concluded to drive down the creek (Paria), which took us some eight days of toil, fatigue, and labour through brush, water, ice, and quicksand -- without seeing the sun for 48 hours." John D. 
Lee (Utah State Historical Society). In 1870, the small settlement of Pahreah, which was located northwest of the modern-day Paria Contact Station, included 47 families, a church and a post office. This frontier settlement, like many in the West, was frequented by Native Americans, pioneers and the occasional outlaw. John Wesley Powell, the second director of the United States Geological Survey, in surveying the region, used the spelling Paria, which is the name found on modern topographical maps. Jacob Hamblin, envoy for the Church of Jesus Christ of Latter Day Saints, reported his observations about Pahreah on March 27, 1870. "The settlement was progressing with a guardhouse and a small corral, where men can cook and lodge safely with 20-25 horses." (Feduska 6) Pahreah Townsite (Utah State Historical Society). By that time, Pahreah also had vegetable farms, fruit and nut orchards, and cattle. Unfortunately, severe flooding during the 1880s brought alkaline soil and entrenched arroyos and by 1889, only eight families remained at the settlement. By the 1930s the town of Pahreah had vanished. Near the abandoned settlement, now a ghost town being slowly swept away by the river, a western movie set was built. Here famous characters like Calamity Jane, Buffalo Bill and others came to life. Zane Grey, a famous western writer during the 1920s, had some of his novels filmed in the area, including Revelation, Heritage of the Desert, and A Biography of Buffalo Jones. Today, the ghost town and movie set can be visited by traveling 30 miles east from Kanab along U.S. Highway 89. The Arizona Strip, a portion of Arizona geographically isolated from the rest of the state by the Colorado River, has always been a difficult area to access. According to Crampton and Rusho (1992), Zane Grey described Paria Creek in 1907 thus: "Dawn opened my eyes to what seemed the strangest and most wonderful place in the world. Paria Creek watered this secluded and desert bound spot." Zane Grey (Jane Foster). Crampton and Rusho (1992) also wrote that south of Paria Canyon in House Rock Valley, two men named Uncle Jim Owens and Buffalo Jones established a Buffalo Ranch in the early 1900s. "The original intent of the ranch was to produce hybrid offspring from buffalo and cattle called cattalo." (AZ Game & Fish Department) The attempt failed, but today the buffalo herd is managed by the Arizona Game and Fish Department. The Buffalo Ranch is 22 miles south of U.S. Highway 89A, and can be reached by USFS Road 8910.
Water Development
Water is the essence of life in the West. According to James J. Ligner of the U.S. Geological Survey, the gaging station on the Colorado River at the mouth of the Paria River is: "The most important station in the United States." (Reilly, 1997). This remote gaging station was important in the development of the Colorado River Compact of 1922, an agreement among western states to divide the Colorado River into an upper basin, which is located north of its confluence with the Paria River, and a lower basin south of the confluence. The compact allocates the water from the Colorado River to individual states.
Math Matters - A Brief Look at How Mathematics has Influenced Modern Life (Maplesoft). In 2003, the National Academy of Engineering (USA) published A Century of Innovation: Twenty Engineering Achievements that Transformed Our Lives. This book celebrates the top twenty technological advances of the twentieth century that fundamentally changed society. These advances have influenced where and how we live, what we eat, what we do for work or leisure, and even how we think about our world and the universe. Mathematics has played a major role in bringing these innovations to reality. Many mathematical theories and models of real world problems have helped scientists and engineers grapple with seemingly impossible tasks. Today, mathematical techniques reach even further into our society. In addition to making technology more efficient and effective, mathematical techniques help organizations deal with financial, manufacturing, and even marketing issues. This poster is a tribute to the National Academy of Engineering as well as the men and women who have focused their brilliance to transform the modern world. The poster is a mosaic of the ways mathematics helps us utilize and benefit from these great technological achievements. Some achievements will be familiar. Some will be a surprise. All, hopefully, will encourage you to investigate these topics further. 1 George Constable and Bob Somerville, A Century of Innovation: Twenty Engineering Achievements That Transformed Our Lives (Washington: National Academies Press, 2003). The content on this poster does not necessarily reflect the position or views of the National Academy of Engineering, and no official endorsement should be inferred.
Plus Blog. Mathematical theatre at the Science Museum: X&Y. What is the shape of the Universe? Is it finite or infinite? Does it have an edge? In their new show X&Y Marcus du Sautoy and Victoria Gould use mathematics and the theatre to navigate the known and unknown reaches of our world. Through a series of surreal episodes, X and Y, trapped in a Universe they don't understand and confronted for the first time with another human being, tackle some of the biggest philosophical and scientific questions on the books: where did the Universe come from, does time have an end, is there something on the other side, do we have free will, can we ever prove anything about our Universe for sure or is there always room for another surprise? Marcus and Victoria met while working on A disappearing number, Complicite's multi award-winning play about mathematics. X&Y has developed from that collaboration and pursues many of the questions at the heart of A disappearing number. X&Y is on at the Science Museum in London 10 - 16 October 2013. Click here to book tickets. You can read about A disappearing number, an interview with Victoria Gould and several articles by Marcus du Sautoy on Plus. The Magic Cube: Get puzzling! If you like the Rubik's cube then you might love the Magic Cube. Rather than having colours on the little square faces it has numbers on it. So your task is not only to put the large faces together in the right way, but also to figure out what this right way is. Which numbers should occur together on the same face and in what order? Jonathan Kinlay, the inventor of the Magic Cube, has estimated that there are 140 × 10^21 different configurations of the Magic Cube. That's 140 followed by 21 zeroes, and about 3,000 times more configurations than on an ordinary Rubik's cube. To celebrate the launch of the Magic Cube, Kinlay's company Innovation Factory is running a competition to see who can solve the cube first. To start it off they will be shipping a version of the puzzle directly to 100 of the world's leading quantitative experts, a list that includes people at MIT, Microsoft and Goldman Sachs. You can join too by nominating yourself (or someone else). Innovation Factory will accept up to 20 nominees (in addition to those that have already been picked). The competition will launch in September and run for 60 days. To nominate someone please send an email to MagicCubeCompetition@IF-Chicago.com, giving the name, email and mailing address of the nominee and a brief explanation of why you think they should be included in the competition. If you don't get accepted, don't worry — the Magic Cube will go on sale after the competition has ended. The winner will receive lots of glory and a metal version of the Magic Cube precision-machined from solid aluminium, and they will be featured on the Innovation Factory website. As a warm-up you can read about the ordinary Rubik's cube on Plus.
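The configuration count above invites a quick sanity check. The short Python sketch below is not from the original post: it simply divides Kinlay's quoted estimate by the standard 4.3 × 10^19 figure for an ordinary 3×3×3 Rubik's cube, so the result is only a rough illustration of the "about 3,000 times more" comparison.

```python
# Rough sanity check of the "about 3,000 times more configurations" comparison.
# 4.3e19 is the well-known state count for a standard 3x3x3 Rubik's cube;
# 140e21 is the Magic Cube estimate quoted in the post above.
magic_cube_states = 140e21
rubiks_cube_states = 4.3e19

ratio = magic_cube_states / rubiks_cube_states
print(f"Magic Cube has roughly {ratio:,.0f} times more configurations")
# Prints a value in the low thousands, consistent with "about 3,000 times more".
```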
2x2 Mixed Factorial Design Background information you need to know to understand the 2x2 mixed analysis is covered in the PsychWorld commentaries "Within-Subjects Designs" and "2x2 Between Subjects Designs". The mixed factorial design is, in fact, a combination of these two. It is a factorial design that includes both between and within subjects variables. One special type of mixed design that is particularly common and powerful is the pre-post-control design. This is a design in which all subjects are given a pre-test and a post-test, and these two together serve as a within-subjects factor (test). Participants are also divided into two groups. One group is the focus of the experiment (i.e., experimental group) and one group is a base line (i.e., control) group. So, for example, if we are interested in examining the effects of a new type of cognitive therapy on depression, we would give a depression pre-test to a group of persons diagnosed as clinically depressed and randomly assign them into two groups (traditional and cognitive therapy). After the patients were treated according to their assigned condition for some period of time, let's say a month, they would be given a measure of depression again (post-test). This design would consist of one within-subjects variable (test), with two levels (pre and post), and one between-subjects variable (therapy), with two levels (traditional and cognitive) (Figure 1). (A small simulated example of this design appears at the end of this section.) Pre-Test Post-Test Cognitive Therapy Cognitive Pre Cognitive Post Traditional Therapy Traditional Pre Traditional Post Figure 1. Example of Pre-Post-Control Design When a researcher uses the pre-post-control design he or she is usually looking for an interaction such that one cell in particular stands out, and that is the experimental group's post-test score. Ideally the pre-test scores will be equivalent. It is the post-test score difference between the experimental and control group that is important (see Figure 2). Figure 2. Hypothetical Means for Experiment in Figure 1 Therefore, in terms of post-hoc tests the most important comparison is between the post-test mean for the experimental group and the post-test mean for the control group (see Figure 3). Figure 3. Comparison of Post-Test Means Also, it is typical for the experimenter to expect a change in the experimental group from pre to post, but not in the control group, which would make the important post-hoc comparisons between pre- and post-test for the experimental group and between pre- and post-test for the control group (see Figure 4). Figure 4. Comparison of Pre vs. Post Test Means for Both Groups Of course, the pre-post-control design is not the only type of mixed design. Another common type of mixed design (and within-subjects design in general) is one that includes a change over time, so that one independent variable consists of multiple measures of one group of people over time. So, for example, we might be interested in comparing the interest of males vs. females in math and science over some time period during development. More specifically, we could give a group of school children a measure of interest in math and science at age 10 and then give the same group of students the same measure of interest at age 18. Our design then would look like Figure 5, and one set of possible means would look like the means in Figure 6, which would represent an interaction. Age 10 Age 18 Males Males-Age 10 Males-Age 18 Females Females-Age 10 Females-Age 18 Figure 5. Mixed Design with Time as a Within-Subjects Factor Figure 6. 
Hypothetical Means for the Experiment in Figure 5 The two-way mixed analysis of variance is the most complex type of design/analysis that is covered in the PsychConnections.com modules. The VirtualStatistician and experimental psych modules cover the inferential tests listed below. Although, of course, there are many more types of statistical tests, there are an amazing number of experiments, in both the psychological and biological sciences, that you can answer with the designs/analyses listed below. Of course, there are many variations, since the examples in the modules are limited to two levels of the independent variables and two independent variables, but adding levels and independent variables is just a slight extension of what is covered. There are also cases in which there are no continuous variables, in which case you would often use a "non-parametric" technique, and complex modeling of many continuous variables which would require "multivariate" analyses. However, in cases where an experimenter uses a traditional method, in which groups are formed and variables are manipulated, the designs and analyses covered in these modules will often work fine. Further, these more complex types of data analyses such as multivariate techniques are extensions of the basic "univariate" techniques covered in the modules, so that this knowledge can serve as an important and necessary foundation for the understanding of these techniques. (Figure 7 is a map/flow chart to aid you in selecting the appropriate analysis for a given design among those covered in the PsychConnections.com modules.) Figure 7. Flow Chart Representing Choice of Analysis Depending on Design Psychology World was created by Richard Hall in 1998 and is covered by a creative commons (by-nc) copyright
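To make the pre-post-control logic above concrete, here is a minimal simulation sketch in Python. It is not part of the original module: the data are fabricated, the effect sizes and sample size are arbitrary, and the group-by-time interaction is tested through the common shortcut of comparing change scores (post minus pre) between groups with an independent-samples t-test, which is equivalent to the interaction test when both factors have only two levels.

```python
# Minimal sketch of a pre-post-control (2x2 mixed) design: simulated depression
# scores for a cognitive-therapy group and a traditional-therapy (control) group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30                                    # participants per group (arbitrary)

# Both groups start at roughly the same pre-test level.
pre_cognitive = rng.normal(25, 4, n)
pre_traditional = rng.normal(25, 4, n)

# Post-test: the cognitive group improves more (a hypothetical 6-point drop
# versus a 2-point drop for the control group).
post_cognitive = pre_cognitive - 6 + rng.normal(0, 3, n)
post_traditional = pre_traditional - 2 + rng.normal(0, 3, n)

# With two levels per factor, the group-by-time interaction can be tested by
# comparing change scores between the two groups.
change_cognitive = post_cognitive - pre_cognitive
change_traditional = post_traditional - pre_traditional
t, p = stats.ttest_ind(change_cognitive, change_traditional)
print(f"interaction (change-score) test: t = {t:.2f}, p = {p:.4f}")
```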
Calculus/Differentiation/Differentiation Defined
Contents: 1 What is Differentiation? 2 The Definition of Slope 2.1 Of a line 2.2 Of a graph of a function 2.3 Exercises 3 The Rate of Change of a Function at a Point 4 The Definition of the Derivative 4.2 Understanding the derivative notation 5 Differentiation Rules 5.1 Derivative of a constant function 5.1.1 Intuition 5.1.2 Proof 5.2 Derivative of a linear function 5.3 Constant multiple and addition rules 5.3.1 The Constant Rule 5.3.2 The Addition and Subtraction Rules 5.4 The Power Rule 5.5 Derivatives of polynomials
What is Differentiation? Differentiation is an operation that allows us to find a function that outputs the rate of change of one variable with respect to another variable. Informally, we may suppose that we're tracking the position of a car on a two-lane road with no passing lanes. Assuming the car never pulls off the road, we can abstractly study the car's position by assigning it a variable, x. Since the car's position changes as the time changes, we say that x is dependent on time, or x(t). This tells where the car is at each specific time. Differentiation gives us a function which represents the car's speed, that is the rate of change of its position with respect to time. Equivalently, differentiation gives us the slope at any point of the graph of a non-linear function. For a linear function, of form f(x) = mx + b, m is the slope. For non-linear functions, such as f(x) = x², the slope can depend on x; differentiation gives us a function which represents this slope. The Definition of Slope. Historically, the primary motivation for the study of differentiation was the tangent line problem: for a given curve, find the slope of the straight line that is tangent to the curve at a given point. The word tangent comes from the Latin word tangens, which means touching. Thus, to solve the tangent line problem, we need to find the slope of a line that is "touching" a given curve at a given point, or, in modern language, that has the same slope. But what exactly do we mean by "slope" for a curve? The solution is obvious in some cases: for example, a line is its own tangent; the slope at any point is simply the slope m of the line itself. For the parabola y = x², the slope at the point (0, 0) is 0; the tangent line is horizontal. But how can you find the slope of, say, a more general curve at an arbitrary point? This is in general a nontrivial question, but first we will deal carefully with the slope of lines. Of a line. Three lines with different slopes. The slope of a line, also called the gradient of the line, is a measure of its inclination. A line that is horizontal has slope 0, a line from the bottom left to the top right has a positive slope and a line from the top left to the bottom right has a negative slope. The slope can be defined in two (equivalent) ways. The first way is to express it as how much the line climbs for a given motion horizontally. We denote a change in a quantity using the symbol Δ (pronounced "delta"). Thus, a change in x is written as Δx. We can therefore write this definition of slope as: slope = Δy/Δx. An example may make this definition clearer. If we have two points on a line, P₁(x₁, y₁) and P₂(x₂, y₂), the change in x from P₁ to P₂ is given by: Δx = x₂ − x₁. Likewise, the change in y from P₁ to P₂ is given by: Δy = y₂ − y₁. This leads to the very important result below. 
The slope of the line between the points P₁(x₁, y₁) and P₂(x₂, y₂) is slope = Δy/Δx = (y₂ − y₁)/(x₂ − x₁). Alternatively, we can define slope trigonometrically, using the tangent function: slope = tan(α), where α is the angle from the rightward-pointing horizontal to the line, measured counter-clockwise. If you recall that the tangent of an angle is the ratio of the y-coordinate to the x-coordinate on the unit circle, you should be able to spot the equivalence here. (A short worked example of the two-point formula appears at the end of this passage.) Of a graph of a function. The graphs of most functions we are interested in are not straight lines (although they can be), but rather curves. We cannot define the slope of a curve in the same way as we can for a line. In order for us to understand how to find the slope of a curve at a point, we will first have to cover the idea of tangency. Intuitively, a tangent is a line which just touches a curve at a point, such that the angle between them at that point is zero. Consider the following four curves and lines:
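Stepping back to the straight-line case for a moment, here is a small worked example of the two-point slope formula above; the two points are invented purely for illustration. For the points P₁(1, 3) and P₂(5, 11): Δx = x₂ − x₁ = 5 − 1 = 4 and Δy = y₂ − y₁ = 11 − 3 = 8, so slope = Δy/Δx = 8/4 = 2. Equivalently, by the trigonometric definition, the line makes an angle α = arctan 2 ≈ 63.4° with the horizontal.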
Plus Blog. News from the world of maths: The mathematics of monopoly on More or Less. More or Less, BBC Radio 4's program that takes you on a journey through the often abused but ever ubiquitous world of numbers, has recently returned to the airwaves, and next Monday (17th December 4.30 pm), regular Plus contributors Rob Eastaway and John Haigh are featuring on the program discussing the maths of Monopoly. Eastaway and Haigh have written for Plus many times on a range of topics including: Coincidences; The national lottery; The maths of cricket and football; Remembrance of numbers past; Maths and Magic. Plus spoke to Eastaway about the science of Monopoly, and without giving too much away, Eastaway commented that because the "Go to jail" square is the most frequently visited square on the board, the orange properties are the best investments, as players leaving jail are most likely to then land on these properties. This means you should invest in Bow Street, Marlborough Street and Vine Street — or in the US version, St James Place, New York Avenue or Tennessee Avenue. (A rough simulation sketch of this claim appears at the end of this roundup.) News from the world of maths: Christopher Zeeman Medal for Maths Communication. The Christopher Zeeman Medal, the first award dedicated to recognising excellence in the communication of mathematics, has been launched by the London Mathematical Society (LMS) and the Institute of Mathematics and its Applications (IMA). The LMS and IMA want to honour mathematicians who have excelled in promoting mathematics and engaging with the general public. They may be academic mathematicians based in universities, mathematics school teachers, industrial mathematicians, those working in the financial sector or indeed mathematicians from any number of other fields. Most importantly, these mathematicians will have worked exceptionally to bring mathematics to a non-specialist audience. Whether it is through giving public lectures, writing books, appearing on radio or television, organising events or through an entirely separate medium, the LMS and IMA want to celebrate the achievements of mathematicians who work to inspire others. In a joint statement, the presidents of the LMS and IMA said, "We are delighted to be able to show how much we need and value mathematicians who can promote their subject successfully. This role is vital to inspiring the next generation of mathematicians as well as helping the wider public to enjoy mathematics." The award is named after Professor Sir Christopher Zeeman, FRS, whose notable career was pioneering not only in his fields of topology and catastrophe theory, but who was also ground-breaking in bringing his beloved mathematics to the wider public. Sir Christopher was the first mathematician to be asked to deliver the Royal Institution Christmas Lectures in 1978, a full 160 years since they began. His "Mathematics into pictures" lectures have been cited by many young UK mathematicians as their inspiration. They also led to the creation of the Ri's Mathematics Masterclasses, weekly lectures delivered to schoolchildren across the UK via a network of 50 centres. Sir Christopher's skill as a communicator has been recognised in the wider community. In 1988, he was the third recipient of the Royal Society's Faraday Prize, awarded annually to a scientist or engineer who has excelled in communicating science to public audiences. 
His award was made "for the contributions he has made to the popularization of mathematics". On the announcement of the medal, Sir Christopher said, "I am extremely honoured to have such an important award bear my name. I hope this medal will encourage more mathematicians to see communicating their work to the wider public as a key part of their role." Nominations for the medal are now invited. To receive a nomination form, please contact: The Secretary to the Christopher Zeeman Medal London Mathematical Society De Morgan House 57-58 Russell Square London WC1B 4HS Or email prizes@lms.ac.uk. Forms should be returned by February 2008. posted by Plus @ 2:39 PM
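Returning to the Monopoly item above: the claim that jail traffic favours the orange set can be sanity-checked with a very small Monte Carlo sketch. Everything here is simplified and my own construction rather than Eastaway's analysis — the board is just 40 squares indexed 0–39, only the "Go to Jail" square is modelled, and chance cards, doubles and jail-exit rules are ignored — so the frequencies are indicative only.

```python
# Crude Monte Carlo sketch: landing frequencies on a simplified Monopoly board.
# Square 30 is "Go to Jail" and square 10 is Jail; chance/community-chest cards,
# doubles and jail-exit rules are deliberately ignored.
import random
from collections import Counter

random.seed(1)
counts = Counter()
position = 0
for _ in range(1_000_000):
    position = (position + random.randint(1, 6) + random.randint(1, 6)) % 40
    if position == 30:          # "Go to Jail" sends the token straight to square 10
        position = 10
    counts[position] += 1

# The orange set sits 6-9 squares after jail (squares 16, 18 and 19 on the
# standard board), right in the range most often reached from square 10.
total = sum(counts.values())
for square in (16, 18, 19):
    print(square, counts[square] / total)
```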
Abstract linear spaces. Cartesian geometry, introduced by Fermat and Descartes around 1636, had a very large influence on mathematics bringing algebraic methods into geometry. By the middle of the 19th Century however there was some dissatisfaction with these coordinate methods and people began to search for direct methods, i.e. methods of synthetic geometry which were coordinate free. It is possible however to trace the beginning of the vector concept back to the beginning of the 19th Century with the work of Bolzano. In 1804 he published a work on the foundations of elementary geometry Betrachtungen über einige Gegenstände der Elementargeometrie. Bolzano, in this book, considers points, lines and planes as undefined elements and defines operations on them. This is an important step in the axiomatisation of geometry and an early move towards the necessary abstraction for the concept of a linear space to arise. The move away from coordinate geometry was mainly due to the work of Poncelet and Chasles who were the founders of synthetic geometry. The parallel development in analysis was to move from spaces of concrete objects such as sequence spaces towards abstract linear spaces. Instead of substitutions defined by matrices, abstract linear operators must be defined on abstract linear spaces. In 1827 Möbius published Der barycentrische Calcul a geometrical book which studies transformations of lines and conics. The novel feature of this work is the introduction of barycentric coordinates. Given any triangle ABC then if weights a, b and c are placed at A, B and C respectively then a point P, the centre of gravity, is determined. Möbius showed that every point P in the plane is determined by the homogeneous coordinates [a,b,c], the weights required to be placed at A, B and C to give the centre of gravity at P. The importance here is that Möbius was considering directed quantities, an early appearance of vectors. In 1837 Möbius published a book on statics in which he clearly states the idea of resolving a vector quantity along two specified axes. Between these two works of Möbius, a geometrical work by Bellavitis was published in 1832 which also contains vector type quantities. His basic objects are line segments AB and he considers AB and BA as two distinct objects. He defines two line segments as 'equipollent' if they are equal and parallel, so, in modern notation, two line segments are equipollent if they represent the same vector. Bellavitis then defines the 'equipollent sum of line segments' and obtains an 'equipollent calculus' which is essentially a vector space. In 1814 Argand had represented the complex numbers as points on the plane, that is as ordered pairs of real numbers. Hamilton represented the complex numbers as a two dimensional vector space over the reals although of course he did not use these general abstract terms. He presented these results in a paper to the Irish Academy in 1833. He spent the next 10 years of his life trying to define a multiplication on the 3-dimensional vector space over the reals. Hamilton's quaternions, published in 1843, was an important example of a 4-dimensional vector space but, particularly with Tait's work on quaternions published in 1873, there was to be some competition between vector and quaternion methods. You can see a picture of the plaque commemorating where Hamilton discovered the Quaternions and a (fanciful) engraving of when he carved the rules. 
In 1857 Cayley introduced matrix algebras, helping the move towards more general abstract systems by adding to the different types of structural laws being studied. In 1858 Cayley noticed that the quaternions could be represented by matrices. In 1867 Laguerre wrote a letter to Hermite Sur le calcul des systèmes linéaires. His systèmes linéaires is a table of coefficients of a system of linear equations denoted by a single upper-case letter and Laguerre defines addition, subtraction and multiplication of these linear systems. In this work Laguerre aims to unify algebraic systems such as complex numbers, Hamilton's quaternions and notions introduced by Galois and by Cauchy. Laguerre's work on linear systems was followed up by a paper by Carvallo in 1891. In this paper he defines operators on vector functions and draws a clear distinction between operators and matrices. To understand the difference between the notions of an operator and a matrix, it suffices to say that, if one changes the coordinate system, one obtains a different matrix to represent the same vector function, but the same operator. Another mathematician who was moving towards geometry without coordinates was Grassmann. His work is highly original but the notion of barycentric coordinates introduced by Möbius was his main motivation. Grassmann's contribution Die Ausdehnungslehre appeared in several different versions. The first was in 1844 but it was a very difficult work to read, and clearly did not find favour with mathematicians, so Grassmann tried to produce a more readable version which appeared in 1862. Clebsch inspired Grassmann to work on this new version. Grassmann studied an algebra whose elements are not specified, so are abstract quantities. He considers systems of elements on which he defines a formal operation of addition, scalar multiplication and multiplication. He starts with undefined elements which he calls 'simple quantities' and generates more complex quantities using specified rules. But ... I go further, since I call these not just quantities but simple quantities. There are other quantities which are themselves compounded quantities and whose characteristics are as distinct relative to each other as the characteristics of the different simple quantities are to each other. These quantities come about through addition of higher forms ... His work contains the familiar laws of vector spaces but, since he also has a multiplication defined, his structures satisfy the properties of what are today called algebras. The precise structures are now known as Grassmann algebras. The ideas of linearly independent and linearly dependent sets of elements are clearly contained in Grassmann's work as is the idea of dimension (although he does not use the term). The scalar product also appears in Grassmann's 1844 work. Grassmann's 1862 version of Die Ausdehnungslehre has a long introduction in which Grassmann gives a summary of his theory. In this introduction he also defends his formal methods which had clearly been objected to by a number of mathematicians. Grassmann's justification comes very close to saying that he is setting up an axiomatic theory and this shows that he is well ahead of his time. Cauchy and Saint-Venant have some claims to have invented similar systems to Grassmann. Saint-Venant's claim is a fair one since he published a work in 1845 in which he multiplies line segments in an analogous way to Grassmann. 
In fact when Grassmann read Saint-Venant's paper he realised that Saint-Venant had not read his 1844 work and sent two copies of the relevant parts to Cauchy, asking him to pass one copy to Saint-Venant. However, rather typically of Cauchy, in 1853 he published Sur les clefs algébriques in Comptes Rendus which describes a formal symbolic method which coincides with that of Grassmann's method (but makes no reference to Grassmann). Grassmann complained to the Académie des Sciences that his work had priority over Cauchy's and, in 1854, a committee was set up to investigate who had priority. We still await the committee's report! The first to see the importance of Grassmann's work was Hankel. In 1867 he wrote a paper Theorie der complexen Zahlensysteme concerning formal systems where combination of the symbols are abstractly defined. He credits Grassmann's Die Ausdehnungslehre as giving the foundation for his work. The first to give an axiomatic definition of a real linear space was Peano in a book published in Torino in 1888. He credits Leibniz, Möbius's 1827 work, Grassmann's 1844 work and Hamilton's work on quaternions as providing ideas which led him to his formal calculus. Peano's 1888 book Calcolo geometrico secondo l'Ausdehnungslehre di H. Grassmann preceduto dalle operazioni della logica deduttiva is remarkable. It gives the basic calculus of set operation introducing the modern notation ∩, ∪, ∈ for intersection, union and 'an element of'. It was many years before this notation was to become accepted; in fact Peano's book seems to have had very little influence for many years. It is equally remarkable for containing an almost modern introduction to linear spaces and linear algebra. In Chapter IX of the book Peano gives axioms for a linear space. It is hard to believe that Peano writes the following in 1888. It could almost come from a 1988 book! The first is for equality of elements: (a = b) if and only if (b = a); if (a = b) and (b = c) then (a = c). The sum of two objects a and b is defined, i.e. an object is defined denoted by a + b, also belonging to the system, which satisfies: If (a = b) then (a + c = b + c), a + b = b + a, a + (b + c) = (a + b) + c, and the common value of the last equality is denoted by a + b + c. If a is an object of the system and m a positive integer, then we understand by ma the sum of m objects equal to a. It is easy to see that for objects a, b, ... of the system and positive integers m, n, ... one has: If (a = b) then (ma = mb), m(a+b) = ma+mb, (m+n)a = ma+na, m(na) = mna, 1a = a. We suppose that for any real number m the notation ma has a meaning such that the preceding equations are valid. Peano goes on to state the existence of a zero object 0 and says that 0a = 0, that a - b means a + (-b) and states it is easy to show that a - a = 0 and 0 + a = a. Peano defines a linear system to be any system of objects satisfying his four conditions. He goes on to define dependent objects and independent objects. He then defines dimension. Definition: The number of the dimensions of a linear system is the maximal number of linearly independent objects in the system. He proves that finite dimensional spaces have a basis and gives examples of infinite dimensional linear spaces. Peano considers entire functions f(x) of a variable x, defines the sum of f1(x) and f2(x) and the product of f(x) by a real number m. 
He says:

If one considers only functions of degree n, then these functions form a linear system with n + 1 dimensions; the entire functions of arbitrary degree form a linear system with infinitely many dimensions.

Peano defines linear operators on a linear space and shows that by using coordinates one obtains a matrix. He defines the sum and product of linear operators.

In the 1890s Pincherle worked on a formal theory of linear operators on an infinite dimensional vector space. However, Pincherle did not base his work on that of Peano, but rather on the abstract operator theory of Leibniz and d'Alembert. Like so much work in this area it had very little immediate impact, and axiomatic infinite dimensional vector spaces were not studied again until Banach and his associates took up the topic in the 1920s.

Although never attaining the level of abstraction which Peano had achieved, Hilbert and his student Schmidt looked at infinite dimensional spaces of functions in 1904. Schmidt took a further step towards abstraction in 1908, introducing geometrical language into Hilbert space theory. The fully axiomatic approach appeared in Banach's 1920 doctoral dissertation.

JOC/EFR, May 1996. http://www-history.mcs.st-andrews.ac.uk/HistTopics/Abstract_linear_spaces.html
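Peano's definition of dimension is exactly what numerical linear algebra computes today as the rank. A minimal illustration (assuming NumPy, which is of course no part of the 1888 story; the example vectors are mine):

    import numpy as np

    # Peano's dimension: the maximal number of linearly independent objects.
    # For a system spanned by concrete vectors, that is the rank of the matrix
    # whose rows are those vectors.
    vectors = np.array([
        [1.0, 0.0, 1.0],
        [0.0, 1.0, 1.0],
        [1.0, 1.0, 2.0],  # the sum of the first two rows, hence dependent
    ])
    print(np.linalg.matrix_rank(vectors))  # 2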
The Time Machine, by H. G. Wells (excerpt)

1

The Time Traveller (for so it will be convenient to speak of him) was expounding a recondite matter to us. His grey eyes shone and twinkled, and his usually pale face was flushed and animated. The fire burned brightly, and the soft radiance of the incandescent lights in the lilies of silver caught the bubbles that flashed and passed in our glasses. Our chairs, being his patents, embraced and caressed us rather than submitted to be sat upon, and there was that luxurious after-dinner atmosphere when thought runs gracefully free of the trammels of precision. And he put it to us in this way—marking the points with a lean forefinger—as we sat and lazily admired his earnestness over this new paradox (as we thought it) and his fecundity.

“You must follow me carefully. I shall have to controvert one or two ideas that are almost universally accepted. The geometry, for instance, they taught you at school is founded on a misconception.”

“Is not that rather a large thing to expect us to begin upon?” said Filby, an argumentative person with red hair.

“I do not mean to ask you to accept anything without reasonable ground for it. You will soon admit as much as I need from you. You know of course that a mathematical line, a line of thickness nil, has no real existence. They taught you that? Neither has a mathematical plane. These things are mere abstractions.”

“That is all right,” said the Psychologist.

“Nor, having only length, breadth, and thickness, can a cube have a real existence.”

“There I object,” said Filby. “Of course a solid body may exist. All real things—”

“So most people think. But wait a moment. Can an instantaneous cube exist?”

“Don’t follow you,” said Filby.

“Can a cube that does not last for any time at all, have a real existence?”

Filby became pensive. “Clearly,” the Time Traveller proceeded, “any real body must have extension in four directions: it must have Length, Breadth, Thickness, and—Duration. But through a natural infirmity of the flesh, which I will explain to you in a moment, we incline to overlook this fact. There are really four dimensions, three which we call the three planes of Space, and a fourth, Time. There is, however, a tendency to draw an unreal distinction between the former three dimensions and the latter, because it happens that our consciousness moves intermittently in one direction along the latter from the beginning to the end of our lives.”

“That,” said a very young man, making spasmodic efforts to relight his cigar over the lamp; “that…very clear indeed.”

“Now, it is very remarkable that this is so extensively overlooked,” continued the Time Traveller, with a slight accession of cheerfulness. “Really this is what is meant by the Fourth Dimension, though some people who talk about the Fourth Dimension do not know they mean it. It is only another way of looking at Time. There is no difference between Time and any of the three dimensions of Space except that our consciousness moves along it. But some foolish people have got hold of the wrong side of that idea. You have all heard what they have to say about this Fourth Dimension?”

“I have not,” said the Provincial Mayor.

“It is simply this. That Space, as our mathematicians have it, is spoken of as having three dimensions, which one may call Length, Breadth, and Thickness, and is always definable by reference to three planes, each at right angles to the others.
But some philosophical people have been asking why three dimensions particularly—why not another direction at right angles to the other three?—and have even tried to construct a Four-Dimension geometry. Professor Simon Newcomb was expounding this to the New York Mathematical Society only a month or so ago. You know how on a flat surface, which has only two dimensions, we can represent a figure of a three-dimensional solid, and similarly they think that by models of three dimensions they could represent one of four—if they could master the perspective of the thing. See?”

“I think so,” murmured the Provincial Mayor; and, knitting his brows, he lapsed into an introspective state, his lips moving as one who repeats mystic words. “Yes, I think I see it now,” he said after some time, brightening in a quite transitory manner.

“Well, I do not mind telling you I have been at work upon this geometry of Four Dimensions for some time. Some of my results are curious. For instance, here is a portrait of a man at eight years old, another at fifteen, another at seventeen, another at twenty-three, and so on. All these are evidently sections, as it were, Three-Dimensional representations of his Four-Dimensioned being, which is a fixed and unalterable thing.

“Scientific people,” proceeded the Time Traveller, after the pause required for the proper assimilation of this, “know very well that Time is only a kind of Space. Here is a popular scientific diagram, a weather record. This line I trace with my finger shows the movement of the barometer. Yesterday it was so high, yesterday night it fell, then this morning it rose again, and so gently upward to here. Surely the mercury did not trace this line in any of the dimensions of Space generally recognized? But certainly it traced such a line, and that line, therefore, we must conclude was along the Time-Dimension.”

“But,” said the Medical Man, staring hard at a coal in the fire, “if Time is really only a fourth dimension of Space, why is it, and why has it always been, regarded as something different? And why cannot we move in Time as we move about in the other dimensions of Space?”

The Time Traveller smiled. “Are you sure we can move freely in Space? Right and left we can go, backward and forward freely enough, and men always have done so. I admit we move freely in two dimensions. But how about up and down? Gravitation limits us there.”

“Not exactly,” said the Medical Man. “There are balloons.”

“But before the balloons, save for spasmodic jumping and the inequalities of the surface, man had no freedom of vertical movement.”

“Still they could move a little up and down,” said the Medical Man.

“Easier, far easier down than up.”

“And you cannot move at all in Time, you cannot get away from the present moment.”

“My dear sir, that is just where you are wrong. That is just where the whole world has gone wrong. We are always getting away from the present moment. Our mental existences, which are immaterial and have no dimensions, are passing along the Time-Dimension with a uniform velocity from the cradle to the grave. Just as we should travel down if we began our existence fifty miles above the earth’s surface.”

“But the great difficulty is this,” interrupted the Psychologist. “You can move about in all directions of Space, but you cannot move about in Time.”

“That is the germ of my great discovery. But you are wrong to say that we cannot move about in Time.
For instance, if I am recalling an incident very vividly I go back to the instant of its occurrence: I become absent-minded, as you say. I jump back for a moment. Of course we have no means of staying back for any length of Time, any more than a savage or an animal has of staying six feet above the ground. But a civilized man is better off than the savage in this respect. He can go up against gravitation in a balloon, and why should he not hope that ultimately he may be able to stop or accelerate his drift along the Time-Dimension, or even turn about and travel the other way?”

“Oh, this,” began Filby, “is all——”

“Why not?” said the Time Traveller.

“It’s against reason,” said Filby.

“What reason?” said the Time Traveller.

“You can show black is white by argument,” said Filby, “but you will never convince me.”

“Possibly not,” said the Time Traveller. “But now you begin to see the object of my investigations into the geometry of Four Dimensions. Long ago I had a vague inkling of a machine——”

“To travel through Time!” exclaimed the Very Young Man.

“That shall travel indifferently in any direction of Space and Time as the driver determines.”

Filby contented himself with laughter.

“But I have experimental verification,” said the Time Traveller.

“It would be remarkably convenient for the historian,” the Psychologist suggested. “One might travel back and verify the accepted account of the Battle of Hastings, for instance!”

“Don’t you think you would attract attention?” said the Medical Man. “Our ancestors had no great tolerance for anachronisms.”

“One might get one’s Greek from the very lips of Homer and Plato,” the Very Young Man thought.

“In which case they would certainly plough you for the Little-go. The German Scholars have improved Greek so much.”

“Then there is the future,” said the Very Young Man. “Just think! One might invest all one’s money, leave it to accumulate at interest, and hurry on ahead!”

“To discover a society,” said I, “erected on a strictly communistic basis.”

“Of all the wild extravagant theories!” began the Psychologist.

“Yes, so it seemed to me, and so I never talked of it until——”

“Experimental verification!” cried I. “You are going to verify that?”

“The experiment!” cried Filby, who was getting brain-weary.

“Let’s see your experiment anyhow,” said the Psychologist, “though it’s all humbug, you know.”

The Time Traveller smiled round at us. Then, still smiling faintly, and with his hands deep in his trousers pockets, he walked slowly out of the room, and we heard his slippers shuffling down the long passage to his laboratory.

The Psychologist looked at us. “I wonder what he’s got?”

“Some sleight-of-hand trick or other,” said the Medical Man, and Filby tried to tell us about a conjurer he had seen at Burslem; but before he had finished his preface the Time Traveller came back, and Filby’s anecdote collapsed.

The thing the Time Traveller held in his hand was a glittering metallic framework, scarcely larger than a small clock, and very delicately made. There was ivory in it, and some transparent crystalline substance. And now I must be explicit, for this that follows—unless his explanation is to be accepted—is an absolutely unaccountable thing. He took one of the small octagonal tables that were scattered about the room, and set it in front of the fire, with two legs on the hearthrug. On this table he placed the mechanism. Then he drew up a chair, and sat down. The only other object on the table was a small shaded lamp, the bright light of which fell upon the model.
There were also perhaps a dozen candles about, two in brass candlesticks upon the mantel and several in sconces, so that the room was brilliantly illuminated. I sat in a low arm-chair nearest the fire, and I drew this forward so as to be almost between the Time Traveller and the fire-place. Filby sat behind him, looking over his shoulder. The Medical Man and the Provincial Mayor watched him in profile from the right, the Psychologist from the left. The Very Young Man stood behind the Psychologist. We were all on the alert. It appears incredible to me that any kind of trick, however subtly conceived and however adroitly done, could have been played upon us under these conditions.

The Time Traveller looked at us, and then at the mechanism. “Well?” said the Psychologist.

“This little affair,” said the Time Traveller, resting his elbows upon the table and pressing his hands together above the apparatus, “is only a model. It is my plan for a machine to travel through time. You will notice that it looks singularly askew, and that there is an odd twinkling appearance about this bar, as though it was in some way unreal.” He pointed to the part with his finger. “Also, here is one
What recent discoveries have amateur mathematicians made? (MathOverflow, October 2010, https://mathoverflow.net/questions/44244/what-recent-discoveries-have-amateur-mathematicians-made)

E.T. Bell called Fermat the Prince of Amateurs. One hundred years ago Ramanujan amazed the mathematical world. In between were many important amateurs and mathematicians off the beaten path, but what recent discoveries have amateur mathematicians made?

About ten years ago Ahcène Lamari and Nicholas Buchdahl independently proved that all compact complex surfaces with even first Betti number are Kähler. This had been known since 1983, but the earlier proofs made use of the classification of surfaces to reduce to hard case-by-case verification.
At the time, Lamari was a teacher at a high school in Paris. Apparently he announced his result by crashing a conference in Paris and going up to Siu (who had proved the last case in the earlier proof in 1983) with a copy of his proof. Lamari's proof was published in the Annales de l'Institut Fourier in 1999 (Courants kählériens et surfaces compactes, Annales de l'institut Fourier 49, no. 1 (1999), pp. 263-285, doi:10.5802/aif.1673), next to Buchdahl's (On compact Kähler surfaces, Annales de l'institut Fourier 49, no. 1 (1999), pp. 287-302, doi:10.5802/aif.1674).
Greg Egan. He's a renowned science fiction writer who holds a bachelor's degree in mathematics. He has co-authored two articles published in peer-reviewed journals, one of them with John Baez. The first was written when he was approximately 40 years old.
There's also the more eccentric example of Andrew Beal, who is much better known in the world of poker. He did, however, make one minor conjecture in number theory, for whose proof or disproof he offers $100,000.

And there's also a list on Wikipedia which might be worth going through.

Edit (Nov 2018): Some recent progress has been made by G. Egan, together with an anonymous 4chan member, on a problem about permutations.
The complete graph on $n$ vertices $K_n$ is not planar for $n \geq 5.$ One may ask: what is the maximum Euler characteristic $\gamma(K_n)$ among all compact orientable surfaces into which $K_n$ may be embedded? It is a nice exercise to embed $K_5,$ $K_6,$ and $K_7$ into the torus. The final result was that $\gamma(K_n) = 2 \lfloor \frac{n (7 - n)}{12} \rfloor.$ In 1968 this theorem had been proven for "all cases except $n = 18,20,$ and $23.$ The proof was completed, at the end of the sixties, by Jean Mayer, a professor of French literature (!), when he found embeddings for these three values." (Surface topology, Firby and Gardiner, p. 111).
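A quick sanity check of the quoted formula, together with the genus it implies via $\chi = 2 - 2g$ (the function name here is my own):

    def max_euler_char(n: int) -> int:
        # Maximum Euler characteristic of a compact orientable surface
        # into which K_n embeds: 2 * floor(n * (7 - n) / 12).
        return 2 * ((n * (7 - n)) // 12)

    for n in range(5, 9):
        chi = max_euler_char(n)
        # K_5, K_6, K_7 all give genus 1 (the torus); K_8 gives genus 2.
        print(n, chi, (2 - chi) // 2)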
Kenneth A. Perko Jr. is a lawyer and an amateur topologist (with graduate-level training). In 1974 he found that two knots that were listed as separate knots in C. N. Little's "On knots, with a census for order 10" (1885) and similar tables, were actually identical.
Mathoverflow-user Daniel Moskovich recounted earlier on this site:
Little (with Tait and Kirkman) compiled his tables combinatorially. He drew all possible 4-valent graphs with some number of vertices (in this case 10), and resolved 4-valent vertices into crossings in all possible ways. He ended up with 210 knots. Then he worked BY HAND to eliminate doubles, by making physical models with string. He failed to bring these two knots to the same position, and concluded that they must be different. It took almost 100 years to find the ambient isotopy which shows that they are the same knot.
The book "Knots and Links" by Dale Rolfsen, published two years after Perko's publication, still lists the knots as different, they are knots [; 10_{161} ;] and [; 10_{162} ;] in Appendix C.
An anonymous poster on a 4chan message board, in thinking about how long it would take to watch a 14-episode nonlinear anime program in every possible order, improved the lower bound on the length of a superpermutation. A superpermutation is a string that contains each permutation of $n$ symbols as a contiguous substring. See OEIS A180632. Superpermutations are somewhat similar to De Bruijn sequences.
Whether the anonymous poster meets the definition of "amateur" may never be known, but the posting was from 2011, and apparently was noted by a handful of other mathematicians who think about these things not long afterwards.
The story has recently taken off in public in part because Greg Egan, who was previously mentioned, improved the upper bound on the length of a minimal superpermutation in October 2018.
Quanta Magazine has a nice article as well.
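For readers new to the term, here is the definition in executable form, a minimal sketch with names of my own choosing:

    from itertools import permutations

    def is_superpermutation(s: str, n: int) -> bool:
        # True iff s contains every permutation of the symbols 1..n
        # as a contiguous substring.
        symbols = "123456789"[:n]
        return all("".join(p) in s for p in permutations(symbols))

    # The minimal superpermutation for n = 3 has length 9:
    assert is_superpermutation("123121321", 3)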
Eugène Ehrhart was a high school teacher when he discovered the so-called Ehrhart polynomial, at the age of 55. He got his PhD at the age of 60.
Let $\Delta$ be a polytope with integral vertices in $\mathbf{R}^d$. Then there exists a polynomial $P$ such that for every positive integer $n$, $P(n)$ is the number of integral points in $n\Delta$. This polynomial satisfies a duality property: $(-1)^d P(-n)$ is the number of integral points in the interior of $n\Delta$. This duality property was interpreted as Serre duality on toric varieties by Khovanskii in the 1980s. The geometric interpretation of the coefficients of $P$ is still an open problem despite a huge literature. See http://icps.u-strasbg.fr/~clauss/Ehrhart.html for a short bibliography, and https://en.wikipedia.org/wiki/Ehrhart_polynomial for an introduction to the subject.
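A standard worked example: for the unit square $\Delta = [0,1]^2$ one has $P(n) = (n+1)^2$, and reciprocity gives $(-1)^2 P(-n) = (n-1)^2$, which is indeed the number of lattice points strictly inside $n\Delta$.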
Bill Gates co-authored the following paper in the 1970s with Christos Papadimitriou:
"Bounds for sorting by prefix reversal," Discrete Mathematics 27 (1979), no. 1, 47–57, MR0534952.
Not sure if Gates counts as an amateur, but he is at least a college dropout. :)
The only reason I know this is because once I ran across a book or article that discusses the results in this paper and then says something like, "Yes, this is THE Bill Gates." I was almost certain the book or article was by Knuth, but now I can't find the reference in any of my Knuth books. If someone else knows the reference I'm talking about, I would be grateful if they would post it as a comment to my answer. (It now bothers me that I can't find that reference. :) )
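For those who have not seen the problem, sorting by prefix reversal ("pancake sorting") allows only one move: flip an initial segment of the sequence. A naive strategy, sketched below with my own function name, needs roughly $2n$ flips; the Gates-Papadimitriou paper lowered the upper bound to $(5n+5)/3$.

    def pancake_sort(a: list) -> int:
        # Sort a in place using only prefix reversals; return the number of flips.
        flips = 0
        for size in range(len(a), 1, -1):
            i = a.index(max(a[:size]))               # largest unsorted element
            if i != size - 1:
                if i != 0:
                    a[:i + 1] = reversed(a[:i + 1])  # flip it to the front
                    flips += 1
                a[:size] = reversed(a[:size])        # flip it into place
                flips += 1
        return flips

    a = [3, 1, 4, 1, 5, 9, 2, 6]
    pancake_sort(a)
    assert a == [1, 1, 2, 3, 4, 5, 6, 9]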
Kurt Heegner was a radio engineer by trade, but gave (essentially) the first proof of the Gauss class number one problem in 1952: namely that, for squarefree $d < 0$, the imaginary quadratic field $\mathbb{Q}(\sqrt{d})$ has class number $1$ if and only if $d \in \{-1, -2, -3, -7, -11, -19, -43, -67, -163\}$. Unfortunately, his work was largely ignored until around 1967, two years after his death.
His ideas also led to the development of Heegner points, which are very influential in modern number theory.
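As an aside, the nine absolute values $1, 2, 3, 7, 11, 19, 43, 67, 163$ are now called Heegner numbers, and the largest of them lies behind the famous near-integer $e^{\pi\sqrt{163}} \approx 640320^3 + 744 = 262537412640768744$, which misses an integer by less than $10^{-12}$.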
I was hoping that someone would add David Smith to this list. But a few days have passed and no one has brought this up. So please allow me to kick this thread a bit.

As discussed in another answer, it was recently discovered that the “einstein” tiling exists, settling a long-standing open problem. This discovery is primarily credited to David Smith, who describes himself as a shape hobbyist.
The story is also featured in QuantaMagazine.
Richard Friedberg, then an undergraduate pre-medical student, independently solved Post's problem (of whether there are intermediate Turing degrees) by what is now called the finite-injury priority method. This was a significant open problem at the time, so the result made news:
1956 news article "Senior solves logic problem, astounds mathematicians"
In Gödel's now famous letter to von Neumann that introduced the P vs NP problem, Gödel wrote
I do not know if you have heard that “Post’s problem”, whether there are degrees of unsolvability among problems of the form (∃y)φ(y, x), where φ is recursive, has been solved in the positive sense by a very young man by the name of Richard Friedberg. The solution is very elegant. Unfortunately, Friedberg does not intend to study mathematics, but rather medicine (apparently under the influence of his father).
Friedberg ended up becoming a physicist (Wikipedia biography).
How about Saul Kripke?
Kripke-Platek set theory "is used all over the place, in recursion theory and set theory, [b]oth in classical results, and in fairly recent ones."
Modern philosophers (of science, mathematics, language etc. -- analytical philosophers) are probably a rich source of the list you seek. Many do not have above-undergraduate training in math, although I would use the definition of 'amateur' that we think of when we think of the ancients: that is, people who are distinctly in another field but who make contributions to mathematics as part of their work or hobby.
As for other philosophers/logicians (off the top):
Putnam
Frank Ramsey (I don't have the rep to post more links--had a bunch for this question)
I know that by the time we get to someone like Ramsey, everyone's like: "surely that's not an amateur mathematician", but by the definition given in the question, I think he fits.
At any rate, you can probably find the names you're looking for in analytical philosophy, (mathematical/computational) economics/biology/linguistics, and so on.
The problem, I suspect, will be (in addition to the definition of 'amateur' which is not too difficult in my opinion, as long as you are satisfied with it for your list) the definitions of 'important' and 'discovery'. For example, I've known about Kripke's contributions for a while but I don't know, even now, whether this community considers them as important.
In 2002, Manindra Agrawal, Neeraj Kayal, and Nitin Saxena proved the amazing result "PRIMES is in P," i.e., there is a deterministic, polynomial time algorithm for determining whether a given number is prime or not. The resulting paper was published in the Annals in 2004. The story was covered in the New York Times. The trio won the 2006 Fulkerson Prize, and the 2006 Gödel Prize.
In 2002, Kayal and Saxena were undergraduate students. I'd say that counts as "amateur." | ||||
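The test grows out of a simple characterization of primality: for $n \ge 2$, $n$ is prime if and only if $(x+1)^n \equiv x^n + 1 \pmod{n}$ as polynomials. Below is a brute-force (exponential-time) rendering of that fact, with names of my own; the AKS insight is that checking the congruence modulo $(x^r - 1, n)$ for a suitable small $r$ suffices, which is what makes the test run in polynomial time.

    def is_prime_naive_aks(n: int) -> bool:
        # Build the coefficients of (x + 1)^n mod n via Pascal's rule.
        coeffs = [1]
        for _ in range(n):
            coeffs = [(u + v) % n for u, v in zip([0] + coeffs, coeffs + [0])]
        # (x + 1)^n == x^n + 1 (mod n) iff every middle coefficient vanishes mod n.
        return coeffs[0] == 1 and coeffs[-1] == 1 and all(c == 0 for c in coeffs[1:-1])

    assert [k for k in range(2, 30) if is_prime_naive_aks(k)] == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]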
Peter Shor (Wikipedia, https://en.wikipedia.org/wiki/Peter_Shor)
Not to be confused with the British politician Peter Shore.
Peter Williston Shor (born August 14, 1959) is an American professor of applied mathematics at MIT. He is known for his work on quantum computation, in particular for devising Shor's algorithm, a quantum algorithm for factoring exponentially faster than the best currently-known algorithm running on a classical computer.
Early life and education
[edit]
Shor was born in New York City to Joan Bopp Shor and S. W. Williston Shor.[10][11] He grew up in Washington, D.C. and Mill Valley, California.[10] While attending Tamalpais High School, he placed third in the 1977 USA Mathematical Olympiad.[12] After graduation that year, he won a silver medal at the International Math Olympiad in Yugoslavia (the U.S. team achieved the most points per country that year).[13][14] He received his B.S. in Mathematics in 1981 for undergraduate work at Caltech,[15] and was a Putnam Fellow in 1978. He earned his PhD in Applied Mathematics from MIT in 1985.[16] His doctoral advisor was F. Thomson Leighton, and his thesis was on probabilistic analysis of bin-packing algorithms.
Career
[edit]
After being awarded his PhD by MIT, he spent one year as a postdoctoral researcher at the University of California, Berkeley, and then accepted a position at Bell Labs in New Providence, New Jersey. It was there he developed Shor's algorithm. This development was inspired by Simon's problem, where he first solved the discrete log problem (which relates point-finding on a hypercube to a torus) and,
"Later that week, I was able to solve the factoring problem as well. There’s a strange relation between discrete log and factoring."[17]
Because of the close relation between the two problems (both are instances of the hidden subgroup problem, HSP), Shor found his factoring algorithm (Shor's algorithm) that same week, for which he was awarded the Nevanlinna Prize at the 23rd International Congress of Mathematicians in 1998[18][19] and the Gödel Prize in 1999.[20] In 1999, he was awarded a MacArthur Fellowship.[21] In 2017, he received the Dirac Medal of the ICTP, and for 2019 the BBVA Foundation Frontiers of Knowledge Award in Basic Sciences.[22]
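The exponential speedup lives entirely in the quantum period-finding subroutine; the classical reduction wrapped around it is elementary number theory. A sketch (function names are mine), with a brute-force stand-in for the quantum step:

    from math import gcd
    from random import randrange

    def order(a: int, n: int) -> int:
        # Stand-in for the quantum subroutine: the multiplicative order of a mod n.
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def shor_factor(n: int) -> int:
        # Find a nontrivial factor of an odd composite n that is not a prime power.
        while True:
            a = randrange(2, n)
            d = gcd(a, n)
            if d > 1:
                return d                      # lucky guess shares a factor with n
            r = order(a, n)                   # the quantum step on a real device
            if r % 2 == 0 and pow(a, r // 2, n) != n - 1:
                return gcd(pow(a, r // 2, n) - 1, n)

    print(shor_factor(15))  # 3 or 5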
Shor began his MIT position in 2003. Currently, he is the Henry Adams Morss and Henry Adams Morss, Jr. Professor of Applied Mathematics in the Department of Mathematics at MIT.[23] He also is affiliated with CSAIL.[24]
He received a Distinguished Alumni Award from Caltech in 2007.[15]
On October 1, 2011, he was inducted into the American Academy of Arts and Sciences.[25][26] He was elected as an ACM Fellow in 2019 "for contributions to quantum-computing, information theory, and randomized algorithms".[27] He was elected as a member of the National Academy of Sciences in 2002.[28] In 2020, he was elected a member of the National Academy of Engineering for pioneering contributions to quantum computation.[29]
In an interview published in Nature on October 30, 2020, Shor said that he considers post-quantum cryptography to be a solution to the quantum threat, although a lot of engineering effort is required to switch from vulnerable algorithms.[30]
Along with three others, Shor was awarded the 2023 Breakthrough Prize in Fundamental Physics for "foundational work in the field of quantum information."[9]
See also
Entanglement-assisted classical capacity
Keller's conjecture
Stabilizer code
Quantum capacity
Searching for Easy Answers to Hard Computational Questions
By John Pavlus, Scientific American, September 1, 2012 (https://www.scientificamerican.com/article/searching-for-easy-answers-hard-computational-questions/)

Whether or not machines can quickly answer yes-or-no questions could affect everything from national security to the limits of human knowledge.
Dear Mr. von Neumann:
With the greatest sorrow I have learned of your illness…. As I hear, in the last months you have undergone a radical treatment and I am happy that this treatment was successful as desired, and that you are now doing better….
Since you now, as I hear, are feeling stronger, I would like to allow myself to write you about a mathematical problem, of which your opinion would very much interest me….
Gödel's description of this problem is utterly unintelligible to nonmathematicians. (Indeed, he may simply have been trying to take von Neumann's mind off of his illness by engaging in an acutely specialized version of small talk.) He wondered how long it would take for a hypothetical machine to spit out answers to a problem. What he concluded sounds like something out of science fiction:
If there really were [such] a machine … this would have consequences of the greatest importance. Namely, it would obviously mean that … the mental work of a mathematician concerning Yes-or-No questions could be completely replaced by a machine.
By “mental work,” Gödel didn't mean trivial calculations like adding 2 and 2. He was talking about the intuitive leaps that mathematicians take to illuminate entirely new areas of knowledge. Twenty-five years earlier Gödel's now famous incompleteness theorems had forever transformed mathematics. Could a machine be made to churn out similar world-changing insights on demand?
A few weeks after Gödel sent his letter, von Neumann checked into Walter Reed Army Medical Center in Washington, D.C., where he died less than a year later, never having answered his friend. But the problem would outlive both of them. Now known as P versus NP, Gödel's question went on to become an organizing principle of modern computer science. It has spawned an entirely new area of research called computational complexity theory—a fusion of mathematics, science and engineering that seeks to prove, with total certainty, what computers can and cannot do under realistic conditions.
But P versus NP is about much more than just the plastic-and-silicon contraptions we call computers. The problem has practical implications for physics and molecular biology, cryptography, national security, evolution, the limits of mathematics and perhaps even the nature of reality. This one question sets the boundaries for what, in theory, we will ever be able to compute. And in the 21st century the limits of computation look more and more like the limits of human knowledge itself.
The Bet
Michael Sipser was only a graduate student, but he knew someone would solve the P versus NP problem soon. He even thought he might be the one to do it. It was the fall of 1975, and he was discussing the problem with Leonard Adleman, a fellow graduate student in the computer science department at the University of California, Berkeley. “I had a fascination with P versus NP, had this feeling that I was somehow able to understand it in a way that went beyond the way everyone else seemed to be approaching it,” says Sipser, who is now head of the mathematics department at the Massachusetts Institute of Technology. He was so sure of himself that he made a wager that day with Adleman: P versus NP would be solved by the end of the 20th century, if not sooner. The terms: one ounce of pure gold.
Sipser's bet made a kind of poetic sense because P versus NP is itself a problem about how quickly other problems can be solved. Sometimes simply following a checklist of steps will get you to the end result in relatively short order. Think of grocery shopping: you tick off the items one by one until you reach the end of the list. Complexity theorists label these problems P, for “polynomial time,” which is a mathematically precise way of saying that no matter how long the grocery list becomes, the amount of time that it will take to tick off all the items will never grow at an unmanageable rate.
In contrast, many more problems may or may not be practical to solve by simply ticking off items on a list, but checking the solution is easy. A jigsaw puzzle is a good example: even though it may take effort to put together, you can recognize the right solution just by looking at it. Complexity theorists call these quickly checkable, “jigsaw puzzle–like” problems NP.
Four years before Sipser made his bet, a mathematician named Stephen Cook had proved that these two kinds of problems are related: every quickly solvable P problem is also a quickly checkable NP problem. The P versus NP question that emerged from Cook's insight—and that has hung over the field ever since—asks if the reverse is also true: Are all quickly checkable problems quickly solvable as well? Intuitively speaking, the answer seems to be no. Recognizing a solved jigsaw puzzle (“Hey, you got it!”) is hardly the same thing as doing all the work to find the solution. In other words, P does not seem to equal NP.
What fascinated Sipser was that nobody had been able to mathematically prove this seemingly obvious observation. And without a proof, a chance remained, however unlikely or strange, that all NP problems might actually be P problems in disguise. P and NP might be equal—and because computers can make short work of any problem in P, P equals NP would imply that computers' problem-solving powers are vastly greater than we ever imagined. They would be exactly what Gödel described in his letter to von Neumann: mechanical oracles that could efficiently answer just about any question put to them, so long as they could be programmed to verify the solution.
Sipser knew this outcome was vanishingly improbable. Yet proving the opposite, much likelier, case—that P is not equal to NP—would be just as groundbreaking.
Like Gödel's incompleteness theorems, which revealed that mathematics must contain true but unprovable propositions, a proof showing that P does not equal NP would expose an objective truth concerning the limitations of knowledge. Solving a jigsaw puzzle and recognizing that one is solved are two fundamentally different things, and there are no shortcuts to knowledge, no matter how powerful our computers get.
Proving a negative is always difficult, but Gödel had done it. So to Sipser, making his bet with Adleman, 25 years seemed like more than enough time to get the job done. If he couldn't prove that P did not equal NP himself, someone else would. And he would still be one ounce of gold richer.
Complicated Fast
Adleman shared Sipser's fascination, if not his confidence, because of one cryptic mathematical clue. Cook's paper establishing that P problems are all NP had also proved the existence of a special kind of quickly checkable type of problem called NP-complete. These problems act like a set of magic keys: if you find a fast algorithm for solving one of them, that algorithm will also unlock the solution to every other NP problem and prove that P equals NP.
There was just one catch: NP-complete problems are among the hardest anyone in computer science had ever seen. And once discovered, they began turning up everywhere. Soon after Cook's paper appeared, one of Adleman's mentors at Berkeley, Richard M. Karp, published a landmark study showing that 21 classic computational problems were all NP-complete. Dozens, then hundreds, soon followed. “It was like pulling a finger out of a dike,” Adleman says. Scheduling air travel, packing moving boxes into a truck, solving a Sudoku puzzle, designing a computer chip, seating guests at a wedding reception, playing Tetris and thousands of other practical, real-world problems have been proved to be NP-complete.
How could this tantalizing key to solving P versus NP seem so commonplace and so uncrackable at the same time? “That's why I was interested in studying the P versus NP problem,” says Adleman, who is now a professor at the University of Southern California. “The power and breadth of these computational questions just seemed deeply awesome. But we certainly didn't understand them. And it didn't seem like we would be understanding them anytime soon.” (Adleman's pessimism about P versus NP led to a world-changing invention: a few years after making his bet, Adleman and his colleagues Ronald Rivest and Adi Shamir exploited the seeming incommensurability of P and NP to create their eponymous RSA encryption algorithm, which remains in wide use for online banking, communications and national security applications.)
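The asymmetry RSA leans on is easy to demonstrate with toy numbers (tiny illustrative parameters only; real deployments use moduli of 2048 bits or more):

    # Requires Python 3.8+ for pow(e, -1, phi).
    p, q = 61, 53
    n, phi = p * q, (p - 1) * (q - 1)   # n = 3233, phi = 3120
    e = 17                              # public exponent, coprime to phi
    d = pow(e, -1, phi)                 # private exponent: 2753
    m = 42                              # a message, encoded as a number below n
    c = pow(m, e, n)                    # encrypt with the public key
    assert pow(c, d, n) == m            # decrypt with the private key

Recovering d from (n, e) alone is believed to require factoring n, which is exactly the kind of problem no fast classical algorithm is known for.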
NP-complete problems are hard because they get complicated fast. Imagine you are a backpacker planning a trip through a number of cities in Europe, and you want a route that takes you through each city while minimizing the total distance you will need to travel. How do you find the best route? The simplest method is just to try out each possibility. With five cities to visit, you need to check only 12 possible routes. With 10 cities, the number of possible routes mushrooms to more than 180,000. At 60 cities, the number of paths exceeds the number of atoms in the known universe. This computational nightmare is known as the traveling salesman problem, and in over 80 years of intense study, no one has ever found a general way to solve it that works better than trying every possibility one at a time.
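The arithmetic behind those numbers is simple enough to check directly (the function name is mine):

    from math import factorial

    def route_count(n_cities: int) -> int:
        # Distinct tours: fix the starting city and ignore direction of travel.
        return factorial(n_cities - 1) // 2

    print(route_count(5))   # 12
    print(route_count(10))  # 181440
    print(route_count(60))  # about 7 * 10**79, on the order of the atoms in the known universe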
That is the perverse essence of NP-completeness—and of P versus NP: not only are all NP-complete problems equally impossible to solve except in the simplest cases—even if your computer has more memory than God and the entire lifetime of the universe to work with—they seem to pop up everywhere. In fact, these NP-complete problems don't just frustrate computer scientists. They seem to put limits on the capabilities of nature itself.
Nature's Code
The pioneering Dutch programmer Edsger Dijkstra understood that computational questions have implications beyond mathematics. He once remarked that “computer science is no more about computers than astronomy is about telescopes.” In other words, computation is a behavior exhibited by many systems besides those made by Google and Intel. Indeed, any system that transforms inputs into outputs by a set of discrete rules—including those studied by biologists and physicists—can be said to be computing.
In 1994 mathematician Peter Shor proved that cleverly arranged subatomic particles could break modern encryption schemes. In 2002 Adleman used strands of DNA to find an optimal solution to an instance of the traveling salesman problem. And in 2005 Scott Aaronson, an expert in quantum computing who is now at M.I.T.'s Computer Science and Artificial Intelligence Laboratory, used soap bubbles, of all things, to efficiently compute optimal solutions to a problem known as the Steiner tree. These are all exactly the kinds of NP problems that computers should choke their circuit boards on. Do these natural systems know something about P versus NP that computers don't?
“Of course not,” Aaronson says. His soap bubble experiment was actually a reductio ad absurdum of the claim that simple physical systems can somehow transcend the differences between P and NP problems. Although the soap bubbles did “compute” perfect solutions to the minimum Steiner tree in a few instances, they quickly failed as the size of the problem increased, just like a computer would. Adleman's DNA-strand experiment hit the same wall. Shor's quantum algorithm does work in all instances, but the factoring problem that it cracks is almost certainly not NP-complete. Therefore, the algorithm doesn't provide the key that would unlock every other NP problem. Biology, classical physics and quantum systems all seem to support the idea that NP-complete problems have no shortcuts. And that would only be true if P did not equal NP.
“Of course, we still can't prove it with airtight certainty,” Aaronson says. “But if we were physicists instead of complexity theorists, ‘P does not equal NP’ would have been declared a law of nature long ago—just like the fact that nothing can go faster than the speed of light.” Indeed, some physical theories about the fundamental nature of the universe—such as the holographic principle, suggested by Stephen Hawking's work on black holes—imply that the fabric of reality itself is not continuous but made of discrete bits, just like a computer [see “Is Space Digital?” by Michael Moyer; Scientific American, February]. Therefore, the apparent intractability of NP problems—and the limitations on knowledge that this implies—may be baked into the universe at the most fundamental level.
Brain Machine
So if the very universe itself is beholden to the computational limits imposed by P versus NP, how can it be that NP-complete problems seem to get solved all the time—even in instances where finding these solutions should take trillions of years or more?
For example, as a human fetus gestates in the womb, its brain wires itself up out of billions of individual neurons. Finding the best arrangement of these cells is an NP-complete problem—one that evolution appears to have solved. “When a neuron reaches out from one point to get to a whole bunch of other synapse points, it's basically a graph-optimization problem, which is NP-hard,” says evolutionary neurobiologist Mark Changizi. Yet the brain doesn't actually solve the problem—it makes a close approximation. (In practice, the neurons consistently get within 3 percent of the optimal arrangement.) The Caenorhabditis elegans worm, which has only 302 neurons, still doesn't have a perfectly optimal neural-wiring diagram, despite billions on billions of generations of natural selection acting on the problem. “Evolution is constrained by P versus NP,” Changizi says, “but it works anyway because life doesn't always require perfection to function well.”
And neither, it turns out, do computers. That modern computers can do anything useful at all—much less achieve the wondrous feats we all take for granted on our video-game consoles and smartphones—is proof that the problems in P encompass a great many of our computing needs. For the rest, often an imperfect approximating algorithm is good enough. In fact, these “good enough” algorithms can solve immensely complex search and pattern-matching problems, many of which are technically NP-complete. These solutions are not always mathematically optimal in every case, but that doesn't mean they aren't useful.
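The flavor of such "good enough" methods is easy to convey; here is a nearest-neighbor heuristic for the traveling salesman problem, a sketch rather than any particular production algorithm:

    def nearest_neighbor_tour(dist):
        # Always visit the closest unvisited city: O(n^2) time, fast but
        # not guaranteed optimal -- exactly the trade-off described above.
        n = len(dist)
        tour, unvisited = [0], set(range(1, n))
        while unvisited:
            last = tour[-1]
            nxt = min(unvisited, key=lambda j: dist[last][j])
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour

    dist = [
        [0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0],
    ]
    print(nearest_neighbor_tour(dist))  # [0, 1, 3, 2]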
Take Google, for instance. Many complexity researchers consider NP problems to be, in essence, search problems. But according to Google's director of research Peter Norvig, the company takes pains to avoid dealing with NP problems altogether. “Our users care about speed more than perfection,” he says. Instead Google researchers optimize their algorithms for an even faster computational complexity category than P (referred to as linear time) so that search results appear nearly instantaneously. And if a problem comes up that cannot be solved in this way? “We either reframe it to be easier, or we don't bother,” Norvig says.
That is the legacy and the irony of P versus NP. Writing to von Neumann in 1956, Gödel thought the problem held the promise of a future filled with infallible reasoning machines capable of replacing “the mental work of a mathematician” and churning out bold new truths at the push of a button. Instead decades of studying P versus NP have helped build a world in which we extend our machines' problem-solving powers by embracing their limitations. Lifelike approximation, not mechanical perfection, is how Google's autonomous cars can drive themselves on crowded Las Vegas freeways and IBM's Watson can guess its way to victory on Jeopardy.
Gold Rush
The year 2000 came and went, and Sipser mailed Adleman his ounce of gold. “I think he wanted it to be embedded in a cube of Lucite, so he could put it on his desk or something,” Sipser says. “I didn't do that.” That same year the Clay Mathematics Institute in Cambridge, Mass., offered a new bounty for solving P versus NP: $1 million. The prize helped to raise the problem's profile, but it also attracted the attention of amateurs and cranks; nowadays, like many prominent complexity theorists, Sipser says, he regularly receives unsolicited e-mails asking him to review some new attempt to prove that P does not equal NP—or worse, the opposite.
Although P versus NP remains unsolved, many complexity researchers still think it will yield someday. “I never really gave up on it,” Sipser says. He claims to still pull out pencil and paper from time to time and work on it—almost for recreation, like a dog chewing on a favorite bone. P versus NP is, after all, an NP problem itself: the only way to find the answer is to keep searching. And while that answer may never come, if it does, we will know it when we see it. | ||||
Shtetl-Optimized: The Blog of Scott Aaronson (https://scottaaronson.blog/?p=4450)
A: Kamala Harris (and mainstream Democrats all down the ballot), of course.
Q2: Of course?
A: If the alternative is Trump, I would’ve voted for Biden’s rotting corpse. Or for Hunter Biden. Or for…
Q3: Why can’t you see this is just your Trump Derangement Syndrome talking?
A: Look, my basic moral commitments remain pretty much as they’ve been since childhood. Namely, that I’m on the side of reason, Enlightenment, scientific and technological progress, secular government, pragmatism, democracy, individual liberty, justice, intellectual honesty, an American-led peaceful world order, preservation of the natural world, mitigation of existential risks, and human flourishing. (Crazy and radical, I know.)
Only when choosing between candidates who all espouse such values, do I even get the luxury of judging them on any lower-order bits. Sadly, I don’t have that luxury today. Trump’s values, such as they are, would seem to be “America First,” protectionism, vengeance, humiliation of enemies, winning at all costs, authoritarianism, the veneration of foreign autocrats, and the veneration of himself. No amount of squinting can ever reconcile those with the values I listed before.
Q4: Is that all that’s wrong with him?
A: No, there are also the lies, and worst of all the “Big Lie.” Trump is the first president in US history to incite a mob to try to overturn the results of an election. He was serious! He very nearly succeeded, and probably would have, had Mike Pence been someone else. It’s now inarguable that Trump rejects the basic rules of our system, or “accepts” them only when he wins. We’re numb from having heard it so many times, but it’s a big deal, as big a deal as the Civil War was.
Q5: Oh, so this is about your precious “democracy.” Why do you care? Haven’t you of all people learned that the masses are mostly idiots and bullies, who don’t deserve power? As Curtis Yarvin keeps trying to explain to you, instead of “democracy,” you should want a benevolent king or dictator-CEO, who could offer a privileged position to the competent scientists like yourself.
A: Yeah, so how many examples does history furnish where that worked out well? I suppose you might make a partial case for Napoleon, or Ataturk? More to the point: even if benevolent, science-and-reason-loving authoritarian strongmen are possible in theory, do you really expect me to believe that Trump could be one of them? I still love how Scott Alexander put it in 2016:
Can anyone honestly say that Trump or his movement promote epistemic virtue? That in the long-term, we’ll be glad that we encouraged this sort of thing, that we gave it power and attention and all the nutrients it needed to grow? That the road to whatever vision of a just and rational society we imagine, something quiet and austere with a lot of old-growth trees and Greek-looking columns, runs through LOCK HER UP?
I don’t like having to vote for the lesser of two evils. But at least I feel like I know who it is.
Q6: But what about J. D. Vance? He got his start in Silicon Valley, was championed by Peter Thiel, and is obviously highly intelligent. Doesn’t he seem like someone who might listen to and empower tech nerds like yourself?
A: Who can say what J. D. Vance believes? Here are a few choice quotes of his from eight years ago:
I’m obviously outraged at Trump’s rhetoric, and I worry most of all about how welcome Muslim citizens feel in their own country. But I also think that people have always believed crazy shit (I remember a poll from a few years back suggesting that a near majority of democratic voters blame ‘the Jews’ for the financial crisis). And there have always been demagogues willing to exploit the people who believe crazy shit.
The more white people feel like voting for trump, the more black people will suffer. I really believe that.
[Trump is] just a bad man. A morally reprehensible human being.
To get from that to being Trump’s running mate is a Simone-Biles-like feat of moral acrobatics. Vance reminds me of the famous saying by L. Ron Hubbard from his pre-Dianetics days: “If a man really wants to make a million dollars, the best way would be to start his own religion.” (And I feel like Harris’s whole campaign strategy should just be to replay Vance’s earlier musings in wall-to-wall ads while emphasizing her agreement with them.) No, Vance is not someone I trust to share my values, if he has values at all.
Q7: What about the other side’s values, or lack thereof? I mean, don’t you care that the whole Democratic establishment—including Harris—colluded to cover up that Biden was senile and cognitively unfit to be president now, let alone for another term?
A: Look, we’ve all seen what happens as a relative gets old. It’s gradual. It’s hard for anyone to say at which specific moment they can no longer drive a car, or be President of the United States, or whatever. This means that I don’t necessarily read evil intent into the attempts to cover up Biden’s decline—merely an epic, catastrophic failure of foresight. That failure of foresight itself would’ve been a huge deal in normal circumstances, but these are not normal circumstances—not if you believe, as I do, that the alternative is the beginning of the end of a 250-year-old democratic experiment.
Q8: Oh stop being so melodramatic. What terrible thing happened to you because of Trump’s first term? Did you lose your job? Did fascist goons rough you up in the street?
A: Well, my Iranian PhD student came close to having his visa revoked, and it became all but impossible to recruit PhD students from China. That sucked, since I care about my students’ welfare like I care about my own. Also, the downfall of Roe v. Wade, which enabled Texas’ draconian new abortion laws, made it much harder for us to recruit faculty at UT Austin. But I doubt any of that will impress you. “Go recruit American students,” you’ll say. “Go recruit conservative faculty who are fine with abortion being banned.”
The real issue is that Trump was severely restrained in his first term, by being surrounded by people who (even if, in many cases, they started out loyal to him) were also somewhat sane and valued the survival of the Republic. Alas, he learned from that, and he won’t repeat that mistake the next time.
Q9: Why do you care so much about Trump’s lies? Don’t you realize that all politicians lie?
A: Yes, but there are importantly different kinds of lies. There are white lies. There are scheming, 20-dimensional Machiavellian lies, like a secret agent’s cover story (or is that only in fiction?). There are the farcical, desperate, ever-shifting lies of the murderer to the police detective or the cheating undergrad to the professor. And then there are the lies of bullies and mob bosses and populist autocrats, which are special and worse.
These last, call them power-lies, are distinguished by the fact that they aren’t even helped by plausibility. Often, as with conspiracy theories (which strongly overlap with power-lies), the more absurd the better. Obama was born in Kenya. Trump’s crowd was the biggest in history. The 2020 election was stolen by a shadowy conspiracy involving George Soros and Dominion and Venezuela.
The central goal of a power-lie is just to demonstrate your power to coerce others into repeating it, much like with the Party making Winston Smith affirm 2+2=5, or Petruchio making Katharina call the sun the moon in The Taming of the Shrew. A closely-related goal is as a loyalty test for your own retinue.
It’s Trump’s embrace of the power-lie that puts him beyond the pale for me.
Q10: But Scott, we haven’t even played our “Trump” card yet. Starting on October 7, 2023, did you not witness thousands of your supposed allies, the educated secular progressives on “the right side of history,” cheer the sadistic mass-murder of Jews—or at least, make endless excuses for those who did? Did this not destabilize your entire worldview? Will you actually vote for a party half of which seems at peace with the prospect of your family members’ physical annihilation? Or will you finally see who your real friends now are: Arkansas MAGA hillbillies who pray for your people’s survival?
A: Ah, this is your first slash that’s actually drawn blood. I won’t pretend that the takeover of part of the US progressive coalition by literal Hamasniks hasn’t been one of the most terrifying experiences of my life. Yes, if I had to be ruled by either (a) a corrupt authoritarian demagogue or (b) an idiot college student chanting for “Intifada Revolution,” I’d be paralyzed. So it’s lucky that I don’t face that choice! I get to vote, once more, for a rather boring mainstream Democrat—alongside at least 70% of American Jews. The idea of Harris as an antisemite would be ludicrous even if she didn’t have a Jewish husband or wasn’t strongly considering a pro-Israel Jew as her running mate.
Q11: Sure, Kamala Harris might mouth all the right platitudes about Israel having a right to defend itself, but she’ll constantly pressure Israel to make concessions to Hamas and Hezbollah. She’ll turn a blind eye to Iran’s imminent nuclearization. Why don’t you stay up at night worrying that, if you vote for a useful idiot like her, you’ll have Israel’s annihilation and a second Holocaust on your conscience forever?
A: Look, oftentimes—whenever, for example, I’m spending hours reading anti-Zionists on Twitter—I feel like there’s no limit to how intensely Zionist I am. On reflection, though, there is a limit. Namely, I’m not going to be more Zionist than the vast majority of my Israeli friends and colleagues—the ones who served in the IDF, who in some cases did reserve duty in Gaza, who prop up the Israeli economy with their taxes, and who will face the consequences of whatever happens more directly than I will. With few exceptions, these friends despise the Trump/Bibi alliance with white-hot rage, and they desperately want more moderate leadership in both countries.
Q12: Suppose I concede that Kamala is OK on Israel. We both know that she’s not the future of the Democratic Party, any more than Biden is. The future is what we all saw on campuses this spring. “Houthis Houthis make us proud, turn another ship around.” How can you vote for a party whose rising generation seems to want you and your family dead?
A: Let me ask you something. When Trump won in 2016, did that check the power of the campus radicals? Or as Scott Alexander prophesied at the time, did it energize and embolden them like nothing else, by dramatically confirming their theology of a planet held hostage by the bullying, misogynistic rich white males? I fundamentally reject your premise that, if I’m terrified of crazy left-wing extremists, then a good response is to vote for the craziest right-wing extremists I can find, in hopes that the two will somehow cancel each other out. Instead I should support a coherent Enlightenment alternative to radicalism, or the closest thing to that available.
Q13: Even leaving aside Israel, how can you not be terrified by what the Left has become? Which side denounced you on social media a decade ago, as a misogynist monster who wanted all women to be his sex slaves? Which side tried to ruin your life and career? Did we, the online rightists, do that? No. We did not. We did nothing worse to you than bemusedly tell you to man up, grow a pair, and stop pleading for sympathy from feminists who will hate you no matter what.
A: I’ll answer with a little digression. Back in 2017, when Kamala Harris was in the Senate, her office invited me to DC to meet with them to provide advice about the National Quantum Initiative Act, which Kamala was then spearheading. Kamala herself sent regrets that she couldn’t meet me, because she had to be at the Kavanaugh hearings. I have (nerdy, male) friends who did meet her about tech policy and came away with positive impressions.
And, I dunno, does that sound like someone who wants me dead for the crime of having been born a nerdy heterosexual male? Or having awkwardly and ineptly asked women on dates, including the one who became my wife? OK, maybe Amanda Marcotte wants me dead for those crimes. Maybe Arthur Chu does (is he still around?). Good that they’re not running for president then.
Q14: Let me try one more time to show you how much your own party hates you. Which side has been at constant war against the SAT and other standardized tests, and merit-based college admissions, and gifted programs, and academic tracking and acceleration, and STEM magnet schools, and every single other measure by which future young Scott Aaronsons (and Saket Agrawals) might achieve their dreams in life? Has that been our side, or theirs?
A: To be honest, I haven’t seen the Trump or Harris campaigns take any position on any of these issues. Even if they did, there’s very little that the federal government can do: these battles happen in individual states and cities and counties and universities. So I’ll vote for Harris while continuing to advocate for what I think is right in education policy.
Q15: Can you not see that Kamala Harris is a vapid, power-seeking bureaucratic machine—that she has no fixed principles at all? For godsakes, she all but condemned Biden as a racist in the 2020 primary, then agreed to serve as his running mate!
A: I mean, she surely has more principles than Vance does. As far as I can tell, for example, she’s genuinely for abortion rights (as I am). Even if she believed in nothing, though, better a cardboard cutout on which values I recognize are written, than a flesh-and-blood person shouting values that horrify me.
Q16: What, if anything, could Republicans do to get you to vote for them?
A: Reject all nutty conspiracy theories. Fully, 100% commit to the peaceful transfer of power. Acknowledge the empirical reality of human-caused climate change, and the need for both technological and legislative measures to slow it and mitigate its impacts. Support abortion rights, or at least a European-style compromise on abortion. Republicans can keep the anti-wokeness stuff, which actually seems to have become their defining issue. If they do all that, and also the Democrats are taken over by frothing radicals who want to annihilate the state of Israel and abolish the police … that’s, uh, probably the point when I start voting Republican.
Q17: Aha, so you now admit that there exist conceivable circumstances that would cause you to vote Republican! In that case, why did you style yourself “Never-Trump From Here to Eternity”?
A: Tell you what, the day the Republicans (and Trump himself?) repudiate authoritarianism and start respecting election outcomes, is the day I’ll admit my title was hyperbolic.
Q18: In the meantime, will you at least treat us Trump supporters with civility and respect?
A: Not only does civil disagreement not compromise any of my values, it is a value to which I think we should all aspire. And to whatever extent I’ve fallen short of that ideal—even when baited into it—I’m sorry and I’ll try to do better. Certainly, age and experience have taught me that there’s hardly anyone so far gone that I can’t find something on which I agree with them, while disagreeing with most of the rest of the world.
The news these days feels apocalyptic to me—as if we’re living through, if not the last days of humanity, then surely the last days of liberal democracy on earth.
All the more reason to ignore all of that, then, and blog instead about the notorious Busy Beaver function! Because holy moly, what news have I got today. For lovers of this super-rapidly-growing sequence of integers, I'm honored to announce the biggest Busy Beaver development that there's been since 1983, when I slept in a crib and you booted up your computer using a 5.25-inch floppy. That was the year when Allen Brady determined that BusyBeaver(4) was equal to 107. (Tibor Radó, who invented the Busy Beaver function in the 1960s, quickly proved with his student Shen Lin that the first three values were 1, 6, and 21 respectively. The fourth value was harder.)
Only now, after an additional 41 years, do we know the fifth Busy Beaver value. Today, an international collaboration called bbchallenge is announcing that it’s determined, and even formally verified using the Coq proof system, that BB(5) is equal to 47,176,870—the value that’s been conjectured since 1990, when Heiner Marxen and Jürgen Buntrock discovered a 5-state Turing machine that runs for exactly 47,176,870 steps before halting, when started on a blank tape. The new bbchallenge achievement is to prove that all 5-state Turing machines that run for more steps than 47,176,870, actually run forever—or in other words, that 47,176,870 is the maximum finite number of steps for which any 5-state Turing machine can run. That’s what it means for BB(5) to equal 47,176,870.
For more on this story, see Ben Brubaker’s superb article in Quanta magazine, or bbchallenge’s own announcement. For more background on the Busy Beaver function, see my 2020 survey, or my 2017 big numbers lecture, or my 1999 big numbers essay, or the Googology Wiki page, or Pascal Michel’s survey.
The difficulty in pinning down BB(5) was not just that there are a lot of 5-state Turing machines (16,679,880,978,201 of them to be precise, although symmetries reduce the effective number). The real difficulty is, how do you prove that some given machine runs forever? If a Turing machine halts, you can prove that by simply running it on your laptop until it halts (at least if it halts after a “mere” ~47 million steps, which is child's play). If, on the other hand, the machine runs forever, via some never-repeating infinite pattern rather than a simple infinite loop, then how do you prove that? You need to find a mathematical reason why it can't halt, and there's no systematic method for finding such reasons—that was the great discovery of Gödel and Turing nearly a century ago.
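(To make “simply running it” concrete: here's a minimal Turing-machine simulator, a sketch of my own in Python rather than anything from bbchallenge's codebase. Instead of the 5-state champion, it runs the 2-state champion, which halts after exactly BB(2) = 6 steps.)

```python
# A minimal Turing-machine simulator (an illustrative sketch, not bbchallenge's code).
# A machine maps (state, symbol) -> (symbol to write, head direction, next state);
# 'H' is the halting state, and the tape starts blank (all zeros).

def run(machine, max_steps):
    tape = {}                  # sparse tape: position -> symbol, default 0
    state, pos, steps = 'A', 0, 0
    while state != 'H' and steps < max_steps:
        write, move, state = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == 'R' else -1
        steps += 1
    return steps if state == 'H' else None   # None: still running at the cutoff

# The 2-state Busy Beaver champion, which halts after exactly BB(2) = 6 steps.
bb2_champion = {
    ('A', 0): (1, 'R', 'B'), ('A', 1): (1, 'L', 'B'),
    ('B', 0): (1, 'L', 'A'), ('B', 1): (1, 'R', 'H'),
}

print(run(bb2_champion, 100))   # prints 6
```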
More precisely, the Busy Beaver function grows faster than any function that can be computed, and we know that because if a systematic method existed to compute arbitrary BB(n) values, then we could use that method to determine whether a given Turing machine halts (if the machine has n states, just check whether it runs for more than BB(n) steps; if it does, it must run forever). This is the famous halting problem, which Turing proved to be unsolvable by finite means. The Busy Beaver function is Turing-uncomputability made flesh, a finite function that scrapes the edge of infinity.
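(In code, that reduction is about three lines deep. Here's a sketch reusing the run function and bb2_champion from the snippet above, with a lookup table of the only five values humanity knows; ask about six states and you get a KeyError, which is roughly the present state of mathematics.)

```python
# Sketch of the reduction: if we could compute BB(n) for every n, the halting
# problem would be decidable. As of today, only five values are known.
KNOWN_BB = {1: 1, 2: 6, 3: 21, 4: 107, 5: 47_176_870}

def halts(machine, n_states):
    """Decide whether an n-state machine halts on a blank tape --
    but only for n <= 5, the current frontier of human knowledge."""
    bound = KNOWN_BB[n_states]             # KeyError for n >= 6: nobody knows
    return run(machine, bound + 1) is not None

print(halts(bb2_champion, 2))   # True: it halts within BB(2) = 6 steps
```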
There’s also a more prosaic issue. Proofs that particular Turing machines run forever tend to be mind-numbingly tedious. Even supposing you’ve found such a “proof,” why should other people trust it, if they don’t want to spend days staring at the outputs of your custom-written software?
And so for decades, a few hobbyists picked away at the BB(5) problem. One, who goes by the handle “Skelet”, managed to reduce the problem to 43 holdout machines whose halting status was still undetermined. Or maybe only 25, depending on who you asked? (And were we really sure about the machines outside those 43?)
The bbchallenge collaboration improved on the situation in two ways. First, it demanded that every proof of non-halting be vetted carefully. Going beyond even that mandate, a participant named “mxdys” later upped the standard to fully machine-verifiable certificates in Coq for every non-halting machine, so that there could no longer be any serious question of correctness. (This, in turn, was done via “deciders,” programs that were crafted to recognize a specific type of parameterized behavior.) Second, the collaboration used an online forum and a Discord server to organize the effort, so that everyone knew what had been done and what remained to be done.
Despite this, it was far from obvious a priori that the collaboration would succeed. What if, for example, one of the 43 (or however many) Turing machines in the holdout set turned out to encode the Goldbach Conjecture, or one of the other great unsolved problems of number theory? Then the final determination of BB(5) would need to await the resolution of that problem. (We do know, incidentally, that there’s a 27-state Turing machine that encodes Goldbach.)
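(What it means for a Turing machine to “encode” Goldbach is just that it halts if and only if the conjecture is false. Here's the same idea as a Python sketch, uncompressed from 27 states into a few readable lines; run it at your own risk, since if Goldbach is true it loops forever.)

```python
# A program that halts if and only if the Goldbach Conjecture is false.
# Knowing an upper bound on how long such a program can run before halting
# (if it ever does) would settle the conjecture -- which is the sense in
# which Busy Beaver values encode open problems of number theory.

def is_prime(k):
    if k < 2:
        return False
    return all(k % d for d in range(2, int(k ** 0.5) + 1))

n = 4
while any(is_prime(p) and is_prime(n - p) for p in range(2, n)):
    n += 2            # this even number is a sum of two primes; try the next
print(n, "is a counterexample to Goldbach!")   # unreachable, we hope
```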
But apparently the collaboration got lucky. Coq proofs of non-halting were eventually found for all the 5-state holdout machines.
As a sad sidenote, Allen Brady, who determined the value of BB(4), apparently died just a few days before the BB(5) proof was complete. He was doubtful that BB(5) would ever be known. The reason, he wrote in 1988, was that “Nature has probably embedded among the five-state holdout machines one or more problems as illusive as the Goldbach Conjecture. Or, in other terms, there will likely be nonstopping recursive patterns which are beyond our powers of recognition.”
Maybe I should say a little at this point about what the 5-state Busy Beaver—i.e., the Marxen-Buntrock Turing machine that we now know to be the champion—actually does. Interpreted in English, the machine iterates a certain integer function g, which is defined by
g(x) = (5x+18)/3 if x = 0 (mod 3),
g(x) = (5x+22)/3 if x = 1 (mod 3),
g(x) = HALT if x = 2 (mod 3).
Starting from x=0, the machine computes g(0), g(g(0)), g(g(g(0))), and so forth, halting if and when it reaches … well, HALT. The machine runs for millions of steps because it so happens that this iteration eventually reaches HALT, but only after a while:
0 → 6 → 16 → 34 → 64 → 114 → 196 → 334 → 564 → 946 → 1584 → 2646 → 4416 → 7366 → 12284 → HALT.
(And also, at each iteration, the machine runs for a number of steps that grows like the square of the number x.)
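(If you want to replay the march to HALT yourself, here's the arithmetic above as a few lines of Python; this is a sketch of the iterated map, not of the Turing machine that implements it.)

```python
# Iterate the Collatz-like map g from x = 0 and watch it reach HALT.
def g(x):
    if x % 3 == 0:
        return (5 * x + 18) // 3
    if x % 3 == 1:
        return (5 * x + 22) // 3
    return None                 # x = 2 (mod 3): HALT

x, orbit = 0, [0]
while g(x) is not None:
    x = g(x)
    orbit.append(x)
print(" -> ".join(map(str, orbit)) + " -> HALT")
# 0 -> 6 -> 16 -> 34 -> 64 -> 114 -> 196 -> 334 -> 564 -> 946
#   -> 1584 -> 2646 -> 4416 -> 7366 -> 12284 -> HALT
```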
Some readers might be reminded of the Collatz Conjecture, the famous unsolved problem about whether, if you repeatedly replace a positive integer x by x/2 if x is even or 3x+1 if x is odd, you’ll always eventually reach x=1. As Scott Alexander would say, this is not a coincidence because nothing is ever a coincidence. (Especially not in math!)
It’s a fair question whether humans will ever know the value of BB(6). Pavel Kropitz discovered, a couple years ago, that BB(6) is at least 10^10^10^10^10^10^10^10^10^10^10^10^10^10^10 (i.e., 10 raised to itself 15 times). Obviously Kropitz didn’t actually run a 6-state Turing machine for that number of steps until halting! Instead he understood what the machine did—and it turned out to apply an iterative process similar to the g function above, but this time involving an exponential function. And the process could be proven to halt after ~15 rounds of exponentiation.
Meanwhile Tristan Stérin, who coordinated the bbchallenge effort, tells me that a 6-state machine was recently discovered that “iterates the Collatz-like map {3x/2, (3x-1)/2} from the number 8 and halts if and only if the number of odd terms ever gets bigger than twice the number of even terms.” This shows that, in order to determine the value of BB(6), one would first need to prove or disprove the Collatz-like conjecture that that never happens.
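(Here's that iteration as a Python sketch, if you want to stare into the abyss yourself. I'm trusting Tristan's statement of the map and the starting value, and glossing over details like whether the starting term itself is counted.)

```python
# The Collatz-like map behind the new 6-state machine: start at 8; send even x
# to 3x/2 and odd x to (3x-1)/2. The machine halts iff the count of odd terms
# ever exceeds twice the count of even terms.
x, odd, even = 8, 0, 0
for step in range(100_000):                # the terms grow huge, so stay modest
    if x % 2 == 0:
        even, x = even + 1, 3 * x // 2
    else:
        odd, x = odd + 1, (3 * x - 1) // 2
    if odd > 2 * even:
        print(f"Halted at step {step}!")   # proving this never prints = progress on BB(6)
        break
else:
    print(f"No halt in 100,000 steps: odd={odd}, even={even}")
```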
Basically, if and when artificial superintelligences take over the world, they can worry about the value of BB(6). And then God can worry about the value of BB(7).
I first learned about the BB function in 1996, when I was 15 years old, from a book called The New Turing Omnibus by A. K. Dewdney. From what I gather, Dewdney would go on to become a nutty 9/11 truther. But that’s irrelevant to the story. What matters was that his book provided my first exposure to many of the key concepts of computer science, and probably played a role in my becoming a theoretical computer scientist at all.
And of all the concepts in Dewdney’s book, the one I liked the most was the Busy Beaver function. What a simple function! You could easily explain its definition to Archimedes, or Gauss, or any of the other great mathematicians of the past. And yet, by using it, you could name definite positive integers (BB(10), for example) incomprehensibly larger than any that they could name.
It was from Dewdney that I learned that the first four Busy Beaver numbers were the unthreatening-looking 1, 6, 21, and 107 … but then that the fifth value was already unknown (!!), and at any rate at least 47,176,870. I clearly remember wondering whether BB(5) would ever be known for certain, and even whether I might be the one to determine it. That was almost two-thirds of my life ago.
As things developed, I played no role whatsoever in the determination of BB(5) … except for this. Tristan Stérin tells me that reading my survey article, The Busy Beaver Frontier, was what inspired him to start and lead the bbchallenge collaboration that finally cracked the problem. It’s hard to express how gratified that makes me.
Why care about determining particular values of the Busy Beaver function? Isn’t this just a recreational programming exercise, analogous to code golf, rather than serious mathematical research?
I like to answer that question with another question: why care about humans landing on the moon, or Mars? Those otherwise somewhat arbitrary goals, you might say, serve as a hard-to-fake gauge of human progress against the vastness of the cosmos. In the same way, the quest to determine the Busy Beaver numbers is one concrete measure of human progress against the vastness of the arithmetical cosmos, a vastness that we learned from Gödel and Turing won’t succumb to any fixed procedure. The Busy Beaver numbers are just … there, Platonically, as surely as 13 was prime long before the first caveman tried to arrange 13 rocks into a nontrivial rectangle and failed. And yet we might never know the sixth of these numbers and only today learned the fifth.
Anyway, huge congratulations to the bbchallenge team on their accomplishment. At a terrifying time for the world, I’m happy that, whatever happens, at least I lived to see this.
Dana, the kids, and I got back to the US last week after a month spent in England and then Israel. We decided to visit Israel because … uhh, we heard there’s never been a better time.
We normally go every year to visit Dana’s family and our many friends there, and to give talks. Various well-meaning friends suggested that maybe we should cancel or postpone this year—given, you know, the situation. To me, though, the situation felt like all the more reason to go. To make Israel seem more and more embattled, dangerous, isolated, abnormal, like not an acceptable place to visit (much less live), in order to crater its economy, demoralize its population, and ultimately wipe it from the face of the earth … that is explicitly much of the world’s game plan right now, laid out with shocking honesty since October 7 (a day that also showed us what the “decolonization” will, concretely, look like). So, if I oppose this plan, then how could I look myself in the mirror while playing my tiny part in it? Shouldn’t I instead raise a middle finger to those who’d murder my family, and go?
Besides supporting our friends and relatives, though, I wanted to see the post-October-7 reality for myself, rather than just spending hours per day reading about it on social media. I wanted to form my own impression of the mood in Israel: fiercely determined? angry? hopeless? just carrying on like normal?
Anyway, in two meeting-packed weeks, mostly in Tel Aviv but also in Jerusalem, Haifa, and Be’er Sheva, I saw stuff that could support any of those narratives. A lot was as I’d expected, but not everything. In the rest of this post, I’ll share eleven observations:
(1) This presumably won’t shock anyone, but in post-October-7 Israel, you indeed can’t escape October 7. Everywhere you look, on every building, in every lobby, hanging from every highway overpass, there are hostage posters and “Bring Them Home Now” signs and yellow ribbons—starting at the airport, where every single passenger is routed through a long corridor of hostage posters, each one signed and decorated by the hostage’s friends and family. It sometimes felt as though Yad Vashem had expanded to encompass the entire country. Virtually everyone we talked to wanted to share their stories and opinions about the war, most of all their depression and anger. While there was also plenty of discussion about quantum error mitigation and watermarking of large language models and local family events, no one even pretended to ignore the war.
(2) Having said that, the morning after we landed, truthfully, the first thing that leapt out at me wasn’t anything to do with October 7, hostages, or Gaza. It was the sheer number of children playing outside, in any direction you looked. Full, noisy playgrounds on block after block. It’s one thing to know intellectually that Israel has by far the highest birthrate of any Western country, another to see it for yourself. The typical secular family probably has three kids; the typical Orthodox family has more. (The Arab population is of course also growing rapidly, both in Israel and in the West Bank and Gaza.) New apartment construction is everywhere you look in Tel Aviv, despite building delays caused by the war. And it all seems perfectly normal … unless you’ve lived your whole life in environments where 0.8 or 1.2 children per couple is the norm.
This, of course, has giant implications for anyone interested in Israel’s future. It’s like, a million Israeli leftists could get fed up and flee to the US or Canada or Switzerland, and Israel would still have a large and growing Jewish population—because having a big family is “just what people do” in a state that was founded to defy the Holocaust. In particular: anyone who dreams of dismantling the illegal, settler-colonial, fascist Zionist ethnostate, and freeing Palestine from river to sea, had better have some plan for what they’re going to do with all these millions of young Jews, who don’t appear to be going anywhere.
(3) The second thing I noticed was the heat—comparable to the Texas summer heat that we try to escape when possible. Because of the roasting sun, our own two pampered offspring mostly refused to go outside during daytime, and we mostly met friends indoors. I more than once had the dark thought that maybe Israel will survive Hamas, Hezbollah, Iran, and its own Jewish extremists … only to be finished off in the end (along with much of the rest of the planet) by global warming. I wonder whether Israel will manage to engineer its way out of the crisis, as it dramatically engineered its way out of its water crisis via desalination. The Arab petrostates have been trying to engineer their way out of the Middle East’s increasingly Mercury-like climate, albeit with decidedly mixed results.
(4) But nu, what did our Israeli friends say about the war? Of course it’s a biased sample, because our friends are mostly left-wing academics and tech workers. But, at risk of overgeneralizing: they’re unhappy. Very, very unhappy. As for Bibi and his far-right yes-men? Our friends’ rage at them was truly a sight to behold. American progressives are, like, mildly irked by Trump in comparison. Yes, our friends blame Bibi for the massive security and intelligence failures that allowed October 7 to happen. They blame him for dragging out the war to stave off elections. They blame him for empowering the contemptible Ben-Gvir and Smotrich. They blame him for his failure to bring back the remaining hostages. Most of all, they blame him for refusing even to meet with the hostage families, and more broadly, for evading responsibility for all that he did wrong, while arrogating credit for any victories (like the rescue of Noa Argamani).
(5) One Israeli friend offered to take me along to the giant anti-Bibi rally that now happens every Saturday night in Azrieli Center in Tel Aviv. (She added that, if I left before 9pm, it would reduce the chances of the police arresting me.) As the intrepid blogger-investigator I am, of course I agreed.
While many of the protesters simply called for new elections to replace Netanyahu (a cause that I 3000% support), others went further, demanding a deal to free the hostages and an immediate end to the war (even if, as they understood, that would leave Hamas in power).
Watching the protesters, smelling their pot smoke that filled the air, I was seized by a thought: these Israeli leftists actually see eye-to-eye with the anti-Israel American leftists on a huge number of issues. In a different world, they could be marching together as allies. Except, of course, for one giant difference: namely, the Tel Aviv protesters are proudly waving Israeli flags (sometimes modified to add anti-Bibi images, or to depict the Star of David “crying”), rather than burning or stomping on those flags. They’re marching to save the Israel that they know and remember, rather than to destroy it.
(6) We did meet one ultra-right-wing (and Orthodox) academic colleague. He was virtually the only person we met on this trip who seemed cheerful and optimistic about Israel’s future. He brought me to his synagogue to celebrate the holiday of Shavuot, while he himself stood guarding the door of the synagogue with a gargantuan rifle (his volunteer duty since October 7). He has six kids.
(7) Again and again, our secular liberal friends told us they’re thinking about moving from Israel, because if the Bibi-ists entrench their power (and of course the demographics are trending in that direction), then they don’t see that the country has any worthwhile future for them or their children. Should this be taken more seriously than the many Americans who promise that this time, for real, they’ll move to Canada if Trump wins? I’m not sure. I can only report what I heard.
(8) At the same time, again and again I got the following question from Israelis (including the leftist ones): how bad is the situation for Jews in the US? Have the universities been taken over by militant anti-Zionists, like it shows in the news? I had to answer: it’s complicated. Because I live my life enbubbled in the STEM field of computer science, surrounded by friends and colleagues of many backgrounds, ethnicities, religions, and political opinions who are thoughtful and decent (otherwise, why would they be my friends and colleagues?), I’m able to live a very nice life even in the midst of loud protesters calling to globalize the intifada against my family.
If, on the other hand, I were in a typical humanities department? Yeah, then I’d be pretty terrified. My basic options would be to (a) shut up about my (ironically) moderate, middle-of-the-road opinions on Israel/Palestine, such as support for the two-state solution; (b) live a miserable and embattled existence; or (c) pack up and move, for example to Israel.
An astounding irony right now is that, just as Israeli leftists are talking about moving from Israel, some of my American Jewish friends have talked to me about moving to Israel, to escape a prejudice that they thought died with their grandparents. I don’t know where the grass is actually greener (or is it brown everywhere?). Nor do I know how many worriers will actually follow through. What’s clear is that, both in Israel and in the diaspora, Jews are feeling an existential fear that they haven’t felt for generations.
(9) Did I fear for my own family’s safety during the trip? Not really. Maybe I should have. When we visited Haifa, we found that GPS was scrambled all across northern Israel, to make targeting harder for Hezbollah missiles. As a result, we couldn’t use Google Maps, got completely lost driving, and had to change plans with our friends. For the first time, now I really feel angry at Hezbollah: they made my life worse and it’s personal!
The funniest part, though, was how the scrambling was implemented: when you opened Google Maps anywhere in the north, it told you that you were in Beirut. It then dutifully gave you walking or driving directions to wherever you were going in Israel, passing through Syria close to Damascus (“warning: this route passes through multiple countries”).
(10) The most darkly comical thing that I heard on the entire trip: “oh, no, I don’t object in the slightest if the anti-Zionists want to kill us all. I only object if they want to kill us because of an incorrect understanding of the relevant history.” Needless to say, this was a professor.
(11) After my two-week investigation, what grand insight can I offer about Israel’s future? Not much, but maybe this: I think we can definitively rule out the scenario where Israel, having been battered by October 7, and bracing itself to be battered worse by Hezbollah, just sort of … withers away and disappears. Yes, Israel might get hotter, more crowded, more dangerous, more right-wing, and more Orthodox. But it will stay right where it is, unless and until its enemies destroy it in a cataclysmic war. You can’t scare people away, break their will, if they believe they have nowhere else on the planet to go. You can only kill them or else live next to them in peace, as the UN proposed in 1947 and as Oslo proposed in the 1990s. May we live to see peace.
Anyway, on that pleasant note, time soon to tune in to the Trump/Biden debate! I wonder who these two gentlemen are, and what they might stand for?
(See here for Boaz Barak’s obituary, and here for Lance Fortnow’s—they cover different aspects of Luca’s legacy from each other and from this post. Also, click here to register for a free online TCS4All talk that Luca was scheduled to give, and that will now be given in his memory, this Monday at 3:30pm Eastern time.)
Luca Trevisan, one of the world’s leading theoretical computer scientists, has succumbed to cancer in Italy, at only 52 years old. I was privileged to know Luca for a quarter-century, first as my complexity theory and cryptography professor at UC Berkeley and as a member of my dissertation committee, and then as a friend and colleague and fellow CS theory blogger.
I regret that I learned of the seriousness of Luca’s condition only a few days ago. So yesterday morning I wrote him a farewell email, under the impression that, while he was now in hospice care, he had at least a few more weeks. Alas, he probably never saw it. So I’m hereby making the email into a memorial post, with small changes mostly to protect people’s privacy.
Dear Luca,
Dana, the kids, and I were traveling in Israel for the past two weeks, when I received the shocking and sad news that this might be my last chance to write to you.
At risk of stating the obvious — you had a very large and positive effect on my life and career. Starting with the complexity theory summer school at the Institute for Advanced Study in 2000, which was the first time we met and also the first time I really experienced the glories of complexity at full blast. And then continuing at Berkeley, TA’ing your algorithms class, which you had to cancel on 9/11 (although students still somehow showed up for office hours lugging their CLRS books…), and dealing with that student who obviously cheated on the midterm although I had stupidly given back to her the evidence that would prove it.
And then your graduate complexity course, where I was very proud to get 100% on your exam, having handwritten it on a train while everyone else used LaTeX (which, embarrassingly, I was still learning). I was a bit less proud to present the Razborov-Rudich paper to the class, and to get questions from you that proved that I understood it less thoroughly than I thought. I emerged from your course far better prepared to do complexity theory than when I entered it.
Later I took your cryptography course, where I came to you afterwards one day to point out that with a quantum computer, you could pull out big Fourier coefficients without all the bother of the Goldreich-Levin theorem. And you said sure, but then you would need a quantum computer. Over 20 years later, Goldreich and Levin (and you?) can say with satisfaction that we still don’t have that scalable quantum computer … but we’re much much closer, I swear!
I still feel bad about the theory lunch talk I gave in 2003, on my complexity-theoretic version of Aumann’s agreement theorem, where I used you and Umesh as characters instead of Alice and Bob, and which then led to unintended references to “Luca’s posterior” (probability distribution, I meant).
I also feel bad about delaying so long the completion of my PhD thesis, until well after I’d started my postdoc in Princeton, so that my former officemate needed to meet you on a street corner in San Francisco to sign the signature page the night before the deadline.
But then a few years later, when Avi and I did the algebrization paper, the fact that you seemed to like it mattered more to me than just about anything else.
Thank you for the excellent dinner when I met you some years ago in Rome. Thank you for the Trevisan-Tulsiani-Vadhan paper, which answered a question we had about BosonSampling (and you probably didn’t even know you were doing quantum computing when you wrote that paper!). Thank you for your blog. Thank you for everything you did for me.
I always enjoyed your dry humor, much of which might sadly be lost to time, unless others wrote it down or it’s on YouTube or something. Two examples spring to my mind across the decades:
“From my previous lecture, you may have gotten the impression that everything in derandomization is due to Nisan and Wigderson, but this is not the case: Avi has been working with other people as well.”
After I’d explained that I’d be spending a semester in Jerusalem to work with Avi, despite (at that time) knowing only the most rudimentary Hebrew, such as how to say “please” and “excuse me”: “you mean there are words in Hebrew for ‘please’ and ‘excuse me’?”
Speaking of which, my current trip to Israel has given me many opportunities to reflect on mortality — for all the obvious war-related reasons of course, but also because while we were here, we unexpectedly had to attend two shivas of people in our social circle who died during our trip, one of them from cancer. And we learned about a close friend whose stepson has a brain tumor and might or might not make it. Cancer is a bitch.
Anyway, there’s much more I could write, but I imagine you’re getting flooded with emails right now from all the people whose lives you’ve touched, so I won’t take up more of your time. You’ve made a real difference to the world, to theoretical computer science, and to your friends and colleagues, one that many people would envy.
Best,
Scott
The Fabric of Reality
Deutsch D.
D. Deutsch. The Fabric of Reality. 1997.
PENGUIN BOOKS
THE FABRIC OF REALITY
Born in Haifa, Israel, David Deutsch was educated at Cambridge University and Oxford University. He is a member of the Quantum Computation and Cryptography Research Group at the Clarendon Laboratory, Oxford University. His papers on quantum computation laid the foundations for that field, and he is an authority on the theory of parallel universes.
Praise for The Fabric of Reality
"Full of refreshingly oblique, provocative insights ... Quantum mechanics, Deutsch insists, must be taken not just as a predictive tool, but as an explanation for how the world really works."
— George Johnson, The New York Times
"David Deutsch is a deeply knowledgeable professional physicist who has no truck with mystical false analogies ... [he] has become the most eloquent spokesman of the Many Universes interpretation of quantum behavior, and [The Fabric of Reality] makes this theme coherent with some well-thought-out views of epistemology, of biological evolution, and of the theory of computation."
— Richard Dawkins
"In the library of physics for laypeople, Deutsch's book is unique. Correction: it is multiversal, existing in innumerable universes that Deutsch argues exist alongside the ‘real’ universe that people perceive. Explaining that, and persuading the reader of its scientific truth, makes this work unique ... the confidence with which Deutsch presents his views, and the absence of condescension in his style, accesses nonscientists to his seemingly alien world(s)."
— ALA Booklist
"David Deutsch is one of Britain's most original thinkers. In this major work he confronts the deepest questions of existence head on, challenging traditional notions of reality with a new worldview that interweaves physics, biology, computing, and philosophy. I haven't been so inspired since I read Douglas Hofstadter's Gödel, Escher, Bach."
— Paul Davies, author of About Time: Einstein's
Unfinished Revolution
"Deutsch provides a model of reality that is as provocative as it is complex. ... An intellectually stimulating read for the science-literate and motivated layperson.... The author exhibits... a thorough knowledge of his subject matter. ... In a field where scientific inquiry challenges not only our imagination but basic assumptions about our physical world, this volume provides the essential information needed for future debates."
— Publishers Weekly
The Fabric of Reality
The Science of Parallel Universes
— and Its Implications
DAVID DEUTSCH
PENGUIN BOOKS
PENGUIN BOOKS
Published by the Penguin Group
Penguin Group (USA) Inc., 375 Hudson Street, New York, New York 10014, U.S.A.
Penguin Books Ltd, 80 Strand, London WC2R ORL, England
Penguin Books Australia Ltd, 250 Camberwell Road, Camberwell, Victoria 3124, Australia
Penguin Books Canada Ltd, 10 Alcorn Avenue, Toronto, Ontario, Canada M4V 3B2
Penguin Books India (P) Ltd, 11 Community Centre, Panchsheel Park, New Delhi — 110 017, India
Penguin Books (N.Z.) Ltd, Cnr Rosedale and Airborne Roads, Albany, Auckland, New Zealand
Penguin Books (South Africa) (Pty) Ltd, 24 Sturdee Avenue,
Rosebank, Johannesburg 2196, South Africa
Penguin Books Ltd, Registered Offices: 80 Strand, London WC2R ORL, England
First published in Great Britain by Allen Lane The Penguin Press,
Penguin Books Ltd. 1997
First published in the United States of America by Allen Lane The Penguin Press,
an imprint of Viking Penguin, a member of Penguin Putnam Inc., 1997
Published in Penguin Books 1998
Copyright © David Deutsch, 1997
All rights reserved
THE LIBRARY OF CONGRESS HAS CATALOGUED
THE AMERICAN HARDCOVER EDITION AS FOLLOWS:
Deutsch, David. The fabric of reality / David Deutsch.
p. cm.
Includes bibliographical references and index.
ISBN 0-7139-9061-9 (hc.)
ISBN 0 14 02.7541 X (pbk.)
1. Reality. 2. Physics—Philosophy. 3. Life. 4. Cosmology. I. Title.
QC6.4.R42D48 1997 530'.01—dc21 97-6171
Printed in the United States of America
Set in Monotype Sabon
Figures drawn by Nigel Andrews
Except in the United States of America, this book is sold subject to the
condition that it shall not, by way of trade or otherwise, be lent, re-sold, hired
out, or otherwise circulated without the publisher's prior consent in any form of
binding or cover other than that in which it is published and without a similar
condition including this condition being imposed on the subsequent purchaser.
Dedicated to the memory of Karl Popper, Hugh Everett and
Alan Turing, and to Richard Dawkins. This book takes their
ideas seriously.
Contents
Preface ix
Acknowledgements x
1 The Theory of Everything 1
2 Shadows 32
3 Problem-solving 55
4 Criteria for Reality 73
5 Virtual Reality 98
6 Universality and the Limits of Computation 123
7 A Conversation About Justification 141
8 The Significance of Life 167
9 Quantum Computers 194
10 The Nature of Mathematics 222
11 Time: The First Quantum Concept 258
12 Time Travel 289
13 The Four Strands 321
14 The Ends of the Universe 344
Bibliography 367
Index 371
Preface
If there is a single motivation for the world-view set out in this book, it is that thanks largely to a succession of extraordinary scientific discoveries, we now possess some extremely deep theories about the structure of reality. If we are to understand the world on more than a superficial level, it must be through those theories and through reason, and not through our preconceptions, received opinion or even common sense. Our best theories are not only truer than common sense, they make far more sense than common sense does. We must take them seriously, not merely as pragmatic foundations for their respective fields but as explanations of the world. And I believe that we can achieve the greatest understanding if we consider them not singly but jointly, for they are inextricably related.
It may seem odd that this suggestion — that we should try to form a rational and coherent world-view on the basis of our best, most fundamental theories — should be at all novel or controversial. Yet in practice it is. One reason is that each of these theories has, when it is taken seriously, very counter-intuitive implications. Consequently, all sorts of attempts have been made to avoid facing those implications, by making ad hoc modifications or reinterpretations of the theories, or by arbitrarily narrowing their domain of applicability, or simply by using them in practice but drawing no wider conclusions from them. I shall criticize some of these attempts (none of which, I believe, has much merit), but only when this happens to be a convenient way of explaining the theories themselves. For this book is not primarily a defence of these theories: it is an investigation of what the fabric of reality would be like if they were true.
Acknowledgements
The development of the ideas in this book was greatly assisted by conversations with Bryce DeWitt, Artur Ekert, Michael Lockwood, Enrico Rodrigo, Dennis Sciama, Frank Tipler, John Wheeler and Kolya Wolf.
I am grateful to my friends and colleagues Ruth Chang, Artur Ekert, David Johnson-Davies, Michael Lockwood, Enrico Rodrigo and Kolya Wolf, to my mother Tikvah Deutsch, and to my editors Caroline Knight and Ravi Mirchandani (of Penguin Books) and John Woodruff, and especially to Sarah Lawrence, for their thorough, critical reading of earlier drafts of this book, and for suggesting many corrections and improvements. I am also grateful to those who have read and commented on parts of the manuscript, including Harvey Brown, Steve Graham, Rossella Lupaccini, Svein Olav Nyberg, Oliver and Harriet Strimpel, and especially Richard Dawkins and Frank Tipler.
1
The Theory of Everything
I remember being told, when I was a small child, that in ancient times it was still possible for a very learned person to know everything that was known. I was also told that nowadays so much is known that no one could conceivably learn more than a tiny fraction of it, even in a long lifetime. The latter proposition surprised and disappointed me. In fact, I refused to believe it. I did not know how to justify my disbelief. But I knew that I did not want things to be like that, and I envied the ancient scholars.
It was not that I wanted to memorize all the facts that were listed in the world's encyclopaedias: on the contrary, I hated memorizing facts. That is not the sense in which I expected it to be possible to know everything that was known. It would not have disappointed me to be told that more publications appear every day than anyone could read in a lifetime, or that there are 600,000 known species of beetle. I had no wish to track the fall of every sparrow. Nor did I imagine that an ancient scholar who supposedly knew everything that was known would have known everything of that sort. I had in mind a more discriminating idea of what should count as being known. By ‘known’, I meant understood.
The idea that one person might understand everything that is understood may still seem fantastic, but it is distinctly less fantastic than the idea that one person could memorize every known fact. For example, no one could possibly memorize all known observational data on even so narrow a subject as the motions of the planets, but many astronomers understand those motions to the full extent that they are understood. This is possible because {1} understanding does not depend on knowing a lot of facts as such, but on having the right concepts, explanations and theories. One comparatively simple and comprehensible theory can cover an infinity of indigestible facts. Our best theory of planetary motions is Einstein's general theory of relativity, which early in the twentieth century superseded Newton's theories of gravity and motion. It correctly predicts, in principle, not only all planetary motions but also all other effects of gravity to the limits of accuracy of our best measurements. For a theory to predict something ‘in principle’ means that the predictions follow logically from the theory, even if in practice the amount of computation that would be needed to generate some of the predictions is too large to be technologically feasible, or even too large for it to be physically possible for us to carry it out in the universe as we find it.
Being able to predict things or to describe them, however accurately, is not at all the same thing as understanding them. Predictions and descriptions in physics are often expressed as mathematical formulae. Suppose that I memorize the formula from which I could, if I had the time and the inclination, calculate any planetary position that has been recorded in the astronomical archives. What exactly have I gained, compared with memorizing those archives directly? The formula is easier to remember — but then, looking a number up in the archives may be even easier than calculating it from the formula. The real advantage of the formula is that it can be used in an infinity of cases beyond the archived data, for instance to predict the results of future observations. It may also yield the historical positions of the planets more accurately, because the archived data contain observational errors. Yet even though the formula summarizes infinitely more facts than the archives do, knowing it does not amount to understanding planetary motions. Facts cannot be understood just by being summarized in a formula, any more than by being listed on paper or committed to memory. They can be understood only by being explained. Fortunately, our best theories embody deep explanations as well as accurate predictions. For example, the general theory of relativity explains gravity in terms of a new, four-dimensional geometry of {2} curved space and time. It explains precisely how this geometry affects and is affected by matter. That explanation is the entire content of the theory; predictions about planetary motions are merely some of the consequences that we can deduce from the explanation.
What makes the general theory of relativity so important is not that it can predict planetary motions a shade more accurately than Newton's theory can, but that it reveals and explains previously unsuspected aspects of reality, such as the curvature of space and time. This is typical of scientific explanation. Scientific theories explain the objects and phenomena of our experience in terms of an underlying reality which we do not experience directly. But the ability of a theory to explain what we experience is not its most valuable attribute. Its most valuable attribute is that it explains the fabric of reality itself. As we shall see, one of the most valuable, significant and also useful attributes of human thought generally is its ability to reveal and explain the fabric of reality.
Yet some philosophers — and even some scientists — disparage the role of explanation in science. To them, the basic purpose of a scientific theory is not to explain anything, but to predict the outcomes of experiments: its entire content lies in its predictive formulae. They consider that any consistent explanation that a theory may give for its predictions is as good as any other — or as good as no explanation at all — so long as the predictions are true. This view is called instrumentalism (because it says that a theory is no more than an ‘instrument’ for making predictions). To instrumentalists, the idea that science can enable us to understand the underlying reality that accounts for our observations is a fallacy and a conceit. They do not see how anything a scientific theory may say beyond predicting the outcomes of experiments can be more than empty words. Explanations, in particular, they regard as mere psychological props: a sort of fiction which we incorporate in theories to make them more easily remembered and entertaining. The Nobel prize-winning physicist Steven Weinberg was in instrumentalist mood when he made the following extraordinary comment about Einstein's explanation of gravity: {3}
The important thing is to be able to make predictions about images on the astronomers’ photographic plates, frequencies of spectral lines, and so on, and it simply doesn't matter whether we ascribe these predictions to the physical effects of gravitational fields on the motion of planets and photons [as in pre-Einsteinian physics] or to a curvature of space and time. (Gravitation and Cosmology, p. 147)
Weinberg and the other instrumentalists are mistaken. What we ascribe the images on astronomers’ photographic plates to does matter, and it matters not only to theoretical physicists like myself, whose very motivation for formulating and studying theories is the desire to understand the world better. (I am sure that this is Weinberg's motivation too: he is not really driven by an urge to predict images and spectra!) For even in purely practical applications, the explanatory power of a theory is paramount and its predictive power only supplementary. If this seems surprising, imagine that an extraterrestrial scientist has visited the Earth and given us an ultra-high-technology ‘oracle’ which can predict the outcome of any possible experiment, but provides no explanations. According to instrumentalists, once we had that oracle we should have no further use for scientific theories, except as a means of entertaining ourselves. But is that true? How would the oracle be used in practice? In some sense it would contain the knowledge necessary to build, say, an interstellar spaceship. But how exactly would that help us to build one, or to build another oracle of the same kind — or even a better mousetrap? The oracle only predicts the outcomes of experiments. Therefore, in order to use it at all we must first know what experiments to ask it about. If we gave it the design of a spaceship, and the details of a proposed test flight, it could tell us how the spaceship would perform on such a flight. But it could not design the spaceship for us in the first place. And even if it predicted that the spaceship we had designed would explode on take-off, it could not tell us how to prevent such an explosion. That would still be for us to work out. And before we could work it out, before we could even begin to improve the design in any way, we should have to understand, among other things, how the {4} spaceship was supposed to work. Only then would we have any chance of discovering what might cause an explosion on take-off. Prediction — even perfect, universal prediction — is simply no substitute for explanation.
Similarly, in scientific research the oracle would not provide us with any new theory. Not until we already had a theory, and had thought of an experiment that would test it, could we possibly ask the oracle what would happen if the theory were subjected to that test. Thus, the oracle would not be replacing theories at all: it would be replacing experiments. It would spare us the expense of running laboratories and particle accelerators. Instead of building prototype spaceships, and risking the lives of test pilots, we could do all the testing on the ground with pilots sitting in flight simulators whose behaviour was controlled by the predictions of the oracle.
The oracle would be very useful in many situations, but its usefulness would always depend on people's ability to solve scientific problems in just the way they have to now, namely by devising explanatory theories. It would not even replace all experimentation, because its ability to predict the outcome of a particular experiment would in practice depend on how easy it was to describe the experiment accurately enough for the oracle to give a useful answer, compared with doing the experiment in reality. After all, the oracle would have to have some sort of ‘user interface’. Perhaps a description of the experiment would have to be entered into it, in some standard language. In that language, some experiments would be harder to specify than others. In practice, for many experiments the specification would be too complex to be entered. Thus the oracle would have the same general advantages and disadvantages as any other source of experimental data, and it would be useful only in cases where consulting it happened to be more convenient than using other sources. To put that another way: there already is one such oracle out there, namely the physical world. It tells us the result of any possible experiment if we ask it in the right language (i.e. if we do the experiment), though in some cases it is impractical for us to ‘enter a description of the experiment’ in the {5} required form (i.e. to build and operate the apparatus). But it provides no explanations.
In a few applications, for instance weather forecasting, we may be almost as satisfied with a purely predictive oracle as with an explanatory theory. But even then, that would be strictly so only if the oracle's weather forecast were complete and perfect. In practice, weather forecasts are incomplete and imperfect, and to make up for that they include explanations of how the forecasters arrived at their predictions. The explanations allow us to judge the reliability of a forecast and to deduce further predictions relevant to our own location and needs. For instance, it makes a difference to me whether today's forecast that it will be windy tomorrow is based on an expectation of a nearby high-pressure area, or of a more distant hurricane. I would take more precautions in the latter case. Meteorologists themselves also need explanatory theories about weather so that they can guess what approximations it is safe to incorporate in their computer simulations of the weather, what additional observations would allow the forecast to be more accurate and more timely, and so on.
Thus the instrumentalist ideal epitomized by our imaginary oracle, namely a scientific theory stripped of its explanatory content, would be of strictly limited utility. Let us be thankful that real scientific theories do not resemble that ideal, and that scientists in reality do not work towards that ideal.
An extreme form of instrumentalism, called positivism (or logical positivism), holds that all statements other than those describing or predicting observations are not only superfluous but meaningless. Although this doctrine is itself meaningless, according to its own criterion, it was nevertheless the prevailing theory of scientific knowledge during the first half of the twentieth century! Even today, instrumentalist and positivist ideas still have currency. One reason why they are superficially plausible is that, although prediction is not the purpose of science, it is part of the characteristic method of science. The scientific method involves postulating a new theory to explain some class of phenomena and then performing a crucial experimental test, an experiment for which the old theory predicts {6} one observable outcome and the new theory another. One then rejects the theory whose predictions turn out to be false. Thus the outcome of a crucial experimental test to decide between two theories does depend on the theories’ predictions, and not directly on their explanations. This is the source of the misconception that there is nothing more to a scientific theory than its predictions. But experimental testing is by no means the only process involved in the growth of scientific knowledge. The overwhelming majority of theories are rejected because they contain bad explanations, not because they fail experimental tests. We reject them without ever bothering to test them. For example, consider the theory that eating a kilogram of grass is a cure for the common cold. That theory makes experimentally testable predictions: if people tried the grass cure and found it ineffective, the theory would be proved false. But it has never been tested and probably never will be, because it contains no explanation — either of how the cure would work, or of anything else. We rightly presume it to be false. There are always infinitely many possible theories of that sort, compatible with existing observations and making new predictions, so we could never have the time or resources to test them all. What we test are new theories that seem to show promise of explaining things better than the prevailing ones do.
To say that prediction is the purpose of a scientific theory is to confuse means with ends. It is like saying that the purpose of a spaceship is to burn fuel. In fact, burning fuel is only one of many things a spaceship has to do to accomplish its real purpose, which is to transport its payload from one point in space to another. Passing experimental tests is only one of many things a theory has to do to achieve the real purpose of science, which is to explain the world.
As I have said, explanations are inevitably framed partly in terms of things we do not observe directly: atoms and forces; the interiors of stars and the rotation of galaxies; the past and the future; the laws of nature. The deeper an explanation is, the more remote from immediate experience are the entities to which it must refer. {7} But these entities are not fictional: on the contrary, they are part of the very fabric of reality.
Explanations often yield predictions, at least in principle. Indeed, if something is, in principle, predictable, then a sufficiently complete explanation must, in principle, make complete predictions (among other things) about it. But many intrinsically unpredictable things can also be explained and understood. For example, you cannot predict what numbers will come up on a fair (i.e. unbiased) roulette wheel. But if you understand what it is in the wheel's design and operation that makes it fair, then you can explain why predicting the numbers is impossible. And again, merely knowing that the wheel is fair is not the same as understanding what makes it fair.
It is understanding, and not mere knowing (or describing or predicting), that I am discussing. Because understanding comes through explanatory theories, and because of the generality that such theories may have, the proliferation of recorded facts does not necessarily make it more difficult to understand everything that is understood. Nevertheless most people would say — and this is in effect what was being said to me on the occasion I recalled from my childhood — that it is not only recorded facts which have been increasing at an overwhelming rate, but also the number and complexity of the theories through which we understand the world. Consequently (they say), whether or not it was ever possible for one person to understand everything that was understood at the time, it is certainly not possible now, and it is becoming less and less possible as our knowledge grows. It might seem that every time a new explanation or technique is discovered that is relevant to a given subject, another theory must be added to the list that anyone wishing to understand that subject must learn; and that when the number of such theories in any one subject becomes too great, specializations develop. Physics, for example, has split into the sciences of astrophysics, thermodynamics, particle physics, quantum field theory, and many others. Each of these is based on a theoretical framework at least as rich as the whole of physics was a hundred years ago, and many are already fragmenting into sub-specializations. The more we discover, it seems, the further {8} and more irrevocably we are propelled into the age of the specialist, and the more remote is that hypothetical ancient time when a single person's understanding might have encompassed all that was understood.
Confronted with this vast and rapidly growing menu of the collected theories of the human race, one may be forgiven for doubting that an individual could so much as taste every dish in a lifetime, let alone, as might once have been possible, appreciate all known recipes. Yet explanation is a strange sort of food — a larger portion is not necessarily harder to swallow. A theory may be superseded by a new theory which explains more, and is more accurate, but is also easier to understand, in which case the old theory becomes redundant, and we gain more understanding while needing to learn less than before. That is what happened when Nicolaus Copernicus's theory of the Earth travelling round the Sun superseded the complex Ptolemaic system which had placed the Earth at the centre of the universe. Or a new theory may be a simplification of an existing one, as when the Arabic (decimal) notation for numbers superseded Roman numerals. (The theory here is an implicit one. Each notation renders certain operations, statements and thoughts about numbers simpler than others, and hence it embodies a theory about which relationships between numbers are useful or interesting.) Or a new theory may be a unification of two old ones, giving us more understanding than using the old ones side by side, as happened when Michael Faraday and James Clerk Maxwell unified the theories of electricity and magnetism into a single theory of electromagnetism. More indirectly, better explanations in any subject tend to improve the techniques, concepts and language with which we are trying to understand other subjects, and so our knowledge as a whole, while increasing, can become structurally more amenable to being understood.
Admittedly, it often happens that even when old theories are thus subsumed into new ones, the old ones are not entirely forgotten. Even Roman numerals are still used today for some purposes. The cumbersome methods by which people once calculated that {9} XIX times XVII equals CCCXXIII are never applied in earnest any more, but they are no doubt still known and understood somewhere — by historians of mathematics for instance. Does this mean that one cannot understand ‘everything that is understood’ without knowing Roman numerals and their arcane arithmetic? It does not. A modern mathematician who for some reason had never heard of Roman numerals would nevertheless already possess in full the understanding of their associated mathematics. By learning about Roman numerals, that mathematician would be acquiring no new understanding, only new facts — historical facts, and facts about the properties of certain arbitrarily defined symbols, rather than new knowledge about numbers themselves. It would be like a zoologist learning to translate the names of species into a foreign language, or an astrophysicist learning how different cultures group stars into constellations.
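As an aside that makes the contrast concrete, the whole Roman system reduces to a small symbol table. The sketch below is illustrative only (the table and function names are my assumptions, not something from the text): it translates the notation and confirms that XIX times XVII is indeed CCCXXIII — facts about symbols, with all the understanding supplied by ordinary arithmetic.

```python
# Illustrative sketch: the Roman system as a mere symbol table.
ROMAN = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
         (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
         (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def from_roman(numeral: str) -> int:
    """Parse a well-formed Roman numeral by matching symbols from largest to smallest."""
    total, i = 0, 0
    for value, symbol in ROMAN:
        while numeral.startswith(symbol, i):
            total += value
            i += len(symbol)
    return total

def to_roman(n: int) -> str:
    """Encode a positive integer using the same symbol table."""
    parts = []
    for value, symbol in ROMAN:
        while n >= value:
            parts.append(symbol)
            n -= value
    return "".join(parts)

assert from_roman("XIX") == 19 and from_roman("XVII") == 17
assert to_roman(from_roman("XIX") * from_roman("XVII")) == "CCCXXIII"
```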
It is a separate issue whether knowing the arithmetic of Roman numerals might be necessary in the understanding of history. Suppose that some historical theory — some explanation — depended on the specific techniques used by the ancient Romans for multiplication (rather as, for instance, it has been conjectured that their specific plumbing techniques, based on lead pipes, which poisoned their drinking water, contributed to the decline of the Roman Empire). Then we should have to know what those techniques were if we wanted to understand history, and therefore also if we wanted to understand everything that is understood. But in the event, no current explanation of history draws upon multiplication techniques, so our records of those techniques are mere statements of facts. Everything that is understood can be understood without learning those facts. We can always look them up when, for instance, we are deciphering an ancient text that mentions them.
In continually drawing a distinction between understanding and ‘mere’ knowing, I do not want to understate the importance of recorded, non-explanatory information. This is of course essential to everything from the reproduction of a micro-organism (which has such information in its DNA molecules) to the most abstract human thinking. So what distinguishes understanding from mere {10} knowing? What is an explanation, as opposed to a mere statement of fact such as a correct description or prediction? In practice, we usually recognize the difference easily enough. We know when we do not understand something, even if we can accurately describe and predict it (for instance, the course of a known disease of unknown origin), and we know when an explanation helps us to understand it better. But it is hard to give a precise definition of ‘explanation’ or ‘understanding’. Roughly speaking, they are about ‘why’ rather than ‘what’; about the inner workings of things; about how things really are, not just how they appear to be; about what must be so, rather than what merely happens to be so; about laws of nature rather than rules of thumb. They are also about coherence, elegance and simplicity, as opposed to arbitrariness and complexity, though none of those things is easy to define either. But in any case, understanding is one of the higher functions of the human mind and brain, and a unique one. Many other physical systems, such as animals’ brains, computers and other machines, can assimilate facts and act upon them. But at present we know of nothing that is capable of understanding an explanation — or of wanting one in the first place — other than a human mind. Every discovery of a new explanation, and every act of grasping an existing explanation, depends on the uniquely human faculty of creative thought.
One can think of what happened to Roman numerals as a process of ‘demotion’ of an explanatory theory to a mere description of facts. Such demotions happen all the time as our knowledge grows. Originally, the Roman system of numerals did form part of the conceptual and theoretical framework through which the people who used them understood the world. But now the understanding that used to be obtained in that way is but a tiny facet of the far deeper understanding embodied in modern mathematical theories, and implicitly in modern notations.
This illustrates another attribute of understanding. It is possible to understand something without knowing that one understands it, or even without having specifically heard of it. This may sound paradoxical, but of course the whole point of deep, general explanations is that they cover unfamiliar situations as well as familiar {11} ones. If you were a modern mathematician encountering Roman numerals for the first time, you might not instantly realize that you already understood them. You would first have to learn the facts about what they are, and then think about those facts in the light of your existing understanding of mathematics. But once you had done that, you would be able to say, in retrospect, ‘Yes, there is nothing new to me in the Roman system of numerals, beyond mere facts.’ And that is what it means to say that Roman numerals, in their explanatory role, are fully obsolete.
Similarly, when I say that I understand how the curvature of space and time affects the motions of planets, even in other solar systems I may never have heard of, I am not claiming that I can call to mind, without further thought, the explanation of every detail of the loops and wobbles of any planetary orbit. What I mean is that I understand the theory that contains all those explanations, and that I could therefore produce any of them in due course, given some facts about a particular planet. Having done so, I should be able to say in retrospect, ‘Yes, I see nothing in the motion of that planet, other than mere facts, which is not explained by the general theory of relativity.’ We understand the fabric of reality only by understanding theories that explain it. And since they explain more than we are immediately aware of, we can understand more than we are immediately aware that we understand.
I am not saying that when we understand a theory it necessarily follows that we understand everything it can explain. With a very deep theory, the recognition that it explains a given phenomenon may itself be a significant discovery requiring independent explanation. For example, quasars — extremely bright sources of radiation at the centre of some galaxies — were for many years one of the mysteries of astrophysics. It was once thought that new physics would be needed to explain them, but now we believe that they are explained by the general theory of relativity and other theories that were already known before quasars were discovered. We believe that quasars consist of hot matter in the process of falling into black holes (collapsed stars whose gravitational field is so intense that nothing can escape from them). Yet reaching that {12} conclusion has required years of research, both observational and theoretical. Now that we believe we have gained a measure of understanding of quasars, we do not think that this understanding is something we already had before. Explaining quasars, albeit through existing theories, has given us genuinely new understanding. Just as it is hard to define what an explanation is, it is hard to define when a subsidiary explanation should count as an independent component of what is understood, and when it should be considered as being subsumed in the deeper theory. It is hard to define, but not so hard to recognize: as with explanations in general, in practice we know a new explanation when we are given one. Again, the difference has something to do with creativity. Explaining the motion of a particular planet, when one already understands the general explanation of gravity, is a mechanical task, though it may be a very complex one. But using existing theory to account for quasars requires creative thought. Thus, to understand everything that is understood in astrophysics today, you would have to know the theory of quasars explicitly. But you would not have to know the orbit of any specific planet.
So, even though our stock of known theories is indeed snowballing, just as our stock of recorded facts is, that still does not necessarily make the whole structure harder to understand than it used to be. For while our specific theories are becoming more numerous and more detailed, they are continually being ‘demoted’ as the understanding they contain is taken over by deep, general theories. And those theories are becoming fewer, deeper and more general. By ‘more general’ I mean that each of them says more, about a wider range of situations, than several distinct theories did previously. By ‘deeper’ I mean that each of them explains more — embodies more understanding — than its predecessors did, combined.
Centuries ago, if you had wanted to build a large structure such as a bridge or a cathedral you would have engaged a master builder. He would have had some knowledge of what it takes to give a structure strength and stability with the least possible expense and effort. He would not have been able to express much of this knowledge {13} in the language of mathematics and physics, as we can today. Instead, he relied mainly on a complex collection of intuitions, habits and rules of thumb, which he had learned from his apprentice-master and then perhaps amended through guesswork and long experience. Even so, these intuitions, habits and rules of thumb were in effect theories, explicit and inexplicit, and they contained real knowledge of the subjects we nowadays call engineering and architecture. It was for the knowledge in those theories that you would have hired him, pitifully inaccurate though it was compared with what we have today, and of very narrow applicability. When admiring centuries-old structures, people often forget that we see only the surviving ones. The overwhelming majority of structures built in medieval and earlier times have collapsed long ago, often soon after they were built. That was especially so for innovative structures. It was taken for granted that innovation risked catastrophe, and builders seldom deviated much from designs and techniques that had been validated by long tradition. Nowadays, in contrast, it is quite rare for any structure — even one that is unlike anything that has ever been built before — to fail because of faulty design. Anything that an ancient master builder could have built, his modern colleagues can build better and with far less human effort. They can also build structures which he could hardly have dreamt of, such as skyscrapers and space stations. They can use materials which he had never heard of, such as fibreglass or reinforced concrete, and which he could hardly have used even if he could somehow have been given them, for he had only a scanty and inaccurate understanding of how materials work.
Progress to our current state of knowledge was not achieved by accumulating more theories of the same kind as the master builder knew. Our knowledge, both explicit and inexplicit, is not only much greater than his but structurally different too. As I have said, the modern theories are fewer, more general and deeper. For each situation that the master builder faced while building something in his repertoire — say, when deciding how thick to make a load-bearing wall — he had a fairly specific intuition or rule of thumb, which, however, could give hopelessly wrong answers if applied to {14} novel situations. Today one deduces such things from a theory that is general enough for it to be applied to walls made of any material, in all situations: on the Moon, underwater, or wherever. The reason why it is so general is that it is based on quite deep explanations of how materials and structures work. To find the proper thickness of a wall that is to be made from an unfamiliar material, one uses the same theory as for any other wall, but starts the calculation by assuming different facts — by using different numerical values for the various parameters. One has to look up those facts, such as the tensile strength and elasticity of the material, but one needs no additional understanding.
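A deliberately over-simplified toy can illustrate that pattern (this is not an engineering method from the text: it reduces the whole question to a single compressive-stress check and ignores buckling, eccentric loads and everything else a real design must consider). The point is only that one small piece of general theory, fed with different looked-up parameters, replaces a shelf of material-specific rules of thumb.

```python
# Toy illustration only: one general relation (stress = load / area), applied to
# any material by looking up its parameters; all numbers below are assumed.
def minimum_wall_thickness_m(load_per_metre_N: float,
                             allowable_stress_Pa: float,
                             safety_factor: float = 3.0) -> float:
    """Thinnest wall (per metre of length) whose compressive stress stays allowable."""
    return safety_factor * load_per_metre_N / allowable_stress_Pa

# Same theory, different looked-up facts about the material.
allowable_stress_Pa = {"brick": 5e6, "concrete": 20e6, "granite": 100e6}
for material, stress in allowable_stress_Pa.items():
    print(material, round(minimum_wall_thickness_m(2.0e5, stress), 3), "m")
```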
That is why, despite understanding incomparably more than an ancient master builder did, a modern architect does not require a longer or more arduous training. A typical theory in a modern student's syllabus may be harder to understand than any of the master builder's rules of thumb; but the modern theories are far fewer, and their explanatory power gives them other properties such as beauty, inner logic and connections with other subjects which make them easier to learn. Some of the ancient rules of thumb are now known to be erroneous, while others are known to be true, or to be good approximations to the truth, and we know why that is so. A few are still in use. But none of them is any longer the source of anyone's understanding of what makes structures stand up.
I am not, of course, denying that specialization is occurring in many subjects in which knowledge is growing, including architecture. This is not a one-way process, for specializations often disappear too: wheels are no longer designed or made by wheelwrights, nor ploughs by ploughwrights, nor are letters written by scribes. It is nevertheless quite evident that the deepening, unifying tendency I have been describing is not the only one at work: a continual broadening is going on at the same time. That is, new ideas often do more than just supersede, simplify or unify existing ones. They also extend human understanding into areas that were previously not understood at all — or whose very existence was not guessed at. They may open up new opportunities, new problems, new {15} specializations and even new subjects. And when that happens it may give us, at least temporarily, more to learn in order to understand it all.
The science of medicine is perhaps the most frequently cited case of increasing specialization seeming to follow inevitably from increasing knowledge, as new cures and better treatments for more diseases are discovered. But even in medicine the opposite, unifying tendency is also present, and is becoming stronger. Admittedly, many functions of the body are still poorly understood, and so are the mechanisms of many diseases. Consequently some areas of medical knowledge still consist mainly of collections of recorded facts, together with the skills and intuitions of doctors who have experience of particular diseases and particular treatments, and who pass on these skills and intuitions from one generation to the next. Much of medicine, in other words, is still in the rule-of-thumb era, and when new rules of thumb are discovered there is indeed more incentive for specialization. But as medical and biochemical research comes up with deeper explanations of disease processes (and healthy processes) in the body, understanding is also on the increase. More general concepts are replacing more specific ones as common, underlying molecular mechanisms are found for dissimilar diseases in different parts of the body. Once a disease can be understood as fitting into a general framework, the role of the specialist diminishes. Instead, physicians coming across an unfamiliar disease or a rare complication can rely increasingly on explanatory theories. They can look up such facts as are known. But then they may be able to apply a general theory to work out the required treatment, and expect it to be effective even if it has never been used before.
Thus the issue of whether it is becoming harder or easier to understand everything that is understood depends on the overall balance between these two opposing effects of the growth of knowledge: the increasing breadth of our theories, and their increasing depth. Breadth makes it harder; depth makes it easier. One thesis of this book is that, slowly but surely, depth is winning. In other words, the proposition that I refused to believe as a child is indeed {16} false, and practically the opposite is true. We are not heading away from a state in which one person could understand everything that is understood, but towards it.
It is not that we shall soon understand everything. That is a completely different issue. I do not believe that we are now, or ever shall be, close to understanding everything there is. What I am discussing is the possibility of understanding everything that is understood. That depends more on the structure of our knowledge than on its content. But of course the structure of our knowledge — whether it is expressible in theories that fit together as a comprehensible whole — does depend on what the fabric of reality, as a whole, is like. If knowledge is to continue its open-ended growth, and if we are nevertheless heading towards a state in which one person could understand everything that is understood, then the depth of our theories must continue to grow fast enough to make this possible. That can happen only if the fabric of reality is itself highly unified, so that more and more of it can become understood as our knowledge grows. If that happens, then eventually our theories will become so general, deep and integrated with one another that they will effectively become a single theory of a unified fabric of reality. This theory will still not explain every aspect of reality: that is unattainable. But it will encompass all known explanations, and will apply to the whole fabric of reality in so far as it is understood. Whereas all previous theories related to particular subjects, this will be a theory of all subjects: a Theory of Everything.
It will not, of course, be the last such theory, only the first. In science we take it for granted that even our best theories are bound to be imperfect and problematic in some ways, and we expect them to be superseded in due course by deeper, more accurate theories. Such progress is not brought to a halt when we discover a universal theory. For example, Newton gave us the first universal theory of gravity and a unification of, among other things, celestial and terrestrial mechanics. But his theories have been superseded by Einstein's general theory of relativity which additionally incorporates geometry (formerly regarded as a branch of mathematics) into {17} physics, and in so doing provides far deeper explanations as well as being more accurate. The first fully universal theory — which I shall call the Theory of Everything — will, like all our theories before and after it, be neither perfectly true nor infinitely deep, and so will eventually be superseded. But it will not be superseded through unifications with theories about other subjects, for it will already be a theory of all subjects. In the past, some great advances in understanding came about through great unifications. Others came through structural changes in the way we were understanding a particular subject — as when we ceased to think of the Earth as being the centre of the universe. After the first Theory of Everything, there will be no more great unifications. All subsequent great discoveries will take the form of changes in the way we understand the world as a whole: shifts in our world-view. The attainment of a Theory of Everything will be the last great unification, and at the same time it will be the first across-the-board shift to a new world-view. I believe that such a unification and shift are now under way. The associated world-view is the theme of this book. I must stress immediately that I am not referring merely to the ‘theory of everything’ which some particle physicists hope they will soon discover. Their ‘theory of everything’ would be a unified theory of all the basic forces known to physics, namely gravity, electromagnetism and nuclear forces. It would also describe all the types of subatomic particles that exist, their masses, spins, electric charges and other properties, and how they interact. Given a sufficiently precise description of the initial state of any isolated physical system, it would in principle predict the future behaviour of the system. Where the exact behaviour of a system was intrinsically unpredictable, it would describe all possible behaviours and predict their probabilities. In practice, the initial states of interesting systems often cannot be ascertained very accurately, and in any case the calculation of the predictions would be too complicated to be carried out in all but the simplest cases. Nevertheless, such a unified theory of particles and forces, together with a specification of the initial state of the universe at the Big Bang (the violent explosion with which the universe began), would in principle {18} contain all the information necessary to predict everything that can be predicted (Figure 1.1).
But prediction is not explanation. The hoped-for ‘theory of everything’, even if combined with a theory of the initial state, will at best provide only a tiny facet of a real Theory of Everything. It may predict everything (in principle). But it cannot be expected to explain much more than existing theories do, except for a few phenomena that are dominated by the nuances of subatomic interactions, such as collisions inside particle accelerators, and the exotic history of particle transmutations in the Big Bang. What motivates the use of the term ‘theory of everything’ for such a narrow, albeit fascinating, piece of knowledge? It is, I think, another mistaken view of the nature of science, held disapprovingly by many critics of science and (alas) approvingly by many scientists, namely that science is essentially reductionist. That is to say, science allegedly explains things reductively — by analysing them into components. For example, the resistance of a wall to being penetrated or knocked down is explained by regarding the wall as a vast aggregation of interacting molecules. The properties of those molecules are themselves explained in terms of their constituent atoms, and the interactions of these atoms with one another, and so on down to the smallest particles and most basic forces. Reductionists think that all scientific explanations, and perhaps all sufficiently deep explanations of any kind, take that form.
The reductionist conception leads naturally to a classification of {19} objects and theories in a hierarchy, according to how close they are to the ‘lowest-level’ predictive theories that are known. In this hierarchy, logic and mathematics form the immovable bedrock on which the edifice of science is built. The foundation stone would be a reductive ‘theory of everything’, a universal theory of particles, forces, space and time, together with some theory of what the initial state of the universe was. The rest of physics forms the first few storeys. Astrophysics and chemistry are at a higher level, geology even higher, and so on. The edifice branches into many towers of increasingly high-level subjects like biochemistry, biology and genetics. Perched at the tottering, stratospheric tops are subjects like the theory of evolution, economics, psychology and computer science, which in this picture are almost inconceivably derivative. At present, we have only approximations to a reductive ‘theory of everything’. These can already predict quite accurate laws of motion for individual subatomic particles. From these laws, present-day computers can calculate the motion of any isolated group of a few interacting particles in some detail, given their initial state. But even the smallest speck of matter visible to the naked eye contains trillions of atoms, each composed of many subatomic particles, and is continually interacting with the outside world; so it is quite infeasible to predict its behaviour particle by particle. By supplementing the exact laws of motion with various approximation schemes, we can predict some aspects of the gross behaviour of quite large objects — for instance, the temperature at which a given chemical compound will melt or boil. Much of basic chemistry has been reduced to physics in this way. But for higher-level sciences the reductionist programme is a matter of principle only. No one expects actually to deduce many principles of biology, psychology or politics from those of physics. The reason why higher-level subjects can be studied at all is that under special circumstances the stupendously complex behaviour of vast numbers of particles resolves itself into a measure of simplicity and comprehensibility. This is called emergence: high-level simplicity ‘emerges’ from low-level complexity. High-level phenomena about which there are comprehensible facts that are not simply deducible from {20} lower-level theories are called emergent phenomena. For example, a wall might be strong because its builders feared that their enemies might try to force their way through it. This is a high-level explanation of the wall's strength, not deducible from (though not incompatible with) the low-level explanation I gave above. ‘Builders’, ‘enemies’, ‘fear’ and ‘trying’ are all emergent phenomena. The purpose of high-level sciences is to enable us to understand emergent phenomena, of which the most important are, as we shall see, life, thought and computation.
By the way, the opposite of reductionism, holism — the idea that the only legitimate explanations are in terms of higher-level systems — is an even greater error than reductionism. What do holists expect us to do? Cease our search for the molecular origin of diseases? Deny that human beings are made of subatomic particles? Where reductive explanations exist, they are just as desirable as any other explanations. Where whole sciences are reducible to lower-level sciences, it is just as incumbent upon us as scientists to find those reductions as it is to discover any other knowledge.
A reductionist thinks that science is about analysing things into components. An instrumentalist thinks that it is about predicting things. To either of them, the existence of high-level sciences is merely a matter of convenience. Complexity prevents us from using fundamental physics to make high-level predictions, so instead we guess what those predictions would be if we could make them — emergence gives us a chance of doing that successfully — and supposedly that is what the higher-level sciences are about. Thus to reductionists and instrumentalists, who disregard both the real structure and the real purpose of scientific knowledge, the base of the predictive hierarchy of physics is by definition the ‘theory of everything’. But to everyone else scientific knowledge consists of explanations, and the structure of scientific explanation does not reflect the reductionist hierarchy. There are explanations at every level of the hierarchy. Many of them are autonomous, referring only to concepts at that particular level (for instance, ‘the bear ate the honey because it was hungry’). Many involve deductions in the opposite direction to that of reductive explanation. That is, {21} they explain things not by analysing them into smaller, simpler things but by regarding them as components of larger, more complex things — about which we nevertheless have explanatory theories. For example, consider one particular copper atom at the tip of the nose of the statue of Sir Winston Churchill that stands in Parliament Square in London. Let me try to explain why that copper atom is there. It is because Churchill served as prime minister in the House of Commons nearby; and because his ideas and leadership contributed to the Allied victory in the Second World War; and because it is customary to honour such people by putting up statues of them; and because bronze, a traditional material for such statues, contains copper, and so on. Thus we explain a low-level physical observation — the presence of a copper atom at a particular location — through extremely high-level theories about emergent phenomena such as ideas, leadership, war and tradition. There is no reason why there should exist, even in principle, any lower-level explanation of the presence of that copper atom than the one I have just given. Presumably a reductive ‘theory of everything’ would in principle make a low-level prediction of the probability that such a statue will exist, given the condition of (say) the solar system at some earlier date. It would also in principle describe how the statue probably got there. But such descriptions and predictions (wildly infeasible, of course) would explain nothing. They would merely describe the trajectory that each copper atom followed from the copper mine, through the smelter and the sculptor's studio, and so on. They could also state how those trajectories were influenced by forces exerted by surrounding atoms, such as those comprising the miners’ and sculptor's bodies, and so predict the existence and shape of the statue. In fact such a prediction would have to refer to atoms all over the planet, engaged in the complex motion we call the Second World War, among other things. 
But even if you had the superhuman capacity to follow such lengthy predictions of the copper atom's being there, you would still not be able to say, ‘Ah yes, now I understand why it is there.’ You would merely know that its arrival there in that way was inevitable (or likely, or whatever), given all the atoms’ initial configurations and the laws {22} of physics. If you wanted to understand why, you would still have no option but to take a further step. You would have to inquire into what it was about that configuration of atoms, and those trajectories, that gave them the propensity to deposit a copper atom at this location. Pursuing this inquiry would be a creative task, as discovering new explanations always is. You would have to discover that certain atomic configurations support emergent phenomena such as leadership and war, which are related to one another by high-level explanatory theories. Only when you knew those theories could you understand fully why that copper atom is where it is.
In the reductionist world-view, the laws governing subatomic particle interactions are of paramount importance, as they are the base of the hierarchy of all knowledge. But in the real structure of scientific knowledge, and in the structure of our knowledge generally, such laws have a much more humble role.
What is that role? It seems to me that none of the candidates for a ‘theory of everything’ that has yet been contemplated contains much that is new by way of explanation. Perhaps the most innovative approach from the explanatory point of view is superstring theory, in which extended objects, ‘strings’, rather than point-like particles, are the elementary building blocks of matter. But no existing approach offers an entirely new mode of explanation — new in the sense of Einstein's explanation of gravitational forces in terms of curved space and time. In fact, the ‘theory of everything’ is expected to inherit virtually its entire explanatory structure — its physical concepts, its language, its mathematical formalism and the form of its explanations — from the existing theories of electromagnetism, nuclear forces and gravity. Therefore we may look to this underlying structure, which we already know from existing theories, for the contribution of fundamental physics to our overall understanding.
There are two theories in physics which are considerably deeper than all others. The first is the general theory of relativity, which as I have said is our best theory of space, time and gravity. The second, quantum theory, is even deeper. Between them, these two {23} theories (and not any existing or currently envisaged theory of subatomic particles) provide the detailed explanatory and formal framework within which all other theories in modern physics are expressed, and they contain overarching physical principles to which all other theories conform. A unification of general relativity and quantum theory — to give a quantum theory of gravity — has been a major quest of theoretical physicists for several decades, and would have to form part of any theory of everything in either the narrow or the broad sense of the term. As we shall see in the next chapter, quantum theory, like relativity, provides a revolutionary new mode of explanation of physical reality. The reason why quantum theory is the deeper of the two lies more outside physics than within it, for its ramifications are very wide, extending far beyond physics — and even beyond science itself as it is normally conceived. Quantum theory is one of what I shall call the four main strands of which our current understanding of the fabric of reality is composed.
Before I say what the other three strands are, I must mention another way in which reductionism misrepresents the structure of scientific knowledge. Not only does it assume that explanation always consists of analysing a system into smaller, simpler systems, it also assumes that all explanation is of later events in terms of earlier events; in other words, that the only way of explaining something is to state its causes. And this implies that the earlier the events in terms of which we explain something, the better the explanation, so that ultimately the best explanations of all are in terms of the initial state of the universe.
A ‘theory of everything’ which excludes a specification of the initial state of the universe is not a complete description of physical reality because it provides only laws of motion; and laws of motion, by themselves, make only conditional predictions. That is, they never state categorically what happens, but only what will happen at one time given what was happening at another time. Only if a complete specification of the initial state is provided can a complete description of physical reality in principle be deduced. Current cosmological theories do not provide a complete specification of {24} the initial state, even in principle, but they do say that the universe was initially very small, very hot and very uniform in structure. We also know that it cannot have been perfectly uniform because that would be incompatible, according to the theory, with the distribution of galaxies we observe across the sky today. The initial variations in density, ‘lumpiness’, would have been greatly enhanced by gravitational clumping (that is, relatively dense regions would have attracted more matter and become denser), so they need only have been very slight initially. But, slight though they were, they are of the greatest significance in any reductionist description of reality, because almost everything that we see happening around us, from the distribution of stars and galaxies in the sky to the appearance of bronze statues on planet Earth, is, from the point of view of fundamental physics, a consequence of those variations. If our reductionist description is to cover anything more than the grossest features of the observed universe, we need a theory specifying those all-important initial deviations from uniformity.
Let me try to restate this requirement without the reductionist bias. The laws of motion for any physical system make only conditional predictions, and are therefore compatible with many possible histories of that system. (This issue is independent of the limitations on predictability that are imposed by quantum theory, which I shall discuss in the next chapter.) For instance, the laws of motion governing a cannon-ball fired from a gun are compatible with many possible trajectories, one for every possible direction and elevation in which the gun could have been pointing when it was fired (Figure 1.2). Mathematically, the laws of motion can be expressed as a set of equations called the equations of motion. These have many different solutions, one describing each possible trajectory. To specify which solution describes the actual trajectory, we must provide supplementary data — some data about what actually happens. One way of doing that is to specify the initial state, in this case the direction in which the gun was pointing. But there are other ways too. For example, we could just as well specify the final state — the position and direction of motion of the cannon-ball {25}
at the moment it lands. Or we could specify the position of the highest point of the trajectory. It does not matter what supplementary data we give, so long as they pick out one particular solution of the equations of motion. The combination of any such supplementary data with the laws of motion amounts to a theory that describes everything that happens to the cannon-ball between firing and impact.
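A minimal numerical sketch makes this concrete (idealized and illustrative only: no air resistance, level ground, and the muzzle speed, angle and value of g below are arbitrary assumptions, not figures from the text). Specifying the initial datum, the elevation angle, or the final datum, the landing distance, picks out the same solution; and since two different elevations can share a landing distance, the supplementary data must be enough to single out one particular trajectory.

```python
import math

G = 9.8      # m/s^2, assumed
V = 100.0    # m/s, assumed muzzle speed

def landing_distance(elevation_rad: float) -> float:
    """Final datum (range) deduced from the initial datum (elevation angle)."""
    return V**2 * math.sin(2 * elevation_rad) / G

def elevation_from_distance(distance_m: float) -> float:
    """Initial datum recovered from the final datum (returns the low-trajectory angle)."""
    return 0.5 * math.asin(distance_m * G / V**2)

theta = math.radians(30)                  # one way to give supplementary data ...
r = landing_distance(theta)               # ... determines where the ball lands,
assert math.isclose(elevation_from_distance(r), theta)   # ... and vice versa
```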
Similarly, the laws of motion for physical reality as a whole would have many solutions, each corresponding to a distinct history. To complete the description, we should have to specify which history is the one that has actually occurred, by giving enough supplementary data to yield one of the many solutions of the equations of motion. In simple cosmological models at least, one way of giving such data is to specify the initial state of the universe. But alternatively we could specify the final state, or the state at any other time; or we could give some information about the initial state, some about the final state, and some about states in between. In general, the combination of enough supplementary data of any sort with the laws of motion would amount to a complete description, in principle, of physical reality.
For the cannon-ball, once we have specified, say, the final state it is straightforward to calculate the initial state, and vice versa, so there is no practical difference between different methods of specifying the supplementary data. But for the universe most such {26} calculations are intractable. I have said that we infer the existence of ‘lumpiness’ in the initial conditions from observations of ‘lumpiness’ today. But that is exceptional: most of our knowledge of supplementary data — of what specifically happens — is in the form of high-level theories about emergent phenomena, and is therefore by definition not practically expressible in the form of statements about the initial state. For example, in most solutions of the equations of motion the initial state of the universe does not have the right properties for life to evolve from it. Therefore our knowledge that life has evolved is a significant piece of the supplementary data. We may never know what, specifically, this restriction implies about the detailed structure of the Big Bang, but we can draw conclusions from it directly. For example, the earliest accurate estimate of the age of the Earth was made on the basis of the biological theory of evolution, contradicting the best physics of the day. Only a reductionist prejudice could make us feel that this was somehow a less valid form of reasoning, or that in general it is more ‘fundamental’ to theorize about the initial state than about emergent features of reality.
Even in the domain of fundamental physics, the idea that theories of the initial state contain our deepest knowledge is a serious misconception. One reason is that it logically excludes the possibility of explaining the initial state itself — why the initial state was what it was — but in fact we have explanations of many aspects of the initial state. And more generally, no theory of time can possibly explain it in terms of anything ‘earlier’; yet we do have deep explanations, from general relativity and even more from quantum theory, of the nature of time (see Chapter 11).
Thus the character of many of our descriptions, predictions and explanations of reality bear no resemblance to the ‘initial state plus laws of motion’ picture that reductionism leads to. There is no reason to regard high-level theories as in any way ‘second-class citizens’. Our theories of subatomic physics, and even of quantum theory or relativity, are in no way privileged relative to theories about emergent properties. None of these areas of knowledge can possibly subsume all the others. Each of them has logical {27} implications for the others, but not all the implications can be stated, for they are emergent properties of the other theories’ domains. In fact, the very terms ‘high level’ and ‘low level’ are misnomers. The laws of biology, say, are high-level, emergent consequences of the laws of physics. But logically, some of the laws of physics are then ‘emergent’ consequences of the laws of biology. It could even be that, between them, the laws governing biological and other emergent phenomena would entirely determine the laws of fundamental physics. But in any case, when two theories are logically related, logic does not dictate which of them we ought to regard as determining, wholly or partly, the other. That depends on the explanatory relationships between the theories. The truly privileged theories are not the ones referring to any particular scale of size or complexity, nor the ones situated at any particular level of the predictive hierarchy — but the ones that contain the deepest explanations. The fabric of reality does not consist only of reductionist ingredients like space, time and subatomic particles, but also of life, thought, computation and the other things to which those explanations refer. What makes a theory more fundamental, and less derivative, is not its closeness to the supposed predictive base of physics, but its closeness to our deepest explanatory theories.
Quantum theory is, as I have said, one such theory. But the other three main strands of explanation through which we seek to understand the fabric of reality are all ‘high level’ from the point of view of quantum physics. They are the theory of evolution (primarily the evolution of living organisms), epistemology (the theory of knowledge) and the theory of computation (about computers and what they can and cannot, in principle, compute). As I shall show, such deep and diverse connections have been discovered between the basic principles of these four apparently independent subjects that it has become impossible to reach our best understanding of any one of them without also understanding the other three. The four of them taken together form a coherent explanatory structure that is so far-reaching, and has come to encompass so much of our understanding of the world, that in my view it may already {28} properly be called the first real Theory of Everything. Thus we have arrived at a significant moment in the history of ideas — the moment when the scope of our understanding begins to be fully universal. Up to now, all our understanding has been about some aspect of reality, untypical of the whole. In the future it will be about a unified conception of reality: all explanations will be understood against the backdrop of universality, and every new idea will automatically tend to illuminate not just a particular subject, but, to varying degrees, all subjects. The dividend of understanding that we shall eventually reap from this last great unification may far surpass that yielded by any previous one. For we shall see that it is not only physics that is being unified and explained here, and not only science, but also potentially the far reaches of philosophy, logic and mathematics, ethics, politics and aesthetics; perhaps everything that we currently understand, and probably much that we do not yet understand.
What conclusion, then, would I address to my younger self, who rejected the proposition that the growth of knowledge was making the world ever less comprehensible? I would agree with him, though I now think that the important issue is not really whether what our particular species understands can be understood by one of its members. It is whether the fabric of reality itself is truly unified and comprehensible. There is every reason to believe that it is. As a child, I merely knew this; now I can explain it.
TERMINOLOGY
epistemology The study of the nature of knowledge and the processes that create it.
explanation (roughly) A statement about the nature of things and the reasons for things.
instrumentalism The view that the purpose of a scientific theory is to predict the outcomes of experiments.
positivism An extreme form of instrumentalism which holds that all statements other than those describing or predicting {29} observations are meaningless. (This view is itself meaningless according to its own criterion.)
reductive A reductive explanation is one that works by analysing things into lower-level components.
reductionism The view that scientific explanations are inherently reductive.
holism The idea that the only legitimate explanations are in terms of higher-level systems; the opposite of reductionism.
emergence An emergent phenomenon is one (such as life, thought or computation) about which there are comprehensible facts or explanations that are not simply deducible from lower-level theories, but which may be explicable or predictable by higher-level theories referring directly to that phenomenon.
SUMMARY
Scientific knowledge, like all human knowledge, consists primarily of explanations. Mere facts can be looked up, and predictions are important only for conducting crucial experimental tests to discriminate between competing scientific theories that have already passed the test of being good explanations. As new theories supersede old ones, our knowledge is becoming both broader (as new subjects are created) and deeper (as our fundamental theories explain more, and become more general). Depth is winning. Thus we are not heading away from a state in which one person could understand everything that was understood, but towards it. Our deepest theories are becoming so integrated with one another that they can be understood only jointly, as a single theory of a unified fabric of reality. This Theory of Everything has a far wider scope than the ‘theory of everything’ that elementary particle physicists are seeking, because the fabric of reality does not consist only of reductionist ingredients such as space, time and subatomic particles, but also, for example, of life, thought and computation. The four main strands of explanation which may constitute the first Theory of Everything are: {30}
quantum physics Chapters 2, 9, 11, 12, 13, 14
epistemology Chapters 3, 4, 7, 10, 13, 14
the theory of computation Chapters 5, 6, 9, 10, 13, 14
the theory of evolution Chapters 8, 13, 14.
The next chapter is about the first and most important of the four strands, quantum physics.
2 Shadows
There is no better, there is no more open door by which you can enter into the study of natural philosophy, than by considering the physical phenomena of a candle.
Michael Faraday (A Course of Six Lectures on the Chemical History of a Candle)
In his popular Royal Institution lectures on science, Michael Faraday used to urge his audiences to learn about the world by considering what happens when a candle burns. I am going to consider an electric torch (or flashlight) instead. This is quite fitting, for much of the technology of an electric torch is based on Faraday's discoveries.
I am going to describe some experiments which demonstrate phenomena that are at the core of quantum physics. Experiments of this sort, with many variations and refinements, have been the bread and butter of quantum optics for many years. There is no controversy about the results, yet even now some of them are hard to believe. The basic experiments are remarkably austere. They require neither specialized scientific instruments nor any great knowledge of mathematics or physics — essentially, they involve nothing but casting shadows. But the patterns of light and shadow that an ordinary electric torch can cast are very strange. When considered carefully they have extraordinary ramifications. Explaining them requires not just new physical laws but a new level of description and explanation that goes beyond what was previously regarded as being the scope of science. But first, it reveals {32} the existence of parallel universes. How can it? What conceivable pattern of shadows could have implications like that?
Imagine an electric torch switched on in an otherwise dark room. Light emanates from the filament of the torch's bulb and fills out part of a cone. In order not to complicate the experiment with reflected light, the walls of the room should be totally absorbent, matt black. Alternatively, since we are only imagining these experiments, we could imagine a room of astronomical size, so that there is no time for any light to reach the walls and return before the experiment is completed. Figure 2.1 illustrates the situation. But it is somewhat misleading: if we were observing the torch from the side we should be able to see neither it nor, of course, its light. Invisibility is one of the more straightforward properties of light. We see light only if it enters our eyes (though we usually speak of seeing the object in our line of sight that last affected that light).
We cannot see light that is just passing by. If there were a reflective object in the beam, or even some dust or water droplets to scatter the light, we could see where it was. But there is nothing in the beam, and we are observing from outside it, so none of its light reaches us. An accurate representation of what we should see would be a completely black picture. If there were a second source of light we might be able to see the torch, but still not its light. Beams {33} of light, even the most intense light that we can generate (from lasers), pass through each other as if nothing were there at all.
Figure 2.1 does show that the light is brightest near the torch, and gets dimmer farther away as the beam spreads out to illuminate an ever larger area. To an observer within the beam, backing steadily away from the torch, the reflector would appear ever smaller and then, when it could only be seen as a single point, ever fainter. Or would it? Can light really be spread more and more thinly without limit? The answer is no. At a distance of approximately ten thousand kilometres from the torch, its light would be too faint for the human eye to detect and the observer would see nothing. That is, a human observer would see nothing; but what about an animal with more sensitive vision? Frogs’ eyes are several times more sensitive than human eyes — just enough to make a significant difference in this experiment. If the observer were a frog, and it kept moving ever farther away from the torch, the moment at which it entirely lost sight of the torch would never come. Instead, the frog would see the torch begin to flicker. The flickers would come at irregular intervals that would become longer as the frog moved farther away. But the brightness of the individual flickers would not diminish. At a distance of one hundred million kilometres from the torch, the frog would see on average only one flicker of light per day, but that flicker would be as bright as any that it observed at any other distance.
Frogs cannot tell us what they see. So in real experiments we use photomultipliers (light detectors which are even more sensitive than frogs’ eyes), and we thin out the light by passing it through dark filters, rather than by observing it from a hundred million kilometres away. But the principle is the same, and so is the result: neither apparent darkness nor uniform dimness, but flickering, with the individual flickers equally bright no matter how dark a filter we use. This flickering indicates that there is a limit to how thinly light can be evenly spread. Borrowing the terminology of goldsmiths, one might say that light is not infinitely ‘malleable’. Like gold, a small amount of light can be evenly spread over a very large area, but eventually if one tries to spread it out further it {34} gets lumpy. Even if gold atoms could somehow be prevented from clumping together, there is a point beyond which they cannot be subdivided without ceasing to be gold. So the only way in which one can make a one-atom-thick gold sheet even thinner is to space the atoms farther apart, with empty space between them. When they are sufficiently far apart it becomes misleading to think of them as forming a continuous sheet. For example, if each gold atom were on average several centimetres from its nearest neighbour, one might pass one's hand through the ‘sheet’ without touching any gold at all. Similarly, there is an ultimate lump or ‘atom’ of light, a photon. Each flicker seen by the frog is caused by a photon striking the retina of its eye. What happens when a beam of light gets fainter is not that the photons themselves get fainter, but that they get farther apart, with empty space between them (Figure 2.2). When the beam is very faint it can be misleading to call it a ‘beam’, for it is not continuous. During periods when the frog sees nothing it is not because the light entering its eye is too weak to affect the retina, but because no light has entered its eye at all.
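The character of that flickering can be pictured with a small simulation (illustrative only; the mean arrival rate below is an arbitrary assumed number, not one measured in these experiments). When the expected number of photons per observation window is small, a detector registers whole photons at irregular intervals rather than a steady, dimmer glow:

```python
import math
import random

random.seed(0)
MEAN_PHOTONS_PER_WINDOW = 0.5   # assumed, purely illustrative

def photons_in_window(mean: float) -> int:
    """Sample a Poisson-distributed photon count (Knuth's method; fine for small means)."""
    threshold, count, product = math.exp(-mean), 0, 1.0
    while True:
        product *= random.random()
        if product <= threshold:
            return count
        count += 1

# Twenty consecutive observation windows: mostly nothing, occasionally a whole
# photon -- flickers of full brightness, never a fraction of one.
print([photons_in_window(MEAN_PHOTONS_PER_WINDOW) for _ in range(20)])
```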
This property of appearing only in lumps of discrete sizes is called quantization. An individual lump, such as a photon, is called a quantum (plural quanta). Quantum theory gets its name from this property, which it attributes to all measurable physical quantities — not just to things like the amount of light, or the mass of gold, which are quantized because the entities concerned, though apparently continuous, are really made of particles. Even for quantities like distance (between two atoms, say), the notion of a continuous range of possible values turns out to be an idealization. There are no measurable continuous quantities in physics. There are many new effects in quantum physics, and on the face of it quantization is one of the tamest, as we shall see. Yet in a sense it remains the key to all the others, for if everything is quantized, how does any quantity change from one value to another? How does any object get from one place to another if there is not a continuous range of intermediate places for it to be on the way? I shall explain how in Chapter 9, but let me set that question aside for the moment and return to the vicinity of the torch, where the beam looks continuous because every second it pours about 10^14 (a hundred trillion) photons into an eye that looks into it.
Is the boundary between the light and the shadow perfectly sharp, or is there a grey area? There is usually a fairly wide grey area, and one reason for this is shown in Figure 2.3. There is a dark region (called the umbra) where light from the filament cannot reach. There is a bright region which can receive light from anywhere on the filament. And because the filament is not a geometrical point, but has a certain size, there is also a penumbra between the bright and dark regions: a region which can receive light from some parts of the filament but not from others. If one observes from within the penumbra, one can see only part of the filament and the illumination is less there than in the fully illuminated, bright region.
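The geometric part of that grey area follows from similar triangles. The sketch below is an idealized ray-optics estimate with assumed, illustrative distances (not taken from the text), and it deliberately ignores the bending of light discussed next:

```python
def penumbra_width_m(filament_width_m: float,
                     filament_to_edge_m: float,
                     edge_to_screen_m: float) -> float:
    """Similar triangles: rays from opposite ends of the filament that graze the same
    shadow-casting edge spread apart in proportion to the edge-to-screen distance."""
    return filament_width_m * edge_to_screen_m / filament_to_edge_m

# Assumed numbers: a 2 mm filament, an obstacle 0.1 m away, a screen 2 m beyond it.
print(penumbra_width_m(0.002, 0.1, 2.0))   # 0.04 m: a 4 cm penumbra
```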
However, the size of the filament is not the only reason why real torchlight casts penumbras. The light is affected in all sorts of other ways by the reflector behind the bulb, by the glass front of the torch, by various seams and imperfections, and so on. So we expect quite a complicated pattern of light and shadow from a real torch, just because the torch itself is quite complicated. But the incidental properties of torches are not the subject of these experiments. Behind our question about torchlight there is a more fundamental question about light in general: is there, in principle, any limit on how sharp a shadow can be (in other words, on how narrow a
penumbra can be)? For instance, if the torch were made of perfectly black (non-reflecting) material, and if one were to use smaller and smaller filaments, could one then make the penumbra narrower and narrower, without limit?
Figure 2.3 makes it look as though one could: if the filament had no size, there would be no penumbra. But in drawing Figure 2.3 I have made an assumption about light, namely that it travels only in straight lines. From everyday experience we know that it does, for we cannot see round corners. But careful experiments show that light does not always travel in straight lines. Under some circumstances it bends.
This is hard to demonstrate with a torch alone, just because it is difficult to make very tiny filaments and very black surfaces. These practical difficulties mask the limits that fundamental physics imposes on the sharpness of shadows. Fortunately, the bending of light can also be demonstrated in a different way. Suppose that the light of a torch passes through two successive small holes in otherwise opaque screens, as shown in Figure 2.4, and that the emerging light falls on a third screen beyond. Our question now is this: if the experiment is repeated with ever smaller holes and with ever
greater separation between the first and second screens, can one bring the umbra — the region of total darkness — ever closer, without limit, to the straight line through the centres of the two holes? Can the illuminated region between the second and third screens be confined to an arbitrarily narrow cone? In goldsmiths’ terminology, we are now asking something like ‘how "ductile" is light’ — how fine a thread can it be drawn into? Gold can be drawn into threads one ten-thousandth of a millimetre thick.
It turns out that light is not as ductile as gold! Long before the holes get as small as a ten-thousandth of a millimetre, in fact even with holes as large as a millimetre or so in diameter, the light begins noticeably to rebel. Instead of passing through the holes in straight lines, it refuses to be confined and spreads out after each hole. And as it spreads, it ‘frays’. The smaller the hole is, the more the light spreads out from its straight-line path. Intricate patterns of light and shadow appear. We no longer see simply a bright region and a dark region on the third screen, with a penumbra in between, but instead concentric rings of varying thickness and brightness. There is also colour, because white light consists of a mixture of photons of various colours, and each colour spreads and frays in a slightly different pattern. Figure 2.5 shows a typical pattern that might be formed on the third screen by white light that has passed through holes in the first two screens. Remember,
there is nothing happening here but the casting of a shadow. Figure 2.5 is just the shadow that would be cast by the second screen in Figure 2.4. If light travelled only in straight lines, there would only be a tiny white dot (much smaller than the central bright spot in Figure 2.5), surrounded by a very narrow penumbra. Outside that there would be pure umbra — total darkness.
Puzzling though it may be that light rays should bend when passing through small holes, it is not, I think, fundamentally disturbing. In any case, what matters for our present purposes is that it does bend. This means that shadows in general need not look like silhouettes of the objects that cast them. What is more, this is not just a matter of blurring, caused by penumbras. It turns out that an obstacle with an intricate pattern of holes can cast a shadow of an entirely different pattern.
Figure 2.6 shows, at roughly its actual size, a part of the pattern of shadows cast three metres from a pair of straight, parallel slits in an otherwise opaque barrier. The slits are one-fifth of a
millimetre apart, and illuminated by a parallel-sided beam of pure red light from a laser on the other side of the barrier. Why laser light and not torchlight? Only because the precise shape of a shadow also depends on the colour of the light in which it is cast; white light, as produced by a torch, contains a mixture of all visible colours, so it can cast shadows with multicoloured fringes. Therefore in experiments about the precise shapes of shadows we are better off using light of a single colour. We could put a coloured filter (such as a pane of coloured glass) over the front of the torch, so that only light of that colour would get through. That would help, but filters are not all that discriminating. A better method is to use laser light, for lasers can be tuned very accurately to emit light of whatever colour we choose, with almost no other colour present.
If light travelled in straight lines, the pattern in Figure 2.6 would consist simply of a pair of bright bands one-fifth of a millimetre apart (too close to distinguish on this scale), with sharp edges and with the rest of the screen in shadow. But in reality the light bends in such a way as to make many bright bands and dark bands, and no sharp edges at all. If the slits are moved sideways, so long as they remain within the laser beam, the pattern also moves by the same amount. In this respect it behaves exactly like an ordinary large-scale shadow. Now, what sort of shadow is cast if we cut a second, identical pair of slits in the barrier, interleaved with the existing pair, so that we have four slits at intervals of one-tenth of a millimetre? We might expect the pattern to look almost exactly like Figure 2.6. After all, the first pair of slits, by itself, casts the shadows in Figure 2.6, and as I have just said, the second pair, by itself, would cast the same pattern, shifted about a tenth of a millimetre to the side — in almost the same place. We even know that light beams normally pass through each other unaffected. So the two pairs of slits together should give essentially the same pattern again, though twice as bright and slightly more blurred.
In reality, though, what happens is nothing like that. The real shadow of a barrier with four straight, parallel slits is shown in Figure 2.7(a). For comparison I have repeated, below it, the illustration of the two-slit pattern (Figure 2.7(b)). Clearly, the four-slit shadow is not a combination of two slightly displaced two-slit shadows, but has a new and more complicated pattern. In this pattern there are places, such as the point marked X, which are dark on the four-slit pattern, but bright on the two-slit pattern. These places were bright when there were two slits in the barrier, but went dark when we cut a second pair of slits for the light to pass through. Opening those slits has interfered with the light that was previously arriving at X.
So, adding two more light sources darkens the point X; removing them illuminates it again. How? One might imagine two photons heading towards X and bouncing off each other like billiard balls. Either photon alone would have hit X, but the two together interfere with each other so that they both end up elsewhere. I shall show in a moment that this explanation cannot be true. Nevertheless, the basic idea of it is inescapable: something must be coming through that second pair of slits to prevent the light from the first pair from
reaching X. But what? We can find out with the help of some further experiments.
First, the four-slit pattern of Figure 2.7(a) appears only if all four slits are illuminated by the laser beam. If only two of them are illuminated, a two-slit pattern appears. If three are illuminated, a three-slit pattern appears, which looks different again. So whatever causes the interference is in the light beam. The two-slit pattern also reappears if two of the slits are filled by anything opaque, but not if they are filled by anything transparent. In other words, the interfering entity is obstructed by anything that obstructs light, even something as insubstantial as fog. But it can penetrate anything that allows light to pass, even something as impenetrable (to matter) as diamond. If complicated systems of mirrors and lenses are placed anywhere in the apparatus, so long as light can travel from each slit to a particular point on the screen, what will be observed at that point will be part of a four-slit pattern. If light from only two slits can reach a particular point, part of a two-slit pattern will be observed there, and so on.
So, whatever causes interference behaves like light. It is found everywhere in the light beam and nowhere outside it. It is reflected, transmitted or blocked by whatever reflects, transmits or blocks light. You may be wondering why I am labouring this point. Surely it is obvious that it is light; that is, what interferes with photons from each slit is photons from the other slits. But you may be inclined to doubt the obvious after the next experiment, the denouement of the series.
What should we expect to happen when these experiments are performed with only one photon at a time? For instance, suppose that our torch is moved so far away that only one photon per day is falling on the screen. What will our frog, observing from the screen, see? If it is true that what interferes with each photon is other photons, then shouldn't the interference be lessened when the photons are very sparse? Should it not cease altogether when there is only one photon passing through the apparatus at any one time? We might still expect penumbras, since a photon might be capable of changing course when passing through a slit (perhaps by striking a glancing blow at the edge). But what we surely could not observe is any place on the screen, such as X, that receives photons when two slits are open, but which goes dark when two more are opened.
Yet that is exactly what we do observe. However sparse the photons are, the shadow pattern remains the same. Even when the experiment is done with one photon at a time, none of them is ever observed to arrive at X when all four slits are open. Yet we need only close two slits for the flickering at X to resume.
Could it be that the photon splits into fragments which, after passing through the slits, change course and recombine? We can rule that possibility out too. If, again, we fire one photon through the apparatus, but use four detectors, one at each slit, then at most one of them ever registers anything. Since in such an experiment we never observe two of the detectors going off at once, we can tell that the entities that they detect are not splitting up.
So, if the photons do not split into fragments, and are not being deflected by other photons, what does deflect them? When a single photon at a time is passing through the apparatus, what can be coming through the other slits to interfere with it?
Let us take stock. We have found that when one photon passes through this apparatus,
it passes through one of the slits, and then something interferes with it, deflecting it in a way that depends on what other slits are open;
the interfering entities have passed through some of the other slits;
the interfering entities behave exactly like photons ...
... except that they cannot be seen.
I shall now start calling the interfering entities ‘photons’. That is what they are, though for the moment it does appear that photons come in two sorts, which I shall temporarily call tangible photons and shadow photons. Tangible photons are the ones we can see, or detect with instruments, whereas the shadow photons are intangible (invisible) — detectable only indirectly through their interference effects on the tangible photons. (Later, we shall see that there is no intrinsic difference between tangible and shadow photons: each photon is tangible in one universe and intangible in all the other parallel universes — but I anticipate.) What we have inferred so far is only that each tangible photon has an accompanying retinue of shadow photons, and that when a photon passes through one of our four slits, some shadow photons pass through the other three slits. Since different interference patterns appear when we cut slits at other places in the screen, provided that they are within the beam, shadow photons must be arriving all over the illuminated part of the screen whenever a tangible photon arrives. Therefore there are many more shadow photons than tangible ones. How many? Experiments cannot put an upper bound on the number, but they do set a rough lower bound. In a laboratory the largest area that we could conveniently illuminate with a laser might be about a square metre, and the smallest manageable size for the holes might be about a thousandth of a millimetre. So there are about 10^12 (one trillion) possible hole-locations on the screen. Therefore there must be at least a trillion shadow photons accompanying each tangible one.
Thus we have inferred the existence of a seething, prodigiously complicated, hidden world of shadow photons. They travel at the speed of light, bounce off mirrors, are refracted by lenses, and are stopped by opaque barriers or filters of the wrong colour. Yet they do not trigger even the most sensitive detectors. The only thing in the universe that a shadow photon can be observed to affect is the tangible photon that it accompanies. That is the phenomenon of interference. Shadow photons would go entirely unnoticed were it not for this phenomenon and the strange patterns of shadows by which we observe it.
Interference is not a special property of photons alone. Quantum theory predicts, and experiment confirms, that it occurs for every sort of particle. So there must be hosts of shadow neutrons accompanying every tangible neutron, hosts of shadow electrons accompanying every electron, and so on. Each of these shadow particles is detectable only indirectly, through its interference with the motion of its tangible counterpart.
It follows that reality is a much bigger thing than it seems, and most of it is invisible. The objects and events that we and our instruments can directly observe are the merest tip of the iceberg.
Now, tangible particles have a property that entitles us to call them, collectively, a universe. This is simply their defining property of being tangible, that is, of interacting with each other, and hence of being directly detectable by instruments and sense organs made of other tangible particles. Because of the phenomenon of interference, they are not wholly partitioned off from the rest of reality (that is, from the shadow particles). If they were, we should never have discovered that there is more to reality than tangible particles. But to a good approximation they do resemble the universe that we see around us in everyday life, and the universe referred to in classical (pre-quantum) physics.
For similar reasons, we might think of calling the shadow particles, collectively, a parallel universe, for they too are affected by tangible particles only through interference phenomena. But we can do better than that. For it turns out that shadow particles are partitioned among themselves in exactly the same way as the universe of tangible particles is partitioned from them. In other words, they do not form a single, homogeneous parallel universe vastly larger than the tangible one, but rather a huge number of parallel universes, each similar in composition to the tangible one, and each obeying the same laws of physics, but differing in that the particles are in different positions in each universe.
A remark about terminology. The word ‘universe’ has traditionally been used to mean ‘the whole of physical reality’. In that sense there can be at most one universe. We could stick to that definition, and say that the entity we have been accustomed to calling ‘the universe’ — namely, all the directly perceptible matter and energy around us, and the surrounding space — is not the whole universe after all, but only a small portion of it. Then we should have to invent a new name for that small, tangible portion. But most physicists prefer to carry on using the word ‘universe’ to denote the same entity that it has always denoted, even though that entity now turns out to be only a small part of physical reality. A new word, multiverse, has been coined to denote physical reality as a whole.
Single-particle interference experiments such as I …
Computational Complexity, by Lance Fortnow (https://blog.computationalcomplexity.org/2003/)
Computational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarch
The von Neumann min-max theorem showed that every finite zero-sum two-person game has optimal mixed strategies. More formally, let A be the payoff matrix of a game, then
maxx miny xTAy = miny maxx xTAy where x and y are probability vectors.
Andrew Yao used the min-max theorem to prove what we now call the Yao Principle: The worst case expected runtime of a randomized algorithm for any input equals best case running time of a deterministic algorithm for a worst-case distribution of inputs. The Yao principle has proven invaluable for proving upper and lower bounds for deterministic and probabilistic algorithms.
How can you get a fair coin by flipping a coin of unknown bias? You use the von Neumann coin-flipping trick: Flip the biased coin twice. If you get heads then tails output HEADS. If you get tails then heads output TAILS. Otherwise repeat. This procedure will output HEADS or TAILS with equal probability and if the bias is not too close to zero or one the expected number of repetitions is relatively small.
The von Neumann coin flipping trick is the first in a long line of research in complexity extracting random bits from weak random sources.
John von Neumann passed away February 8, 1957 in Washington, DC.
On Saturday I visited the Einstein Exhibit at Chicago's Field Museum. Some manuscripts and letters and a nice exhibit explaining why time must vary if the speed of light remains a constant made this an interesting but not a must-see exhibit. The biggest surprise for me came from seeing how Einstein's fame happened overnight instead of the more gradual fame I would have expected. In 1919 a solar eclipse showed that light from stars do bend from gravitational forces. Einstein's fame grew immediately and his name became synonymous with genius.
This superstardom for a scientist doesn't seem to happen today. When Andrew Wiles proved Fermat's last theorem he did get some deserved attention but he never became a true household name. When you realize Wiles has hit the upper limit of fame a mathematician can receive (ruling out people like Ted Kaczynski and John Nash), you can see that the Einstein effect in science may never return.
On the other hand, University of Chicago paleontologist Paul Sereno headlines the social page of the Chicago Tribune at the "Party With Giants." Perhaps scientists can still achieve more than fifteen minutes of fame after all.
Time for another of my favorite open problems.
Does NP=UP imply the polynomial-time hierarchy collapses?
UP is the class of languages accepted by nondeterministic polynomial-time Turing machines that have at most one accepting computation for all inputs.
This problem has loose connections to Valiant-Vazirani but Hemaspaandra, Naik, Ogiwara and Selman have the most closely related result. Consider the following proposition.
(*) There is a set A in NP such that for every satisfiable formula φ there is a unique satisfying assignment a of φ such that (φ,a) is in A.
Hemaspaandra et al. show that (*) implies the polynomial-time hierarchy collapses to the second level.
For all we know, (*) and NP=UP are incomparable. If (*) holds for some A in P then NP=UP by just guessing a. We don't know whether NP=UP implies (*) since the accepting computations of a UP machine accepting SAT need not reveal a satisfying assignment of a formula.
There exists an oracle relative to which UP=NP≠co-NP. A relativized world with UP=NP and Σ_2^p≠Π_2^p remains open.
Every scientific field has their own rules for the order of authors in a paper. In theoretical computer science, tradition dictates that we list the authors alphabetically by last name. I don't agree with this tradition; rarely do all the co-authors of a paper play an equal role. The decision whether to add someone as a co-author, and thus an equal, often becomes difficult.
But breaking with tradition can have its own problems. I have three papers that break the alphabetical rule though two were in biology which has its own rules. In the other back in 1990, Carsten Lund, a graduate student at the time, made the key step in developing an interactive proof system for the permanent. For that we made him first author in the Lund-Fortnow-Karloff-Nisan paper. In retrospect I regret this decision. It only added confusion to those who cited the paper. Also did Lund not play as important a role in other papers where we kept alphabetical order? Breaking with tradition, even with the best of intentions, can often cause more harm than good.
Want an easy rule to greatly improve your writing? Just avoid the following words, particularly in the abstract and introduction of your papers: am, is, are, was, were, be, been.
Avoiding these seven forms of "to be" will force you to write in the active voice instead of the passive, making your sentences less boring. For example, instead of "It is known that all functions can be computed securely in the information theoretic setting" use "We can compute all functions securely in the information theoretic setting."
Taking this rule to the extreme can lead to some very convoluted sentences but, I promise, forcing yourself to think actively about every statement you write will make a great difference in your prose. In almost all cases the right answer is "not to be."
Over the last 15 years, two very active research areas seem at odds. Derandomization results have shown us that we can often remove randomness from computation but interactive proof systems and PCPs exhibit incredible power from randomness. There is no contradiction here, just two very different ways we use randomness in complexity: for searching and for hiding.
Typically we think of randomness for searching, for example finding disagreements with Fermat's little theorem to show a number is composite or taking random walks on graphs to show they are connected. Derandomization results have given us reasons to believe we can replace the randomness in these computations with pseudorandom number generators.
Randomness can also play the role of hiding, since no one can predict future coin tosses. In interactive proofs we make the jump from NP to PSPACE because of randomness. For PCPs with O(log n) queries the jump goes from P to NP and with poly queries from NP to NEXP, in the latter case classes that are provably different. In all these cases the prover cannot cheat because it cannot predict coin tosses not yet made by the verifier. A verifier using a pseudorandom generator will fail here, since the prover could then predict the verifier's actions.
AM protocols have the verifier flip coins first, so there is no hiding going on, rather a search for a statement Merlin can prove, and we expect some derandomization for AM. The result that MA is in AM says that sometimes we can replace hiding randomness with searching randomness.
Paz Carmi, Yinnon Haviv and Enav Weinreb from Ben-Gurion University have solved the regular language problem I posted last month.
The problem came from Janos Simon based on a homework question in Kozen's book. Let L(A)={x | x^m is in A for some m}. The homework question asked to show that L(A) is regular if A is regular. The question Janos asked was how many states we need for a DFA for L(A) if A has n states. Carmi, Haviv and Weinreb show that an exponential number of states is required.
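For readers who want to experiment, here is a Python sketch of a membership test for L(A); the DFA representation (a set of integer states, a transition table delta keyed by (state, symbol), a start state and an accepting set) is my own choice, not part of the problem. It uses the fact that whether x^m is in A depends only on the m-th power of the state-to-state mapping induced by reading x once:

from functools import reduce

def in_root(states, delta, start, accepting, x):
    # The mapping q -> delta*(q, x) induced by reading x once.
    f = {q: reduce(lambda s, c: delta[(s, c)], x, q) for q in states}
    g, seen = dict(f), set()              # g is f composed m times, m = 1
    while True:
        if g[start] in accepting:         # then x^m lies in A
            return True
        key = tuple(sorted(g.items()))
        if key in seen:                   # every power of f has been tried
            return False
        seen.add(key)
        g = {q: f[g[q]] for q in states}  # step from f^m to f^(m+1)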
Not only did they solve the problem but also sent me this nice write-up of the proof. I believe this is the first time someone has directly solved a problem I've given on this weblog. I owe each of them a beer (or non-alcoholic beverage of their choice).
Update 12/9: I received the following today from Markus Holzer.
It seems that I have missed your posting about the problem last month. The problem you have stated was already solved in June by Jeff Shallit and co-authors. They have given a lower bound on the DFA accepting root, by considering the (largest) syntactic monoid induced by two generators. The latter problem on syntactic monoid size is of its own interest and I was working on that for a while, therefore I know the result of Shallit et al on the root descriptional complexity. Maybe you also owe the beers to Shallit et al.
The Cornell University Library has announced it will drop a substantial number of their Elsevier subscriptions, part of a general problem Cornell and other libraries are facing with higher costs and different pricing models from commercial academic publishers.
I have wanted to write a post on this topic for a while but I find it difficult to truly understand the problems or the potential solutions. Elsevier does a nice job with their portal and their publishing, but because of consolidation and cheap distribution via the internet, they have changed their pricing model in ways that make it difficult for many libraries to afford all of the journals that they need.
This poses some moral questions: Should we avoid submitting our papers to Elsevier journals? Is it wrong for me to serve on the editorial board of the Elsevier-published Information and Computation? I just don't know.
First an update on NSF program solicitations: The Formal and Mathematical Foundations cluster has posted its solicitation which includes computational complexity. The deadline is March 4. The program announcement for the Emerging Models and Technologies for Computation Cluster, which includes quantum and biological computing, is still under development. Also the ITR solicitation has also been posted, with some major changes from previous years.
Donald Knuth's tribute to Robert Floyd highlights the December SIGACT News. Also reviews of a bunch of crypto books, a column on sublinear-time algorithms and the complexity theory column on "Post's Lattice with Applications to Complexity Theory."
In my mailbox yesterday was not one but five copies of SIGACT News shrink-wrapped together. Once I unwrapped them and looked at the labels, only the outer one belonged to me. There were two for other professors in my department, one for our library and one for the library of Loyola University Chicago, which is on the other side of the city. I'm not sure if it was a mistake or some attempt by the ACM to reduce mailing costs, but I hope this is a one-time occurrence.
XML (eXtensible Markup Language) has become quite a popular data format in recent years. XML roughly corresponds to a tree. For example,
<person><name>Harry</name><age>29</age></person>
<person><name>Jane</name><major>Computer Science</major></person>
represents a tree. The root has two children, each labeled "person". The first of these children has two children named "name" and "age". The first of those children has a leaf node labeled with the phrase "Harry". For a larger example, see the RSS feed for my weblog.
XML was designed as a flexible way to present documents for later displaying. Since the XML format can be easily produced and parsed, XML also serves as a standardized method for transferring data between databases, far better than the old CSV (Comma-Separated Values) format.
Recently there has been some work on directly manipulating and querying the XML data. To a theorist, this seems like a bad idea, particularly for larger databases. While XML completely represents the underlying tree, it is not a good implementation of that tree. Basic tree operations like parent and sibling are very expensive when working directly on the XML text. About the only thing one can do quickly with XML is depth-first search. Far better to "shred" the data into a better tree implementation like a DOM (Document Object Model) or a full-fledged database and do the work there, writing out new XML if needed.
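As a rough illustration of shredding while streaming, Python's standard library lets one process and discard each subtree as soon as it is complete, without holding the whole tree in memory (the "person" tag follows the toy example above; process is a hypothetical placeholder for real work):

import xml.etree.ElementTree as ET

def shred_people(path):
    # Stream the file; an element is visited once its whole subtree is parsed.
    for event, elem in ET.iterparse(path, events=("end",)):
        if elem.tag == "person":
            name = elem.findtext("name")  # subtree queries are cheap here
            process(name, elem)           # hypothetical handler
            elem.clear()                  # release the subtree's memory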
One issue though is when the XML file is on the order of 5-10 GB, a bit larger than what can be stored in the memory of a typical desktop machine. One can stream through the file rather quickly but cannot recreate the tree. This opens up some interesting theoretical questions:
Given a stream of data in XML format, how can one do efficient analysis and manipulations of the underlying tree? I suspect one would want to sometimes shred subtrees, but you cannot determine the size of a subtree until after it's been streamed. Perhaps some randomness or streaming the file multiple times might be helpful.
XML might not be the right model of a tree for this purpose. What is the best way to stream a tree or other data structure to allow an efficient implementation of the basic operations of the data structure? Perhaps some redundancy might be useful.
I thought I should mention some of my favorite and most frustrating open questions over the years. Here's one of them:
Let f:{0,1}^n→{0,1}. Let h and g be n-variable degree d polynomials over the reals. Suppose for all x in {0,1}^n, g(x)≠0 and f(x)=h(x)/g(x). Is there a constant k such that the decision-tree complexity of f is bounded by d^k?
The decision-tree (or query) complexity is the number of bits of x that need to be viewed to determine f(x). The queries to the bits of x can be adaptive. I'm particularly interested in the case where d is poly-logarithmic in n.
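For tiny n one can compute decision-tree complexity directly. Here is a brute-force Python sketch, purely illustrative and exponential time, with f given as a dict from bit-tuples to values (the representation is mine):

from itertools import product

def dtree(f, n, fixed=None):
    # Decision-tree complexity of f restricted to the inputs consistent
    # with the partial assignment `fixed` (a dict from index to bit).
    fixed = fixed or {}
    live = [x for x in f if all(x[i] == b for i, b in fixed.items())]
    if len({f[x] for x in live}) <= 1:
        return 0                      # f is constant here: no queries needed
    best = n
    for i in range(n):
        if i not in fixed:            # try querying bit i first
            cost = 1 + max(dtree(f, n, {**fixed, i: 0}),
                           dtree(f, n, {**fixed, i: 1}))
            best = min(best, cost)
    return best

# Sanity check: OR on 3 bits requires looking at all 3 bits.
OR3 = {x: int(any(x)) for x in product((0, 1), repeat=3)}
assert dtree(OR3, 3) == 3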
Nisan and Szegedy answer the question in the affirmative if g(x)=1. Their result holds even if f(x) is only approximated by h(x). However if we allow arbitrary g(x), h(x)/g(x) can closely approximate the OR function which requires looking at all of the bits. The case where we require exact equality of f(x) and h(x)/g(x) is the open question at hand.
Back in my science-fiction reading days, I particularly remember one editorial written in one of those anthology magazines about 1980: In the near future, you will be able to access, via your personal computer, any science fiction story right after it has been written. If you like a certain author, you can read other stories from that author, even if we didn't decide to put it in this magazine. In this future world, will you still need me, the editor? The answer is yes, for there will be way too much dreck out there for you to find the good stories within, and you will need people like me to point them out to you.
The future is now and though I haven't kept up with science fiction, the same issue applies to academic publications. Recent posts by Michael Nielsen and on Slashdot have asked: With nearly all new papers in physics and computer science easily accessible on the web, how do you find the ones worth reading?
Conferences have traditionally played this role in computer science. But, by definition, paper choices are decisions by committee and with the massive growth in the field, many good papers do not appear in the major conferences.
What we need are "editors"! You can help. Write a survey paper, or spend a page in your research papers discussing the important earlier results in a field. Maintain a web page pointing to papers you find interesting. Start a weblog saying what you find interesting--you don't have to post long or often, just to say, hey, this paper is worth looking at. This way people with similar interests to you can find out what at least you think is important. Only by many of us working together can we make the interesting papers stand out.
It happened right after I started high school in suburban New Jersey, the start of the Science Times section in Tuesday's New York Times. The Science Times not only helped get me excited about science but made me feel others could get excited over science as well. I've kept reading it off and on during these past 25 years. The Science Times has reported on a fair amount of research in complexity and theoretical computer science, for a time some joked that a result was not important until it appeared in the New York Times.
Today the New York Times celebrates the 25th Anniversary Issue of the Science Times. It features 25 questions such as Does Science Matter? and What Is the Most Important Problem in Math Today? (Hint: It's not P versus NP).
I'll end this post with a quote from the essay of Alan Lightman:
All of the scientists I've known have at least one quality in common: they do what they do because they love it, and because they cannot imagine doing anything else. In a sense, this is the real reason a scientist does science. Because the scientist must. Such a compulsion is both blessing and burden. A blessing because the creative life, in any endeavor, is a gift filled with beauty and not given to everyone, a burden because the call is unrelenting and can drown out the rest of life.
The Computer and Information Science and Engineering Directorate of the NSF has completed it reorganization. The CISE web site details the new structure.
CISE now has four divisions. Instead of each division having a large number of specific programs, each division contains a smaller number of clusters covering a broader research area. I'm happy to see "Computational Complexity" specifically mentioned in the Formal and Mathematical Foundations Cluster in the Division of Computing & Communication Foundations. However it shares that cluster with such diverse topics as "computational algorithms for high-end scientific and engineering applications" and "analysis of images, video, and multimedia information." Hopefully funding panels will meet in the more specific areas to avoid trying to compare proposals from vastly different areas of computer science.
Quantum and Biological Computing sit in a different CCF cluster, Emerging Models and Technologies for Computation. This shows NSF's hopes for these new technologies but may also give them a way to phase out these areas if the technologies don't show promise.
Program announcements for the CCF clusters are still under development. The ITR solicitation is still not expected until Thanksgiving. So if you plan a grant proposal this year, you'll still need to wait.
The NEC Research Institute (NECI) died just over a year ago. I didn't feel comfortable talking about it then so let me say a few words now.
I joined NECI in 1999 just after its tenth anniversary. When I joined, its mission and focus were basic research in computer science and physics. NECI gave me considerable time and resources to study fundamental questions in computational complexity. It was an exciting place to be.
Soon thereafter changes began occurring. NEC modified the mission of NECI to focus on producing technologies, with basic research secondary. Some researchers (though not us theorists) were encouraged to join "technology groups" to find practical applications of their research. But during this time, the administrators always supported basic research and I never felt uncomfortable doing theory.
But then on November 1, 2002, NECI merged with NEC CCR&L, a more applied research lab in the same building to form NEC Laboratories America. The new mission makes no mention of basic research. The scientists in charge were replaced by engineering/management types. Many of the research scientists, particularly physicists, were let go.
My job was never in immediate danger but NEC was no longer the place for me and so I went on the job market; no one was surprised when I decided to leave.
A corporation like NEC needs to make decisions for the health of the company. I do not fault NEC for the decisions that it made and they gave me a few great years. Still I mourn the NEC Research Institute, quite a special place during its thirteen year run.
Last week I started an experiment using instant messaging. I thank the many of you who sent me IMs, a great way for me to meet you, the readers of this weblog. I plan to keep trying IM for a while but I have learned a few lessons which seem obvious in retrospect.
Instant messaging can be a time sink. I love communicating with people, which is the main reason I keep this weblog going. However, like most academics, I have much going on and can't afford to have many lengthy discussions. I've also learned there is no clean way to end an IM conversation. So please feel free to IM me but don't take it personally if I rudely keep the conversation short.
Just because the nice icon on the home page says I'm online, it doesn't mean that I am at my computer and available to chat at the moment. Often I am and I will, but if not I will eventually see your message and respond. If there really is something important that you want to discuss with me via IM, we can set up a scheduled time via email. I often do this with phone calls so why not IM too?
I've also discovered IM conversations can be recorded, posted on the web and could be used in a court of law. I need to be careful about what I say.
I have already had some interesting research conversations and ideas for weblog posts via IM. The last post came in part because of some IM questions about the Feigenbaum-Fortnow paper. Email became a powerful research tool when email use hit a critical mass among computer scientists sometime in the mid-late 80's. I believe IM will also follow that curve and I hope to keep ahead of it and perhaps nudge it a little bit.
Let f be a function that maps Σ^n to Σ^n. Let U represent the uniform distribution on Σ^n and D be the distribution that one gets by applying f to a string drawn from U.
We wish to find an f that changes x but keeps the underlying distribution close to the same; in particular we want the following properties,
(1) Prob(f(x)≠x)≥2/3 when x is drawn from U.
(2) U and D should be statistically close, informally no algorithm making a polynomial number of samples will be able to distinguish, with high confidence, whether those samples came from D or U.
Achieving such an f is easy: consider the f that just flips the first bit of x. (1) holds all the time and U=D.
Suppose we add a restriction to f:
(3) In the bits where x and f(x) differ, those bits are 1 in x and 0 in f(x). For example, f(011)=010 is allowed, but f(011)=111 is not.
An f fulfilling (1), (2) and (3) is impossible. (1) and (3) mean that f will reduce the number of ones on most of the strings, and taking say n^3 samples we will notice a statistical difference in the number of bits which are 1 depending on whether the samples were drawn from U or D.
Suppose we replaced (3) with a weaker restriction:
(3') In the first bit where x and f(x) differ, that bit is 1 in x and 0 in f(x). So f(110)=011 is allowed but f(001)=010 is not allowed.
Can an f fulfilling (1), (2) and (3') exist? Not so clear, but Peter Shor found a simple example: f(0^n)=0^n, and for the other x, f(x)=x-1 where x is viewed as a nonnegative integer written in binary. D is indistinguishable from U yet f changes nearly every string.
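A quick sanity check of Shor's example in Python (brute force over all n-bit strings, so small n only; the code is mine, for illustration):

from collections import Counter

def demo(n):
    N = 2 ** n
    f = lambda x: max(x - 1, 0)             # f(0^n) = 0^n, otherwise x -> x-1
    d = Counter(f(x) for x in range(N))     # the distribution D = f(U)
    tv = sum(abs(d[x] / N - 1 / N) for x in range(N)) / 2
    changed = sum(f(x) != x for x in range(N)) / N
    return tv, changed

# For n = 10: statistical distance 2^-10, yet f moves all but one string.
print(demo(10))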
These questions are related to an old paper I had with Joan Feigenbaum which has gotten some recent attention because of a nice new FOCS paper by Bogdanov and Trevisan that builds on our paper. The proofs in these papers work partly because (1), (2) and (3) cannot all happen even for arbitrary distributions U. Both papers give a negative result for a nonadaptive case; the adaptive case corresponds to (1), (2) and (3'), and Shor's example shows that the proof techniques will not directly lead to a solution in the adaptive case, which remains open.
A quantum computing graduate student sent me email over the weekend. He had thought he had proven some surprising results about the class PP and was wondering if he was making some mistake. After some discussion here was his reply:
Ok I get it. Somehow I jumped to the conclusion that P^PP was PP.
There is one more for your blog: A ⊆ B implies B ⊆ A^B but not A^B ⊆ B (duh!)
He goes on to say he made his quantum leap to conclusions since for the quantum class BQP, P^BQP=BQP, he thought the same property must hold for all classes.
I present this because he suggested it for my weblog and as a public service for those who might make a similar mistake. Yes, in case you were wondering, for reasonable classes A (like A=P), B ⊆ A^B without needing to assume A ⊆ B.
Here is one of my favorite examples of a bad proof for what turns out to be a correct theorem.
Theorem: If NP is in BPP then the whole polynomial-time hierarchy is in BPP.
Let's focus on simply showing Σ_2 is in BPP if NP is in BPP. The rest is straightforward induction. Here is our first proof:
Σ_2 = NP^NP ⊆ NP^BPP ⊆ BPP^BPP = BPP. Do you see the problem with this proof?
To get a correct proof (first due to Zachos) we need to use Arthur Merlin games. Consider a Σ_2 language L as an ∃∀ expression. Since NP is in BPP, we can replace the ∀ with a probabilistic test. This gives us what is known as MA or a Merlin-Arthur game where the powerful Merlin sends a message that Arthur can probabilistically verify. A now classic result shows that MA is contained in AM, where Arthur provides a random string to Merlin who must then provide a proof based on that string. Once again we apply the NP in BPP assumption to allow Arthur to simulate Merlin probabilistically and now we have a BPP algorithm for L.
The problem in the first proof is in the second "⊆". The assumption NP in BPP does not imply NP^A is in BPP^A for all A.
How has the internet most affected the study of science? In one word, communication: the ability for scientists to discuss and share their research with each other quickly and cheaply. So I strive to find new ways to use the internet to improve communication. Starting this weblog is one such example. I thought I would try another: instant messaging.
Now many of you are thinking I am crazy, but for different reasons. Some of you out there have been using instant messaging for years and wondering how I could consider it a "new" technology. But many of you out there have barely figured out how to read your email attachments and have hardly even heard of IM.
On a trial basis, for my weblog readers, I will welcome your instant messages. Talk to me about this weblog, about complexity and computer science in general or about whatever you want. Maybe I'll start a trend and all computer scientists will IM each other. Maybe not but it's worth trying out.
I'm using Yahoo Instant Messaging; my Yahoo id is the imaginative "fortnow" (note: I do not read email sent to fortnow@yahoo.com). I put a button on the left column of the weblog home page that tells you when I am online and you can click to connect. I look forward to hearing from you.
There is a big reorganization in the CISE directorate of NSF. To understand what's happening, let's review the previous structure.
The National Science Foundation, like most government bureaucracies, has a tree-like structure. At top is the office of the director (Rita Colwell). Below that are several directorates including the Directorate for Computer and Information Science and Engineering (CISE) headed by Peter Freeman. By law every organization in NSF cannot be just "science" but "science and engineering" except for the Foundation itself.
Below CISE were several divisions, including Computer-Communications Research (C-CR) headed by Kamal Abdali. C-CR has several programs including the Theory program headed by Ding-Zhu Du.
Peter Freeman, who recently became head of CISE, has decided to reorganize the whole directorate. Exactly what it will become should be announced next week but there are some hints in this presentation. Change is always scary but I'm hopeful theory will survive. I'll give more details when I know them.
To overcome the tree structure of NSF, there are a number of cross-disciplinary programs. One such program, Information and Technology Research (ITR), has produced several large, medium and small grants to a variety of projects, including many applications of theory. This is the last year of ITR solicitations and the calls have been well behind schedule, probably not unrelated to the CISE reorganization. This year's topic will be on "ITR for National Priorities" with more details promised by Thanksgiving. Unconfirmed rumors have it that the program will be more focused, making only medium-sized grants.
There are two computer science departments on the University of Chicago campus. The one I belong to, a department in the physical sciences division of the University and the other, the Toyota Technological Institute at Chicago (TTI-C). What is TTI-C?
The Toyota Technological Institute, a university covering various engineering disciplines located in Nagoya City, Japan, was founded in 1981 from funds from the Toyota Motor Corporation as directed by the Toyoda family. They decided to start a computer science department and locate it in the states to have a broader access to computer science faculty and students. For various reasons they settled on Chicago and set up an agreement with the University of Chicago, using space in the University of Chicago Press building. TTI-C has just officially started up and have already signed up a few strong faculty members including theorist Adam Kalai and Fields medalist Stephen Smale. TTI-C plans to increase its faculty size and start up a graduate program in the near future.
Although there will be some sharing of courses and a few of our faculty (including myself) sit on a Local Academic Advisory Council for TTI-C, TTI-C will formally maintain itself as a separate institution from the University. Nevertheless, close collaborations between our department and TTI-C have already established an exciting research environment for our combined faculty and students.
This is for my friends in Boston who suggested I do a sports post.
One of the great parts of my job is working with people from around the world. I was working with a graduate student, Luis Antunes, from Portugal when we found out that Portugal would play the US in the 2002 World Cup. We had various rounds of taunting back and forth with me fully knowing the US didn't stand a chance in that match. When the US did win, Luis tells me the whole country went into a deep depression. By contrast, for the most part people in the US didn't care.
I can now understand Portugal's pain as the city of Chicago has gone into a similar kind of quiet depression over the Cubs' failure to advance to the World Series. Impressive what sports can do to the psyche of a city or a country.
Memo to my friends in Boston: Hope things go well for the Sox so your city doesn't end up feeling tomorrow like Chicago does today.
I have tried to keep politics out of this weblog with the exception of issues related to science, in particular science funding and immigration. To celebrate America's fiscal new year, let's talk about immigration.
Congress has declined to renew the higher annual cap on H1-B visas, rolling them back to 65,000 for the fiscal year starting today from 195,000 in 2000. H1-B's allow "employers to hire foreign workers with special skills they can't find among American job applicants," typically for high-tech jobs. But H1-B's are also used for visiting researchers at industrial research labs and some university positions. When the limit is reached, the government will no longer issue more visas until the start of the next fiscal year.
At NEC, we had postdocs who had to delay their start date until October for this reason, including in some cases those who wanted to start at the beginning of summer. With the limit dramatically decreased, if the job market starts perking up, we could hit the limit much earlier. This could make a real dent in international cooperation in science.
What is a one-way function? Intuitively, a function that is easy to compute and hard to invert. Taking this intuitive idea to a formal definition has yielded two quite different meanings, sometimes causing confusion.
The first directly tries to translate the intuition. A function f is one-way if
f is 1-1 (so an inverse is unique),
f is length increasing (so the output of the inverse function is not too large),
f is computable in polynomial time, and
there is no polynomial-time computable g such that for all x, g(f(x))=x.
This is a nice clean definition that fulfills the intuition but is not that useful for cryptography, since f could be easily invertible on all but a small number of inputs, or with stronger adversaries. To handle these issues we have a different looking definition.
A function f is r(n)-secure one-way if
There is a function l(n)≥n such that f maps strings of length n to strings of length l(n),
f is computable in polynomial time, and
for all probabilistic polynomial-time algorithms A, the probability that f(A(f(x)))=f(x) is at most r(n) where the probability is taken over x chosen uniformly from the strings of length n and the random coins used by A.
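The third condition is just an experiment one can write down. Here is a hedged Python sketch estimating an adversary's success probability (f and adversary are placeholders for whatever candidate function and attack one wants to test):

import random

def success_rate(f, adversary, n, trials=10000):
    # Estimate Pr[f(A(f(x))) = f(x)] over uniform n-bit x and A's coins.
    wins = 0
    for _ in range(trials):
        x = random.getrandbits(n)     # uniform n-bit input
        y = f(x)
        if f(adversary(y)) == y:      # A need only find *some* preimage
            wins += 1
    return wins / trials

The function f is r(n)-secure exactly when no probabilistic polynomial-time adversary can push this success probability above r(n).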
There are many variations on both definitions and a considerable amount of theory devoted to each. Grollmann and Selman show that one-way functions of the first kind exist if and only if P ≠ UP. On the other hand Håstad, Impagliazzo, Levin and Luby show that from any one-way function of the second kind, one can create a pseudorandom generator.
At one point I tried using complexity-theoretic one-way functions and cryptographic one-way functions to distinguish the two, but this only caused confusion. So we have to live with the fact that we have these two definitions with the same name and we'll have to just use context to figure out which definition is appropriate. If you give a talk or write a paper about one-way functions, it never hurts to distinguish which version you are talking about.
Last week I posed the following question:
(1) Exhibit an NP-complete language L, such that for all lengths n≥1, L contains exactly half (2^(n-1)) of the strings of length n.
This question was posed by Ryan O'Donnell and solved by Boaz Barak. Here is a proof sketch.
By using standard padding and encoding tools, (1) is equivalent to
(2) There is an NP-complete language L and a polynomial-time computable function f such that for every n, there are exactly f(n) strings in L of length n.
First we show how to achieve (2) if we replace "strings" with "total witnesses." Consider a pair of formulas φ and ¬φ. The total number of satisfying assignments between them is 2^n if they have n variables. We just create an encoding that puts φ and ¬φ at the same length. The total number of witnesses at that length is equal to 2^n times the number of formula pairs encoded at that length.
We now prove (2) by creating a language L that encodes the following at the same length for each φ:
φ, where φ is satisfiable.
(φ,w) where w is a satisfying assignment for φ and there is another satisfying assignment u<w for φ.
You can check that the language is NP-complete and the total number of strings in L for each φ is just the number of satisfying assignments of φ.
A colleague of mine, who shall remain nameless, likes to schedule time for research, a certain set block of time during the day where he puts off all his todo's and concentrates on science. Sounds good, but often his chair will stop by for some discussion or an impromptu meeting. The colleague will say, "Sorry, but I reserved this time for research", but that argument didn't fly: the chair said he could do research anytime. One day he said instead, "Sorry, I have a squash game" and the chair replied that they would talk at a future time. Welcome to the academic world, where research gets trumped by a meeting that itself can be trumped by a squash game.
Is scheduling time for research a good idea? It depends on your personality and your research style. If you find yourself with no time to think about an interesting problem because too much else is happening then yes, best to schedule a few hours where you promise yourself you will do nothing else but research during those times. This means more than not preparing for class but also ignoring your computer. Checking email and surfing the web are themselves great time sinks.
In my case, I find it difficult to just start thinking about research at a given time. So I use the rule that research trumps all and when inspiration hits me, or someone comes to my office with a research question, I drop everything I can to work on the problem. Okay, I can't skip a class for research but email, weblog posts, referee reports, etc., should never stand in the way of science.
The call for papers for the 2004 ACM Conference on Electronic Commerce is now available. I'm posting this note as my duty as a program committee member to spread the word of the conference.
Why would an electronic commerce conference want me, a complexity theorist, as a PC member? Electronic commerce has many surprising connections to computational complexity. Consider complex auction situations where different buyers wish to purchase different items, with varying needs for combinations of these items. One needs to design such auctions so that the decisions made by the buyers, as well as the determination of the winner, are computationally efficient. This is in addition to the usual needs of auctions: to be revenue generating, robust against players trying to cheat the system, and fair in various other senses.
In a more philosophical sense, what is a large financial market but some sort of massive parallel computation device that takes pieces of information and produces prices for securities. How can we model and analyze this process? Computational complexity should play a major role in understanding this model of computing and allow us to develop more efficient financial markets.
I have a gap in my knowledge of work in theory done between 1979 (the publication of Hopcroft and Ullman) and 1985 (when I started graduate school). So every now and then I see a new result from this time that I should have known years ago. Here is an example from the Winter 1982 SIGACT News, a variation of the regular language pumping lemma due to Donald Stanat and Stephen Weiss.
Theorem: If L is regular then there is a positive integer n such that for every string x of length at least n, there are strings u, v and w with v nonempty such that x=uvw and for all strings r and t and integers k≥0, rut is in L if and only if ruv^k t is in L.
What surprises me about this result is that w does not appear in the conclusion and that the initial r could put the finite automaton in any state before it gets to u. Here is a sketch of the proof.
Let s be the number of states of a finite automaton accepting L. Let y_i be the first i bits of x. For any initial state a, y_i will map it to some state b. So one can consider y_i as a function mapping states to states. There are at most s^s such functions, so if |x|≥s^s there are i and j, i<j, such that y_i and y_j represent the same function. We let u=x_1...x_i and v=x_(i+1)...x_j. The rest follows like the usual pumping lemma.
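The proof translates directly into a procedure for finding u and v. A Python sketch, with the DFA's transition table given as delta[state][symbol] over states 0,...,s-1 (the representation is my own):

def find_uv(delta, s, x):
    # cur is the state-to-state function induced by the current prefix of x.
    cur = tuple(range(s))            # the empty prefix acts as the identity
    seen = {cur: 0}
    for i, c in enumerate(x, start=1):
        cur = tuple(delta[q][c] for q in cur)
        if cur in seen:              # two prefixes induce the same function
            j = seen[cur]
            return x[:j], x[j:i]     # u, and the nonempty pumpable v
        seen[cur] = i
    return None                      # cannot happen once len(x) >= s**s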
Using a result of Jaffe, Stanat and Weiss show that this condition is not only necessary but also sufficient to characterize the regular languages.
Cleaning out my office I came across some old SIGACT News that Bill Gear had given me when he cleaned out his office after his retirement. The Winter 1982 edition is quite interesting. I was a freshman in college that year, well before I was part of the theory community.
There are some interesting technical articles that I will get to in future posts. But the first two pages were letters to the editor that are chilling reminders of the cold war during that time.
On page two was the following short note from Witold Lipski, Jr. and Antoni Mazurkiewicz from the Polish Academy of Sciences.
We are very sorry to inform you that due to the situation in Poland we do not see any chance to organize our 1982 Conference on Mathematical Foundations of Computer Science.
MFCS started in 1972 as an annual conference rotating between Poland and Czechoslovakia, and now between Poland, Slovakia and the Czech Republic. There were no conferences in 1982 or 1983 and the conference did not return to Poland until 1989.
Talking about the Czechs, there was a much longer letter on page one from James Thatcher of IBM. Here are some excerpts.
On a recent trip to Europe, I visited Prague and had the pleasure of talking with Dr. Ivan M. Havel who is a friend and colleague of many years. Ivan Havel received his Ph.D. in CS from Berkeley in 1971. He joined the Institute for Applied Computer Technology in Prague in 1972 and then in 1974 became a member of the Czechoslovakian Academy of Sciences, in the Institute of Information Theory and Automation.
Ivan's brother, Vaclav Havel, an internationally known playwright, was imprisoned in 1979 for four and a half years for his activities in connection with the Charter 77 movement.
In 1980, possibly related to his refusal to denounce his brother, Ivan Havel was removed from his position in the Academy of Sciences and was unemployed for several months. Last May, he and Vaclav's wife were arrested and charged with "subversion" for allegedly "collecting and distributing written material oriented against the socialist state and social establishment, with hostile intentions." After four days detention, they were released.
He is employed as a programmer-analyst by META, a home-worker program for the handicapped.
Ivan Havel remained a programmer until after the Velvet Revolution in 1989. After some political work in 1990, he became a docent (associate professor) at Charles University and director of the Center for Theoretical Study where he remains today.
His brother Vaclav went on to become president of the Czech Republic.
When I move back to Chicago, I will go back to my old email address . I got to thinking about how my career can be described by my email addresses.
As an undergrad at Cornell, I spent several years working for computer services writing an email system in assembly language for the IBM 370. The system was scrapped shortly after I left for grad school at Berkeley. After a year at Berkeley, I followed my advisor, Michael Sipser, to MIT.
I had email addresses at Cornell and Berkeley but I have long since forgotten them. At MIT I wanted the userid "lance", but the name was taken by Lance Glasser, then an MIT professor. So my email became .
When I graduated and went to Chicago, I decided to stick with the userid "fortnow" for an email of . This bucked the trend at the time of having first names for email at Chicago so I had to have aliased to . When the university started system wide email I got though also works.
When I did a sabbatical in Amsterdam my email became or simply . When I moved to the NEC Research Institute my email became aliased to and when the NEC Research Institute became NEC Laboratories America I got my current email .
In addition to this, the ACM has created permanent email addresses, permanent as long as you are an ACM member and I did create an address though I never did give it out (until now). My brother and I now own the domain fortnow.com and I have what I do call my permanent address, . I also am the default receiver for fortnow.com mail, which means that addresses like , or even will all go to me.
All of the email addresses in this post still work and forward to me. But I will stick to using two main email addresses, for work related email and for non-work emails.
I used javascript to generate the emails in this post to avoid adding even more to my heavy spam load. We'll see if it works or whether I start getting spam sent to .
A few months ago I had a post describing information markets, a system of buying and selling securities that pay off if a given future event happens. Based on the price of a security, one can get an estimate of the probability that that event will occur. Studies have shown that information markets are better predictors than polls or experts.
Information markets have taken a blow in the past few days. The US Department of Defense has cancelled a program that would have set up limited futures markets on securities based on terrorist activities. They bowed to pressure from senators who consider it morally wrong to bet on future terrorist attacks. I understand their concerns but computer scientists and economists have produced what could have been a powerful tool in controlling terrorism and it is quite a shame to see it discarded so easily.
David Pennock sent me some links on a more positive point of view from CNN, Fortune and Wired and a fun CNN piece on the Tradesports Poindexter future.
Update (8/1): A well-written New York Times column A Good Idea with Bad Press and a nicely argued opinion piece by David Pennock.
Way back when I was a graduate student, I moved from Berkeley to MIT. I put what few belongings I had into boxes and shipped them via UPS. My brother flew out and we drove across the country together. Those were the days.
Now making the move back to Chicago is not nearly so simple. We have houses to sell and buy. Getting our kids ready for a new school. Real estate agents, lawyers, mortgage and insurance people to deal with. Meanwhile there is academic work that needs to get done before the real move. Conference and grant deadlines don't move to accommodate my move.
So this weblog might get a little spotty until I get settled into Chicago, sometime in mid-September. I'll try to find some time for some posts during that time but don't expect too much. If you are having complexity weblog withdrawal check out the archives. Nice thing about complexity--old stuff doesn't (usually) get stale.
Another rump session talk by Scott Aaronson showed that BQP/qpoly is contained in EXP/poly. In other words, everything efficiently quantumly computable with a polynomial amount of arbitrarily entangled quantum advice can be simulated in exponential time with a polynomial amount of classical advice.
Let me try to put this in context while avoiding quantum mechanics. Advice is a method for encoding a different program for each input length. We define the class P/poly as those languages computable in polynomial time with access to a polynomially-long advice string a_n, where the string a_n depends only on the length of the input. P/poly is equivalent to those problems having nonuniform polynomial-size circuits.
Quantum advice is a bit more tricky, since it can be in a superposition of regular advice strings. Formally, quantum advice is an exponentially long vector of numbers β_a, where β_a is the amplitude of advice string a. For simplicity let us assume those numbers are real and we'll also have the restriction that the sum of the squares of the amplitudes is one.
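To get a feel for the counting, here is a toy numpy illustration (mine, not from Aaronson's talk) of how many parameters a quantum advice state carries compared with a classical advice string:

```python
import numpy as np

m = 10                              # advice qubits
beta = np.random.randn(2**m)        # one real amplitude per classical string
beta /= np.linalg.norm(beta)        # squares of amplitudes sum to one
print(beta.shape)                   # (1024,): an m-qubit advice state takes
                                    # 2^m real parameters, versus the m bits
                                    # of a classical advice string
print(np.isclose((beta**2).sum(), 1.0))   # True
```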
You can see there are far more ways to give quantum advice than classical advice. But the quantum machines are limited in how they can use the advice. Harry Buhrman asked whether one can give any limit at all to what one can do with quantum advice. Scott Aaronson gives an answer: No better than classical advice as long as you are allowed (classical) exponential time.
Ideally one would like to show that efficient quantum algorithms with quantum advice can be simulated by efficient quantum algorithms with classical advice. Still, Aaronson's result shows that even with fully entangled advice one cannot get all the information out of it.
During the rump session of complexity, Nikolai Vereshchagin presented a combinatorial theorem that he proved using Kolmogorov complexity. Let A be a finite subset of N×N where N is the set of natural numbers. Let m be the size of A, r be the number of nonempty rows of A and c the number of nonempty columns.
We say A is good if every nonempty row has m/r elements and every nonempty column has m/c elements of A. A rectangle has this property, as does a diagonal. We say A is k-good if every nonempty row has at most km/r elements and every nonempty column has at most km/c elements. A is good if it is 1-good.
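The definition is easy to operationalize. Here is a tiny checker (my own sketch) for k-goodness:

```python
from collections import Counter

def is_k_good(A, k):
    """Check k-goodness of a finite set A of (x, y) pairs: every nonempty
    row has at most k*m/r points and every nonempty column at most k*m/c,
    where m = |A|, r = number of nonempty rows, c = nonempty columns."""
    m = len(A)
    rows = Counter(x for x, _ in A)
    cols = Counter(y for _, y in A)
    r, c = len(rows), len(cols)
    return (all(cnt <= k * m / r for cnt in rows.values()) and
            all(cnt <= k * m / c for cnt in cols.values()))

square = {(x, y) for x in range(4) for y in range(4)}
diagonal = {(i, i) for i in range(5)}
print(is_k_good(square, 1), is_k_good(diagonal, 1))  # True True
```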
Vereshchagin's Theorem: There is a constant c such that for all finite subsets B of N×N with n = log |B| there is a partition of B into at most n^c sets, each of which is n^c-good.
Vereshchagin asks whether there is a purely combinatorial proof of this theorem. If you know of one let me know.
For those who know some Kolmogorov complexity, let me sketch the proof: We label each point (x,y) of B with the following five values: K^B(x,y), K^B(x), K^B(y), K^B(x|y) and K^B(y|x). We partition the points into sets with the same labels. Standard counting arguments from Kolmogorov complexity show that each part is n^c-good for some c.
A doctor, lawyer and mathematician were discussing whether it was better to have a wife or a girlfriend. The doctor said it was better to have a wife because it is medically safer to have a single partner. The lawyer said it was better to have a girlfriend to avoid the legal hassles of marriage. The mathematician said it was better to have both.
"Both?" said the doctor and the lawyer. "Yes," said the mathematician, "That way the wife thinks I'm with the girlfriend, the girlfriend thinks I'm with the wife and I can do some math."
I was reminded of that joke by the recent New York Times article Pure Math, Pure Joy and the accompanying slideshow. Those pictures look all too familiar.
The greatest lovers of math though are not the famous mathematicians at places like Berkeley and Harvard. Rather the mathematicians who take low-paying jobs with high teaching loads at less-strong colleges or move from visiting position to visiting position just to have some occasional time to do math. They have a dedication (or perhaps an addiction) I can never fully appreciate.
Expander graphs, informally, are graphs in which, for any subset S of the vertices that is not too large, the set of vertices connected to S contains a large number of vertices outside of S. There are many constructions and applications for expander graphs, leading to entire courses on the subject.
The adjacency matrix A of a graph G on n vertices is an n×n matrix such that a_{i,j} is 1 if there is an edge between vertices i and j and 0 otherwise. Noga Alon noticed that a graph that has a large gap between the first and second eigenvalue of the adjacency matrix will be a good expander.
We can use ε-biased sets to get expanders. Let S be an ε-biased set for F^m, where F is the field of 2 elements. Consider the graph G consisting of 2^m vertices labelled with the elements of F^m, with an edge from x to y if y = x+s or x = y+s for some s in S. This kind of graph G is known as a Cayley graph.
By looking at the eigenvalues of the adjacency matrix A of G we can show G is an expander. The eigenvectors are just the vectors corresponding to the functions g in L described earlier. For any vertex a we have
(Ag)(a) = Σ_{s in S} g(a+s) = g(a) Σ_{s in S} g(s), since g(a+s) = g(a)g(s). Let g(S) = Σ_{s in S} g(s). We now have that Ag = g(S)·g, so g is an eigenvector with eigenvalue g(S). If g is the constant one function then g(S) = |S|. Since S is an ε-biased set, |g(S)| ≤ ε|S| for every other g, so every other eigenvalue is much smaller in absolute value than the largest eigenvalue and G must be an expander.
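Alon's eigenvalue connection is easy to play with numerically. Here is a small numpy sketch (an illustration of mine, not from the original post) that builds the Cayley graph of F_2^m with generator set S and reads off the spectrum; the nontrivial eigenvalues are exactly the character sums g(S):

```python
import numpy as np

def cayley_spectrum(m, S):
    """Eigenvalues (by absolute value, descending) of the Cayley graph on
    F_2^m with generator set S; vertices are bitmasks, x ~ x xor s."""
    n = 2**m
    A = np.zeros((n, n))
    for x in range(n):
        for s in S:
            A[x, x ^ s] += 1     # addition over F_2 is XOR of bitmasks
    return np.sort(np.abs(np.linalg.eigvalsh(A)))[::-1]

# S = all nonzero vectors of F_2^3, a (1/7)-biased set: this gives K_8
S = list(range(1, 8))
eigs = cayley_spectrum(3, S)
print(eigs[0], eigs[1])   # about 7.0 and 1.0: a huge spectral gap,
                          # so this Cayley graph is an excellent expander
```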
The June 2003 SIGACT News is out. Aduri Pavan wrote this month's Complexity Theory Column on "Comparison of Reductions and Completeness Notions".
As I have mentioned before in this weblog, I heartily encourage joining SIGACT, the ACM Special Interest Group on Algorithms and Computation Theory. You get the SIGACT News, discounts on conferences and as I discovered last night from home, you apparently get online access to the STOC proceedings. Not to mention supporting the theory community. All this for the low price of $18 ($9 for students).
What about the ACM itself? I have been an ACM member since graduate school since I feel it is important to support the main computer science organization. But for the additional $96 ($42 for students) there are no real significant benefits over joining SIGACT alone.
ε-biased sets are an interesting concept that I have seen recently in a few papers, but they never seemed to have a clear description. At FCRC Eli Ben-Sasson gave me a good explanation and I will try to recreate it here.
Let F be the field of 2 elements, 0 and 1, with addition and multiplication done modulo 2. Fix a dimension m. Let L be the set of functions g mapping elements of F^m to {-1,1} with the property that g(x+y) = g(x)g(y). Here x+y represents addition done coordinate-wise modulo 2. One example of a g in L is g(x_1,x_2,x_3) = (-1)^{x_1} (-1)^{x_3}.
There is the trivial function g in L that always maps to 1. For every non-trivial g in L, exactly half of the elements in F^m map to 1 and the others to -1. If one picks a reasonably large subset S of F^m at random then with high probability, g will map about half the elements of S to 1 and the rest to -1. In other words, the expected value of g(x) for x uniformly chosen in S is, in absolute value, smaller than some small value ε. If this is true we say S is ε-biased for g.
An ε-biased set is a set S such that for all nontrivial g in L, S is ε-biased for g. Formally this means that
|Σ_{x in S} g(x)| ≤ ε|S|. Not only do reasonable size ε-biased sets exist but they can be found efficiently. Naor and Naor found the first efficiently constructible ε-biased sets of size polynomial in m and 1/ε.
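For small m one can verify the bias of a set by brute force, enumerating all nontrivial characters. A minimal sketch (mine, for illustration only):

```python
from itertools import product

def bias(S, m):
    """max over nonzero a in F_2^m of |sum_{x in S} (-1)^(a.x)| / |S|."""
    worst = 0.0
    for a in product([0, 1], repeat=m):
        if not any(a):
            continue                      # skip the trivial character
        total = sum((-1) ** (sum(ai * xi for ai, xi in zip(a, x)) % 2)
                    for x in S)
        worst = max(worst, abs(total) / len(S))
    return worst

# the nonzero vectors of F_2^3 form a (1/7)-biased set
S = [x for x in product([0, 1], repeat=3) if any(x)]
print(bias(S, 3))   # 0.14285... = 1/7
```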
One can extend the notion of ε-biased sets to fields F of p elements for arbitrary prime p. L would now be the set of functions g mapping elements of F^m to the complex p-th roots of unity, e^{2πij/p} for 0 ≤ j ≤ p-1, again with the property that g(x+y) = g(x)g(y). Various constructions have created generalized ε-biased sets of size polynomial in m, 1/ε and log p.
For applications let me quote from the recent STOC paper by Ben-Sasson, Sudan, Vadhan and Wigderson that used ε-biased sets to get efficient low-degree tests and smaller probabilistically checkable proofs. You can get more information and references from that paper.
Since the introduction of explicit ε-biased sets, the set and diversity of applications of these objects grew quickly, establishing their fundamental role in theoretical computer science. The settings where ε-biased sets are used include: the direct derandomization of algorithms such as fast verification of matrix multiplication and communication protocols for equality; the construction of almost k-wise independent random variables, which in turn have many applications; inapproximability results for quadratic equation over GF(2); learning theory; explicit constructions of Ramsey graphs; and elementary constructions of Cayley expanders.
After the FCRC meetings I attended were concluded, I headed up to UCSD for the celebration of Walter Savitch for his sixtieth birthday and upcoming retirement. He gained his fame in complexity for Savitch's Theorem that shows "P=NP" for space.
I learned quite a bit at the meeting. Walt Savitch was Steve Cook's first student, his only student while Cook was at Berkeley in his pre-Toronto pre-"SAT is NP-complete" days. Also as Cook said, Savitch is the only student he has had with a theorem named after him. That theorem made up a good part of Savitch's Ph.D. thesis. At the celebration Cook gave an overview on propositional proof systems.
After coming to UCSD, Savitch did some work on computational linguistics and one of the leaders of the field, Aravind Joshi, gave a talk on combining trees to keep the structure when parsing sentences.
Savitch is probably best known now in computer science for his textbooks in introductory programming that likely many of you have used.
Congrats Walt on a fine career and here's hoping retirement doesn't slow you down.
As promised I added links to the papers in the post on the STOC business meeting. Let me say some more words on the winner of the Gödel prize.
Valiant developed the concept of PAC (Probably Approximately Correct) learning: roughly, a learner sees a small number of labelled examples from a distribution and, with high confidence, generates a hypothesis that with high probability will correctly label instances drawn from the same distribution.
A strong learner has confidence close to 100%; a weak learner has confidence only slightly better than 50%. Schapire, using a technique called boosting, showed how to convert a weak learner to a strong learner. This is a wonderful theoretical result but the algorithm had problems that made it difficult to implement.
In their Gödel prize winning paper, A decision-theoretic generalization of on-line learning and an application to boosting, Freund and Schapire develop the AdaBoost algorithm, which solves many of these issues and has become a staple of the theoretical and practical machine learning community.
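To give a flavor of what the algorithm does, here is a minimal AdaBoost sketch over one-dimensional data with threshold stumps as the weak learner. This is my own toy illustration, not the Freund-Schapire code; the data and the stump learner are made up for the example:

```python
import numpy as np

def stump_learn(X, y, w):
    """Best 1-D threshold stump under weights w (a toy weak learner)."""
    best = None
    for thr in np.unique(X):
        for sign in (+1, -1):
            preds = sign * np.where(X >= thr, 1, -1)
            err = np.dot(w, preds != y)       # weighted training error
            if best is None or err < best[0]:
                best = (err, thr, sign)
    _, thr, sign = best
    return lambda x: sign * (1 if x >= thr else -1)

def adaboost(X, y, rounds=30):
    """Minimal AdaBoost loop; labels y are in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                   # start with uniform weights
    hs, alphas = [], []
    for _ in range(rounds):
        h = stump_learn(X, y, w)
        preds = np.array([h(x) for x in X])
        err = np.dot(w, preds != y)
        if err >= 0.5:
            break                              # no weak edge left
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        w *= np.exp(-alpha * y * preds)        # boost the weight of mistakes
        w /= w.sum()
        hs.append(h)
        alphas.append(alpha)
    return lambda x: int(np.sign(sum(a * h(x) for a, h in zip(alphas, hs))))

X = np.arange(8)
y = np.array([-1, -1, -1, 1, 1, 1, -1, 1])    # not separable by one stump
H = adaboost(X, y)
print(sum(H(x) == t for x, t in zip(X, y)), "of 8 training points fit")
```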
Boosting has its own web site where you can find much more information about the algorithms and applications.
Alonzo Church was born a hundred years ago today in Washington, DC. Church is best known for the λ-calculus, a simple method for expressing and applying functions that has the same computational power as Turing machines.
With Rosser in 1936, he showed that λ-expressions that reduce to an irreducible normal form have a unique normal form. In that same year he showed the impossibility of deciding whether such a normal form exists.
Church's thesis, which he states as a definition: "An effectively calculable function of the positive integers is a λ-definable function of the positive integers."
Again in 1936, Kleene and Church showed that computing normal forms has the same power as the recursive functions and Turing machines. And thus the Church-Turing thesis was born: everything computable is computable by a Turing machine.
The λ-calculus also set the stage for many of the functional programming languages like Lisp and Scheme.
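As a small taste of the λ-calculus, here are Church numerals written as Python lambdas (a standard textbook illustration, using Python only because it has first-class functions):

```python
# Church numerals: n is the function that applies f to x exactly n times
zero  = lambda f: lambda x: x
succ  = lambda n: lambda f: lambda x: f(n(f)(x))
plus  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
times = lambda m: lambda n: lambda f: m(n(f))

def to_int(n):
    """Decode a Church numeral by counting applications of +1."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(plus(two)(three)), to_int(times(two)(three)))  # 5 6
```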
Alonzo Church passed away on August 11, 1995 in Ohio.
I have mixed feelings about the Federated Computing Research Conference. It is a good idea to get many different areas of computer science together. I do get to see many people I haven't seen in years who went into non-theoretical areas of CS.
On the other hand, 2200 participants made the place quite crowded and it seemed to take away from the informal atmosphere of most theory conferences. Since STOC and Electronic Commerce had nearly a complete overlap I jumped back and forth between talks, never really feeling fully part of either conference.
For the first time the Complexity conference was not part of FCRC because 2003 is a Europe year for Complexity. In an informal poll I took of STOC people interested in complexity most liked having both conferences at the same place but would rather that happen in isolation, like last year in Montreal, rather than as part of the much larger FCRC meeting.
In what seems to be a trend in CS conferences, wireless internet was made available at the conference site. As you walked around you would pass many people sitting on chairs and on the ground hunched over their laptops disconnected from the conference and connected into another world. Seemed a bit depressing but I too found the net hard to resist--it is always tempting to simply open my laptop and connect, checking email and posting to this weblog. | |||
Peter Shor (born August 14, 1959) is an American theoretical computer scientist most famous for his work on quantum computation, in particular for devising a quantum algorithm for factoring exponentially faster than the best currently-known algorithm running on a classical computer (see Shor's algorithm). He devised the algorithm in 1994 while working at AT&T Bell Laboratories. He was the recipient of the Nevanlinna Prize in 1998, a MacArthur Fellowship in 1999, and a Gödel prize in 1999, among other prizes. Currently, he is a professor of applied mathematics at MIT, and he is affiliated with CSAIL.
He received his B.S. in Mathematics in 1981 for undergraduate work at Caltech, and was a Putnam Fellow in 1978. He then earned his Ph.D. in Applied Mathematics from MIT in 1985. His doctoral advisor was Tom Leighton, and his thesis was on probabilistic analysis of bin-packing algorithms. After graduating, he spent one year in a post-doctoral position at Berkeley, and then accepted a position at Bell Laboratories. Shor began his current MIT position in 2003. | ||||||
By and large, math journals treat the authors like a pesky annoyance, sort of the way a local electric company treats its customers. As in: yes, serving you is our business, but if you don't like our customer service where else are you going to go? Not all editors operate that way, absolutely not all referees, but so many do that it's an accepted norm. We all know that and all play some role in the system. And we all can do better, because we deserve better.
In fact, many well meaning mathematicians do become journal editors, start new journals, and even join the AMS and other professional societies’ governing bodies which oversee the journals. This helps sometimes, but they quickly burn out or get disillusioned. At the end, this only makes second order improvements while the giant sclerotic system continues its descent from bad to worse.
Like everyone else, I took this as a given. I even made some excuses: evil publishers, the overwhelming growth of submissions, everyone stressed and overworked, papers becoming more technical and harder to referee, etc., etc. For decades I watched many math journals turn from friendly if not particularly warm communal endeavors, to zones of hostility.
Only most recently, it occurred to me that it doesn’t have to be this way. We should have better journals, and we deserve a better treatment (I was really off the mark in my first line of this post). Demanding better journals is neither a fantasy nor a manifesto. In fact, physicists have already figured it all out. This post is largely about how they do it, with some lessons and suggestions.
What we have
If you don’t know what I am talking about, walk to any mathematician you see at a conference. If you have a choice, choose the one who looks bored, staring intensely at their shoes. Ask them for their most frustrating journal publishing story. You may as well sit down — the answer might take awhile. Even if they don’t know you (or maybe especially if they don’t know you), they will just unload a litany of the most horrifying stories that would make you question the sanity of people staying in this profession.
Then ask them why do they persevere and keep submitting and resubmitting their papers given that the arXiv is a perfectly fine way to disseminate their work. You won’t hear a coherent answer, but rather the usual fruit salad of practical matters: something about jobs, CVs, graduate students, grants, Deans, promotions, etc. Nobody will ever mention that their goal is to increase their readership, verify the arguments, improve their presentation style, etc., ostensibly the purpose of mathematical journals.
While my personal experience is a relatively happy one, I do have some scars to show and some stories to tell (see this, that and a bit in that blog posts on publishing struggles). There is no need to rehash them. I also know numerous stories of many people because I have asked them these questions. In fact, every time I publish something like this blog post (about the journals’ hall of shame), I get a host of new horror stories by email, with an understanding that I am not allowed to share them.
The adversarial relationship and countless bad experiences make it is easy to lose sight of the big picture. In many ways we are privileged in mathematics to have relatively few bad and for-profit actors. Money and grant funding matters less. We don’t have extreme urgency to publish. We have some relatively objective ways to evaluate papers (by checking the proofs). One really can work on the Moon, as long as one has a laptop and unlimited internet (and breathable air, I suppose).
We have it good, or at least we did when we started sliding into abyss. Because the alarms are not ringing, the innovation in response has stuttered. We are all just chugging along. Indeed, other than a few new online journals, relatively little has changed in the past two decades.
This is in sharp contrast with physics, which had very few of the advantages that math has (depending on the area). Besieged on all sides, physics community was forced to adapt faster and arguably better in response to changes in the publishing landscape. In fact, the innovations they made are so natural to them, their eyes open wide in disbelief when they hear how we continue to publish math papers.
The following is a story of the Physical Review E (PRE), one of the journals of the American Physical Society (APS). I will start with what I learned about the PRE and APS inner working, their culture, successes and challenges, some of which ring very familiar. Only afterwards I will get back to math publishing, the AMS and how we squandered our advantages.
What’s special about PRE?
I chose to write about the PRE because I published my own paper there and enjoyed the experience. To learn more about the journal, I spoke to a number of people affiliated with PRE in different capacities, from the management to members of the Editorial Board, to frequent authors and reviewers. These interviews were rather extensive and the differences with the math publishing culture are much too vast to summarize in a single blog post. I will only highlight things I personally found remarkable, and a few smaller things that can be easily emulated by math journals.
PRE’s place in the physics journal universe
PRE is one of five similarly named "area journals": PRA, PRB, etc. More generally, it is one of 18 journals of the APS. Other journals include Physical Review Letters (PRL is APS's flagship journal, which publishes only very short papers), Physical Review X (PRX is another APS leading journal, online only, gold open access, publishes longer articles, extremely selective), Reviews of Modern Physics (APS's highest cited journal, which publishes only survey articles), and a number of more specialized journals.
The APS is roughly similar to the AMS in its prominence and reach in the US. APS's main publishing competition includes the Institute of Physics (IOP, a UK physics society with 85 titles, roughly similar to the LMS), Nature Portfolio (a division of Springer Nature with 156 titles, only a few of them in physics), and to a lesser extent Science by AAAS, various Elsevier and SIAM journals, and some MDPI titles.
Journal structure
The PRE editorial structure is rather complicated. Most of the editorial work is done by an assortment of Associate Editors, some of whom are employed full time by the APS (all of them physics PhD’s), and some are faculty in physics or adjacent fields from around the world, typically full time employed at research universities. Such Associate Editors receive a 2 year renewable contract and sometimes work with the APS for many years. Both professional and part time editors do a lot of work handling papers, rejecting some papers outright, inviting referees, etc.
The leadership of PRE is currently in flux, but until recently included a Managing Editor, a full-time APS employee responsible for running the journal (such as overseeing the work of associate editors), and a university-based Lead Editor overseeing the research direction. The APS is currently reviewing applications for a newly created position of Chief Editor, who will presumably replace the Managing Editor and is supposed to oversee the work of the Lead Editor and the rest of the editorial team (see this ad).
There is also an “Editorial Board”, whose name might be confusing to math readers. This is really a board of appeals (more on this later), where people serve a 3 year term without pay, giving occasional advice to associate editors and lending their credibility to the journal. Serving on the Editorial Board is both a service to the community and minor honor.
Submissions
The APS is aware of the role the arXiv plays in the community as the main dissemination venue, with journals as an afterthought. So it encourages submissions consisting of arXiv numbers and subject areas. Note that this makes it different from Nature and Science titles, which forbid arXiv or other online postings, both for copyright reasons and so as not to spoil future headline-worthy press releases.
The submissions to all APS journals are required to be in a house two-column style with a tiny font. There are sharp word count limits for the "letters" (short communications) and the "articles". These are rather annoying to calculate (how do you count formulas? tables?), and the journals' online software leaves much to be desired.
Desk rejections
At PRE, about 15-20% of all papers are rejected within days after the initial screening by managing or associate editors, who then assign the remaining papers according to research areas. Some associate editors are reluctant to do this at all, and favor at least one report supplemented by initial judgement. This percentage is a little lower than at the (more selective) PRL where it is reported to be 20-25%. Note that all APS journals pay special attention to the style, so it’s important to make an effort to avoid being rejected by a non-expert just because of that.
Curiously, before 2004, the percentage was even lower at PRL, but the APS did some rather interesting research on the issue. It concluded that such papers consume a lot of resources and rarely survive the review process (see this report). Of course, this percentage is relatively low by math standards — several math journals I know have about 30-50% desk rejections, with another 30-40% after a few quick opinions. On the other hand, at Science, over 83% of papers get rejected without an external review.
Review process
Almost all the work is handled by associate editors closest to the area. The APS made a major overhaul of its classification of physics areas in 2016, to bring it to the modern age (from the old one, which resembles the AMS MSC). Note aside: I have been an advocate for an overhaul of the MSC for a while, which I called a "historical anachronism" in this long MO answer (itself written about 14 years ago). At the very least the MSC should upgrade its tree structure (with weird horizontal "see also…" links) to a more appropriate poset structure.
Now, associate editors start with desk rejections. If the paper looks publishable, they send it to referees with the goal of obtaining two reports. The papers tend to be much shorter and more readable by the general scientific audience compared with the average math paper, and good style is emphasized as a goal. The reviewers are given only three weeks to write the report, but that time can be extended upon request (by a few more weeks, not months).
Typically, editors aim to finish the first round in three months, so the paper can be published in under six months. Only few papers lag beyond six months at which point, the editors told me, they get genuinely embarrassed. The reason is often an extreme difficulty in finding referees. Asking 4-8 potential referees is normal, but on rare occasions the numbers can be as high as 10-20.
Acceptance rate
In total, PRE receives about 3,500-4,000 submissions a year, of which about 55-60% get accepted, an astonishingly high percentage when compared to even second-tier math journals. The number of submissions has been slowly decreasing in recent years, perhaps reflecting many new publication venues. Some editors/authors mentioned MDPI as a new evil force (I called MDPI parasitic rather than predatory in this blog post).
For comparison, PRL is an even bigger operation which handles over twice as many papers. I estimate that PRL accepts roughly 20-25% of submissions, probably the lowest rate of all APS journals. In a more extreme behavior, Nature accepts about 8% submissions to publish about 800 papers, while Science accepts about 6% submissions to publish about 640 papers per year.
It is worth putting the numbers of published papers in perspective by comparing them with other journals. PRE and PRL publish about 1,800 and 2,100 papers per year, respectively. Other APS journals publish even more: PRD publishes about 4,000, and PRB close to 5,000 papers a year.
For math journals true acceptance ratios are hard to find and these numbers tend to be meaningless anyway due to self-selection and high cost of waiting for rejection. But numbers of published papers are easily available: Jour. AMS publishes about 25, Mathematika about 50, Proc. LMS about 60, Forum Math. Sigma in the range of 60-120, Bull. LMS in the range of 100-150, Trans. AMS about 250, Adv. Math. about 350, IMRN in the range of 300-500, and Proc. AMS about 450 papers per year. These are boutique numbers compared to the APS editorial machine. In the opposite extreme, MDPI Mathematics recently achieved the output of about 5,000 papers a year (I am sure they are very proud).
Publication
When a paper is accepted at PRE, it is sent to production which APS outsources. There are two quick rounds of approval of LaTeX versions compiled in the house style and proofread by a professional. It then gets published online with a unique identifier, usually within 2-3 weeks from the date of acceptance. Old fashioned volumes and numbers do exist, but of no consequence as they are functions of the publication date. There is zero backlog.
Strictly speaking, there is still a print version of the PRE. I was told it is delivered to about 30 libraries worldwide that apparently are unconcerned with deforestation and willing to pay the premium. In truth, nobody really wants to read these paper versions. The volumes are so thick and heavy, it is hard to even lift them up from a library shelf. Not to dwell on this too much, but some graduate students I know are unaware even of which building houses our math library at UCLA. It's hard to blame them, especially after COVID…
Appeals
When a paper is rejected, the authors have the right to appeal the decision. The paper is sent to a member of the Editorial Board closest to the area. The editor reads both the paper and the referee reports, then writes their own report, which they sign and send to the authors. More often than not the decision is confirmed, but reversals do happen.
Since what’s “important” is ultimately subjective, appeals serve an important check on Associate Editors and helps keep peace in the community. Numerically, only about 3-5% of rejected papers are sent for an appeal, about 2-3 papers per Editorial Board member each year.
Embarrassingly for the whole field, I cannot think of a single math journal with an appeals process (except, interestingly, for MDPI Mathematics, which famously has the selectivity of a waste bucket). Even Nature has an appeals process, and nobody ever thinks of them as too friendly.
Note: some math journals do allow resubmissions of previously rejected papers. These papers tend to be major revisions of previous versions and typically go to the same editor, defeating the point of the appeal.
Editorial system
The APS has its own online editorial system which handles the submissions, and has an unprecedented level of transparency compared to that of math journals I am familiar with. The authors can see a complete log of dates of communications with (anonymized) referees, the actions of editors, etc. In math, the best you can get is “under review” which brings cold comfort.
The editors work as a team, jointly handling all incoming email and submission/resubmission traffic. Routine tasks like forwarding a revision to the first-round referees are handled by the first person available, but the editorial decisions (accept/reject, choices of referees) are made by the assigned Associate Editor. If an Associate Editor has a week-long backlog or is expecting some inactivity, their queue is immediately redistributed among the other editors.
Relations between APS journals
Many PRE papers first arrive at PRL, where they are quickly rejected. The editorial system allows editors from one journal to see all actions and reports in all other APS journals. If the rejected PRL paper fits the scope of PRE and there are reports suggesting PRE might be suitable, PRE editors try to invite such papers. This speeds up the process and simplifies life for everyone involved.
For longer papers, PRE editors also browse rejections from PRX, etc. From time to time, business oriented managers at the APS raise a possibility of creating a lower tier journal where they would publish many papers rejected from PRA–PRE (translation: “why shouldn’t APS get some of MDPI money?”), but the approach to maintain standards keep winning for now. From what I hear, this might change soon enough…
Note: In principle, several editorial systems by Elsevier and the like do allow transferring papers between math journals. In practice, I haven't ever seen this feature used (I could be wrong). Additionally, there are often firewalls which preclude editors in one journal from seeing reports in the other, making the feature useless.
Survey articles
The APS publishes Reviews of Modern Physics, which is fully dedicated to survey articles. Associate Editors are given a budget to solicit such articles and incentivize the authors by paying them about $1,500 for completion within a year, but only $750 if the project took longer. The articles vary in length and scope, from about 15 to about 70 pages (when converted from the APS to the bulky AMS style, these page numbers would more than double). There are also independent submissions, which very rarely get accepted as the journal aims to maintain its reputation and relevance. Among all APS publications, this journal is the best cited by a wide margin.
We note that there are very few math journals dedicated to surveys, despite a substantial need for expository work. Besides Proc. ICM and Séminaire Bourbaki series which are by invitation only, we single out the Bull. AMS, EMS Surveys and Russian Math Surveys (in Russian, but translated by IOP). Despite Rota’s claim “You are more likely to be remembered by your expository work“, publishing surveys remains difficult unless you opt for a special issue or a conference proceedings. In the last two years I wrote two rather long surveys — on combinatorial interpretations and on linear extensions. Word of advice: if you want to have an easy academic life I don’t recommend doing that — they just eat up your time.
At PRE, there are no surveys, but the editors occasionally solicit "perspectives". These are forward-looking articles suggesting important questions and directions (more like public NSF grant applications than surveys). They publish about five such articles a year, hoping to bring the number up to about ten in the future.
Profiled articles
In 2014, following the approach of popular magazines, PRE started making "Editors' Suggestions". These are a small number of articles the editors chose to highlight, both formally and on the website. They are viewed as a minor research award that can be listed on CVs by the authors.
Outstanding referee award
The APS instituted this award in 2008, to encourage quick and thorough refereeing. This is a lifetime award and comes with a diploma-sized plaque which can be hung on the wall. More importantly, it can be submitted to your Department Chair and your friendly Dean as a community validation of your otherwise anonymous efforts.
Each year, there are a total of about 150 awardees selected across all APS journals (out of tens of thousands of referees), of which about 10 are from PRE. This selection is taken very seriously. The nominations are done by Associate Editors and then discussed at the editorial meetings. For further details, see this 2009 article about the award by the former Editor-in-Chief of Physical Reviews, which ends with
We feel that the award program has been most successful, and we will be continuing it at APS. [Gene D. Sprouse, Recognizing referees at the American Physical Society]
Note that such distinguished referee awards are not limited to the APS or even physics. It's a simple idea which occurred to journals across "practical" disciplines: accounting, finance, economic geography, economics, public management, regional science, etc., but also e.g. in atmospheric chemistry and philosophy. Why wouldn't a single math journal have such an award?? Count me flabbergasted.
Community relations
As we mentioned above, in much of physics, the arXiv is a preferred publication venue since the field tends to develop at a rapid pace, so strictly speaking the journal publications are not necessary. In some areas, a publication in Nature or Science is key to success, especially for a junior researcher, so the authors are often willing to endure various associated indignities (including no arXiv postings) and, if successful, pay for the privilege. However, in many theoretical and non-headline-worthy areas, these journals are not an option, which is where PRL, PRE and other APS journals come in.
In a way, PRE operates as a digital local newspaper which provides service to the community in the friendliest way possible. It validates the significance of papers as needed for job-related purposes, helps the authors improve their style, does not bite newcomers, and does not second-guess their experimental findings (there are other venues which do that). It provides a quick turnaround and rarely rejects even moderately good papers.
When I asked both the editors and the authors how they feel about PRE, I heard a lot of warmth, the type of feeling I have not heard from anyone towards math journals. There is a feeling of community when the editors tell me that they often publish their own papers at PRE, when the authors want to become editors, etc. In contrast, I heard a lot of vitriol towards Nature and Science, and an outright disdain towards MDPI physics journals.
It could be that my sample size was too small and heavily biased. Indeed, when I polled the authors of MDPI Mathematics (a flagship MDPI journal), most authors expressed a high level of satisfaction with the journal and said they would consider submitting there again. One of my heroes, Ravi P. Agarwal, who I profiled in this blog post, published an astounding 37 papers in that journal, which clearly found its target audience (so much that it stopped spamming people, or maybe it's just me).
Note aside: Personally, the only journal I actually cared about was the storied JCTA, where my senior colleague Bruce Rothschild was the Editor in Chief for 25 years, and where I would publish my best combinatorics papers. In 2020, the editorial board resigned en masse and formed Combin. Theory. I am afraid my feelings have not transferred to CT, nor have they stayed with JCTA, which continues to publish. They just evaporated.
Money matters
Despite a small army of professional editors, the APS journals provide a healthy albeit slowly decreasing revenue stream (about $43 mil. in 2022, combined from all journals; see the 2022 tax disclosures on the ProPublica website). The journals are turning a profit for the APS (spent on managers and various APS activities) despite all the expenses. They are spending more and making more money than the AMS (compare with their 2022 tax disclosures on ProPublica). There is much more to say here, but this post is already super long and the fun part is only starting.
Back to math journals
In the 20th century world with its print publishing, having local peer-reviewed print journals made sense. A university or a group of universities would join forces with a local publisher and start the presses. That's where local faculty would publish their own papers, that's where they would publish conference proceedings, etc. How else do you explain Duke Mathematical Journal, Israel Journal of Mathematics, Moscow Mathematical Journal, Pacific Journal of Mathematics, and Siberian Journal of Mathematics? I made a lot of fun of the geographical titles in this blog post, and I maintain that they sound completely outdated (I published in all five of these, naturally).
Now, in the 21st century, do we really need math journals? This may sound like a ridiculous question, with two standard replies:
We need peer review, i.e. some entity must provide a certificate that someone anonymous read the paper and takes responsibility for its validity (sounds weak, doesn't it?).
We need formal validation, i.e. we need to have something to write on our CVs. Different journals have different levels of prestige associated with them leading to distinctions in research recognition (and thus jobs, promotions, grants, etc.)
Fair enough, but are you sure that the journals as we have them are the best vehicles for either of these goals? Does anyone really believe that random online journals do a serious peer review? Where is this idea coming from, that journals with their obvious biases should be conferring importance on papers?
How are we supposed to use journals to evaluate candidates, if these journals have uncertain rankings, and in fact the relative rankings of two journals can vary depending on the area? Shouldn't we separate the peer review aspect, which makes multiple submissions costly and unethical, from the evaluation aspect, which desperately needs competition between the journals?
Again, this all sounds ridiculous if you don’t step back and look objectively at our publishing mess where a math paper can languish in journals for over a year, after which it is returned without a single referee report just because someone decided that at the end the paper is not good enough to be refereed. This happened to me multiple times, and to so many other people I lost count (in one instance, this happened after 3 years of waiting!)
Publishing utopia
Now, I know a lot of people whose dream publishing universe is a lot of small, not-for-profit, run-by-mathematicians online publications. It's great to be rid of Elsevier and their ilk, but it would not solve the issues above. In fact, this would bring a lot of anarchy and further loss of standards.
From my perspective, in a perfect world, "the people" (or at least the AMS) would create one mega journal, to which arXiv papers could be forwarded by the authors if they wish. Hundreds of editors (some full time, some part time), divided into arXiv subject areas, would make the initial screening, and keep say 30-40% of the papers to be sent for review. Based on my reading of the arXiv stats, that gives about 10-15K papers a year to be refereed, a number way below what the APS handles. The mega journal would only check validity and "publish" based only on correctness.
Publication at the mega journal would already be a distinction albeit a minor one. To ensure some competition, we would probably need to break this mega journal into several (say, 3-5) independently run baby megas, so the authors have a choice where to submit. In the utopia I am imagining, the level of rigor would be the same across all baby megas. It would also be a way to handle MDPI journals which would be left with a reject pile.
This wouldn’t take anything away from the top journals (think Annals) who would not want to outsource their peer review. In fact, I heard of major Annals papers studied by six (!) independent teams of referees, that’s above and beyond. But I also heard of Annals papers which seem to had no technical check at all (like this one by this guy), so the quality is maybe inconsistent.
So what about distinctions? The remnants of the existing general journals would be free from peer review. They would place bids on the best papers that attract them, "modulo publication in the mega journal", with some clearly set deadlines. The authors would accept the best bid, like in graduate admissions, and the paper would be linked to the journal website in the "arXiv overlay" style.
Alternatively, some specialized or non-exclusive journals would make their own selections for best papers in their areas, which could be viewed as awards. One paper could get multiple such awards, and the "best journal where the paper could be accepted" optimization issue would disappear completely. This would make a better, fairer world. At the very least, such awards would remove the pressure to publish in the top journals if you have a strong result.
Even better, one can imagine a competitive conference system in the style of CS theory conferences (but also in some areas of Discrete Math) emerging in this scenario. The conference submission could require a prior arXiv posting and later keep track of "verified" papers (accepted to the mega journal). When disentangled from peer review, these conferences could lead to more progress on emerging tools and ideas, and would level the playing field for researchers from small and underfunded universities across the world.
Note that there are already some awards for math papers given by third parties, but only a handful. Notably, AIM has this unusual award. More recently, a new Frontiers of Science Award was introduced for “best recent papers” (nice cash prize for a paper already published in the Annals and the like). Of course, most CS theory conferences have been giving them for decades (the papers later get published by the journals).
Would it work? Wouldn’t the mega journal be just another utility company with terrible service? Well, I don’t know and we will probably never get to find out. That’s why I called it a utopia, not a serious proposal. But it can hardly get any worse. I think pure math and CS theory are unique in requiring true correctness. When correctness is disentangled from evaluating novelty and importance, the point of the mega journal would be to help the authors get their proofs right and the papers accepted. Until then, journal editors (and referees to a smaller degree) have a conflict of interest — helping the authors might mean hurting the journal and vice versa. Guess who usually gets hurt at the end?
Back to reality
Obviously, I have no hopes that the “mega journal” would ever come to life. But NOT because it’s technically impossible or financially unsound. In other fields, communities manage somehow. The APS is a workable approximation of that egalitarian idea. Recently, eLife made another major experiment in publishing — we’ll see how that works out.
But in a professional society such as the AMS, where new leadership handpicks two candidates for future leadership in a stale election? With a declining membership? Which claims the Fellow of the AMS award as its biggest achievement? Oh, please! Really, the best we can hope for is a large "lower tier" journal with a high acceptance ratio. Why would the AMS want that? I am glad you asked:
Case for higher acceptance rates at AMS journals
One argument why so few papers get published in good (think top 100) math journals is that math papers can be much longer than typical physics papers, so they take more print space and take longer to referee. However, this argument does not translate well into the digital age. Nor does it apply to Bull. LMS or Proc. AMS, of course, which publish mostly short papers. We mention in passing that while greater length is sometimes unavoidable, mathematicians tend to forget that brevity is a feature, not a bug.
Of course, math editors’ main argument in favor of low acceptance ratios is that this allows one to maintain high quality of papers. While true on its face, when applied uniformly this approach has major negative implications to the community.
Think of college acceptance rates. It’s true that Harvard maintains its prestige by having a ridiculously low acceptance ratio, and being private it’s hard to blame it (not that I am fan of the choices they make either, but this post is about something else). But should major public universities like UCLA do the same? What about community colleges? You see what I mean.
There is an obvious public good in the AMS maintaining a large, free, friendly but thorough publication venue for papers that don't meet the Trans. AMS threshold. This might not be the "mega journal" utopia, but it would be a major step forward. If SIAM, EMS, LMS and other major math societies set up something similar, we would actually be in a good place, as the middle-tier small journals would start changing their publishing model in response.
Short list of minor suggestions
As you can probably tell by now, in my opinion most math publishers are behind the curve in innovation and community relations. Let me summarize some basic ideas based on the discussion above that seem more approachable:
Stop wasting paper and fully move to electronic publishing.
Do not limit numbers of papers or pages. Rather, aim for as many good papers as you can.
Improve your electronic editorial system to make it more transparent.
Help editors work as a team, and incentivize them financially. Pay for 20% employment to experts across the world to help you run the journal.
Set up new math journals fully dedicated to survey articles, both solicited and contributed.
Create an appeals procedure and add a new type of senior editors who would take the job seriously.
Institute a number of awards: for best long, short and survey articles in your journal, and for best referees. Make an effort to be fair by taking input from all editors.
Journal studies
If you have read up to this point, you are probably wondering why most of these simple ideas haven't been widely discussed. Clearly, somebody is asleep at the wheel. Or, perhaps, doesn't want to rock the boat (I am mixing my metaphors here, sorry). In the case of for-profit publishers like Springer and Elsevier, I can see why — they know all this stuff from their journals in other areas, but are very busy counting the money.
But the AMS Council can sure use a “Chair of journal innovation” whose job would be to conduct journal studies (like the many APS studies I mentioned above), or at least read other publishers’ studies. An amateur like me shouldn’t be able to tell you anything new that you couldn’t learn by googling. Perhaps, start by subscribing to an excellent newsletter Journalology fully dedicated to these ideas.
Acknowledgements.
I am extremely grateful to editors Dirk Jan Bukman, Alexander Kusenko, Valerio Lucarini, Mason Porter and Uwe Täuber, for kindly agreeing to be interviewed on the subject and for being so generous with their time. I am also thankful to several frequent APS contributors who wished to remain anonymous. If I misstated or misunderstood anything, the fault is all mine, obviously.
P.S. Mark Wilson kindly invited me to write a column for the AMS Notices on the issue of publishing. This prompted me to spend many hours thinking about the subject and talking to many physicists. At the end, I submitted a very short and non-polemical version of this blog post. If it ever gets accepted and published I will link it here.
Unity here, unity there, unity shmunity is everywhere. You just can't avoid hearing about it. Every day, no matter the subject, somebody is going to call for it. Be it in Ukraine or Canada, Taiwan or Haiti, everyone is calling for unity. President Biden in his Inaugural Address called for it eight times by my count. So did former President Bush on every recent societal issue: here, there, everywhere. So did Obama and Reagan. I am sure just about every major US politician has made the same call at some point. And why not? Like "world peace", unity is assumed to be a universal good, or at least an inspirational if quickly forgettable goal.
Take the Beijing Olympic Games, which proudly claims that their motto “demonstrates unity and a collective effort” towards “the goal of pursuing world unity, peace and progress”. Come again? While The New York Times isn’t buying the whole “world unity” thing and calls the games “divisive” it still thinks that “Opening Ceremony [is] in Search of Unity.” Vox is also going there, claiming that the ceremony “emphasized peace, world unity, and the people around the world who have battled the pandemic.” So it sounds to me that despite all the politics, both Vox and the Times think that this mythical unity is something valuable, right? Well, ok, good to know…
Closer to home, you see the same themes said about the International Congress of Mathematicians to be held in St. Petersburg later this year. Here is Arkady Dvorkovich, co-chair of the Executive Organizing Committee and former Deputy Prime Minister of Russia: “It seems to us that Russia will be able to truly unite mathematicians from all over the world“. Huh? Are you sure? Unite in what exactly? Because even many Russian mathematicians are not on board with having the ICM in St. Petersburg. And among those from “all over the world”, quite a few are very openly boycotting the congress, so much that even the IMU started to worry. Doesn’t “unity” mean “for all”, as in ∀?
Unity of mathematics
Frequent readers of this blog can probably guess where I stand on the “unity”. Even in my own area of Combinatorics, I couldn’t find much of it at all. I openly mocked “the feeling of unity of mathematics” argument in favor of some conjectures. I tried but could never understand Noga Alon’s claim that “mathematics should be considered as one unit” other than a political statement by a former PC Chair of the 2006 ICM.
So, about this "unity of mathematics". Like, really? All of mathematics? Quick, tell me what exactly Stochastic PDEs, Algebraic Number Theory, Enumerative Combinatorics and Biostatistics have in common. Anything come to mind? Anything at all? Ugh. Let's make another experiment. Say, I tell you that only two of these four areas have Fields medals. Can you guess which ones? Oh, you can? Really, it was that easy?? Doesn't this cut against all of this alleged "unity"?
Anyway, let’s be serious. Mathematics is not a unit. It’s not even a “patterned tapestry” of connected threads. It’s a human endeavor. It’s an assorted collection of scientific pursuits unconstrained by physical experiments. Some of them are deep, some shallow, some are connected to others, and some are motivated by real world applications. You check the MSC 2020 classification, and there is everything under the sun, 224 pages in total. It’s preposterous to look for and expect to find some unity there. There is none to be found.
Let me put it differently. Take poetry. Like math, it's an artistic endeavor. Poems are written by the people and for the people. To enjoy. To recall when in need or when in a mood. Like math papers. Now, can anyone keep a straight face and say "unity of poetry"? Of course not. If anything, it's the opposite. In poetry, having a distinctive voice is celebrated. Diverse styles are lauded. New forms are created. Strong emotions are evoked. That's the point. Why would math be any different then?
What exactly unites us?
Mathematicians, I mean. Not much, I suppose, contrary to math politicians' claims:
I like to think that increasing breadth in research will help the mathematical sciences to recognize our essential unity. (Margaret Wright, SIAM President, 1996)
Huh? Isn’t this like saying that space exploration will help foster cross-cultural understanding? Sounds reasonable until you actually think about what is being said…
Even the style of doing research is completely different. Some prove theorems, some make heavy computer computations, some make physical experiments, etc. In the end, some write papers and put them on the arXiv, some write long books full of words (e.g. mathematical historians), some submit to competitive conferences (e.g. in theoretical computer science), some upload software packages and experimental evidence to some data repository. It's all different. Don't be alarmed, this is normal.
In truth, very little unites us. Some mathematicians work at large state universities, others at small private liberal arts colleges with a completely different approach to teaching. Some have a great commitment to math education, some spend every waking hour doing research, while others enjoy frequent fishing trips thanks to tenure. Some go into university administration or math politics, while others become journal editors.
In truth, only two things unite us — giant math societies like the AMS and giant conferences like ICMs and joint AMS/MAA/SIAM meetings. Let's treat them separately, but before we go there, let's take a detour just to see what an honest unrestricted public discourse sounds like:
What to do about the Olympics
The answer depends on who you ask, obviously. And opinions abound. I personally don't care, other than the unfortunate fact that the 2028 Olympics will be hosted on my campus. But we in math should learn how to be critical, so here is a range of voices that I googled. Do with them as you please.
Some are sort of in favor:
I still believe the Olympics contribute a net benefit to humanity. (Beth Daley, The Conversation, Feb. 2018)
Some are positive if a little ambivalent:
The Games aren’t dead. Not by a longshot. But it’s worth noting that the reason they are alive has strikingly little to do with games, athletes or medals. (L. Jon Wertheim, Time, June 2021)
Some like The New York Times are highly critical, calling it “absurdity”. Some are blunt:
More and more, the international spectacle has become synonymous with overspending, corruption, and autocratic regimes. (Yasmeen Serhan, The Atlantic, Aug. 2021)
yet unwilling to make the leap and call it quits. Others are less shy:
You can’t reform the Olympics. The Olympics are showing us what they are, and what they’ve always been. (Gia Lappe and Jonny Coleman, Jacobin, July 2021)
and
Boil down all the sanctimonious drivel about how edifying the games are, and you’re left with the unavoidable truth: The Olympics wreck lives. (Natalie Shure, The New Republic, July 2021)
What is the ICM
Well, it’s a giant collective effort. A very old tradition. Medals are distributed. Lots of talks. Speakers are told that it’s an honor to be chosen. Universities issue press releases. Yes, like this one. Rich countries set up and give away travel grants. Poor countries scramble to pay for participants. The host country gets dubious PR benefits. A week after it’s over everyone forgets it ever happened. Life goes on.
I went to just one ICM, in Rio in 2018. It was an honor to be invited. But the experience was decidedly mixed. The speakers were terrific mathematicians, all of them. Many were good speakers. A few were dreadful in both content and style. Some figured they were giving talks in their research seminar rather than to a general audience, so I left a couple of such talks in the middle. Many talks in parallel sections were not even recorded. What a shame!
The crowds were stupefying. I saw a lot of faces. Some were friendly, of people I hadn’t seen in years, sometimes 20 years. Some were people I knew only by name. It was nice to say hello, to shake their hand. But there were thousands more. Literally. An ocean of people. I was drowning. This was the worst place for an introvert.
While there, I asked a lot of people how they liked the ICM. Some were electrified by the experience and had a decent enough time. Some looked like fish out of water — when asked they just stared at me uncomprehendingly, silently saying "What are you, an idiot?" Some told me they just went to the opening ceremony and left for the beach for the rest of the ICM. Assaf Naor said that he loved everything. I was so amused by that, I asked if I could quote him. "Yes," he said, "you can quote me: I loved absolutely every bit of the ICM". Here we go — not everyone is an introvert.
Whatever happened at the ICM
Unlike the Olympics, math people tend to be shy in their ICM criticism. In his somewhat unfortunately titled but otherwise useful historical book “Mathematicians of the World, Unite!” the author, Guillermo Curbera, largely stays exuberant about the subject. He does mention some critical stories, like this one:
Charlotte Angas Scott reported bluntly on the presentation of papers in the congress, which in her opinion were “usually shockingly bad” since “instead of speaking to the audience, [the lecturer] reads his paper to himself in a monotone that is sometimes hurried, sometimes hesitating, and frequently bored . . . so that he is often tedious and incomprehensible.” (Paris 1900 Chapter, p. 24)
Curbera does mention in passing that there were some controversies: Grothendieck refused to attend the ICM in Moscow in 1966 for political reasons, and Margulis and Novikov were not allowed by the Soviet Union to leave the country to receive their Fields medals. Well, nobody's perfect, right?
Most reports I found on the web are highly positive. Read, for example, Gil Kalai's blog posts on the ICM 2018. Everything was great, right? Even Doron Zeilberger, not known for holding his tongue, is mostly positive (about the ICM Beijing in 2002). He does suggest that the invited speakers "should go to a 'training camp'" for some sort of teacher training re-education, apparently not seeing the irony, or simply being under the impression of all those great things in Beijing.
The only (highly controversial) criticism that I found was from Ulf Persson who starts with:
The congresses are by now considered to be monstrous affairs very different from the original intimate gatherings where group pictures could be taken.
He then continues to talk about various personal inconveniences, his misimpressions about the ICM setting, the culture, the city, etc., all in a somewhat insensitive and rather disparaging manner. Apparently, this criticism and these misimpressions earned a major smackdown from Marcelo Viana, the ICM 2018 Organizing Committee Chair, who wrote that this was a "piece of bigotry" by somebody who is "poorly informed". Fair enough. I agree with that and with the EMS President Volker Mehrmann, who wrote in the same EMS newsletter that the article was "very counterproductive". Sure. But an oversized 4-page reaction to an opinion article in a math newsletter from another continent seems indicative that the big boss hates criticism. Because we need all that "unity", right?
Anyway, don’t hold your breath to see anything critical about the ICM St. Petersburg later this year. Clearly, everything is going to be just fantastic, nothing controversial about it. Right…
What to do about the ICM
Stop having them in the current form. It’s the 21st century, and we are starting the third year of the pandemic. All talks can be moved online so that everyone can watch them either as they happen, or later on YouTube. Let me note that I’ve sat in the bleachers of these makeshift 1000+ people convention center auditoriums where the LaTeX formulas are barely visible. This is what the view is like:
Note that the ICM is not like a sports event — there is literally nothing at stake. Also, there are usually no questions afterwards anyway. You are all better off watching the talks later on your laptop, perhaps even on a x1.5 speed. To get the idea, imagine watching this talk in a huge room full of people…. Even better, we can also spread out these online lectures across the time zones so that people from different countries can participate. Something like this World Relay in Combinatorics.
Really, all that CO2 burned to get humans halfway across the world to sit in a crowded space is not doing anyone any good. If the Nobel Prizes can be awarded remotely, so can the Fields medals. Tourism value aside, the amount of meaningful person-to-person interaction is so minimal in a large crowd that I am struggling to find a single good reason at all to have these extravaganzas in person.
What to do about the AMS
I am not a member of any math societies, so it's not my place to tell them what to do. As a frequent contributor to AMS journals and a former editor of one of them, I did call on the AMS to separate its society business from its publishing, but given that their business model hinges on the books and journals they sell, this is unlikely. Still, let me make some quick observations which might be helpful.
The AMS is clearly getting less and less popular. I couldn't find the exact membership numbers, but their "dues and outreach" earnings have been flat for a while. Things are clearly not going in the right direction, so much so that the current AMS President Ruth Charney sent out a survey earlier this week asking people like me why we do not want to join.
People seem to realize that they have many different views on all thing math related and are seeking associations which are a better fit. One notable example is the Just Mathematics Collective which has several notable boycott initiatives. Another is the Association for Mathematical Research formed following various controversies. Note that there is a great deal of disagreements between these two, see e.g. here, there and there.
I feel these are very good developments. It’s healthy to express disagreements on issues you consider important. And while I disagree with other things in the article below, I do agree with this basic premise:
Totalitarian countries have unity. Democratic republics have disagreement. (Kevin Williamson, Against Unity, National Review, Jan. 2021)
So everyone just chill. Enjoy diverse views and opinions. Disagree with the others. And think twice before you call for “unity” of anything, or praise the ephemeral “unity of mathematics”. There is none.
Conjectures are a staple of mathematics. They are everywhere, permeating every area, subarea and subsubarea. They are diverse enough to avoid a single general adjective. They come in all shapes and sizes. Some of them are famous, classical, general, important, inspirational, far-reaching, audacious, exciting or popular, while others are speculative, narrow, technical, imprecise, far-fetched, misleading or recreational. That's a lot of beliefs about unproven claims, yet we persist in dispensing them, inadvertently revealing our experience, intuition and biases.
The conjectures also vary in attitude. Like a finish line ribbon, they all appear equally vulnerable to an outsider, but in fact differ widely from race to race. Some are eminently reachable, the only question being who will get there first (think 100 meter dash). Others are barely on the horizon, requiring great effort, a variety of tools, and an extended time commitment (think ironman triathlon). The most celebrated third type are like those Sci-Fi space expeditions requiring multigenerational commitments spanning hundreds of years, often losing contact with the civilization they left behind. And we can't forget the romantic fourth type — like the North Star, no one actually wants to reach them, as they are largely used for navigation, to find a direction in uncharted waters.
Now, conjectures famously provide a foundation of the scientific method, but that's not at all how we actually think of them in mathematics. I argued back in this pointed blog post that citations are the most crucial thing for day-to-day math development, so one should take utmost care in making references. While this claim is largely uncontroversial and serves as a raison d'être for most GoogleScholar profiles, conjectures provide a convenient idealistic way out. Thus, it's much more noble and virtuous to say "I dedicated my life to the study of the XYZ Conjecture" (even if one never publishes anything) than "I am working hard writing so many papers to gain the respect of my peers, get a promotion, and provide for my family". Right. Obviously…
But given this apparent (true or perceived) importance of conjectures, are you sure you are using them right? What if some/many of these conjectures are actually wrong, what then? Should you be flying that starship if there is no there there? An idealist would argue something like "it's a journey, not a destination", but I strongly disagree. Getting closer to the truth is actually kind of important, both as a matter of public policy and on an individual level. It is thus pretty important to know where we are going.
What are conjectures in mathematics?
That's a stupid question, right? Conjectures are mathematical claims whose validity we are trying to ascertain. Is that all? Well, yes, if you don't care whether anyone will actually work on the conjecture. In other words, something about the conjecture needs to be interesting and inspiring.
What makes a conjecture interesting?
This is a hard question to answer because it is as much psychological as it is mathematical. A typical answer would be “oh, because it’s old/famous/beautiful/etc.” Uhm, ok, but let’s try to be a little more formal.
One typically argues “oh, that’s because this conjecture would imply [a list of interesting claims and known results]”. Well, ok, but this is self-referential. We already know all those “known results”, so no need to prove them again. And these “claims” are simply other conjectures, so this is really an argument of the type “this conjecture would imply that conjecture”, so not universally convincing. One can argue: “look, this conjecture has so many interesting consequences”. But this is both subjective and unintuitive. Shouldn’t having so many interesting conjectural consequences suggest that perhaps the conjecture is too strong and likely false? And if the conjecture is likely to be false, shouldn’t this make it uninteresting?
Also, wouldn't it be interesting if you disproved a conjecture everyone believes to be true? In some sense, wouldn't it be even more interesting if until now everyone was simply wrong?
None of these are new ideas, of course. For example, faced with the need to justify the "great" BC conjecture, or rather a 123-page survey on the subject (which is quite interesting and doesn't really need to be justified), the authors suddenly turned reflective. Mindful of the self-referential approach, which they quickly discard, they chose a different tactic:
We believe that the interest of a conjecture lies in the feeling of unity of mathematics that it entails. [M.P. Gomez Aparicio, P. Julg and A. Valette, “The Baum-Connes conjecture“, 2019]
Huh? Shouldn’t math be about absolute truths, not feelings? Also, in my previous blog post, I mentioned Noga Alon‘s quote that Mathematics is already “one unit“. If it is, why does it need a new “feeling of unity“? Or is that like one of those new age ideas which stop being true if you don’t reinforce them at every occasion?
If you are confused at this point, welcome to the club! There is no objective way to argue what makes certain conjectures interesting. It's all in our imagination. Nikolay Konstantinov once told me that "mathematics is a boring subject because every statement is equivalent to saying that some set is empty." He meant to be provocative rather than uninspiring. But the problem he is underscoring is quite serious.
What makes us believe a conjecture is true?
We already established that in order to argue that a conjecture is interesting we need to argue it's also true, or at least that we want to believe it to be true so as to have all those consequences. Note, however, that we argue that a conjecture is true in exactly the same way we argue it's interesting: by showing that it holds in some special cases, and that it would imply other conjectures which are believed to be true because they are also checked in various special cases. So in essence, this gives "true = interesting" in most cases. Right?
This is where it gets complicated. Say, you are working on the “abc conjecture” which may or may not be open. You claim that it has many consequences, which makes it both likely true and interesting. One of them is the negative solution to the Erdős–Ulam problem about existence of a dense set in the plane with rational pairwise distances. But a positive solution to the E-U problem implies the Harborth’s conjecture (aka the “integral Fáry problem“) that every graph can be drawn in the plane with rational edge lengths. So, counterintuitively, if you follow the logic above shouldn’t you be working on a positive solution to Erdős–Ulam since it would both imply one conjecture and give a counterexample to another? For the record, I wouldn’t do that, just making a polemical point.
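To keep the twisted logic straight, here is the implication chain as I read the paragraph above, in my own shorthand (EU stands for a positive solution to the Erdős–Ulam problem, i.e. a dense plane set with rational pairwise distances exists; this is only a schematic, not a precise statement of any of the three conjectures):

\[ abc \;\Longrightarrow\; \neg\, EU, \qquad EU \;\Longrightarrow\; \text{Harborth}. \]

So exhibiting a dense rational-distance set would refute abc and prove Harborth's conjecture in one stroke, which is exactly the tension I am pointing at.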
I am really hoping you see where I am going. Since there is no objective way to tell if a conjecture is true or not, and what exactly is so interesting about it, shouldn’t we discard our biases and also work towards disproving the conjecture just as hard as trying to prove it?
What do people say?
It’s worth starting with a general (if slightly poetic) modern description:
In mathematics, [..] great conjectures [are] sharply formulated statements that are most likely true but for which no conclusive proof has yet been found. These conjectures have deep roots and wide ramifications. The search for their solution guides a large part of mathematics. Eternal fame awaits those who conquer them first. Remarkably, mathematics has elevated the formulation of a conjecture into high art. [..] A well-chosen but unproven statement can make its author world-famous, sometimes even more so than the person providing the ultimate proof. [Robbert Dijkgraaf, The Subtle Art of the Mathematical Conjecture, 2019]
Karl Popper thought that conjectures are foundational to science, even if somewhat idealized the efforts to disprove them:
[Great scientists] are men of bold ideas, but highly critical of their own ideas: they try to find whether their ideas are right by trying first to find whether they are not perhaps wrong. They work with bold conjectures and severe attempts at refuting their own conjectures. [Karl Popper, Heroic Science, 1974]
Here is how he reconciled somewhat the apparent contradiction:
On the pre-scientific level we hate the very idea that we may be mistaken. So we cling dogmatically to our conjectures, as long as possible. On the scientific level, we systematically search for our mistakes. [Karl Popper, quoted by Bryan Magee, 1971]
Paul Erdős was, of course, a champion of conjectures and open problems. He joked that the purpose of life is “proof and conjecture” and this theme is repeatedly echoed when people write about him. It is hard to overestimate his output, which included hundreds of talks titled “My favorite problems“. He wrote over 180 papers with collections of conjectures and open problems (nicely assembled by Zbl. Math.)
Peter Sarnak has a somewhat opposite point of view, as he believes one should be extremely cautious about stating a conjecture so people don’t waste time working on it. He said once, only half-jokingly:
Since we reward people for making a right conjecture, maybe we should punish those who make a wrong conjecture. Say, cut off their fingers. [Peter Sarnak, UCLA, c. 2012]
This is not an exact quote — I am paraphrasing from memory. Needless to say, I disagree. I don’t know how many fingers he wished Erdős should lose, since some of his conjectures were definitely disproved: one, two, three, four, five, and six. This is not me gloating, the opposite in fact. When you are stating hundreds of conjectures in the span of almost 50 years, having only a handful to be disproved is an amazing batting average. It would, however, make me happy if Sarnak’s conjecture is disproved someday.
Finally, there is a bit of a controversy whether conjectures are worth as much as theorems. This is aptly summarized in this quote about yet another champion of conjectures:
Louis J. Mordell [in his book review] questioned Hardy‘s assessment that Ramanujan was a man whose native talent was equal to that of Euler or Jacobi. Mordell [..] claims that one should judge a mathematician by what he has actually done, by which Mordell seems to mean, the theorems he has proved. Mordell’s assessment seems quite wrong to me. I think that a felicitous but unproved conjecture may be of much more consequence for mathematics than the proof of many a respectable theorem. [Atle Selberg, “Reflections Around the Ramanujan Centenary“, 1988]
So, what’s the problem?
Well, the way I see it, the effort made towards proving vs. disproving conjectures is greatly out of balance. Despite all of Popper's high-minded claims about "severe attempts at refuting their own conjectures", I don't think there is much truth to that in the modern math sciences. This does not mean that disproofs of famous conjectures aren't celebrated. Sometimes they are, see below. But it's clear to me that the proofs are celebrated more frequently, and to a much greater degree. I have only anecdotal evidence to support my claim, but bear with me.
Take prizes. Famously, the Clay Math Institute gives $1 million for a solution of any of these major open problems. But look closely at the rules. According to item 5b, except for the P vs. NP problem and the Navier–Stokes Equation problem, it gives nothing ($0) for a disproof of these problems. Why, oh why?? Let's look into CMI's "primary objectives and purposes":
To recognize extraordinary achievements and advances in mathematical research.
So it sounds like CMI does not think that disproving the Riemann Hypothesis needs to be rewarded, because this wouldn't "advance mathematical research". Surely, you are joking? Whatever happened to "the opposite of a profound truth may well be another profound truth"? Why does the CMI want to put its thumb on the scale and support only one side? Do they not want to find out the solution, whatever it is? Shouldn't they be eager to dispense with the "wrong conjecture" so as to save numerous researchers from "advances to nowhere"?
I am sure you can see that my blood is boiling, but let’s proceed to the P vs. NP problem. What if it’s independent of ZFC? Clearly, CMI wouldn’t pay for proving that. Why not? It’s not like this kind of thing never happened before (see obligatory link to CH). Some people believe that (or at least they did in 2012), and some people like Scott Aaronson take this seriously enough. Wouldn’t this be a great result worthy of an award as much as the proof that P=NP, or at least a nonconstructive proof that P=NP?
If your head is not spinning hard enough, here is another amusing quote:
Of course, it’s possible that P vs. NP is unprovable, but that that fact itself will forever elude proof: indeed, maybe the question of the independence of P vs. NP is itself independent of set theory, and so on ad infinitum! But one can at least say that, if P vs. NP (or for that matter, the Riemann hypothesis, Goldbach’s conjecture, etc.) were proven independent of ZF, it would be an unprecedented development. [Scott Aaronson, P vs. NP, 2016].
Speaking of Goldbach's Conjecture, the most talked about and the most intuitively correct statement in Number Theory that I know. In a publicity stunt, for two years there was a $1 million prize by a publishing house for the proof of the conjecture. Why just for the proof? I never heard of anyone not believing the conjecture. If I were the insurance underwriter for the prize (I bet they had one), I would allow them to use "for the proof or disproof" for a mere extra $100 in premium. For another $50 I would let them use "or independent of ZF" — it's free money, so why not? It's such a pernicious idea to reward only one kind of research outcome!
Curiously, even for Goldbach’s Conjecture, there is a mild divergence of POVs on what the future holds. For example, Popper writes (twice in the same book!) that:
[On whether Goldbach’s Conjecture is ‘demonstrable’] We don’t know: perhaps we may never know, and perhaps we can never know. [Karl Popper, Conjectures and Refutations, 1963]
Ugh. Perhaps. I suppose anything can happen… For example, our civilizations can “perhaps” die out in the next 200 years. But is that likely? Shouldn’t the gloomy past be a warning, not a prediction of the future? The only thing more outrageously pessimistic is this theological gem of a quote:
Not even God knows the number of permutations of 1000 avoiding the 1324 pattern. [Doron Zeilberger, quoted here, 2005]
Thanks, Doron! What a way to encourage everyone! Since we know from numerical estimates that this number is ≈ 3.7 × 10^1017 (see this paper and this follow up), Zeilberger is suggesting that large pattern avoidance numbers are impossibly hard to compute precisely, already in the range of only about 1018 digits. I really hope he is proved wrong in his lifetime.
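For readers who want to see these numbers concretely, here is a minimal brute-force count of my own in plain Python (standard library only). It is only a toy sketch, feasible for very small n, which is precisely the point: exact enumeration at n = 1000 is hopeless, and that is the scale Zeilberger has in mind.

from itertools import permutations, combinations

def avoids_1324(p):
    # p avoids the pattern 1324 if no positions i < j < k < l carry values
    # in the relative order "smallest, third-smallest, second-smallest, largest"
    return not any(p[i] < p[k] < p[j] < p[l]
                   for i, j, k, l in combinations(range(len(p)), 4))

for n in range(1, 8):
    print(n, sum(avoids_1324(p) for p in permutations(range(n))))
    # 1, 2, 6, 23, 103, 513, 2762

Already around n = 10 this loop becomes painful, and even the cleverest known methods only push the exact counts a little further, nowhere near n = 1000.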
But I digress. What I mean to emphasize is that there are many ways a problem can be resolved. Yet some outcomes are considered more valuable than others. Shouldn't the research achievement be rewarded, not the desired outcome? Here is yet another colorful opinion on this:
Given a conjecture, the best thing is to prove it. The second best thing is to disprove it. The third best thing is to prove that it is not possible to disprove it, since it will tell you not to waste your time trying to disprove it. That’s what Gödel did for the Continuum Hypothesis. [Saharon Shelah, Rutgers Univ. Colloqium, 2001]
Why do I care?
For one thing, disproving conjectures is part of what I do. Sometimes people are a little shy to unambiguously state them as formal conjectures, so they phrase them as questions or open problems, but then clarify that they believe the answer is positive. This is a distinction without a difference, or at least I don’t see any (maybe they are afraid of Sarnak’s wrath?) Regardless, proving their beliefs wrong is still what I do.
For example, here is my old blog post on my disproof of the Noonan–Zeilberger Conjecture (joint with Scott Garrabrant). And in this recent paper (joint with Danny Nguyen), we disprove in one big swoosh Barvinok's Problem, Kannan's Problem, and Woods' Conjecture. Just this year I disproved three conjectures:
The Kirillov–Klyachko Conjecture (2004) that the reduced Kronecker coefficients satisfy the saturation property (this paper, joint with Greta Panova).
The Brandolini et al. Conjecture (2019) that concrete lattice polytopes can multitile the space (this paper, joint with Alexey Garber).
Kenyon's Problem (c. 2005) that every integral curve in R^3 is a boundary of a PL surface comprised of unit triangles (this paper, joint with Alexey Glazyrin).
On top of that, just two months ago in this paper (joint with Han Lyu), we showed that the remarkable independence heuristic by I. J. Good for the number of contingency tables fails badly even for nearly all uniform marginals. This is not exactly a disproof of a conjecture, but it's close, since the heuristic was introduced back in 1950 and continues to work well in practice.
In addition, I am currently working on disproving two more old conjectures, which will remain unnamed until the time we actually resolve them (which might never happen, of course). In summary, I am deeply invested in disproving conjectures. The reasons why are somewhat complicated (see some of them below). But whatever my reasons, I demand and naively fully expect that my disproofs be treated on par with proofs, regardless of whether this expectation bears any relation to reality.
My favorite disproofs and counterexamples:
There are many. Here are just a few, some famous and some not-so-famous, in historical order:
Fermat‘s conjecture (letter to Pascal, 1640) on primality of Fermat numbers, disproved by Euler (1747)
Tait’s conjecture (1884) on hamiltonicity of graphs of simple 3-polytopes, disproved by W.T. Tutte (1946)
General Burnside Problem (1902) on finiteness of periodic groups, resolved negatively by E.S. Golod (1964)
Keller’s conjecture (1930) on tilings with unit hypercubes, disproved by Jeff Lagarias and Peter Shor (1992)
Borsuk’s Conjecture (1932) on partitions of convex sets into parts of smaller diameter, disproved by Jeff Kahn and Gil Kalai (1993)
Hirsch Conjecture (1957) on the diameter of graphs of convex polytopes, disproved by Paco Santos (2010)
Woods’s conjecture (1972) on the covering radius of certain lattices, disproved by Oded Regev, Uri Shapira and Barak Weiss (2017)
Connes embedding problem (1976), resolved negatively by Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright and Henry Yuen (2020)
In all these cases, the disproofs and counterexamples didn’t stop the research. On the contrary, they gave a push to further (sometimes numerous) developments in the area.
Why should you disprove conjectures?
There are three reasons, of different nature and importance.
First, disproving conjectures is opportunistic. As mentioned above, people seem to try proving much harder than they try disproving. This creates niches of opportunity for an open-minded mathematician.
Second, disproving conjectures is beautiful. Let me explain. Conjectures tend to be rigid, as in “objects of the type pqr satisfy property abc.” People like me believe in the idea of “universality“. Some might call it “completeness” or even “Murphy’s law“, but the general principle is always the same. Namely: it is not sufficient that one wishes that all pqr satisfy abc to actually believe in the implication; rather, there has to be a strong reason why abc should hold. Barring that, pqr can possibly be almost anything, so in particular non-abc. While some would argue that non-abc objects are “ugly” or at least “not as nice” as abc, the idea of universality means that your objects can be of every color of the rainbow — nice color, ugly color, startling color, quiet color, etc. That kind of palette has its own sense of beauty, but it’s an acquired taste I suppose.
Third, disproving conjectures is constructive. It depends on the nature of the conjecture, of course, but one is often faced with the necessity of constructing a counterexample. Think of this as an engineering problem of building some pqr which at the same time is not abc. Such a construction, if at all possible, might be difficult, time consuming and computer assisted. But so what? What would you rather do: build a mile-high skyscraper (none exist yet) or prove that this is impossible? Curiously, in CS Theory both algorithms and (many) complexity results are constructive (you need gadgets). Even the GCT is partially constructive, although explaining that would take us a while.
What should the institutions do?
If you are an institution which awards prizes, stop with the legal nonsense: “We award […] only for a publication of a proof in a top journal”. You need to set up a scientific committee anyway, since otherwise it’s hard to tell sometimes if someone deserves a prize. With mathematicians you can expect anything anyway. Some would post two arXiv preprints, give a few lectures and then stop answering emails. Others would publish only in a journal where they are Editor-in-Chief. It’s stranger than fiction, really.
What you should do is say in the official rules: "We have [this much money] and an independent scientific committee which will award any progress on [this problem], partially or in full, as they see fit." Then a disproof or an independence result will receive just as much as a proof (what's done is done, what else are you going to do with the money?). This would also allow some flexibility for partial solutions. Say, somebody proves Goldbach's Conjecture for integers > exp(exp(10^100000)), way, way beyond computational powers for the remaining integers to be checked. I would give this person at least 50% of the prize money, leaving the rest for future developments by possibly many people improving on the bound. However, under the old prize rules such a person gets bupkes for their breakthrough.
What should the journals do?
In short, become more open to results of a computational and experimental nature. If this sounds familiar, that's because it's a summary of Zeilberger's Opinions, viewed charitably. He is correct on this. This includes publishing results of the type "Based on computational evidence we believe in the following UVW conjecture" or "We develop a new algorithm which confirms the UVW conjecture for n < 13". These are still contributions to mathematics, and the journals should learn to recognize them as such.
To put this in the context of our theme, it is clear that a lot more effort has been placed on proofs than on finding counterexamples. However, in many areas of mathematics there are no small counterexamples, so a heavy computational effort is crucial for any hope of finding one. Such work is not as glamorous as traditional papers. But really, when it comes to standards, if a journal is willing to publish the study of something like the "null graphs", the ship has sailed for you…
Let me give you a concrete example where a computational effort is indispensable. The curious Lovász conjecture states that every finite connected vertex-transitive graph contains a Hamiltonian path. This conjecture has got to be false. It hits every red flag — there is really no reason why pqr = "vertex transitive" should imply abc = "Hamiltonian". The best lower bound for the length of the longest (self-avoiding) path is only about the square root of the number of vertices. In fact, even the original wording by Lovász shows he didn't believe the conjecture is true (also, I asked him and he confirmed).
Unfortunately, proving that some potential counterexample is not Hamiltonian is computationally difficult. I once had an idea of one (a nice cubic Cayley graph on "only" 3600 vertices), but Bill Cook quickly found a Hamiltonian cycle, dashing my hopes (it was kind of him to look into this problem). Maybe someday, when the TSP solvers are fast enough on much larger graphs, it will be time to return to this problem and thoroughly test it on large Cayley graphs. But say, despite long odds, I succeed and find a counterexample. Would a top journal publish such a paper?
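To make the kind of computation concrete, here is a minimal brute-force sketch of my own in plain Python (standard library only), run on the Petersen graph, the standard small vertex-transitive example: it has a Hamiltonian path but no Hamiltonian cycle. This is only an illustration of the search, not the TSP machinery one would need for a 3600-vertex Cayley graph.

# Petersen graph: outer 5-cycle on 0..4, inner pentagram on 5..9, spokes i -- i+5
edges = (
    [(i, (i + 1) % 5) for i in range(5)]            # outer cycle
    + [(i, i + 5) for i in range(5)]                # spokes
    + [(5 + i, 5 + (i + 2) % 5) for i in range(5)]  # inner pentagram
)
adj = {v: set() for v in range(10)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def extend(path, need_cycle):
    # backtracking: can the partial path be extended to a Hamiltonian path/cycle?
    if len(path) == len(adj):
        return not need_cycle or path[0] in adj[path[-1]]
    return any(extend(path + [u], need_cycle)
               for u in adj[path[-1]] if u not in path)

print("Hamiltonian path :", any(extend([v], False) for v in adj))  # True
print("Hamiltonian cycle:", extend([0], True))                     # False

On ten vertices this finishes instantly; on graphs with thousands of vertices such naive backtracking is hopeless, which is exactly why the TSP solvers mentioned above are the right tool.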
Editor’s dilemma
There are three real criteria for evaluating a solution of an open problem by a journal:
Is this an old, famous, or well-studied problem?
Are the tools interesting or innovative enough to be helpful in future studies?
Are the implications of the solution to other problems important enough?
Now let’s make a hypothetical experiment. Let’s say a paper is submitted to a top math journal which solves a famous open problem in Combinatorics. Further, let’s say somebody already proved it is equivalent to a major problem in TCS. This checks criteria 1 and 3. Until not long ago it would be rejected regardless, so let’s assume this is happening relatively recently.
Now imagine two parallel worlds, where in the first world the conjecture is proved on 2 pages using beautiful but elementary linear algebra, and in the second world the conjecture is disproved in a 2-page summary of a detailed computational search. So in neither world do we have much to satisfy criterion 2. Now, a quiz: in which world will the paper be published?
You may have recognized the first world as the story of Hao Huang's elegant proof of the induced subgraphs of hypercubes conjecture, which implies the sensitivity conjecture. The Annals published it, I am happy to learn, in a welcome break with the past. But unless we are talking about some 200-year-old famous conjecture, I can't imagine the Annals accepting a short computational paper in the second world. Indeed, it took a bit of a scandal to accept even the 400-year-old Kepler conjecture, which was proved in a remarkable computational work.
Now think about this. Is any of that fair? Shouldn’t we do better as a community on this issue?
What do other people do?
Over the years I asked a number of people about the uncertainty created by conjectures and what they do about it. The answers surprised me. Here I am paraphrasing them:
Some were dumbfounded: "What do you mean this conjecture could be false? It has to be true, otherwise nothing I am doing makes much sense."
Others were simplistic: “It’s an important conjecture. Famous people said it’s true. It’s my job to prove it.”
Third were defensive: “Do you really think this conjecture could be wrong? Why don’t you try to disprove it then? We’ll see who is right.”
Fourth were biblical: “I tend to work 6 days a week towards the proof and one day towards the disproof.”
Fifth were practical: “I work on the proof until I hit a wall. I use the idea of this obstacle to try constructing potential counterexamples. When I find an approach to discard such counterexamples, I try to generalize the approach to continue working on the proof. Continue until either side wins.”
If the last two seem sensible to you, that's because they are. However, I bet the fourth are just grandstanding — no way they actually do that. The fifth sounds great when it is possible, but that's exceedingly rare, in my opinion. We live in a technical age, when proving new results often requires a great deal of effort and technology. You likely have tools and intuition to work in only one direction. Why would you want to waste time working in another?
What should you do?
First, remember to make conjectures. Every time you write a paper, tell a story of what you proved. Then tell a story of what you wanted to prove but couldn't. State it in the form of a conjecture. Don't be afraid of being wrong, or of being right but oversharing your ideas. It's a downside, sure. But the upside is that your conjecture might prove very useful to others, especially young researchers. It might advance the area, or help you find a collaborator to resolve it.
Second, learn to check your conjectures computationally in many small cases. It's important to give supporting evidence so that others take your conjectures seriously (see the sketch at the end of this section).
Third, learn to make experiments, explore the area computationally. That’s how you make new conjectures.
Fourth, understand yourself. Your skill, your tools. Your abilities like problem solving, absorbing information from the literature, or making bridges to other fields. Faced with a conjecture, use this knowledge to understand whether at least in principle you might be able to prove or disprove a conjecture.
Fifth, actively look for collaborators. Those who have skills, tools, or abilities you are missing. More importantly, they might have a different POV on the validity of the conjecture and how one might want to attack it. Argue with them and learn from them.
Sixth, be brave and optimistic! Whether you decide to prove, disprove a conjecture, or simply state a new conjecture, go for it! Ignore the judgements by the likes of Sarnak and Zeilberger. Trust me — they don’t really mean it.
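To make the second piece of advice above concrete, here is a minimal sketch of my own in plain Python (standard library only) of what "check it in many small cases" can look like, with Goldbach's conjecture from earlier in this post as the toy target:

def prime_sieve(n):
    # sieve of Eratosthenes: sieve[k] == 1 iff k is prime
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    return sieve

N = 10 ** 5
is_prime = prime_sieve(N)
for n in range(4, N + 1, 2):
    # short-circuits at the first prime p with n - p also prime (usually tiny)
    if not any(is_prime[p] and is_prime[n - p] for p in range(2, n // 2 + 1)):
        print("counterexample:", n)
        break
else:
    print("no counterexample among even n <=", N)

Such a run proves nothing, of course, but it is exactly the kind of supporting evidence that makes others take a conjecture seriously.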
Recently, there have been plenty of discussions about math journals, their prices, behavior, technology and future. I have been rather reluctant to join the discussion, in part due to my own connection to Elsevier, in part because things in Combinatorics are more complicated than in other areas of mathematics (see below), but also because I couldn't reconcile several somewhat conflicting thoughts that I had. Should all existing editorial boards revolt and all journals become electronic? Or perhaps should we move to a "pay-for-publishing" model? Or even to "crowd-sourced refereeing"? Well, now that the issue has cooled down a bit, I think I figured out exactly what should happen to math journals. Be patient – a long explanation is coming below.
Quick test questions
I would like to argue that the debate over the second question stems from a general misunderstanding of the first question in the title. In fact, I am pretty sure most mathematicians are quite a bit confused on this, for a good reason. If you think this is easy, quickly answer the following three questions:
1) A published paper has a technical mistake invalidating the main result. Is this the fault of the author, the referee(s), the handling editor, the managing editor(s), the publisher, or all of the above? If a reader finds such a mistake, whom should she/he contact?
2) A published paper proves a special case of a known result published 20 years earlier in an obscure paper. Same question. Would the answer change if the author lists that paper in the references?
3) A published paper is written in really poor English. Sections are disorganized and the introduction is misleading. Same question.
Now that you have given your answers, ask a colleague. Don't be surprised to hear a different point of view. Or at least don't be surprised when you hear mine.
What do referees do?
In theory, a lot. In practice, that depends. There are few official journal guides for referees, but there are several well-meaning guides (see also here, here, here, here §4.10, and a nice discussion by Don Knuth §15). However, as any editor can tell you, you never know what exactly the referee did. Some reply within 5 min, some after 2 years. Some write one negative sentence, some 20 detailed pages, some give advice in the style "yeah, not a bad paper, cites me twice, why not publish it", while others give a brushoff "not sure who this person is, and this problem is indeed strongly related to what I and my collaborators do, but of course our problems are much more interesting/important – rejection would be best". The anonymity is so relaxing, doing a poor job is just too tempting. The whole system hinges on shame, a sense of responsibility, and a personal relationship with the editor.
A slightly better question is "What do good referees do?" The answer is – they don't just help the editor make the acceptance/rejection decision. They help the authors. They add some background the authors don't know, look for missing references, improve on the proofs, critique the exposition and even the notation. They do their best, kind of like what ideal advisors do for their graduate students who have just written an early draft of their first ever math paper.
In summary, you can't blame the referees for anything. They do what they can and as much work as they want. To make a lame comparison, the referees are like the wind and the editors are a bit like sailors. While the wind is free, it often changes direction, sometimes completely disappears, and is in general quite unreliable. But sometimes it can really take you very far. Of course, crowd-sourcing refereeing is like democracy in the army – bad even in theory, and never tried in practice.
First interlude: refereeing war stories
I recall a curious story by Herb Wilf, on how Don Knuth submitted a paper under an assumed name with an obscure college address, in order to get the full refereeing treatment (the paper was accepted and eventually published under Knuth's real name). I tried this once, to an unexpected outcome (let me not name the journal, nor describe the stupendous effort I made to create a fake identity). The referee wrote that the paper was correct, rather interesting, but "not quite good enough" for their allegedly excellent journal. The editor was very sympathetic, if a bit condescending, asking me not to lose hope, to work on my papers harder, and to submit them again. So I tried submitting to a competing but equal-in-stature journal, this time under my own name. The paper was accepted in a matter of weeks. You can judge for yourself the moral of this story.
A combinatorialist I know (who shall remain anonymous) had the following story with Duke J. Math. A year and a half after submission, the paper was rejected with three (!) reports mostly describing typos. The authors were dismayed and consulted a CS colleague. That colleague noticed that the three reports were in .pdf but made by cropping from longer files. Turns out, if the cropping is made straightforwardly, the cropped portions are still hidden in the files. Using some hacking software the top portions of the reports were uncovered. The authors discovered that they are extremely positive, giving great praise of the paper. Now the authors believe that the editor despised combinatorics (or their branch of combinatorics) and was fishing for a bad report. After three tries, he gave up and sent them cropped reports lest they think somebody else considers their paper worthy of publishing in the grand old Duke (cf. what Zeilberger wrote about Duke).
Another one of my stories is with the Journal of AMS. A year after submission, one of my papers was rejected with the following remarkable referee report which I quote here in full:
The results are probably well known. The authors should consult with experts.
Needless to say, the results were new, and the paper was quickly published elsewhere. As they say, “resistance is futile“.
What do associate/handling editors do?
Three little things, really. They choose referees, read their reports and make the decisions. But they are responsible for everything. And I mean for everything: 1), 2) and 3). If the referee wrote a poorly researched report, they should recognize this, ignore it, and request another one. They should ensure they have more than one opinion on the paper, all of them highly informed and from good people. If it seems the authors are not aware of the literature and the referee(s) are not helping, they should ensure this is fixed. If the paper is not well written, the editors should ask the authors to rewrite it (or else). At Discrete Mathematics, we use this page by Doug West as a default style guide for math grammar. And if the reader finds a mistake, he/she should first contact the editor. Contacting the author(s) is also a good idea, but sometimes the anonymity is helpful – the editor can be trusted to bring bad news and, if possible, request a correction.
B.H. Neumann described here how he thinks a journal should operate. I wish his views were widely held today. The book by Krantz, §5.5, is a good outline of the ideal editorial experience, and this paper outlines how to select referees. However, this discussion (esp. Rick Durrett's "rambling") is more revealing. Now, the reason most people are confused as to who is responsible for 1), 2) and 3) is the fact that while some journals have serious proactive editors, others do not, or their work is largely invisible.
What do managing editors and publishers do?
In theory, managing editors hire associate editors, provide logistical support, distribute paper load, etc. In practice they also serve as handling editors for a large number of papers. The publishers… You know what the publishers do. Most importantly, they either pay editors or they don’t. They either charge libraries a lot, or they don’t. Publishing is a business, after all…
Who wants free universal electronic publishing?
Good mathematicians. Great mathematicians. Mathematicians who write well and see no benefit in their papers being refereed. Mathematicians who have many students and wish the publishing process was speedier and less cumbersome, so their students can get good jobs. Mathematicians who do not value the editorial work and are annoyed when the paper they want to read is “by subscription only” and thus unavailable. In general, these are people who see having to publish as an obstacle, not as a benefit.
Who does not want free universal electronic publishing?
Publishers (of course), libraries, university administrators. These are people and establishments who see value in existing order and don’t want it destroyed. Also: mediocre mathematicians, bad mathematicians, mathematicians from poor countries, mathematicians who don’t have access to good libraries (perhaps, paradoxically). In general, people who need help with their papers. People who don’t want a quick brush-off “not good enough” or “probably well known”, but those who need advice on the references, on their English, on how the papers are structured and presented, and on what to do next.
So, who is right?
Everyone. For some mathematicians, having all journals be electronic with virtually no cost is an overall benefit. But at the very least, the "pro status quo" crowd has a case, in my view. I don't mean that Elsevier's pricing policy is reasonable, I am talking about the big picture here. In the long run, I think of journals as non-profit NGOs, some kind of nerdy versions of the Nobel Peace Prize-winning Médecins Sans Frontières. While I imagine that in the future many excellent top-level journals will be electronic and free, I also think many mid-level journals in specific areas will be run by non-profit publishers, not free at all, and will employ a number of editorial and technical staff to help the authors, both of papers they accept and papers they reject. This is a public service we should strive to perform, both for the sake of those math papers, and for the development of mathematics in other countries.
Right now, the number of mathematicians in the world is already rather large and growing. Free journals can do only so much. Without high quality editors paid by the publishers, and with a large influx of papers from the developing world, there is a chance we might lose the traditional high standards for published second-tier papers. And I really don't want to think of a mathematics world once the peer review system is broken. That's why I am not in the "free publishing camp" – in an effort to save money, we might lose something much more valuable – the system which gives foundation and justification to our work.
Second interlude: journals vis-à-vis combinatorics
I already wrote about the fate of combinatorics papers in the Annals, especially in comparison with Number Theory. My view was gloomy but mildly optimistic. In fact, since that post was written, a couple more combinatorics papers have been accepted. Good. But let me give you a quiz. Here are two comparable highly selective journals – Duke J. Math. and Compositio Math. In the past 10 years Compositio published exactly one (!) paper in Combinatorics (defined as primary MSC=05) out of 631 total. In the same period, Duke published 8 combinatorics papers out of 681 total.
Q: Which of the two (Compositio or Duke) treats combinatorics papers better?
A: Compositio, of course.
The reasoning is simple. Forget the anecdotal evidence in the previous interlude. Just look at the "aim and scope" of the journals vs. these numbers. Here is what the Compositio website says with refreshing honesty:
By tradition, the journal published by the foundation focuses on papers in the main stream of pure mathematics. This includes the fields of algebra, number theory, topology, algebraic and analytic geometry and (geometric) analysis. Papers on other topics are welcome if they are of interest not only to specialists.
Translation: combinatorics papers are not welcome (as are papers in many other fields). I think this is totally fair. Nothing wrong with that. Clearly, there are journals which publish mostly in combinatorics, and where papers in none of these fields would be welcome. In fact there is a good historical reason for that. Compare this with what Duke says on its website:
Published by Duke University Press since its inception in 1935, the Duke Mathematical Journal is one of the world’s leading mathematical journals. Without specializing in a small number of subject areas, it emphasizes the most active and influential areas of current mathematics.
See the difference? They don’t name their favorite areas! How are the authors supposed to guess which are these? Clearly, Combinatorics with its puny 1% proportion of Duke papers is not a subject area that Duke “emphasizes”. Compare it with 104 papers in Number Theory (16%) and 124 papers in Algebraic Geometry (20%) over the same period. Should we conclude that in the past 10 years, Combinatorics was not “the most active and influential”, or perhaps not “mathematics” at all? (yes, some people think so) I have my own answer to this question, and I bet so do you…
Note also that things used to be different at Duke. For example, exactly 40 years earlier, in the period 1963-1973, Duke published 47 papers in combinatorics out of 972 total, even though the area was only in its first stages of development. How come? The reason is simple: Leonard Carlitz was Managing Editor at the time, and he welcomed papers from a number of prominent combinatorialists active during that time, such as Andrews, Gould, Moon, Riordan, Stanley, Subbarao, etc., as well as many of his own papers.
So, ideally, what will happen to math journals?
That’s actually easy. Here are my few recommendations and predictions.
1) We should stop with all these geography based journals. That’s enough. I understand the temptation for each country, or university, or geographical entity to have its own math journal, but nowadays this is counterproductive and a cause for humor. This parochial patriotism is perhaps useful in sports (or not), but is nonsense in mathematics. New journals should emphasize new/rapidly growing areas of mathematics underserved by current journals, not new locales where printing presses are available.
2) Existing for-profit publishers should realize that with the growth of the arXiv and free online competitors, their business model is unsustainable. Eventually all these journals will reorganize into non-profit institutions or foundations. This does not mean that the journals will become electronic or free. While some probably will, others will remain expensive, have many paid employees (including editors), and will continue to provide services to the authors, all supported by library subscriptions. These extra services are their raison d'être, and will need to be broadly advertised. The authors would learn not to be surprised by a quick one-line report from free journals, and to expect a serious effort from "expensive journals".
3) The journals will need to rethink their structure and scope, and try to develop their unique culture and identity. If you have two similar-looking free electronic journals, which do not add anything to the papers other than their .sty file, the difference is only the editorial board and the history of published papers. This is not enough. All journals, except for the very top few, will have to start limiting their scope to emphasize the areas of their strength, and be honest and clear in advertising these areas. Alternatively, other journals will need to reorganize and split their editorial boards into clearly defined fields. Think Proc. LMS, Trans. AMS, or the brand new Sigma, which basically operate as dozens of independent journals, with one to three handling editors in each. While highly efficient, in the long run this strategy is also unsustainable, as it leads to general confusion and divergence in the quality of these sub-journals.
4) Even among the top mathematicians, there is plenty of confusion about the quality of existing mathematics journals, some of which go back many decades. See e.g. a section of Tim Gowers’s post about his views on the quality of various Combinatorics journals, since then helpfully updated and corrected. But at least those of us who have been in the area for a while have the memory of the fortune of previously submitted papers, whether our own, or our students’, or colleagues’. Circumstantial evidence is better than nothing. For the newcomers or outsiders, such distinctions between journals are a mystery. The occasional rankings (impact factor or this, whatever this is) are more confusing than helpful.
What needs to happen is a new system of awards recognizing the achievements of individual journals and/or editors, in their efforts to improve the quality of the journals, attracting top papers in the field, arranging fast refereeing, etc. Think a mixture of the Pulitzer Prize and J.D. Power and Associates awards – these would be a great help in understanding the quality of the journals. For example, the editors of the Annals clearly hustled to referee within a month in this case (even if motivated by PR purposes). It’s an amazing speed for a technical 50+ page paper, and this effort deserves recognition.
Full disclosure: Of the journals I singled out, I have published once in both JAMS and Duke. Neither paper is in Combinatorics, but both are in Discrete Mathematics, when understood broadly.
Imagine you have plenty of money and dozens of volunteers. You decide to award one or two fellowships a year to the best of the best of the best in math sciences. Easy, right? Then how do you repeatedly fail at this, without anyone noticing? Let me tell you how. It’s an interesting story, so bear with me.
A small warning. Although it may seem I am criticizing the Hertz Foundation, my intention is to show its weaknesses so it can improve.
What is the Hertz Foundation?
Yesterday I wrote a recommendation letter to the Hertz Foundation. Although a Fellow myself, I never particularly cared for the foundation, mostly because it changed so little in my life (I received it only for two out of five years of eligibility). But I became rather curious as to what usually happens to Hertz Fellows. I compiled the data, and found the results quite disheartening. While the foundation is perhaps excellent in other fields, I came to believe that Hertz does at best a mediocre job awarding fellowships in mathematics. And now that I think about it, this was all completely predictable.
First, a bit of history. John Hertz was the Yellow Cab founder and car rental entrepreneur (thus the namesake company), and he left a lot of money dedicated to education in “applied physical sciences”, now understood to include applied mathematics. What exactly is “applied mathematics” is rather contentious, so the foundation wisely decided that “it is up to each fellowship applicant to advocate to us his or her specific field of interest as an ‘applied physical science’.”
In practice, according to the website, about 600 applicants in all areas of science and engineering apply for a fellowship. Applications are allowed only either in the senior year of college or the 1st year of grad school. The fellowships are generous and include both the stipend and the tuition; between 15 and 20 students are awarded every year. Only US citizens and permanent residents are eligible, and the fellowship can be used only in one of the 47 “tenable schools” (more on this below). The Foundation sorts the applications, and volunteers interview some of them in the first round. In the second round, pretty much only one person interviews all who advanced, and the decision is made. Historically, only one or two fellowships in mathematical sciences are awarded each year (this includes pure math, applied math, and occasionally theoretical CS or statistics).
Forty years of Math Hertz Fellowships in numbers
The Hertz Foundation website has data on all past fellows. I compiled the data in a Hertz-list spanning 40 years (1971-2010), listed by the year the fellowship ended, which usually but not always coincided with graduation. There were 67 awardees in mathematics, which makes it about 1.7 fellowships a year. The Foundation states that it awarded “over 1000 fellowships”, so I guess about 5-6% went into maths (perhaps fewer in recent years). Here is who gets them.
1) Which schools are awarded? Well, only 44 US graduate programs are allowed to administer the fellowships. The reasons (other than logistical) are unclear to me. Of those programs that are “in”, you have the University of Rochester (which nearly lost its graduate program) and UC Santa Cruz (where rumors say a similar move had been considered). Those which are “out” include graduate programs at Brown, UPenn, Rutgers, UNC Chapel Hill, etc. The real distribution is much more skewed, of course. Here is a complete list of awards per institution:
MIT – 14
Harvard, Princeton – 8
Caltech, NYU – 7
Berkeley, Stanford – 5
UCLA – 3
CMU, Cornell, U Chicago – 2
GA Tech, JHU, RPI, Rice – 1
In summary, only 15 universities had at least one award (34%), and just 7 universities were awarded 54 fellowships (i.e. 16% of universities received 81% of all fellowships). There is nothing wrong with this per se, just a variation on the 80-20 rule, you might argue. But wait! The Hertz Foundation is a non-profit institution and the fellowship itself comes with a “moral commitment“. Even if you need to interfere with the “free marketplace” of acceptance decisions (see P.S. below), wouldn’t it be in the spirit of John Hertz’s original goal to make a special effort to distribute the awards more widely? For example, the Simons Foundation is not shy about awarding fellowships to institutions many of which are not even…
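As a quick sanity check of the percentages above, here is a throwaway script (mine, purely illustrative) tallying the listed award counts against the 44 eligible programs:

```python
# Award counts from the list above; 44 eligible math graduate programs,
# as stated earlier. Pure arithmetic, nothing more.
awards = {"MIT": 14, "Harvard": 8, "Princeton": 8, "Caltech": 7, "NYU": 7,
          "Berkeley": 5, "Stanford": 5, "UCLA": 3, "CMU": 2, "Cornell": 2,
          "U Chicago": 2, "GA Tech": 1, "JHU": 1, "RPI": 1, "Rice": 1}

total = sum(awards.values())                            # 67 fellowships
top7 = sum(sorted(awards.values(), reverse=True)[:7])   # 54 of them
print(f"{len(awards) / 44:.0%} of programs ever won")   # 34%
print(f"{top7 / total:.0%} went to just 7 schools")     # 81%
```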
2453 | dbpedia | 2 | 71 | https://www.timetoast.com/timelines/the-history-of-the-computer-science | en | The History of the Computer Science timeline. | Timetoast Timelines

Binary Logic
In 1702, Gottfried Wilhelm Leibniz developed logic in a formal, mathematical sense with his writings on the binary numeral system. In his system, the ones and zeros also represent true and false values or on and off states. But it took more than a century before George Boole published his Boolean algebra in 1854, with a complete system that allowed computational processes to be mathematically modeled.
Analytical Engine
In 1837, Charles Babbage first described his Analytical Engine, which is accepted as the first design for a modern computer. The Analytical Engine had expandable memory, an arithmetic unit, and logic processing capabilities able to interpret a programming language with loops and conditional branching.
Rise of Mathematics
Work on calculating machines continued. Some special-purpose calculating machines were built. For example, in 1919, E. O. Carissan (1880-1925), a lieutenant in the French infantry, designed and had built a marvelous mechanical device for factoring integers and testing them for primality. The Spaniard Leonardo Torres y Quevedo (1852-1936) built some electromechanical calculating devices, including one that played simple chess endgames.
Computer Invention
Before the 1920s, computers (sometimes “computors”) were human clerks who performed computations. They were usually under the direction of a physicist. Many thousands of computers were employed in commerce, government, and research establishments. Most of these computers were women. Some performed astronomical calculations for calendars, others ballistic tables for the military.
Differential Analyzer
The history of computing remembers colorful characters like Babbage, but others who played important—if supporting—roles are less well known. At the time when C-T-R was becoming IBM, the world's most powerful calculators were being developed by US government scientist Vannevar Bush (1890–1974). In 1925, Bush made the first of a series of unwieldy contraptions with equally cumbersome names: the New Recording Product Integraph Multiplier. Later, he built a machine called the Differential Analyzer.
Shannon and information theory
Up to and during the 1930s, electrical engineers were able to build electronic circuits to solve mathematical and logic problems, but most did so in an ad hoc manner, lacking any theoretical rigor. This changed with Claude Elwood Shannon's publication of his 1937 master's thesis, A Symbolic Analysis of Relay and Switching Circuits. While taking an undergraduate philosophy class, Shannon had been exposed to Boole's work, and recognized that it could be used to arrange electromechanical relays (then used in telephone routing switches) to solve logic problems.
Alan Turing and the Turing Machine
The mathematical foundations of modern computer science began to be laid by Kurt Gödel with his incompleteness theorem (1931). In this theorem, he showed that there were limits to what could be proved and disproved within a formal system. This led to work by Gödel and others to define and describe these formal systems, including concepts such as mu-recursive functions.
Early Computer Hardware
In 1941, Konrad Zuse developed the world's first functional program-controlled computer, the Z3. In 1998, it was shown to be Turing-complete in principle. Zuse also developed the S2 computing machine, considered the first process-controlled computer. He founded one of the earliest computer businesses in 1941, producing the Z4, which became the world's first commercial computer. In 1946, he designed the first high-level programming language, Plankalkül.
The Theory of Databases
The theory of databases saw major advances with the work of Edgar F. Codd on relational databases. Codd won the Turing Award in 1981.
Unix, a very influential operating system, was developed at Bell Laboratories by Ken Thompson (b. 1943) and Dennis Ritchie (b. 1941). Ritchie developed C, an influential programming language, and together with Brian Kernighan wrote its classic reference book.
Wartime brings the birth of the electronic digital computer
The calculations required for ballistics during World War II spurred the development of the general-purpose electronic digital computer. At Harvard, Howard H. Aiken (1900-1973) built the Mark I electromechanical computer in 1944, with the assistance of IBM.
Military code-breaking also led to computational projects. Alan Turing was involved in the breaking of the code behind the German machine, the Enigma, at Bletchley Park in England. The British built a computing device, the Colossus, to assist with code-breaking.
John von Neumann and the von Neumann architecture
In 1946, a model for computer architecture was introduced and became known as the von Neumann architecture. Since 1950, the von Neumann model has provided uniformity in subsequent computer designs. The von Neumann architecture was considered innovative as it introduced the idea of allowing machine instructions and data to share memory space. The von Neumann model is composed of three major parts: the arithmetic logic unit (ALU), the memory, and the instruction processing unit (IPU).
Wiener and Cybernetics
From experiments with anti-aircraft systems that interpreted radar images to detect enemy planes, Norbert Wiener coined the term cybernetics from the Greek word for "steersman." He published "Cybernetics" in 1948, which influenced artificial intelligence. Wiener also compared computation, computing machinery, memory devices, and other cognitive similarities with his analysis of brain waves.
The first actual computer bug was a moth. It was stuck in between the relays on the Harvard Mark II.
The Pilot ACE computer
The Pilot ACE computer, with 800 vacuum tubes, and mercury delay lines for its main memory, became operational on 10 May 1950 at the National Physical Laboratory near London. It was a preliminary version of the full ACE, which had been designed by Alan Turing.
The integrated circuit
The integrated circuit was invented by Jack Kilby at Texas Instruments.
Robert Noyce, who later set up Intel, also worked separately on the invention. Intel later went on to perfect the microprocessor. The patent was applied for in 1959 and granted in 1964. This patent wasn't accepted by Japan, so Japanese businesses could avoid paying any fees, but in 1989 – after a 30-year legal battle – Japan granted the patent, so all Japanese companies paid fees up until the year 2001 – long after the patent be…
Large Scale Integration (microprocessors)
Computers built after 1972 are often called 'fourth generation' computers, based on LSI (Large Scale Integration) of circuits (such as microprocessors) – typically 500 or more components on a chip. Later developments include VLSI (Very Large Scale Integration) of integrated circuits 5 years later – typically 10,000 components on a chip.
Personal computers
By 1974, Intel had launched a popular microprocessor known as the 8080, and computer hobbyists were soon building home computers around it. The first was the MITS Altair 8800, built by Ed Roberts. With its front panel covered in red LED lights and toggle switches, it was a far cry from modern PCs and laptops. Even so, it sold by the thousand and earned Roberts a fortune. The Altair inspired a Californian electronics wizard named Steve Wozniak (1950–) to develop a computer of his own.
The Mouse
The mouse was conceived by Douglas Engelbart. It was not to become popular until 1983, with Apple Computer's Lisa and Macintosh, and was not adopted by IBM until 1987 – although compatible computers such as the Amstrad PC1512 were fitted with mice before this date.
Nintendo Entertainment System.
The Nintendo Entertainment System (also abbreviated as NES) is an 8-bit home video game console that was developed and manufactured by Nintendo. It was initially released in Japan as the Family Computer (Japanese: ファミリーコンピュータ, Hepburn: Famirī Konpyūta), also known by the portmanteau abbreviation Famicom (ファミコン, Famikon) and abbreviated as FC, on July 15, 1983, and was later released in North America during 1985, in Europe during 1986, and in Australia in 1987. In South Korea, it was known as the Hyundai Comboy.
2453 | dbpedia | 1 | 24 | https://gilkalai.wordpress.com/2021/10/24/dream-a-little-dream-quantum-computer-poetry-for-the-skeptics-part-ii-the-classics/ | en | Dream a Little Dream: Quantum Computer Poetry for the Skeptics (Part II, The Classics) | David Deutsch | 2021-10-24 | Combinatorics and more

Quantum poetry for the skeptics had long roots, and, also here, Peter Shor along with Jennifer Shor had a pioneering role. Volker Strassen’s response is the earliest poem known to me on the skeptics’ side. We will start with Jennifer and Peter’s poem and Volker Strassen’s response, and move on to other heroic quantum poem-writers Daniel Gottesman and John Preskill. (Here is a link to Part I in the series.)
Classic Quantum Poetry and Art
Jennifer and Peter Shor and Volker Strassen (before 1998)
(source)
Here is what Peter wrote:
“My wife and I wrote the following for a poetry contest by Science News. It didn’t win, so I posted it on my web page.
If computers that you build are quantum,
Then spies of all factions will want ’em.
Our codes will all fail,
And they’ll read our email,
Till we’ve crypto that’s quantum, and daunt ’em.
Jennifer and Peter Shor
When he introduced me at the 1998 International Congress of Mathematicians, Prof. Volker Strassen recited my limerick, and added a reply:
To read our E-mail, how mean
of the spies and their quantum machine;
Be comforted though,
they do not yet know
how to factorize twelve or fifteen.
Volker Strassen”
Early sonnet and poetic artwork by Daniel Gottesman
(Here is the link Quantum Error Correction Sonnet.)
Quantum Error Correction Sonnet
By Daniel Gottesman
We cannot clone, perforce; instead, we split
Coherence to protect it from that wrong
That would destroy our valued quantum bit
And make our computation take too long.
Correct a flip and phase – that will suffice.
If in our code another error’s bred,
We simply measure it, then God plays dice,
Collapsing it to X or Y or Zed.
We start with noisy seven, nine, or five
And end with perfect one. To better spot
Those flaws we must avoid, we first must strive
To find which ones commute and which do not.
With group and eigenstate, we’ve learned to fix
Your quantum errors with our quantum tricks.
February 25, 1999
Above: Daniel’s artistic view of quantum error correction
John Preskill
John is a prolific poet. I am not aware of poetry he wrote specifically for quantum computer skeptics (it’s never too late for that, go John!), but he has several other quantum poems. (See also below for a poetic slogan by John.) His anyon poem
Anyon, Anyon (click for the full poem.)
Starts with
Anyon, anyon, where do you roam?
Braid for a while before you go home.
Here is a recent haiku by John
Qubits very cold
A dil fridge holds them gently
Mustn’t decohere
#QuantumHaiku
And a 2001 poem on quantum cryptography. (The link contains some attempts to find rhymes for “Daniel Gottesman”.)
Quantum Cryptography
Alice said to her friend Eve,
“Why do you practice to deceive?
You know I need to talk to Bob.
Without that I won’t have a job.
“Bob can’t know where my note has been.
He thinks that you are listening in.
He wonders if it’s safe enough
For me to send him secret stuff.
“And Bob’s right not to trust you, Eve,
With quantum tricks stuffed up your sleeve.
But he thinks we can freeze you out,
With quantum tricks we’ve learned about.
“With quantum states, what we achieve
Defeats whatever you conceive.
So even Bob has to believe
That you can’t hear us, can you Eve?”
John Preskill, November 1, 2001
Poetic classic quantum slogans and quotes
John Preskill: slogan for quantum error correction (classic, 1997)
We can fight entanglement with entanglement
Daniel Gottesman: slogan for quantum error correction (1999, see sonnet above)
With group and eigenstate, we’ve learned to fix
Your quantum errors with our quantum tricks.
Scott Aaronson: (source) (repeatedly in various forms since 2012 or so.)
The number one application of quantum computers is to disprove Gil Kalai who said they are impossible.
Gil Kalai (2013):
The importance of quantum error-correction to physics is similar to the importance of non-deterministic computation to the theory of computing. Their importance is that they cannot be achieved.
(source)
John Martinis (2014) (source)
ECH ≠ MNP
ECH stands for “Experimentally Crazy Hard” and refers to monumental experimental physics achievements. MNP refers to “Maximum Nature Publications”.
(Martinis led the Sycamore experiment, and Nature is the journal where the sycamore paper appeared.)
David Deutsch
Quantum computation is a distinctively new way of harnessing nature. It will be the first technology that allows useful tasks to be performed in collaboration between parallel universes.
To me quantum computation is a new and deeper and better way to understand the laws of physics, and hence understanding physical reality as a whole.
Quantum computers as seen by Michel Dyakonov, myself, and Alexander Vlasov
Motivation | ||||
2453 | dbpedia | 2 | 30 | https://gilkalai.wordpress.com/2014/03/18/why-quantum-computers-cannot-work-the-movie/ | en | Why Quantum Computers Cannot Work: The Movie! | 2014-03-18 | Combinatorics and more

Update (April 2016): Here is a link to a new post on my May 2016 Notices AMS paper. Here are links to a videotaped lecture in two parts entitled “why quantum computers cannot work” recorded at the Simons Institute for the Theory of Computing on December 2013 and two additional videos: a short talk on…
2453 | dbpedia | 3 | 9 | http://dmatheorynet.blogspot.com/2010/09/2011-godel-prize-call-for-nominations.html | en | Theory Announcements: 2011 Gödel Prize: Call for Nominations

The Gödel Prize for outstanding papers in the area of theoretical computer science is sponsored jointly by the European Association for Theoretical Computer Science (EATCS) and the Association for Computing Machinery, Special Interest Group on Algorithms and Computation Theory (ACM-SIGACT). The award is presented annually, with the presentation taking place alternately at the International Colloquium on Automata, Languages, and Programming (ICALP) and the ACM Symposium on Theory of Computing (STOC). The nineteenth prize will be awarded at the 43rd ACM Symposium on the Theory of Computing, to be held as part of FCRC in San Jose, California, in June 2011. The Prize is named in honor of Kurt Gödel in recognition of his major contributions to mathematical logic and of his interest, discovered in a letter he wrote to John von Neumann shortly before von Neumann’s death, in what has become the famous “P versus NP” question. The Prize includes an award of USD 5000.
AWARD COMMITTEE: The winner of the Prize is selected by a committee of six members. The EATCS President and the SIGACT Chair each appoint three members to the committee, to serve staggered three-year terms. The committee is chaired alternately by representatives of EATCS and SIGACT. The 2011 Award Committee consists of Sanjeev Arora (Princeton), Josep Diaz (Universitat Politecnica de Catalunya), Cynthia Dwork (Microsoft Research), Mogens Nielsen (University of Aarhus), Mike Paterson (University of Warwick) and Eli Upfal (Brown University).
ELIGIBILITY: The last change of rules goes back to the 2005 Prize. The (parametric) rule can be found on the websites of both SIGACT and EATCS. The rule for the 2011 Prize is given below and supersedes any different interpretation of the parametric rule. Any research paper or series of papers by a single author or by a team of authors is deemed eligible if
(i) the paper was published in a recognized refereed journal no later than December 10, 2010;
(ii) the main results were not published (in either preliminary or final form) in a journal or conference proceedings before January 1st, 1998.
The research work nominated for the award should be in the area of theoretical computer science. The term “theoretical computer science” is meant to encompass, but is not restricted to, those areas covered by ICALP and STOC. Nominations are encouraged from the broadest spectrum of the theoretical computer science community so as to ensure that potential award winning papers are not overlooked. The Award Committee shall have the ultimate authority to decide whether a particular paper is eligible for the Prize.
NOMINATIONS: Nominations for the award should be submitted by email to the Award Committee Chair: Eli Upfal (eli@cs.brown.edu). To be considered, nominations for the 2011 Prize must be received by December 10, 2010. Nominations may be made by any member of the scientific community. It is the duty of the Award Committee to actively solicit nominations. A nomination should contain a brief summary of the technical content of the paper(s) and a brief explanation of its significance. A printable copy of the research paper or papers should accompany the nomination. The nomination must state the date and venue of the first conference or workshop publication or state that no such publication has occurred. The work may be in any language. However, if it is not in English, a more extended summary written in English should be enclosed. Additional recommendations in favor of the nominated work may also be enclosed. To be considered for the award, the paper or series of papers must be recommended by at least two individuals, either in the form of two distinct nominations or one nomination including recommendations from two different people. Those intending to submit a nomination are encouraged to contact the Award Committee Chair by email well in advance. The Award Committee will accept informal proposals of potential nominees, as well as tentative offers to prepare formal nominations. The “Subject” line of all related messages should begin with
“Gödel 2011”.
SELECTION PROCESS: Although the Award Committee is encouraged to consult with the theoretical computer science community at large, the Award Committee is solely responsible for the selection of the winner of the award. The Prize may be shared by more than one paper or series of papers, and the Award Committee reserves the right to declare no winner at all. All matters relating to the selection process that are not specified here are left to the discretion of the Award Committee.
PAST WINNERS:
2010: S. Arora, “Polynomial-time approximation schemes for Euclidean TSP and other geometric problems”, Journal of the ACM 45 (1998), 753–782.
J.S.B. Mitchell, “Guillotine subdivisions approximate polygonal subdivisions: A simple polynomial-time approximation scheme for geometric TSP, k-MST, and related problems”, SIAM Journal on Computing 28 (1999), 1298–1309.
2009: Omer Reingold, Salil Vadhan, and Avi Wigderson, “Entropy waves, the zig-zag graph product, and new constant-degree expanders”, Annals of Mathematics, 155 (2002), 157–187.
Omer Reingold, “Undirected connectivity in log-space”, Journal of the ACM 55 (2008), 1–24.
2008: Daniel A. Spielman and Shang-Hua Teng, “Smoothed analysis of algorithms : Why the simplex algorithm usually takes polynomial time”, Journal of the ACM, 51 (2004), 385–463.
2007: Alexander A. Razborov and Steven Rudich, “Natural Proofs”, Journal of Computer and System Sciences, 55 (1997), 24–35.
2006: Manindra Agrawal, Neeraj Kayal, and Nitin Saxena, “PRIMES is in P”, Annals of Mathematics, 160 (2004), 1–13.
2005: Noga Alon, Yossi Matias and Mario Szegedy, “The space complexity of approximating the frequency moments”, Journal of Computer and System Sciences, 58 (1999), 137–147.
2004: Maurice Herlihy and Nir Shavit, “The Topological Structure of Asynchronous Computation”, Journal of the ACM, 46 (1999), 858–923.
Michael Saks and Fotios Zaharoglou, “Wait-Free k-Set Agreement Is Impossible: The Topology of Public Knowledge”, SIAM Journal on Computing, 29 (2000), 1449–1483.
2003: Yoav Freund and Robert Schapire, “A Decision Theoretic Generalization of On-Line Learning and an Application to Boosting”, Journal of Computer and System Sciences 55 (1997), 119–139.
2002: Géraud Sénizergues, “L(A)=L(B)? Decidability results from complete formal systems”, Theoretical Computer Science 251 (2001), 1–166.
2001: Uriel Feige, Shafi Goldwasser, László Lovász, Shmuel Safra, and Mario Szegedy, “Interactive proofs and the hardness of approximating cliques”, Journal of the ACM 43 (1996), 268–292.
Sanjeev Arora and Shmuel Safra, “Probabilistic checking of proofs: a new characterization of NP”, Journal of the ACM 45 (1998), 70–122.
Sanjeev Arora, Carsten Lund, Rajeev Motwani, Madhu Sudan, and Mario Szegedy, “Proof verification and the hardness of approximation problems”, Journal of the ACM 45 (1998), 501–555.
2000: Moshe Y. Vardi and Pierre Wolper, “Reasoning about infinite computations”, Information and Computation 115 (1994), 1–37.
1999: Peter W. Shor, “Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer”, SIAM Journal on Computing 26 (1997), 1484–1509.
1998: Seinosuke Toda, “PP is as hard as the polynomial-time hierarchy”, SIAM Journal on Computing 20 (1991), 865–877.
1997: Joseph Halpern and Yoram Moses, “Knowledge and common knowledge in a distributed environment”, Journal of the ACM 37 (1990), 549–587.
1996: Alistair Sinclair and Mark Jerrum, “Approximate counting, uniform generation and rapidly mixing Markov chains”, Information and Computation 82 (1989), 93–133.
Mark Jerrum and Alistair Sinclair, “Approximating the permanent”, SIAM Journal on Computing 18 (1989), 1149–1178.
1995: Neil Immerman, “Nondeterministic space is closed under complementation”, SIAM Journal on Computing 17 (1988), 935–938.
Róbert Szelepcsényi, “The method of forced enumeration for nondeterministic automata”, Acta Informatica 26 (1988), 279–284.
1994: Johan Håstad, “Almost optimal lower bounds for small depth circuits”, Advances in Computing Research 5 (1989), 143–170.
1993: László Babai and Shlomo Moran, “Arthur-Merlin games: a randomized proof system and a hierarchy of complexity classes”, Journal of Computer and System Sciences 36 (1988), 254–276.
Shafi Goldwasser, Silvio Micali and Charles Rackoff, “The knowledge complexity of interactive proof systems”, SIAM Journal on Computing 18 (1989), 186–208.
2453 | dbpedia | 3 | 93 | https://www.yumpu.com/en/document/view/20828033/annual-report-2001-fields-institute-university-of-toronto | en | Annual Report 2001 - Fields Institute - University of Toronto | Yumpu.com
2453 | dbpedia | 1 | 49 | https://igorpak.wordpress.com/tag/peter-sarnak/ | en | Igor Pak's blog | Posts about Peter Sarnak written by igorpak
The conjectures also vary in attitude. Like a finish line ribbon they all appear equally vulnerable to an outsider, but in fact differ widely from race to race. Some are eminently reachable, the only question being who will get there first (think 100 meter dash). Others are barely on the horizon, requiring both great effort, variety of tools, and an extended time commitment (think ironman triathlon). The most celebrated third type are like those Sci-Fi space expeditions in requiring hundreds of years multigenerational commitments, often losing contact with civilization it left behind. And we can’t forget the romantic fourth type — like the North Star, no one actually wants to reach them, as they are largely used for navigation, to find a direction in unchartered waters.
Now, conjectures famously provide a foundation of the scientific method, but that’s not at all how we actually think of them in mathematics. I argued back in this pointed blog post that citations are the most crucial for the day to day math development, so one should take utmost care in making references. While this claim is largely uncontroversial and serves as a raison d’être for most GoogleScholar profiles, conjectures provide a convenient idealistic way out. Thus, it’s much more noble and virtuous to say “I dedicated my life to the study of the XYZ Conjecture” (even if they never publish anything), than “I am working hard writing so many papers to gain respect of my peers, get a promotion, and provide for my family“. Right. Obviously…
But given this apparent (true or perceived) importance of conjectures, are you sure you are using them right? What if some/many of these conjectures are actually wrong, what then? Should you be flying that starship if there is no there there? An idealist would argue something like “it’s a journey, not a destination“, but I strongly disagree. Getting closer to the truth is actually kind of important, both as a public policy and on an individual level. It is thus pretty important to get it right where we are going.
What are conjectures in mathematics?
That’s a stupid question, right? Conjectures are mathematical claims whose validity we are trying to ascertain. Is that all? Well, yes, if you don’t care whether anyone will actually work on the conjecture. In other words, something about the conjecture needs to be interesting and inspiring.
What makes a conjecture interesting?
This is a hard question to answer because it is as much psychological as it is mathematical. A typical answer would be “oh, because it’s old/famous/beautiful/etc.” Uhm, ok, but let’s try to be a little more formal.
One typically argues “oh, that’s because this conjecture would imply [a list of interesting claims and known results]”. Well, ok, but this is self-referential. We already know all those “known results”, so no need to prove them again. And these “claims” are simply other conjectures, so this is really an argument of the type “this conjecture would imply that conjecture”, so not universally convincing. One can argue: “look, this conjecture has so many interesting consequences”. But this is both subjective and unintuitive. Shouldn’t having so many interesting conjectural consequences suggest that perhaps the conjecture is too strong and likely false? And if the conjecture is likely to be false, shouldn’t this make it uninteresting?
Also, wouldn’t it be interesting if you disproved a conjecture everyone believes to be true? In some sense, wouldn’t it be even more interesting if, until now, everyone was simply wrong?
None of these are new ideas, of course. For example, faced with the need to justify the “great” BC conjecture, or rather 123 pages of survey on the subject (which is quite interesting and doesn’t really need to be justified), the authors suddenly turned reflective. Mindful of the self-referential approach, which they quickly discard, they chose a different tactic:
We believe that the interest of a conjecture lies in the feeling of unity of mathematics that it entails. [M.P. Gomez Aparicio, P. Julg and A. Valette, “The Baum-Connes conjecture“, 2019]
Huh? Shouldn’t math be about absolute truths, not feelings? Also, in my previous blog post, I mentioned Noga Alon‘s quote that Mathematics is already “one unit“. If it is, why does it need a new “feeling of unity“? Or is that like one of those new age ideas which stop being true if you don’t reinforce them at every occasion?
If you are confused at this point, welcome to the club! There is no objective way to argue what makes certain conjectures interesting. It’s all in our imagination. Nikolay Konstantinov once told me that “mathematics is a boring subject because every statement is equivalent to saying that some set is empty.” He meant to be provocative rather than uninspiring. But the problem he is highlighting is quite serious.
What makes us believe a conjecture is true?
We already established that in order to argue that a conjecture is interesting, we need to argue that it’s also true, or at least that we want to believe it to be true in order to have all those consequences. Note, however, that we argue that a conjecture is true in exactly the same way we argue it’s interesting: by showing that it holds in some special cases, and that it would imply other conjectures which are believed to be true because they are also checked in various special cases. So in essence, this gives “true = interesting” in most cases. Right?
This is where it gets complicated. Say you are working on the “abc conjecture”, which may or may not be open. You claim that it has many consequences, which makes it both likely true and interesting. One of them is the negative solution to the Erdős–Ulam problem about the existence of a dense set in the plane with rational pairwise distances. But a positive solution to the E-U problem implies Harborth’s conjecture (aka the “integral Fáry problem“) that every planar graph can be drawn in the plane with rational edge lengths. So, counterintuitively, if you follow the logic above, shouldn’t you be working on a positive solution to Erdős–Ulam, since it would both imply one conjecture and give a counterexample to another? For the record, I wouldn’t do that, just making a polemical point.
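To keep the arrows straight, here is the implication chain in shorthand (my notation; EU stands for a positive answer to the Erdős–Ulam problem):

```latex
% abc implies a negative answer to Erdos-Ulam, while a positive
% answer to Erdos-Ulam implies Harborth's conjecture:
\[
  \mathsf{abc} \;\Longrightarrow\; \neg\,\mathsf{EU},
  \qquad
  \mathsf{EU} \;\Longrightarrow\; \mathsf{Harborth}.
\]
```

So a proof of EU would refute abc and establish Harborth in one stroke, while a disproof of EU would confirm a consequence of abc yet say nothing about Harborth.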
I am really hoping you see where I am going. Since there is no objective way to tell if a conjecture is true or not, and what exactly is so interesting about it, shouldn’t we discard our biases and also work towards disproving the conjecture just as hard as trying to prove it?
What do people say?
It’s worth starting with a general (if slightly poetic) modern description:
In mathematics, [..] great conjectures [are] sharply formulated statements that are most likely true but for which no conclusive proof has yet been found. These conjectures have deep roots and wide ramifications. The search for their solution guides a large part of mathematics. Eternal fame awaits those who conquer them first. Remarkably, mathematics has elevated the formulation of a conjecture into high art. [..] A well-chosen but unproven statement can make its author world-famous, sometimes even more so than the person providing the ultimate proof. [Robbert Dijkgraaf, The Subtle Art of the Mathematical Conjecture, 2019]
Karl Popper thought that conjectures are foundational to science, even if somewhat idealized the efforts to disprove them:
[Great scientists] are men of bold ideas, but highly critical of their own ideas: they try to find whether their ideas are right by trying first to find whether they are not perhaps wrong. They work with bold conjectures and severe attempts at refuting their own conjectures. [Karl Popper, Heroic Science, 1974]
Here is how he reconciled somewhat the apparent contradiction:
On the pre-scientific level we hate the very idea that we may be mistaken. So we cling dogmatically to our conjectures, as long as possible. On the scientific level, we systematically search for our mistakes. [Karl Popper, quoted by Bryan Magee, 1971]
Paul Erdős was, of course, a champion of conjectures and open problems. He joked that the purpose of life is “proof and conjecture” and this theme is repeatedly echoed when people write about him. It is hard to overestimate his output, which included hundreds of talks titled “My favorite problems“. He wrote over 180 papers with collections of conjectures and open problems (nicely assembled by Zbl. Math.)
Peter Sarnak has a somewhat opposite point of view, as he believes one should be extremely cautious about stating a conjecture so people don’t waste time working on it. He said once, only half-jokingly:
Since we reward people for making a right conjecture, maybe we should punish those who make a wrong conjecture. Say, cut off their fingers. [Peter Sarnak, UCLA, c. 2012]
This is not an exact quote — I am paraphrasing from memory. Needless to say, I disagree. I don’t know how many fingers he wished Erdős should lose, since some of his conjectures were definitely disproved: one, two, three, four, five, and six. This is not me gloating; the opposite, in fact. When you are stating hundreds of conjectures over the span of almost 50 years, having only a handful disproved is an amazing batting average. It would, however, make me happy if Sarnak’s conjecture were disproved someday.
Finally, there is a bit of a controversy over whether conjectures are worth as much as theorems. This is aptly summarized in this quote about yet another champion of conjectures:
Louis J. Mordell [in his book review] questioned Hardy‘s assessment that Ramanujan was a man whose native talent was equal to that of Euler or Jacobi. Mordell [..] claims that one should judge a mathematician by what he has actually done, by which Mordell seems to mean, the theorems he has proved. Mordell’s assessment seems quite wrong to me. I think that a felicitous but unproved conjecture may be of much more consequence for mathematics than the proof of many a respectable theorem. [Atle Selberg, “Reflections Around the Ramanujan Centenary“, 1988]
So, what’s the problem?
Well, the way I see it, the efforts made towards proving vs. disproving conjectures are greatly out of balance. Despite all of Popper’s high-minded claims about “severe attempts at refuting their own conjectures“, I don’t think there is much truth to that in modern math sciences. This does not mean that disproofs of famous conjectures aren’t celebrated. Sometimes they are, see below. But it’s clear to me that proofs are celebrated more frequently, and to a much greater degree. I have only anecdotal evidence to support my claim, but bear with me.
Take prizes. Famously, the Clay Math Institute gives $1 million for a solution of any of these major open problems. But look closely at the rules. According to item 5b, except for the P vs. NP problem and the Navier–Stokes Equation problem, it gives nothing ($0) for a disproof of these problems. Why, oh why?? Let’s look into CMI’s “primary objectives and purposes“:
To recognize extraordinary achievements and advances in mathematical research.
So it sounds like CMI does not think that disproving the Riemann Hypothesis needs to be rewarded, because this wouldn’t “advance mathematical research”. Surely you are joking? Whatever happened to “the opposite of a profound truth may well be another profound truth“? Why does the CMI want to put its thumb on the scale and support only one side? Do they not want to find out the solution, whatever it is? Shouldn’t they be eager to dispense with the “wrong conjecture” so as to save numerous researchers from “advances to nowhere“?
I am sure you can see that my blood is boiling, but let’s proceed to the P vs. NP problem. What if it’s independent of ZFC? Clearly, CMI wouldn’t pay for proving that. Why not? It’s not like this kind of thing never happened before (see obligatory link to CH). Some people believe that (or at least they did in 2012), and some people like Scott Aaronson take this seriously enough. Wouldn’t this be a great result worthy of an award as much as the proof that P=NP, or at least a nonconstructive proof that P=NP?
If your head is not spinning hard enough, here is another amusing quote:
Of course, it’s possible that P vs. NP is unprovable, but that that fact itself will forever elude proof: indeed, maybe the question of the independence of P vs. NP is itself independent of set theory, and so on ad infinitum! But one can at least say that, if P vs. NP (or for that matter, the Riemann hypothesis, Goldbach’s conjecture, etc.) were proven independent of ZF, it would be an unprecedented development. [Scott Aaronson, P vs. NP, 2016].
Speaking of Goldbach’s Conjecture, the most talked-about and the most intuitively correct statement in Number Theory that I know. In a publicity stunt, for two years there was a $1 million prize by a publishing house for a proof of the conjecture. Why just for the proof? I never heard of anyone not believing the conjecture. If I were the insurance underwriter for the prize (I bet they had one), I would allow them to use “for the proof or disproof” for a mere extra $100 in premium. For another $50 I would let them use “or independent of ZF” — it’s free money, so why not? Rewarding only one kind of research outcome is such a pernicious idea!
Curiously, even for Goldbach’s Conjecture, there is a mild divergence of POVs on what the future holds. For example, Popper writes (twice in the same book!) that:
[On whether Goldbach’s Conjecture is ‘demonstrable’] We don’t know: perhaps we may never know, and perhaps we can never know. [Karl Popper, Conjectures and Refutations, 1963]
Ugh. Perhaps. I suppose anything can happen… For example, our civilizations can “perhaps” die out in the next 200 years. But is that likely? Shouldn’t the gloomy past be a warning, not a prediction of the future? The only thing more outrageously pessimistic is this theological gem of a quote:
Not even God knows the number of permutations of 1000 avoiding the 1324 pattern. [Doron Zeilberger, quoted here, 2005]
Thanks, Doron! What a way to encourage everyone! Since we know from numerical estimates that this number is ≈ 3.7 × 10^1017 (see this paper and this follow up), Zeilberger is suggesting that large pattern avoidance numbers are impossibly hard to compute precisely, already in the range of only about 1018 digits. I really hope he is proved wrong in his lifetime.
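To make the scale concrete, here is a naive brute-force count of 1324-avoiding permutations (a throwaway sketch of mine, not taken from the linked papers); it reproduces the known initial terms and already becomes hopeless around n = 10, never mind n = 1000:

```python
from itertools import combinations, permutations

def avoids_1324(p):
    # p contains the pattern 1324 if there are positions i<j<k<l with
    # p[i] < p[k] < p[j] < p[l]; avoidance means no such quadruple exists.
    return not any(p[i] < p[k] < p[j] < p[l]
                   for i, j, k, l in combinations(range(len(p)), 4))

for n in range(1, 9):
    print(n, sum(avoids_1324(p) for p in permutations(range(n))))
# Prints 1, 2, 6, 23, 103, 513, 2762, 15793 -- brute force hits a wall
# almost immediately, which is exactly the point about n = 1000.
```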
But I digress. What I mean to emphasize is that there are many ways a problem can be resolved. Yet some outcomes are considered more valuable than others. Shouldn’t the research achievements be rewarded, not the desired outcome? Here is yet another colorful opinion on this:
Given a conjecture, the best thing is to prove it. The second best thing is to disprove it. The third best thing is to prove that it is not possible to disprove it, since it will tell you not to waste your time trying to disprove it. That’s what Gödel did for the Continuum Hypothesis. [Saharon Shelah, Rutgers Univ. Colloquium, 2001]
Why do I care?
For one thing, disproving conjectures is part of what I do. Sometimes people are a little shy to unambiguously state them as formal conjectures, so they phrase them as questions or open problems, but then clarify that they believe the answer is positive. This is a distinction without a difference, or at least I don’t see one (maybe they are afraid of Sarnak’s wrath?). Regardless, proving their beliefs wrong is still what I do.
For example, here is my old blog post on my disproof of the Noonan–Zeilberger Conjecture (joint with Scott Garrabrant). And in this recent paper (joint with Danny Nguyen), we disprove in one big swoosh Barvinok’s Problem, Kannan’s Problem, and the Woods Conjecture. Just this year I disproved three conjectures:
The Kirillov–Klyachko Conjecture (2004) that the reduced Kronecker coefficients satisfy the saturation property (this paper, joint with Greta Panova).
The Brandolini et al. Conjecture (2019) that concrete lattice polytopes can multitile the space (this paper, joint with Alexey Garber).
Kenyon’s Problem (c. 2005) that every integral curve in R3 is a boundary of a PL surface comprised of unit triangles (this paper, joint with Alexey Glazyrin).
On top of that, just two months ago in this paper (joint with Han Lyu), we showed that the remarkable independence heuristic by I. J. Good for the number of contingency tables fails badly even for nearly all uniform marginals. This is not exactly a disproof of a conjecture, but it’s close, since the heuristic was introduced back in 1950 and continues to be used in practice.
In addition, I am currently working on disproving two more old conjectures, which will remain unnamed until the time we actually resolve them (which might never happen, of course). In summary, I am deeply invested in disproving conjectures. The reasons why are somewhat complicated (see some of them below). But whatever my reasons, I demand and naively fully expect that my disproofs be treated on par with proofs, regardless of whether this expectation bears any relation to reality.
My favorite disproofs and counterexamples:
There are many. Here are just a few, some famous and some not-so-famous, in historical order:
Fermat‘s conjecture (letter to Pascal, 1640) on primality of Fermat numbers, disproved by Euler (1732)
Tait’s conjecture (1884) on hamiltonicity of graphs of simple 3-polytopes, disproved by W.T. Tutte (1946)
General Burnside Problem (1902) on finiteness of periodic groups, resolved negatively by E.S. Golod (1964)
Keller’s conjecture (1930) on tilings with unit hypercubes, disproved by Jeff Lagarias and Peter Shor (1992)
Borsuk’s Conjecture (1932) on partitions of convex sets into parts of smaller diameter, disproved by Jeff Kahn and Gil Kalai (1993)
Hirsch Conjecture (1957) on the diameter of graphs of convex polytopes, disproved by Paco Santos (2010)
Woods’s conjecture (1972) on the covering radius of certain lattices, disproved by Oded Regev, Uri Shapira and Barak Weiss (2017)
Connes embedding problem (1976), resolved negatively by Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright and Henry Yuen (2020)
In all these cases, the disproofs and counterexamples didn’t stop the research. On the contrary, they gave a push to further (sometimes numerous) developments in the area.
Why should you disprove conjectures?
There are three reasons, of different nature and importance.
First, disproving conjectures is opportunistic. As mentioned above, people seem to try proving much harder than they try disproving. This creates niches of opportunity for an open-minded mathematician.
Second, disproving conjectures is beautiful. Let me explain. Conjectures tend to be rigid, as in “objects of the type pqr satisfy property abc.” People like me believe in the idea of “universality“. Some might call it “completeness” or even “Murphy’s law“, but the general principle is always the same. Namely: it is not sufficient that one wishes that all pqr satisfy abc to actually believe in the implication; rather, there has to be a strong reason why abc should hold. Barring that, pqr can possibly be almost anything, so in particular non-abc. While some would argue that non-abc objects are “ugly” or at least “not as nice” as abc, the idea of universality means that your objects can be of every color of the rainbow — nice color, ugly color, startling color, quiet color, etc. That kind of palette has its own sense of beauty, but it’s an acquired taste I suppose.
Third, disproving conjectures is constructive. It depends on the nature of the conjecture, of course, but one is often faced with the necessity to construct a counterexample. Think of this as an engineering problem of building some pqr which at the same time is not abc. Such a construction, if at all possible, might be difficult, time consuming and computer assisted. But so what? What would you rather do: build a mile-high skyscraper (none exist yet) or prove that this is impossible? Curiously, in CS Theory both algorithms and (many) complexity results are constructive (you need gadgets). Even the GCT is partially constructive, although explaining that would take us a while.
What should the institutions do?
If you are an institution which awards prizes, stop with the legal nonsense: “We award […] only for a publication of a proof in a top journal”. You need to set up a scientific committee anyway, since otherwise it’s hard to tell sometimes if someone deserves a prize. With mathematicians you can expect anything anyway. Some would post two arXiv preprints, give a few lectures and then stop answering emails. Others would publish only in a journal where they are Editor-in-Chief. It’s stranger than fiction, really.
What you should do is say in the official rules: “We have [this much money] and an independent scientific committee which will award any progress on [this problem], partially or in full, as they see fit.” Then a disproof or an independence result will receive just as much as a proof (what’s done is done; what else are you going to do with the money?). This would also allow some flexibility for partial solutions. Say, somebody proves Goldbach’s Conjecture for integers > exp(exp(10^100000)), way, way beyond computational powers for the remaining integers to be checked. I would give this person at least 50% of the prize money, leaving the rest for future developments of possibly many people improving on the bound. However, under the old prize rules such a person gets bupkes for their breakthrough.
What should the journals do?
In short, become more open to results of a computational and experimental nature. If this sounds familiar, that’s because it’s a summary of Zeilberger’s Opinions, viewed charitably. He is correct on this. This includes publishing results of the type “Based on computational evidence we believe in the following UVW conjecture” or “We develop a new algorithm which confirms the UVW conjecture for n<13”. These are still contributions to mathematics, and the journals should learn to recognize them as such.
To put this in the context of our theme, it is clear that a lot more effort has been placed on proofs than on finding counterexamples. However, in many areas of mathematics there are no small counterexamples, so a heavy computational effort is crucial for any hope of finding one. Such work is not as glamorous as traditional papers. But really, when it comes to standards, if a journal is willing to publish the study of something like the "null graphs", the ship has sailed for you…
Let me give you a concrete example where a computational effort is indispensable. The curious Lovász conjecture states that every finite connected vertex-transitive graph contains a Hamiltonian path. This conjecture has got to be false. It hits every red flag: there is really no reason why pqr = "vertex transitive" should imply abc = "Hamiltonian". The best lower bound for the length of the longest (self-avoiding) path is only about the square root of the number of vertices. In fact, even the original wording by Lovász shows he didn't believe the conjecture is true (also, I asked him and he confirmed).
Unfortunately, proving that some potential counterexample is not Hamiltonian is computationally difficult. I once had an idea of one (a nice cubic Cayley graph on "only" 3600 vertices), but Bill Cook quickly found a Hamiltonian cycle, dashing my hopes (it was kind of him to look into this problem). Maybe someday, when the TSP solvers are fast enough on much larger graphs, it will be time to return to this problem and thoroughly test it on large Cayley graphs. But say, despite long odds, I succeed and find a counterexample. Would a top journal publish such a paper?
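To make the flavor of such a search concrete, here is a toy version in Python (entirely my own illustration, with my own function names), run on the Petersen graph, a classic vertex-transitive graph with no Hamiltonian cycle that nevertheless has a Hamiltonian path, so the conjecture survives it:

def hamiltonian_path_exists(adj, path):
    # naive depth-first extension of a partial path
    if len(path) == len(adj):
        return True
    return any(hamiltonian_path_exists(adj, path + [v])
               for v in adj[path[-1]] if v not in path)

# Petersen graph: outer 5-cycle 0..4, spokes, inner pentagram 5..9
petersen = {v: set() for v in range(10)}
for i in range(5):
    for u, w in [(i, (i + 1) % 5), (i, i + 5), (i + 5, 5 + (i + 2) % 5)]:
        petersen[u].add(w)
        petersen[w].add(u)

# by vertex-transitivity, one starting vertex suffices
print(hamiltonian_path_exists(petersen, [0]))   # True

Real candidates are vastly larger, which is exactly why this becomes a TSP-solver job rather than a ten-line script.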
Editor’s dilemma
There are three real criteria for evaluating a solution of an open problem at a journal:
Is this an old, famous, or well-studied problem?
Are the tools interesting or innovative enough to be helpful in future studies?
Are the implications of the solution to other problems important enough?
Now let's run a hypothetical experiment. Let's say a paper is submitted to a top math journal which solves a famous open problem in Combinatorics. Further, let's say somebody already proved it is equivalent to a major problem in TCS. This checks criteria 1 and 3. Until not long ago it would have been rejected regardless, so let's assume this is happening relatively recently.
Now imagine two parallel worlds, where in the first world the conjecture is proved on 2 pages using beautiful but elementary linear algebra, and in the second world the conjecture is disproved in a 2-page summary of a detailed computational search. So in neither world do we have much to satisfy criterion 2. Now, a quiz: in which world will the paper be published?
You may have recognized the first world: it is the story of Hao Huang's elegant proof of the induced subgraphs of hypercubes conjecture, which implies the sensitivity conjecture. The Annals published it, I am happy to learn, in a welcome break with the past. But unless we are talking about some 200-year-old famous conjecture, I can't imagine the Annals accepting a short computational paper in the second world. Indeed, it took a bit of a scandal to accept even the 400-year-old Kepler conjecture, which was proved in a remarkable computational work.
Now think about this. Is any of that fair? Shouldn’t we do better as a community on this issue?
What do other people do?
Over the years I asked a number of people about the uncertainty created by the conjectures and what they do about it. The answers surprised me. Here I am paraphrasing them:
Some were dumbfounded: "What do you mean this conjecture could be false? It has to be true, otherwise nothing I am doing makes much sense."
Others were simplistic: “It’s an important conjecture. Famous people said it’s true. It’s my job to prove it.”
Third were defensive: “Do you really think this conjecture could be wrong? Why don’t you try to disprove it then? We’ll see who is right.”
Fourth were biblical: “I tend to work 6 days a week towards the proof and one day towards the disproof.”
Fifth were practical: “I work on the proof until I hit a wall. I use the idea of this obstacle to try constructing potential counterexamples. When I find an approach to discard such counterexamples, I try to generalize the approach to continue working on the proof. Continue until either side wins.”
If the last two seem sensible to you, that's because they are. However, I bet the fourth are just grandstanding; no way they actually do that. The fifth sounds great when it is possible, but that's exceedingly rare, in my opinion. We live in a technical age when proving new results often requires a great deal of effort and technology. You likely have the tools and intuition to work in only one direction. Why would you want to waste time working in the other?
What should you do?
First, remember to make conjectures. Every time you write a paper, tell a story of what you proved. Then tell a story of what you wanted to prove but couldn't. State it in the form of a conjecture. Don't be afraid of being wrong, or of being right but oversharing your ideas. It's a downside, sure. But the upside is that your conjecture might prove very useful to others, especially young researchers. It might advance the area, or help you find a collaborator to resolve it.
Second, learn to check your conjectures computationally in many small cases (see the sketch after this list). It's important to give supporting evidence so that others take your conjectures seriously.
Third, learn to run experiments and explore the area computationally. That's how you make new conjectures.
Fourth, understand yourself. Your skill, your tools. Your abilities like problem solving, absorbing information from the literature, or making bridges to other fields. Faced with a conjecture, use this knowledge to understand whether, at least in principle, you might be able to prove or disprove it.
Fifth, actively look for collaborators. Those who have skills, tools, or abilities you are missing. More importantly, they might have a different POV on the validity of the conjecture and how one might want to attack it. Argue with them and learn from them. | |||||
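P.S. Here is the kind of minimal computational check I mean in the second point above (the code is mine and purely illustrative), testing Goldbach for all small even numbers before staking anything on it:

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_holds_up_to(bound):
    # every even n >= 4 should be a sum of two primes
    return all(any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))
               for n in range(4, bound + 1, 2))

print(goldbach_holds_up_to(10000))   # True, which proves nothing, but it is evidence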
2453 | dbpedia | 3 | 11 | https://mathoverflow.net/questions/44244/what-recent-discoveries-have-amateur-mathematicians-made | en | What recent discoveries have amateur mathematicians made? | [
"https://cdn.sstatic.net/Sites/mathoverflow/Img/logo.svg?v=3a674b060adf",
"https://i.sstatic.net/lg0bO.png",
"https://mathoverflow.net/posts/44244/ivc/0eda?prg=ae7ee04e-6b0d-44f6-b249-ff2f24a8beab"
] | [] | [] | [
""
] | null | [] | 2010-10-30T14:20:24 | E.T. Bell called Fermat the Prince of Amateurs. One hundred years ago Ramanujan amazed the mathematical world. In between were many important amateurs and mathematicians off the beaten path, but what | en | https://cdn.sstatic.net/Sites/mathoverflow/Img/favicon.ico?v=8bbfe38cfc48 | MathOverflow | https://mathoverflow.net/questions/44244/what-recent-discoveries-have-amateur-mathematicians-made | About ten years ago Ahcène Lamari and Nicholas Buchdahl independently proved that all compact complex surfaces with even first Betti number are Kähler. This had been known since 1983, but earlier proofs made use of the classification of surfaces to reduce to hard case-by-case verification.
At the time, Lamari was a teacher at a high school in Paris. Apparently he announced his result by crashing a conference in Paris and going up to Siu (who had proved the last case in the earlier proof in 1983) with a copy of his proof. Lamari's proof was published in the Annales de l'Institut Fourier in 1999 (Courants kählériens et surfaces compactes, Annales de l'institut Fourier, 49 no. 1 (1999), p. 263-285, doi:10.5802/aif.1673), next to Buchdahl's (On compact Kähler surfaces, Annales de l'institut Fourier, 49 no. 1 (1999), p. 287-302, doi: 10.5802/aif.1674)
Greg Egan. He's a very renowned science fiction writer who holds a bachelor's degree in mathematics. He wrote, as a coauthor, two articles which were published in peer-reviewed journals, one of them with John Baez. The first one was written when he was approximately 40 years old.
There's also the more eccentric example of Andrew Beal, who is much better known in the world of poker. He made, however, one minor conjecture in number theory, for whose proof or disproof he offers $100,000.
And there's also a list on wikipedia which might be worth going through.
Edit: (Nov 2018) Some recent progress has been made by G. Egan together with an anonymous 4chan member, on a problem about permutations.
The complete graph on $n$ vertices $K_n$ is not planar for $n \geq 5.$ One may ask: what is the maximum Euler characteristic $\gamma(K_n)$ among all compact orientable surfaces into which $K_n$ may be embedded? It is a nice exercise to embed $K_5,$ $K_6,$ and $K_7$ into the torus. The final result was that $\gamma(K_n) = 2 \lfloor \frac{n (7 - n)}{12} \rfloor.$ In 1968 this theorem had been proven for "all cases except $n = 18,20,$ and $23.$ The proof was completed, at the end of the sixties, by Jean Mayer, a professor of French literature (!), when he found embeddings for these three values." (Surface topology, Firby and Gardiner, p. 111).
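For small $n$ the formula is quick to tabulate; a throwaway Python script (variable names mine):

for n in range(5, 11):
    chi = 2 * ((n * (7 - n)) // 12)   # floor division implements the floor above
    genus = (2 - chi) // 2            # chi = 2 - 2g for a closed orientable surface
    print(n, chi, genus)              # n = 7 gives chi = 0, genus 1: the torus, as in the exercise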
Kenneth A. Perko Jr. is a lawyer and an amateur topologist (with graduate-level training). In 1974 he found that two knots that were listed as separate knots in C. N. Little's "On knots, with a census for order 10" (1885) and similar tables, were actually identical.
Mathoverflow-user Daniel Moskovich recounted earlier on this site:
Little (with Tait and Kirkman) compiled his tables combinatorially. He drew all possible 4-valent graphs with some number of vertices (in this case 10), and resolved 4-valent vertices into crossings in all possible ways. He ended up with 210 knots. Then he worked BY HAND to eliminate doubles, by making physical models with string. He failed to bring these two knots to the same position, and concluded that they must be different. It took almost 100 years to find the ambient isotopy which shows that they are the same knot.
The book "Knots and Links" by Dale Rolfsen, published two years after Perko's publication, still lists the knots as different; they are knots $10_{161}$ and $10_{162}$ in Appendix C.
An anonymous poster of a 4chan messaging board, in thinking about how long it would take to watch a 14-episode nonlinear anime program in any order, improved the lower bound for a length of a superpermutation. A superpermutation is a string that contains each permutation of $n$ elements as a substring. See OEIS A180632. Superpermutations are somewhat similar to De Bruijn sequences.
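A quick way to play with the definition (illustrative Python, function name mine):

from itertools import permutations

def is_superpermutation(s, n):
    # every permutation of 1..n must occur as a contiguous substring of s
    return all("".join(p) in s for p in permutations("123456789"[:n]))

print(is_superpermutation("123121321", 3))   # True; length 9 is minimal for n = 3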
Whether the anonymous poster meets the definition of "amateur" may never be known, but the posting was from 2011, and apparently was noted by a handful of other mathematicians who think about these things not long afterwards.
The story has taken off in the public recently in part because Greg Egan, who was previously mentioned, has also in October 2018 improved the upper bound on the length of a minimal superpermutation.
Quanta Magazine has a nice article as well.
Eugène Ehrhart was a high school teacher when he discovered the so-called Ehrhart polynomial, at the age of 55. He got his PhD at the age of 60.
Let $\Delta$ be a polytope with integral vertices in $\bf R^d$. Then there exists a polynomial $P$ such that for every positive integer $n$, $P(n)$ is the number of integral points in $n\Delta$. This polynomial satisfies a duality property: $(-1)^dP(-n)$ is the number of integral points in the interior of $n\Delta$. This duality property has been interpreted as Serre duality on toric varieties by Khovanskii in the 80's. The geometric interpretation of the coefficients of $P$ is still an open problem despite a huge literature. See http://icps.u-strasbg.fr/~clauss/Ehrhart.html for a short bibliography, and https://en.wikipedia.org/wiki/Ehrhart_polynomial for an introduction to the subject.
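A toy illustration (code mine): for the unit square $\Delta = [0,1]^2$ one gets $P(n) = (n+1)^2$, and the duality predicts $(-1)^2 P(-n) = (n-1)^2$ interior points:

for n in range(1, 6):
    total = sum(1 for x in range(n + 1) for y in range(n + 1))     # points in n*Delta
    interior = sum(1 for x in range(1, n) for y in range(1, n))    # interior points
    assert total == (n + 1) ** 2 and interior == (n - 1) ** 2
print("Ehrhart count and reciprocity check out for the unit square")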
Bill Gates co-authored the following paper in the 1970s with Christos Papadimitriou:
"Bounds for sorting by prefix reversal," Discrete Mathematics 27 (1979), no. 1, 47–57, MR0534952.
Not sure if Gates counts as an amateur, but he is at least a college dropout. :)
The only reason I know this is because once I ran across a book or article that discusses the results in this paper and then says something like, "Yes, this is THE Bill Gates." I was almost certain the book or article was by Knuth, but now I can't find the reference in any of my Knuth books. If someone else knows the reference I'm talking about, I would be grateful if they would post it as a comment to my answer. (It now bothers me that I can't find that reference. :) )
Kurt Heegner was a radio engineer by trade, but gave (essentially) the first proof of the Gauss class number one problem in 1952: namely that $\mathbb{Q}(\sqrt{d})$ has class number $1$ if and only if $d \in \{-1, -2, -3, -7, -11, -19, -43, -67, -163\}$. Unfortunately, his work was largely ignored until around 1967, two years after his death.
His ideas also led to the development of Heegner points, which are very influential in modern number theory.
I was hoping that someone would add David Smith to this list. But a few days have passed and no one has brought this up. So please allow me to kick this thread a bit.
As discussed in another answer, it was recently discovered that an “einstein” tile exists, settling a long-standing open problem. This discovery is primarily credited to David Smith, who describes himself as a shape hobbyist.
The story is also featured in QuantaMagazine.
Richard Friedberg, then an undergraduate pre-medical student, independently solved Post's problem (of whether there are intermediate Turing degrees) by the priority-injury method. This was a significant open problem at the time, so the result made news:
1956 news article "Senior solves logic problem, astounds mathematicians"
In Gödel's now famous letter to von Neumann that introduced the P vs NP problem, Gödel wrote
I do not know if you have heard that “Post’s problem”, whether there are degrees of unsolvability among problems of the form (∃y)φ(y, x), where φ is recursive, has been solved in the positive sense by a very young man by the name of Richard Friedberg. The solution is very elegant. Unfortunately, Friedberg does not intend to study mathematics, but rather medicine (apparently under the influence of his father).
Friedberg ended up becoming a physicist (Wikipedia biography).
How about Saul Kripke?
Kripke-Platek set theory "is used all over the place, in recursion theory and set theory, [b]oth in classical results, and in fairly recent ones."
Modern philosophers (of science, mathematics, language etc. -- analytical philosophers) are probably a rich source of the list you seek. Many do not have above-undergrad training in math, although I would use the definition of 'amateur' that we think of when we think of the ancients: that is, people who are distinctly in another field but make contributions to mathematics as part of their work or hobby.
As for other philosophers/logicians (off the top):
Putnam
Frank Ramsey (I don't have the rep to post more links--had a bunch for this question)
I know that by the time we get to someone like Ramsey, everyone's like: "surely that's not an amateur mathematician" but by the definition given in the question, I think he fits.
At any rate, you can probably find the names you're looking for in analytical philosophy, (mathematical/computational) economics/biology/linguistics, and so on.
The problem, I suspect, will be (in addition to the definition of 'amateur' which is not too difficult in my opinion, as long as you are satisfied with it for your list) the definitions of 'important' and 'discovery'. For example, I've known about Kripke's contributions for a while but I don't know, even now, whether this community considers them as important.
In 2002, Manindra Agrawal, Neeraj Kayal, and Nitin Saxena proved the amazing result "PRIMES is in P," i.e., there is a deterministic, polynomial time algorithm for determining whether a given number is prime or not. The resulting paper was published in the Annals in 2004. The story was covered in the New York Times. The trio won the 2006 Fulkerson Prize, and the 2006 Gödel Prize.
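The congruence at the heart of AKS is elementary to experiment with: $n > 1$ is prime iff $(x+a)^n \equiv x^n + a \pmod n$ for suitable $a$, and the $a = 1$ case reduces to the classical fact that $n$ is prime iff $n$ divides every binomial coefficient $\binom{n}{k}$, $0 < k < n$. A toy check (code mine; the real algorithm checks the congruence modulo $x^r - 1$ for a small $r$, which is what makes it polynomial time):

from math import comb

def passes_binomial_test(n):
    # n > 1 is prime iff n divides C(n, k) for all 0 < k < n
    return all(comb(n, k) % n == 0 for k in range(1, n))

print([n for n in range(2, 40) if passes_binomial_test(n)])   # exactly the primes below 40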
In 2002, Kayal and Saxena were undergraduate students. I'd say that counts as "amateur." | ||||
2453 | dbpedia | 0 | 13 | https://arxiv.org/html/2403.09682v1 | en | On the Theory of Quantum and Towards Practical Computation | [
"https://arxiv.org/html/x1.png",
"https://arxiv.org/html/x2.png",
"https://arxiv.org/html/x3.png",
"https://arxiv.org/html/x4.png",
"https://arxiv.org/html/x5.png",
"https://arxiv.org/html/x6.png",
"https://arxiv.org/html/x7.png",
"https://arxiv.org/html/x8.png"
] | [] | [] | [
"Quantum Computation Fundamentals Review History Open Questions Quantum Phenomena Technology Algorithm Design Pattern Application."
] | null | [] | null | en | null |
License: CC BY 4.0
arXiv:2403.09682v1 [physics.pop-ph] 07 Feb 2024
11institutetext: University of Zagreb Faculty of Organization and Informatics,
Republic of Croatia
On the Theory of Quantum and Towards Practical Computation
Robert Kudelić
Abstract
Quantum computing exposes the brilliance of quantum mechanics through computer science and, as such, gives oneself a marvelous and exhilarating journey to go through. This article leads along that journey with a historical and current outlook on quantum computation that is geared toward computer experts but also to experts from other disciplines as well. It is an article that will bridge the vast gap between classical and quantum computation and open an entering wedge through which one will be able to both bring himself up to speed on quantum computation and, intrinsically, in a straightforward manner, become acquainted with it. We are indeed in luck to be living in an age where computing is being reinvented, and not only seeing history in the making firsthand but, in fact, having the opportunity to be the ones who are reinventing–and that is quite a thought.
Keywords:
Quantum Computation Fundamentals Review History Open Questions Quantum Phenomena Technology Algorithm Design Pattern Application.
1 For Once It All Began
How vast the chasm is, how difficult it is to grasp it, and how steep the learning curve has become–and perhaps always has been–is a realization to which one arrives when, for the first time, tries to bring oneself to a destination called quantum (QTM) computation. It is an awe-inspiring journey that through this article we will relive, unsealing its complex secrets, and gradually grasping computation known as quantum computation .
Before we therefore begin with the subject at hand, it would be of interest to give a brief historical background and a motivation for this work. It all began long ago, perhaps some years before what is typically remembered. All the way back in 1935, the principles of quantum mechanics were already heavily discussed [27, 66], namely superposition (a particle being in multiple states at the same time, until observed [64]) and entanglement (correlation between particle states no matter the distance between them [64]), which we will soon define in more detail and which are so crucial to quantum computation as well [4]. A number of decades prior to those events, on December 14, 1900, to be exact, Max Planck struck the beginning of quantum mechanics "at a meeting of the German Physical Society". [191] Those were tumultuous and exciting days, I presume, but the best was yet to come. A few decades passed, and ideas and research were advancing to and fro. Some scientists, excited, were trying to advance the theory of quantum mechanics, while others were working against it, but not only against it, even fighting it–which in science is business as usual: that which nature's physical systems deny needs to perish.
Then one day, as knowledge increased, some started pondering computation that is microscopic and able to simulate physical systems with which classical computers have difficulty. [204] That person, right at the forefront, thinking these "microscopic" thoughts that were far beyond the abilities of those days, was Richard Feynman. [204, 5] It is not known when exactly he first started pondering the idea of a quantum computer, but what is known is that in his 1959 talk he was predicting an enormous miniaturization of technology, even to the size of an atom. [204, 5] He saw nothing in the laws of nature that would disallow this miniaturization, and he spoke about it. [204, 5] Time passed, and Feynman, together with other scientists, tried to advance the issue. Then something happened, and a theory so necessary for practical quantum computation started to emerge.
In 1981, Feynman gave a conference talk on "simulating physics with computers" [204, 38], which was later published as an edited transcript [204] in a scientific journal [70]–and for all intents and purposes this event launched "quantum computing as a field of study" [204, 5], "which established the beginning of quantum information theory" [38]. At about the same time, others were investigating as well, and from then onward nothing was ever the same. What is fascinating is that both Manin [140] and Benioff [17] had, just a year prior, in 1980, brought to the foreground ideas of great significance. Manin, in his book Computable and Uncomputable [97, 204], discussed how simulating a many-particle system requires exponential cost on a classical computer [204, 140, 5], while Benioff went further down the quantum line, complementing Manin, explaining how one would describe computation from the quantum outlook and suggesting, by the construction of such a model, that quantum computation might be a possibility [17].
On a somewhat different note, the question that continually puzzled Einstein, whether two particles really can be entangled and have correlated states without hidden information, was being experimentally answered by Alain Aspect et al., and the answer was yes, they can. [38] With the first successful real-world experiments conducted a decade earlier, in 1972, by Freedman and Clauser (which in turn depended on the work of Carl Kocher [120]), and with an extra-strong correlation being observed [43, 74], Aspect's work, where "the greatest violation of generalized Bell's inequalities" [9] ever was achieved, put the predictions of quantum mechanics firmly on the map.
Only a few years later, in 1985, another important advance came when David Deutsch "formalized the notion of a quantum computer" [204, 60] and raised the question: "Whether quantum computers might have an advantage over classical computers at solving problems that have nothing to do with quantum physics" [204, 60]. True, the algorithm that Deutsch and Jozsa later published [61] was of little practical significance, but it showed the superiority in efficiency of the quantum algorithm over its deterministic classical counterpart. [61, 237] Thinking about quantum computation and the ideas that came through Benioff [17] and Feynman [70], Deutsch was led, in 1989, to propose what later became the standard model for describing quantum computation, the well-known circuit-gate model [62]. [5]
With Deutsch formalizing the notion of a quantum computer, Umesh Vazirani and his student Ethan Bernstein were formulating "a contrived problem that a quantum computer could solve with a super-polynomial speedup over a classical computer" [204, 23]–that was in 1993 [22]. The same superiority was presented in 1994 by Daniel Simon, who showed that by solving the idealized version of the problem, which is finding the period of a function, quantum computers could indeed achieve an exponential improvement in speed when compared to their classical counterparts. [237, 238, 204] And despite the fact that Simon's idea, just like the one from Deutsch, had little practical weight and no application in sight, that was soon to change, for in just a short while tremendous happenings would occur for quantum computation. [204]
The same idea, and an instance where quantum computers would show their superiority, in 1994 inspired Peter Shor to baffle the world and publish the paper in which he presented an efficient way to compute the Fourier transform, which he used to define an efficient algorithm for computing discrete logarithms–and all this was done for a quantum computer. [232, 235, 204] But that was not the end. A few days after the aforementioned breakthrough, by using similar ideas [204], and in the same seminal paper, Shor presented "an efficient quantum algorithm for factoring large numbers" [232, 204]. [232, 235] The implications for cryptanalysis were enormous, and interest in quantum computing once again exploded. [204]
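As an aside for the computationally inclined: the heavy lifting a quantum computer performs in Shor's algorithm is order (period) finding, while the surrounding number theory is classical. The following Python sketch is our own illustration, with brute-force order finding standing in for the quantum step, recovering the factors of 15:

from math import gcd

def order(a, N):
    # smallest r > 0 with a**r = 1 (mod N); this is the quantum computer's job
    r, x = 1, a % N
    while x != 1:
        x, r = (x * a) % N, r + 1
    return r

N, a = 15, 7
r = order(a, N)                            # r = 4 here
if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
    print(gcd(pow(a, r // 2) - 1, N),      # 3
          gcd(pow(a, r // 2) + 1, N))      # 5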
All was not well in the land called Q-Country, though, and at the same time those great achievements were being made, a dark cloud was looming over quantum computation, and that dark cloud was called decoherence–the inability of a computer to compute in a quantum manner because of interaction with the outside world. [204, 127, 256, 92] The question of decoherence was already being tackled, and it remains one of the main issues with quantum hardware to this day. [144, 33, 44, 32] Shor himself had already, in 1995 and 1996, published research on quantum error-correcting codes and on fault-tolerant methods by which one could compute reliably on quantum hardware, which is rather noisy. [234, 246, 233, 204] And with that, "by the end of 1996 it was understood, at least in principle, that quantum computing could be scaled up to large devices that solve very hard problems, assuming that errors afflicting the hardware are not too common or strongly correlated" [204, 3, 119, 202]–which is confirmed by the latest research dealing with quantum computation, scalability, and decoherence: "fault-tolerant quantum computation will be practically realizable." [123]
During those same exciting times [204], as John Preskill adequately called them [204, 201], another important realization was happening. It was the year 1995 when Cirac and Zoller published that, with the tools of atomic physics and quantum optics, one could implement a quantum computer and perform quantum logical operations. [47] Building on that foundation, a few months later in the same year, Monroe et al. demonstrated a fundamental quantum logic gate, the "operation of a two-bit controlled-NOT quantum logic gate", to be exact [150], which, coupled with simple single-bit operations, formed a universal quantum logic gate [150]–this was quite an important piece of the quantum computing puzzle, since if a correct and practical model of computation cannot be found, then all efforts, perspiration, and tears would be in vain. With the previous breakthroughs, especially those that happened during the last decades of the 20th century, a strong foundation was laid, and it seemed quite possible that one day quantum computation would be a reality. The possibility of that reality was never given up, and research continued.
Then, soon after Shor presented his Las Vegas quantum algorithms [232], in 1996 Lov Grover emerged with another fascinating discovery: it is possible to search a database for an entry in $\sqrt{N}$ time and identify a record with a probability of $\frac{1}{2}$ [87]–which represents quantum Monte Carlo and is asymptotically optimal [89], and by repeated sampling this probability can grow arbitrarily [87]. A substantial achievement, since classical machines, both deterministic and probabilistic, need $\frac{N}{2}$ time to achieve the same probability bound of $\frac{1}{2}$, and only in an ordered list, via binary search, can classical machines achieve $\log_{2}N$ time. [88] Others, also researching quantum computing, were advancing tools for better understanding such computations and developing theories for quantum state machines; 1997 was the year. [154, 155, 121]
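To see the $\sqrt{N}$ behavior concretely, one can simulate Grover's amplitude amplification directly on the state vector; the following NumPy sketch is purely illustrative (a database of size 8 with one marked item):

import numpy as np

N, marked = 8, 5
psi = np.full(N, 1 / np.sqrt(N))                     # uniform superposition
oracle = np.eye(N)
oracle[marked, marked] = -1                          # phase-flip the marked item
diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)   # inversion about the mean
for _ in range(int(np.pi / 4 * np.sqrt(N))):         # ~ sqrt(N) iterations
    psi = diffusion @ (oracle @ psi)
print(abs(psi[marked]) ** 2)                         # ~ 0.95, versus the initial 1/8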
Shortly after, just a few years having passed, in 2001 scientists at IBM announced that a successful test of a quantum computer had been conducted. The capacity of the machine was 7 qubits (first register 3, second register 4), and the quantum computer itself was implemented by nuclear magnetic resonance. [222] Shor's algorithm was executed on this machine, and by employing quantum effects, the number 15 was factorized [222]–an achievement for the history books, deserving of note as a big success. Then again, in 2007, a validation came when scientists at the University of Queensland (UQ) experimentally demonstrated execution of Shor's algorithm for large number factorization by "using quantum logic gates based on photon polarization"–they also factorized the number 15 (first register 3 qubits, second register 4 qubits). [222] At this stage, quantum computation had gone from theory to practice. By the end of the 1990s, enough foundational theory had been discovered, and the beginning of the 21st century was the dawn of practical quantum computation. Machines were being built, and algorithms were being implemented, and now theory and practice go together.
And so in 2009 and 2012 new experiments confirmed the reality of quantum computation, making it even stronger; one more successful experimental demonstration of Shor's algorithm took place, the method being an integrated waveguide based on a silicon chip, with only 4 qubits based on photons used for the factorization of the number 15 (first register 1 qubit, second register 3 qubits). [222] And as a supplement, in 2012, at the University of California (UC), one more experiment successfully factored the number 15, Shor's algorithm in action, "using phase qubits and superconducting wave resonators", with 4 qubits, just like the previous group of researchers (but with 2 qubits in the first register and 2 in the second as well). [222]
This series of implementations of quantum computers and successful algorithm runs continued, and soon there was quite a group of scientists who had dabbled in quantum computing and witnessed its strangeness and marvelousness at the same time, e.g. Martin-Lopez et al. in [141], factoring the number 21, via Shor, "using only two photon-based qubits" (2012); Nanyang Xu et al. in [273], turning the factorization problem into an optimization problem, by a scheme from Burges of Microsoft Research, and factoring the number 143 with only 4 qubits–this was an adiabatic algorithm run on a liquid-crystal nuclear magnetic resonance quantum processor; and, for example, Thomas Monz et al. in [152], who, via five trapped calcium ions on a quantum computer, implemented a scalable version of Shor's algorithm, with the approach providing "potential for designing a powerful quantum computer, but with fewer resources." [222]
Next, it was IBM again, which had seriously grabbed hold of quantum computation and was making strides. It was 2016 when the company announced the creation of a 5-qubit quantum computer, where one qubit would correct errors, with the computing device being based on a "five-qubit superconducting chip with star geometry and implementation of the complete Clifford algebra." [222] The machine was programmable; it allowed for the creation of gates and the modeling of operations. [222] But the progress did not stop there, as in 2017, in May, to be exact, another announcement was made: quantum computers with 16 and 17 qubits had been implemented; and then an enormous leap, in November of 2017, when IBM announced a quantum device with 50 qubits, where 20 qubits were used for computation and 30 were used for error correction. [222] It was possible for this quantum device to maintain its qubits in a coherent state for up to 90 μs, and with a consumption of 10-15 kW of power the device was "sufficiently energy-efficient"–without including the energy for cooling the device. [222]
Quantum computing research was now beyond its fledgling days, and in 2016 the first quantum satellite was launched from China; Micius it was called. [38] The goal of the space mission was to "perform quantum experiments at space scale", which was an important achievement for quantum communication and space science at the same time. [38] This attempt at a space-scale quantum leap resulted in 2020 in a new milestone for space quantum communications when, via Micius, a secure link, by quantum key distribution, was established between two on-ground stations that were separated by 1120 kilometers. [275] While these events were happening, another breakthrough was in the making.
Intel was interested in quantum computation, and this they loudly expounded in January 2018 when a declaration was made of a superconducting quantum chip implementation, the name was Tangle Lake, quite an Intelish name, I might add, and the number of qubits was 49. [222, 104] This event was followed by one coming from Google, for they presented in March 2018 a new superconducting quantum processor, Bristlecone, with a capacity of 72 qubits. [222, 115] This device was a continuation of a previous one, announced a few years earlier, with 9 qubits and a rather low level of error, which was 1% for data reading, with 0.1% and 0.6% for one-qubit and two-qubit quantum gates, respectively. [222] With a two-dimensional structure of two $6\cdot 6$ arrays that are placed one above the other, the system can track the errors happening during computation and correct them. [222]
With the ever-moving advance of quantum devices, research continued into different aspects of quantum mechanics, an important element for quantum computation, and although the evidence is still not conclusive, in 2018 quantum entanglement was observed in objects almost visible to the naked eye, a potential application of which could be seen in the quantum internet and in physics research. [214, 198]
In 2019, the Google AI Quantum group announced [204] "a 53-qubit superconducting chip named Sycamore, which they claim has demonstrated quantum supremacy" [213, 156]. A first claim of this type, and a very exciting one; however, it is often only when looking back from a distance that one can clearly see which event truly made the difference–it might be that it was this one, but perhaps it was not just yet. [222, 193]
As it seems that the previous question has not yet been answered, let us jump to one that is, namely, quantum entanglement. In 2022, after decades of effort and research, it seems that Einstein's "spooky action at a distance" has finally been thoroughly investigated and brought into the realm of fact, since in the year mentioned, Aspect, Clauser, and Zeilinger received the Nobel Prize in Physics [253], and while this research article is not about awards, a question that has for many decades puzzled some of the best minds deserves a mention. The last loophole in the well-known Bell test was closed in 2015 [94], thus supporting quantum theory [146]; the universe we live in is no longer strange; it is quantum entangled and magnificently fascinating.
If the previous event seemed imposing, the next one is in at least the same category, for in 2023 a reversing protocol for a quantum system was demonstrated, with the authors noting that this new understanding of quantum mechanics could have applications in quantum information technology. [225, 255] The protocol is a universal time-reversing mechanism with an arbitrarily high probability of success, where interference of different paths in the end causes the system to jump to the state it had some $T$ time units before–the protocol is one "requiring no knowledge of the quantum process to be rewound, is optimal in its running time, and brings quantum rewinding into a regime of practical relevance." [225, 255]
Which brings us to the cusp of the present: it is still unknown which technology will prevail [213], or whether it will perhaps be a mixture of the two, quantum and classical working in tandem, a most probable outcome; but what can be stated with greater certainty is that the next ten years will probably reveal and answer far more in terms of quantum machines' usefulness and their areas of specialty.
While the history of quantum computing is for the time being concluded, what comes next is an open question, a choice, and a work that is yours.
2 Quantumness of Quantum Computing
Even though quantum computing has seen great progress, it seems that it is a subject with which scientists and practitioners are still not that familiar. There are probably at least these reasons behind it: their education has not covered the topic, they still do not see the use of such a tool, the state of quantum computing is still far from mainstream, and the link between quantum physics and computing is not an easy one to make. It is also a matter of fact that quantum computation is a sub-discipline that is multidisciplinary in its essence and requires experts with vastly different backgrounds [157]; as such, it represents a sub-discipline that is difficult to get one's head around.
If one searches through the existing scientific papers, a substantial number of articles will be found, and the articles range from theory to practice, from the synthesis of knowledge to algorithms. Naturally, the papers include important elements that one needs when dealing with quantum computing. It is, however, problematic that so many of these elements present a stumbling block on the learning curve towards the quantum way of conducting work–quantum computation is so vastly different from classical computing, and it is perhaps quite daunting at the beginning to come from classical computation, where one knows much, to quantum computation, where one knows little.
For example, right at the start of one’s journey to the universe of quantum, instead of a classical bit for information storage, one is confronted with a quantum bit, i.e. a qubit. And instead of storing one value, as in a bit, in quantum computation, one has a situation where one qubit is in both states [153] (both levels) simultaneously.
After dealing with the qubit question, one is confronted with other quantum computing peculiarities, like entanglement and the collapse of quantum states through observation. It is almost one big thing after another, and to get to grips with these and other questions, the goals of this research article on quantum computation are the following:
Historical Outlook
Develop a never-before-published historical context of quantum computing that is encompassing and detailed without missing valuable information, precise, covering milestones, and presenting the most significant achievements.
Theory Chronology
Synthesize a one-of-a-kind broad, deep, precise, and thoroughly referenced chronological outlook on quantum computing, both textually and visually, through a timeline presenting a broad picture of the field and segments of its history that will expound on the progression of the theory, present those that came before, and show links between quantum phenomena and other fields.
Foundational Terminology
The basis of any theory, together with axioms, theorems, lemmas, and corollaries, is its terminology and the definitions of those terms. The literature at the moment offers no complete, deep, and well-referenced material. Such a state of affairs leads to confusion and a lack of understanding in terms of quantum computing. A compendium of such a nature is therefore a must; thus, to construct and present such a work is one of the goals of this paper.
Standard Model
As a way of delving into the practical part of quantum computing and gearing toward computer experts in a streamlined and straightforward manner, quantum computing knowledge will be combined through a standard model of computation, with special emphasis on foundational high-level quantum algorithm modeling and a design pattern.
General Outlook
Synthesis of the present state of the art with the future importance and possibilities of quantum computing. Embedding discussion on problems still in need of solving while not forgetting those pervasive open questions.
From Now to Beyond
Provide a number of quality literature materials that will present themselves as an extended arm of this research. Facilitating an even broader reach of the research conducted and enabling future research and algorithm development through a compact number of reliable steps to the next breakthroughs and game-changers.
In order to achieve the previous goals, an effort will be made to cater to the computer science mind and to build a strong theoretical foundation and intuition, thus enabling a correct, consistent, and deep understanding of quantum computation and the phenomena of quantum mechanics. With the introduction over, the next step in the journey is foundational terminology.
3 Foundational Terminology
When one is dealing with any subject, there are primarily two ways in which he can proceed to expose the issue. The first is to start with a general and then build in a top-down manner. The second is, of course, to start with concrete and then build in a bottom-up manner. They both have their pros and cons, with the latter being more fascinating and interesting, but perhaps in certain instances it is more difficult to understand in such a way, with the former being more conceptual and gradual, but not a stumbling block on the mind while trying to grasp some complex new idea. One would choose one or the other depending on the subject, audience, and perhaps some other factors as well.
It is often the case, perhaps even exclusively, in the scientific literature, at least in the discipline of quantum computation, that the more practical approach, which is bottom-up, is used. Considering that quantum computation at its best is physics in action, that approach is logical and has its merits. However, quantum mechanics is so strange and at times so counter-intuitive that it is quite challenging to understand its complex essence, and the mind has an issue combining all those different threads of thought at the same time–for thinking, one needs time, and for thinking about quantum computation, one needs a considerable amount of time. And if learning is impeded, if the subject has not been understood, one cannot expect great results from then on.
Therefore, in order to continue the train of thought from previous sections, to give the mind the necessary time for information incubation, and to build up essential intuition, before we delve into some concrete examples of quantum computation essential for the review and outlook that is being written, we will first define a broad range of terms that will be linked to that practical quantum computation and reveal fascinating knowledge about it, but not so overwhelming that it will impede progress more than would be expected. The first stop will then, fittingly, be the definition of quantum mechanics.
Quantum Mechanics It is said by physicists that quantum mechanics represents the most complete as well as the most accurate description of the universe we live in. [157] It is a theory consisting of rules and principles that define a framework that is then, in turn, used in order to develop other physical theories. [157] What these rules, principles, and mathematics are, we will soon see.
Quantum Computing The act of using those rules and principles of quantum mechanics in order to carry out computation is then called quantum computing. [213] Quantum computing has two powerful mechanisms through which computation is performed, namely superposition and entanglement, and these have no counterpart in classical computation. [213] Such is the nature of computation that is quantum, and these are its key advantages. [213] It is well known what data is and what information is, but how is that transferred into the realm of quantum? We will answer that next.
Quantum Information Those well-established definitions and understandings of data and information are at a general level unchanged; however, at the practical level, the situation is quite different. According to the well-known no-cloning theorem, quantum data cannot be copied, and as such, it lasts only as long as the program lasts. [268, 213] Data is, to a physicist, an encodable and storable feature that can be processed "in some physical system using some physical process." [204] Data may then be regarded as a feature that one stores and processes in a quantum state. [204]
Quantum Bit A qubit, or quantum bit, represents an indivisible unit of quantum data. [204] Abstract qubits can be encoded in a physical quantum system, and that qubit can be "an atom, an electron, a photon, an electrical circuit, or something else." [204] Unlike a classical bit that can be 0 or 1, a qubit can be in multiple states simultaneously, mathematically described as a vector in a complex Hilbert space, "with two mutually orthogonal basis states which we can label $|0\rangle$ and $|1\rangle$." [204] These orthonormal states can, for example, correspond to a different polarization of a photon or perhaps to a different spin of an electron. [215]
Superposition The feature of being quantized, Fig. 5, and having infinite degrees of freedom, that is, being in multiple states at the same time (a linear combination)–until an observation has been made. [249, 280] This feature represents one of the two main pillars of quantum mechanics, the other being entanglement. [29] Through superposition, one has access to the real power of quantum computation via the exponential state space of multiple qubits. [215] "Just as a single qubit can be in a superposition of 0 and 1, a register of $n$ qubits can be in a superposition of all $2^{n}$ possible values." [215]
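A direct way to internalize this is to write the state vector out explicitly; a small NumPy sketch (ours, purely illustrative):

import numpy as np

ket0 = np.array([1, 0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate
plus = H @ ket0                     # (|0> + |1>)/sqrt(2): both values at once
state = plus
for _ in range(2):                  # build a 3-qubit register
    state = np.kron(state, plus)
print(state.shape, state[0])        # (8,): 2**3 amplitudes, each 1/sqrt(8)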
Entanglement A quantum state where particles, Fig. 6, and in quantum computing qubits, are locked, with one exhibiting an influence on the other (there is a correlation between particle states; e.g. one particle collapses to 0, the consequence of which is that the other then measures to 1). [213] Distance between particles does not play a role; that is, entanglement correlation works regardless of the distance between particles–this is a phenomenon of which Einstein did not speak so kindly when he said "spooky action at a distance" [213], but it turned out to be correct nevertheless [188, 190]. Data in quantum computation is therefore stored both in qubits and in the relationships between them, with the amount of stored data being exponential in the number of qubits. [204]
Bell State A quantum state, also known as an EPR (Einstein, Podolsky, Rosen) pair, of two qubits that are in superposition and are maximally, in regard to correlation, quantumly entangled. [250, 16] These Bell states can be both symmetric and asymmetric (e.g. 1 and 1, or 1 and 0), [250, 76] with applications in quantum teleportation [230], dense coding [197], information processing [59], privacy protection [129], cryptography [252], networks [281], optics [132], etc.
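The symmetric Bell state, for instance, is two gates away from $|00\rangle$; a NumPy sketch of the standard preparation (illustrative; we take qubit 0 as the control):

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
bell = CNOT @ (np.kron(H, I) @ np.array([1, 0, 0, 0]))   # start from |00>
print(bell)   # [0.707 0 0 0.707]: (|00> + |11>)/sqrt(2), so outcomes 00 or 11 only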
Teleportation Enabled by particles that are in a quantum state and entangled, where an unknown particle state is transferred between far-apart parties, from one party to another, from one particle to another, but the particle itself is not sent. [4] In the procedure for such an event, before teleportation can take place, some source $S$ needs to generate an entangled pair and send the particles to their respective destinations. [196] Then, when quantum communication can begin and data transfer happen, after one side has made a measurement, the other side needs to be contacted via classical channels, bound by no-faster-than-light communication, so as to inform them of the measurement parameters for observation, through which the other side will ultimately receive the quantum data via the obtained state. [215, 196]
Dense Coding A protocol that is dual to teleportation, Fig. 7, and depends on the entanglement described in the EPR experiment; it uses a single qubit in order to transmit two bits (in terms of classical information). [215] If source and destination each hold one particle of a maximally correlated EPR entangled pair, which the source has prepared before sending one particle to the destination, it is then possible to transmit two bits of classical data via only one qubit: the party at the destination applies a unitary operator and returns that particle back to the source, where the party at the source can now jointly measure both particles, that is, the entire EPR pair, and naturally also learn which operator the party at the destination used in order to manipulate the particle it received. [21]
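In the common textbook convention, the encoding party applies one of the four Pauli operations to its half of the pair; a NumPy sketch of the whole protocol (our illustration, with decoding done as the inverse of the Bell-state preparation):

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
bell = CNOT @ (np.kron(H, I) @ np.array([1, 0, 0, 0]))
for bits, U in {(0, 0): I, (0, 1): X, (1, 0): Z, (1, 1): Z @ X}.items():
    encoded = np.kron(U, I) @ bell              # two bits encoded on one qubit
    decoded = np.kron(H, I) @ (CNOT @ encoded)  # Bell measurement
    print(bits, format(int(np.argmax(np.abs(decoded) ** 2)), "02b"))   # bits recovered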
Measurement Disturbing the quantum state by making an observation, Fig. 8, intended or otherwise. [215] Quantum measurement is probabilistic, and it is not an easy task to "pick" the result one would like to receive. [215] Since data from a qubit can only be obtained by measurement, regardless of the superposition of states, in the end it is possible to extract only one classical state, in terms of data, from a qubit–the reason being that when measurement takes place, the superposition collapses and the state changes to one of the basis states. [215] In order to describe the phenomena of quantum mechanics, scientists have used complex numbers, but as the imaginary part of the phenomenon description is not observable in the physical world, out of the four dimensions that we would need for the two base states of quantum computing, only two dimensions remain; thus, the Bloch sphere has three dimensions, two for polarization and one for the base states. [248] New information is, however, coming into focus, as it seems that there are entangled states that are distinguishable only by their imaginary component [270, 271, 212, 39, 135]–as fascinating as these discoveries are, whether the imaginary-number mathematical trick used to facilitate calculations is necessary for the physical world is yet to be determined via the mountain of evidence that future research needs to provide.
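The probabilistic readout itself is easy to model; a minimal sketch of the Born rule (function name ours):

import numpy as np

rng = np.random.default_rng(7)

def measure(state):
    # outcome k occurs with probability |amplitude_k|^2, then the state collapses
    probs = np.abs(state) ** 2
    k = rng.choice(len(state), p=probs / probs.sum())
    collapsed = np.zeros_like(state)
    collapsed[k] = 1.0
    return k, collapsed

plus = np.array([1.0, 1.0]) / np.sqrt(2)
print([measure(plus)[0] for _ in range(10)])   # roughly half 0s and half 1s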
Quantum Gate An operator, also known as a quantum logic gate, is used to both create and manipulate quantum states. [150, 285] It is an elementary quantum circuit that makes operations on a small number of quantum bits. [150, 285] With these, one is building a complex quantum circuit, and this complex circuit is enabling the execution of an algorithm on the quantum machine. [150, 285]
Quantum Circuit Model of computation consisting of a series of qubits (or some sort of quantum data storage), initializations, gates, and measurements. [63, 40, 72]
Quantum Algorithm An algorithm, much like a classical algorithm, that uses quantum effects and represents a sequence of steps, which in turn, by a number of operations, manipulate the initial quantum state for some input, and at the final stage, with measurement being taken, the algorithm returns the correct answer. [4]
Quantum Parallelism The effect present in quantum systems where the amount of parallelism increases exponentially as the size of the system itself, that is, the physical space required, increases linearly. [61] As $n$ qubits allow one to work with $2^{n}$ states at the same time, quantum parallelism is the effect that gives quantum computing its superiority, as it bypasses the classical restriction of the time/space tradeoff by giving an exponential quantity of computation space in a linear quantity of real physical space; therefore, quantum machines can compute solutions to all possibilities at the same time, while classical computers can compute for only one input state at a time. [215]
Interference When measurement is performed on a superposition of output states for a particular input, what one will receive is a random collapse to one state out of all the states in the superposition, Fig. 9, with all other states, that is, values, being destroyed. [215] In this way one cannot reliably compute, and such behavior needs somehow to be guided. Interference allows us to do exactly that: guide towards the desirable output. With interference, it is possible to cause a cancellation between exponentially many parallel input states, the goal being to produce such an interference between states, that is, of the wave function, so as to destroy all undesirable states and collapse into exactly the one we need. [4] The combination of quantum parallelism and interference gives quantum computation tremendous power, and its use in quantum algorithms is essential. [4, 38]
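The simplest instance of such cancellation is two Hadamard gates in a row; a three-line NumPy check (illustrative):

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
ket0 = np.array([1.0, 0.0])
print(H @ ket0)         # [0.707 0.707]: an equal superposition
print(H @ (H @ ket0))   # [1 0]: the |1> amplitudes cancel, only |0> survives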
Decoherence By reason of the inevitable interaction of a quantum system with its environment, the state of a quantum system is extremely fragile, Fig. 10, and thus due to this interaction the quantum nature of the system can be lost–this loss of quantum information, this distortion and collapse of superposition due to the interaction of the quantum system with its surroundings, is called decoherence. [4, 215]
By reading the text to this point, a first quantum computation has already been performed; in fact, probably more than a few were done in one’s mind. This incubation of data and information has not only made one knowledgeable about the subject of quantum computing but has also developed intuition and a crucial way of thinking needed for such a topic as quantum computation. And now, with neurons and pathways of the brain speaking quantum computation, we will deal in a bit more detail with topics that were touched upon, but for which one’s scientific curiosity, trying to decode the universe we live in, wants more.
4 Quantum Effects and the Universe we Live in
Some time has passed since the event, when during one of his talks, Nobel Prize winner Ivar Giaever told the story of his youth and a job that he applied for and received. After getting the job, his mentor told him a story about quantum mechanics, more specifically tunneling–the story was so strange that Giaever's own words best explain his disposition: "I did not believe a word of what he told me, nothing." Giaever got his job and decided to keep quiet, but what is it that his mentor, John Fisher, told young Giaever?
He told him a story of small particles; he told him a story of the underlying laws of physics that are the foundation of the world we live in; he told him that if one were to throw a tennis ball at a wall, that ball would eventually cross the wall and end up on the other side, in the same condition it was in before it went through the wall; and to top it off, he told Giaever that there would be no hole in the wall. Now that was some story, like something from a fairytale, and Giaever's reaction of not believing a word of what he was told was expected.
During that fascinating talk, Giaever expounded: what if one were to take an extremely small particle for a ball, i.e. an electron, and throw that ball toward obstacles that are very close to one another, distanced a few atoms apart, and not touching? In that case, there is a finite probability that the electron will find itself on the other side of the obstacle, never having been in between. That is quantum tunneling, and that is what Giaever shared his Nobel Prize for, in 1973. [176] Quantum effects are real and are typically observable only with very small particles, on the atomic and subatomic level.
Quantum effects are dependent on a wave function and its accompanying probability that a state will be observed. This wave-particle duality was an outstanding discovery with profound consequences that are shaking science to this day. [7] The differences between classical and quantum systems are many, with one of the more intricate being the answer to the question of what one knows about one type of system and, of course, the other. If we know everything about a classical system, including all its characteristics, we naturally know everything about its components; however, this does not hold for quantum systems, which can clearly be seen in a quantum phenomenon called entanglement. [204, 206, 227] One could, for example, have a composite quantum system, i.e. $AB$, and know everything about that system that the laws of physics would allow us to know; despite that fact, if one were to observe just part of the system, i.e. $B$, the information needed to completely characterize that part of the system is missing, as the series of expectations for the subsystem depends on an unknown value of the variable for some other subsystem, in this instance on the observation of $A$. [227, 204]
This series of expectations, being a consequence of superposition, is linked to entanglement in a meaningful way. The extra states, with no analog in a classical system, that lead "to the exponential size of the quantum state space are the entangled states". [215] In this way, by undergoing initialization, quantum state transformations, and measurement, a quantum system achieves its result. [215]
Even though we live in the quantum world, seldom do we think about it; the macroscopic world we are surrounded by is not isolated from its own surroundings and is therefore in uninterrupted interaction with the environment, meaning it is continually measured, the aforementioned phenomenon called decoherence. [204] Such a quantum system, continually being observed, represents a system known from the dawn of time and "is well described by classical physics." [204] Though "weird," a vast number of experiments have shown that quantum mechanics correctly describes physical reality. In the fight against decoherence in quantum computers, a breakthrough came, but not from the physical side, as was perhaps expected. [215] It was theorized by some that quantum error correction is beyond our abilities "because of the impossibility of reliably copying an unknown quantum state", yet it was not so, as it is possible via error-correcting techniques to design error-correcting codes by which one can detect specific errors and reconstruct "the exact error-free quantum state." [215]
And so this battle between decoherence and superposition continually "rages." A quantum system can perform an enormous amount of computation in parallel, but accessing the desired result is far from easy. [215, 157] In order to read the result, the quantum state is disturbed, only one of those parallel threads is read, and as the measurement is probabilistic in nature, "we cannot even choose which one we get." [215] It is, however, possible to skillfully deal with the problem of measurement and thus exploit quantum parallelism; "this sort of manipulation has no classical analog and requires nontraditional programming techniques." [215] Shor's factorization algorithm manipulates quantum states in such a way that the "common property of all of the output values can be read off" [215], steering toward the output one would like to achieve, while, for example, Grover's search algorithm manipulates quantum states through amplification, by which the probability that the result of interest will be read is increased. [87, 232, 215]
Basic operations in any classical algorithm are data copying and data deletion. While trying to project this onto a quantum computer, one hits a brick wall, as this is not possible in a quantum system; perfect copying of an unknown quantum state is an intrinsic impossibility, as per the no-cloning theorem, not just a limitation of laboratory conditions. [215, 223] If, on the other hand, we disregard the notion that the copy needs to be perfect (producing a perfect copy of a limited number of quantum states with probability $<1$), then one can devise an apparatus (a copier or cloner) by which copying can be conducted, reproducing the desired state through an approximation or to a degree of probability. [98]
As one might presume, with copying being such a fundamental operation, data deletion also differs substantially from the classical case. If one assumes that there are two identical copies of an arbitrary and unknown quantum state to be deleted, this process actually cannot be accomplished (as per the no-deleting theorem), aside from deleting approximately, as is the case for quantum cloning; however, just as is the case for cloning, the process of deletion is possible if one deals with known orthogonal states. [221, 192] This inability to clone or delete quantum information, only to express possibilities already in existence, postulates the conservation of quantum information, as information can be neither created nor destroyed. [192, 288]
Related to cloning and deleting quantum information is the inability to hide information, known as the no-hiding theorem. [192] If a quantum system interacts with its surroundings and loses information, that information is not actually missing; it simply resides somewhere else in the universe; that is, correlations between the system and the environment are not able to hide information. [192, 288]
In spite of all the hurdles we go through when trying to discover new knowledge and understand the universe in which we are, this same quantum universe works perfectly and with mind-boggling precision, with quantum computers being devised and in operation. That being said, scientific discovery and painstaking experimentation have produced criteria for the successful implementation of a device that would be called a quantum computer; they are found in [63], and are as follows:
I. "A scalable physical system with well characterized qubits", that is, a collection of qubits with physical parameters that are accurately known,

II. "The ability to initialize the state of the qubits to a simple fiducial state, such as $\ket{000\ldots}$", that is, initializing quantum registers to a known value before one starts computing,

III. "Long relevant decoherence times, much longer than the gate operation time", that is, dynamics with the environment brings about quantum-state decay, so decoherence must be slow enough that quantum computation remains possible,

IV. "A 'universal' set of quantum gates", that is, a set of quantum gates from which any quantum operation can be implemented via a finite sequence of gates,

V. "A qubit-specific measurement capability", that is, the capacity to measure specific qubits.
In addition to the previous five, two additional criteria are added, namely "the ability to inter-convert stationary and flying qubits" and "the ability to faithfully transmit flying qubits between specified locations", in order to achieve quantum communication, as not all information processing is only computation. [63] The need for the additional two criteria is clearly seen in quantum key distribution [20] and quantum cryptography [63]. It is, however, not an easy task to transmit a qubit from one place to another, and when this is done, decoherence presents an important hurdle to overcome. [82, 204]
In spite of all of its strangeness, quantum mechanics has withstood the test of time, and for the time being, it stands supreme. But just as is the case for the theory of relativity and Newtonian physics, so is the case for quantum and classical physics; both are needed. In fact, classical is quantum, but simply for large objects, for which wavelengths are so small that they cannot be measured. Thus, if something functions specifically, it does not mean that it functions generally, but if it does not function generally, it does not mean that it is not useful. The next section is most definitely useful, as it deals with quantum gates and algorithms.
5 Computation with Quantum Gates
Fundamentally speaking, as is the case when one does classical computation, by analogy, so is the situation for quantum computation: in order to manipulate quantum information, one needs quantum gates that then form a quantum circuit and consequently a quantum algorithm. There is a myriad of quantum gates, e.g. Identity (I), Not (NOT or PauliX), Controlled Not (CNOT), Controlled Controlled Not (CCNOT or Toffoli), Swap (SWAP or S), Hadamard (H), Phase (P), etc. [145, 245], with some being single-qubit gates while others operate on multiple qubits.
Before we proceed into a more in-depth look at quantum computation, we will first expound on a number of quantum gates, as this knowledge is essential for understanding quantum circuits. Let's start with the quantum gate whose classical equivalent should be known to every computer expert and physicist: the NOT gate. Let us assume that the superposition states we will use from now on shall be $\ket{0}$ and $\ket{1}$, with $\ket{\psi}=a\ket{0}+b\ket{1}$. This basis is called the computational or standard basis and is represented in three-dimensional space by the Z axis, therefore the Z-basis, which is "generally the only basis in which we can make measurements of the system." [53]
NOT The Not gate is a single-qubit gate. [145] It is also denoted PauliX (named after Wolfgang Pauli, who received the Nobel Prize in Physics in 1945 for proposing "that no two electrons in an atom could have identical sets of quantum numbers" that correspond to "distinct states of energy and movement" [170]), as the operation it performs is a rotation by $\pi$ radians around the $X$ axis. [145] As a consequence of this rotation, there is a mapping, $\ket{0}\rightarrow\ket{1}$ and $\ket{1}\rightarrow\ket{0}$. [145] The transformation matrix used in order to calculate an output for the gate and its input is [145],
$NOT=\begin{bmatrix}0&1\\1&0\end{bmatrix}$ (1)
H The Hadamard gate is a single-qubit gate. [145] Known also as the Walsh-Hadamard gate (named after Jacques Hadamard [113] and Joseph Walsh [103]), the gate performs an operation of superposition; for a basis state, the superposition that is created is equal in probability. [145, 113] The superposition is created by making a rotation of $\pi$ radians around the axis between the $X$ axis and the $Z$ axis. [265] As a consequence of this superposition operation, there is a mapping, $\ket{0}\rightarrow\frac{\ket{0}+\ket{1}}{\sqrt{2}}$ and $\ket{1}\rightarrow\frac{\ket{0}-\ket{1}}{\sqrt{2}}$. [145] If we apply the Hadamard operation twice, a particle is placed into a superposition of states and then returned to its original state. [231] The transformation matrix used in order to calculate an output for the gate and its input is [145],
$H=\dfrac{1}{\sqrt{2}}\begin{bmatrix}1&1\\1&-1\end{bmatrix}$ (2)
P The Phase gate is a single-qubit gate. [145] It is known also as the Phase Shift gate, as the gate performs an operation of shifting a qubit's phase while the probabilities for the qubit stay unchanged, that is, the probabilities for the basis states $\ket{0}$ and $\ket{1}$ remain the same. [145] As the phase is shifted, there is a mapping, $\ket{0}\rightarrow\ket{0}$ and $\ket{1}\rightarrow e^{i\theta}\ket{1}$, with $\theta$ being a phase shift and the period being $2\pi$. [145, 75] The transformation matrix used in order to calculate an output for the gate and its input is [145],
$P_{\theta}=\begin{bmatrix}1&0\\0&e^{i\theta}\end{bmatrix}$ (3)
The term $e^{i\theta}$ is part of the well-known Euler's formula, $e^{i\theta}=\cos(\theta)+i\sin(\theta)$ (a complex number $x+yi$ that has magnitude $1$ can be stated via the aforementioned formula), with such numbers sitting on the unit circle in the complex plane, closing an angle $\theta$ with the axis of the circle. [247]
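Since the phase gate's action follows directly from Euler's formula, a quick numerical check may help build intuition. The following is a minimal sketch, assuming numpy; the value of theta is an arbitrary illustrative choice:

```python
import numpy as np

theta = np.pi / 4  # an arbitrary phase shift, chosen for illustration

# Euler's formula: e^{i*theta} = cos(theta) + i*sin(theta)
print(np.isclose(np.exp(1j * theta), np.cos(theta) + 1j * np.sin(theta)))  # True

# The phase gate leaves |0> untouched and multiplies |1> by e^{i*theta};
# since |e^{i*theta}| = 1, the measurement probabilities are unchanged.
P = np.array([[1, 0],
              [0, np.exp(1j * theta)]])
psi = np.array([1, 1]) / np.sqrt(2)  # equal superposition of |0> and |1>
out = P @ psi
print(np.abs(out) ** 2)              # [0.5 0.5] -- probabilities preserved
```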
I The Identity gate is a single-qubit gate. [248] This gate does not modify the quantum state in any way; it is typically used in a quantum circuit when we want to show what is happening to a qubit at a certain step or when we want to cause a delay (which researchers sometimes want in order to "calculate measurements of the decoherence of a qubit"). [248] The transformation matrix used in order to calculate an output for the gate and its input is the identity matrix [248],
$I=\begin{bmatrix}1&0\\0&1\end{bmatrix}$ (4)
CNOT The Controlled Not gate is a two-qubit gate. [145] This gate is very similar to the Not gate, the difference being that the target qubit is flipped only if the first qubit is in an excited state, that is, there is a mapping, $\ket{00}\rightarrow\ket{00}$, $\ket{01}\rightarrow\ket{01}$, $\ket{10}\rightarrow\ket{11}$, and $\ket{11}\rightarrow\ket{10}$. [215] The transformation matrix used in order to calculate an output for the gate and its input is [215],
$CNOT=\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&0&1\\0&0&1&0\end{bmatrix}$ (5)
S The Swap gate is a two-qubit gate. As the name suggests, this gate performs an operation of swapping the values of two qubits; the order of the qubits is not important for this gate. [145] There is also a version of the Swap gate called the Fredkin gate (a three-qubit gate [145]), which performs a controlled swap. [215] The transformation matrix used in order to calculate an output for the Swap gate and its input is [145],
$S=\begin{bmatrix}1&0&0&0\\0&0&1&0\\0&1&0&0\\0&0&0&1\end{bmatrix}$ (6)
CCNOT The Controlled Controlled Not gate is a three-qubit gate. Similarly to the two-qubit Controlled Not gate, this gate takes two control qubits, and depending on the values of these, the value of a third qubit is flipped; that is, iff the first two qubits have a value of 1, then the value of the third qubit is flipped. [215] This gate is also known by the name Toffoli gate. [215] The transformation matrix used in order to calculate an output for the gate and its input is [248],
$CCNOT=\begin{bmatrix}1&0&0&0&0&0&0&0\\0&1&0&0&0&0&0&0\\0&0&1&0&0&0&0&0\\0&0&0&1&0&0&0&0\\0&0&0&0&1&0&0&0\\0&0&0&0&0&1&0&0\\0&0&0&0&0&0&0&1\\0&0&0&0&0&0&1&0\end{bmatrix}$ (7)
While not all of these will be used in our own calculations, they represent some of the elementary quantum gates and foundational quantum operations and are therefore mentioned as part of one's necessary quantum arsenal. For a number of other quantum gates, both frequently and infrequently in use, one can consult [53]. It is useful to have quantum transformations represented graphically; therefore, single-qubit operations are typically represented graphically by labeled boxes, while multiple-qubit operations are typically represented by circles, marks, and lines; as other authors have dealt with this satisfactorily, we will not repeat it here. [215, 53]
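Before performing the following calculations by hand, it may help to collect the above transformation matrices in code. The block below is a minimal sketch, assuming numpy; the variable names are illustrative and mirror Equations 1-6:

```python
import numpy as np

# Single-qubit gates (Equations 1-4)
NOT = np.array([[0, 1],
                [1, 0]])
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
I = np.eye(2)

def P(theta):
    """Phase gate for an arbitrary shift theta (Equation 3)."""
    return np.array([[1, 0],
                     [0, np.exp(1j * theta)]])

# Two-qubit gates (Equations 5-6)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])

# Applying a gate is a matrix-vector product on the amplitude vector;
# multi-qubit states are built with the tensor (Kronecker) product.
ket0 = np.array([1, 0])
print(NOT @ ket0)           # [1 0] -> [0 1], i.e. |0> -> |1>
print(np.kron(ket0, ket0))  # the four-dimensional amplitude vector of |00>
```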
In order to know the output of a quantum algorithm, we need to be able to calculate that result, just like with a classical algorithm. There is, however, a twist in the quantum situation. Since we are dealing with particle states and quantum operations represented by matrices, we transform input into output by writing the probability amplitudes in vector notation, calculating the tensor product of the expression, and then performing matrix multiplication, which transforms the amplitudes, which in turn changes the probability density and the end result. Let us therefore perform a few interesting calculations.
If we had a qubit that we wanted to place into a superposition of states, we would use the Hadamard gate, abbreviated as H. Following the aforementioned procedure, we first need a qubit. Let us therefore define the following qubit, $\ket{\psi_0}=1\ket{0}+0\ket{1}$. On this qubit, one now needs to apply the H gate; an operation needs to be performed on the operand so as to achieve the desired result, namely, superposition. By placing the qubit amplitudes into a column vector and using the H gate matrix, we will have the following.
$\dfrac{1}{\sqrt{2}}\begin{bmatrix}1&1\\1&-1\end{bmatrix}\begin{bmatrix}1\\0\end{bmatrix}=\begin{bmatrix}\frac{1}{\sqrt{2}}\\\frac{1}{\sqrt{2}}\end{bmatrix}$ (8)
Such a result has given us a qubit in a superposition; thus, by performing the above multiplication, we have $\ket{\psi_1}=\frac{1}{\sqrt{2}}\ket{0}+\frac{1}{\sqrt{2}}\ket{1}$. Perfect, we have a qubit in a superposition with both states having the same amplitude, and by squaring the state values, we see that both states have a 50% chance of being observed after superposition collapse. By applying the H gate again, which one can freely try, the original state would again be a reality. It is also good to note here that a qubit is always in a superposition, although because one of the amplitudes is 0, the original state is often not called such.
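As a check, the multiplication of Equation 8 and the round trip $H(H\ket{0})=\ket{0}$ can be reproduced numerically. A minimal sketch, assuming numpy:

```python
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
psi0 = np.array([1, 0])   # |psi_0> = 1|0> + 0|1>

psi1 = H @ psi0           # the multiplication of Equation 8
print(psi1)               # [0.707... 0.707...], i.e. equal amplitudes
print(np.abs(psi1) ** 2)  # [0.5 0.5] -- a 50% chance for each outcome

# Applying H a second time returns the original state (up to rounding)
print(H @ psi1)           # [1. 0.]
```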
With the Hadamard-gained superposition state, we can proceed to another operation. Let us next perform the CNOT operation. We know what the CNOT gate does, and we also know that such a gate is a two-qubit gate. With that in mind, we will define one more qubit, $\ket{\psi_2}=1\ket{0}+0\ket{1}$. By placing the qubit amplitudes into a column vector and using the CNOT gate matrix, we will have the following.
$\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&0&1\\0&0&1&0\end{bmatrix}\begin{bmatrix}\frac{1}{\sqrt{2}}\\0\\\frac{1}{\sqrt{2}}\\0\end{bmatrix}=\begin{bmatrix}\frac{1}{\sqrt{2}}\\0\\0\\\frac{1}{\sqrt{2}}\end{bmatrix}$ (9)
Therefore, the final state of the operation performed is $\ket{\psi_3}=\frac{1}{\sqrt{2}}\ket{00}+0\ket{01}+0\ket{10}+\frac{1}{\sqrt{2}}\ket{11}=\frac{1}{\sqrt{2}}\ket{00}+\frac{1}{\sqrt{2}}\ket{11}$. The situation we have here is different than the usual CNOT gate example given above, yet if we think about what has actually happened, this is exactly the result one would expect. We have stated that the CNOT gate will flip the target qubit only if the first qubit is raised, which is 1. Here we had a qubit that served as a control in a superposition, with equal amplitudes, while the target qubit was in a state of $1\ket{0}$. Therefore, as the control is in a superposition, if the control is 0, then the second qubit stays the same, while if the control is 1, the target is raised to 1, which means that the resulting states need to be $\ket{00}+\ket{11}$, precisely what we have obtained by performing the calculation. And as the amplitudes are $\frac{1}{\sqrt{2}}$, this has "spilled" over to the transformed state $\ket{\psi_3}$. The two states of $\ket{\psi_3}$ whose amplitudes are 0 do not represent a logical outcome, as those tensor product pairs are not in line with the CNOT gate operation.
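The same result can be verified numerically: the tensor product is formed first, then the CNOT matrix is applied. A minimal sketch, assuming numpy:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

control = H @ np.array([1, 0])  # |psi_1> = (|0> + |1>)/sqrt(2)
target = np.array([1, 0])       # |psi_2> = |0>

# Amplitudes over the basis |00>, |01>, |10>, |11>
joint = np.kron(control, target)
psi3 = CNOT @ joint
print(psi3)                     # [0.707 0. 0. 0.707] -- the entangled |psi_3>
```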
By observing what has happened with the CNOT gate calculation, one might wonder what else might be in store with various gates and qubit states. We will therefore perform one more operation, and that operation will be Swap, denoted with S. The Swap gate is a two-qubit gate that swaps qubit states. This time, let us take the qubit with the state $\ket{\psi_1}=\frac{1}{\sqrt{2}}\ket{0}+\frac{1}{\sqrt{2}}\ket{1}$ and a qubit with the same state, denoted $\ket{\psi_4}$. Yet again, by placing the qubit amplitudes into a column vector and using the S gate matrix, we will have the following.
$\begin{bmatrix}1&0&0&0\\0&0&1&0\\0&1&0&0\\0&0&0&1\end{bmatrix}\begin{bmatrix}\frac{1}{2}\\\frac{1}{2}\\\frac{1}{2}\\\frac{1}{2}\end{bmatrix}=\begin{bmatrix}\frac{1}{2}\\\frac{1}{2}\\\frac{1}{2}\\\frac{1}{2}\end{bmatrix}$ (10)
The final state of the operation performed is $\ket{\psi_5}=\frac{1}{2}\ket{00}+\frac{1}{2}\ket{01}+\frac{1}{2}\ket{10}+\frac{1}{2}\ket{11}$. This example of the S gate is perhaps not as intuitive as one where the qubits are in extreme states, that is, in $\ket{0}$ and in $\ket{1}$, with a probability of 1. However, we observe that amplitudes are present for every individual state of both qubits, which means that every tensor product pair needs to be a candidate for swapping, and as can be seen from the result, they are all there, on the other side, as an output. Taking into account the amplitude values and their equality, the probability of observing a particular swapped state as a result also needs to be equal, which it is. If we take, for example, the state $\ket{01}$ with the amplitude of $\frac{1}{2}$, by squaring the amplitude and thus obtaining the probability of observing that outcome, we have $\frac{1}{4}$; and that is exactly what we expect as per our own reasoning, as input and output are linked.
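Equation 10 can be checked the same way; since both qubits carry equal amplitudes, swapping them leaves the joint state unchanged. A minimal sketch, assuming numpy:

```python
import numpy as np

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])

plus = np.array([1, 1]) / np.sqrt(2)  # |psi_1> = |psi_4> = (|0> + |1>)/sqrt(2)
joint = np.kron(plus, plus)           # all four amplitudes equal 1/2

psi5 = SWAP @ joint
print(psi5)                           # [0.5 0.5 0.5 0.5]
print(np.abs(psi5) ** 2)              # each outcome has probability 1/4
```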
In the previous examples, we were performing calculations in a single sequence, but it is also possible to perform them in parallel and then, at some point, "merge" results and continue, for example, in a single sequence. How an algorithm will look depends on the problem and the designer of the algorithm. By constructing a quantum circuit, one can manipulate events and, in turn, the probability of amplitude states, transforming input into output and a problem into a solution. Therefore, with that in mind, we will next show an often-used algorithm design pattern useful for solving various quantum conundrums, namely the Bernstein-Vazirani design pattern.
5.1 Bernstein–Vazirani Algorithm Design Pattern
One might think that quantum computers have an upper hand over classical computers in terms of computability; however, this is not the case. [257] Every problem that a quantum machine can solve can also be solved on a classical computer, thus not making a quantum machine superior in that respect; as a consequence, problems that are undecidable in the classical case, which are the hardest problems in existence [159], are also undecidable for quantum computers. [257] What makes quantum computers of interest are superposition, quantum parallelism, and entanglement, as these make quantum machines perform faster. [159, 257, 4]
"The heart of any quantum algorithm is the way in which it manipulates quantum parallelism so that desired results will be measured with high probability." [215] What brings us to the Bernstein-Vazirani algorithm, which uses superposition, quantum parallelism, and an effect called phase-kickback, so as to achieve its result. [23] These manipulations have no analog in the classical computer world; therefore, a quantum computer is necessary to bring the aforementioned algorithm into reality. [215]
Consider the problem of determining the value of each character in a bit string. [158] For example, one might have the following string, $1100$. The question then is: what algorithm could we devise in order to determine in which places the string has a raised bit? As the reader might already guess, we would need to perform a logical conjunction for every bit, as presented in the following equation. [23, 158]
$\begin{matrix}1100\\ \&\,0001\\ \hline 0\end{matrix}\qquad \begin{matrix}1100\\ \&\,0010\\ \hline 0\end{matrix}\qquad \begin{matrix}1100\\ \&\,0100\\ \hline 1\end{matrix}\qquad \begin{matrix}1100\\ \&\,1000\\ \hline 1\end{matrix}$ (11)
And now, by reading from the back, we have the result, $1100$; the original bit string is decoded. It is clearly seen from the example that for an $n$-bit string, we would need $n$ operations to find the source bit string; that is, with a linear increase of the input, the complexity of the algorithm increases linearly. This is not an inefficient algorithm; however, for a bit string of length $10^9$ the number of steps that need to be performed is substantial, and this is where a quantum computer can excel. By employing the characteristics that a quantum machine has, the aforementioned algorithm could be adapted and the entire calculation done in only one step; thus, regardless of the input string, if the quantum machine can match the problem, the calculation would be completed in one step only. This is outstanding, and the procedure that accomplishes it is called the Bernstein-Vazirani algorithm; for a visual representation, one can consult Figure 12.
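For contrast with the quantum version, the classical $n$-query procedure just described can be sketched in a few lines. The oracle below is an illustrative stand-in for the black box; the names and setup are assumptions, not part of the algorithm's original presentation:

```python
# Classical decoding of a hidden bit string: one conjunction query per position.
secret = "1100"  # the hidden string; in practice only the oracle would know it

def oracle(probe):
    """Bitwise AND of the hidden string with the probe string."""
    return [int(s) & int(p) for s, p in zip(secret, probe)]

n = len(secret)
decoded = ""
for i in range(n):  # n queries -- linear in the length of the string
    probe = "0" * i + "1" + "0" * (n - 1 - i)
    decoded += str(max(oracle(probe)))  # 1 iff the probed position is raised

print(decoded)  # "1100" -- recovered one bit per query
```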
Before we perform some calculations, let us expound on the fundamental idea behind the Bernstein-Vazirani algorithm. Since our string is four bits long, we need four qubits for the quantum algorithm as well. The quantum algorithm, however, needs one additional qubit through which the essence of the algorithm will be delivered. All the qubits are at the beginning in the ground state of $\ket{0}$. These qubits are then placed in a superposition of values, while the last qubit is first placed in $\ket{1}$ and then into a superposition, which means that the last qubit has a phase added to its superposition, and this is crucial.
In the next series of operations, CNot gates are added to every qubit on which we need to decode a 1, an excited state, with the last qubit, the qubit with a phase in its superposition, being the target of each CNot. This part of the algorithm is where the flash happens, as the phase from the target qubit transfers onto the control qubits; the target has had an influence on the control. This unexpected event is known under the name phase kickback [189] and is a crucial part of the algorithm. When we, after this step, perform an additional step with the Hadamard gate and bring the qubits out of superposition, the phase kickback will have, as a consequence, a qubit in the state $\ket{1}$ where before it was $\ket{0}$. By making measurements on the qubits, as the last step of the algorithm, we will read the final state and receive the desired result of the decoded string. This read data is then stored on classical storage. And so, by using the phase-kickback effect, we were able to detect a desirable characteristic and make a transformation by which the end result was obtained. [189, 23]
By performing the actual calculations, it can be seen more clearly why this happens and what the algorithm's inner workings are. At the very start of the algorithm, we need to place the qubits into superposition; as we have already shown this in Equation 8, and as it is quite clear what will happen by applying the Not operation from Equation 1, these steps will be skipped. Suffice it to say, the Hadamard gate will produce $\ket{\psi_{1,2,3,4}}=\frac{1}{\sqrt{2}}\ket{0}+\frac{1}{\sqrt{2}}\ket{1}$, the X gate will produce $\ket{\psi_5}=0\ket{0}+1\ket{1}$, and the Hadamard applied after the X gate will produce $\ket{\psi_5}=\frac{1}{\sqrt{2}}\ket{0}-\frac{1}{\sqrt{2}}\ket{1}$; with this, the first series of operations is finished, and now we are onto phase kickback.
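These three preparatory states can be confirmed with a two-line computation. A minimal sketch, assuming numpy:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
ket0 = np.array([1, 0])

print(H @ ket0)        # |psi_1..4> = (|0> + |1>)/sqrt(2)
print(H @ (X @ ket0))  # |psi_5> = (|0> - |1>)/sqrt(2), the phased ancilla
```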
In the second series of steps, the CNot gate is applied to the qubits where we need to decode a 1, and so we have $\ket{\psi_{3,4}}=\frac{1}{\sqrt{2}}\ket{0}+\frac{1}{\sqrt{2}}\ket{1}$ as the control for their respective CNot gates, while we have $\ket{\psi_5}=\frac{1}{\sqrt{2}}\ket{0}-\frac{1}{\sqrt{2}}\ket{1}$ as the target in both instances. By performing the tensor product $\ket{\psi_3}\otimes\ket{\psi_5}$ we have the following.
$\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&0&1\\0&0&1&0\end{bmatrix}\begin{bmatrix}\frac{1}{2}\\-\frac{1}{2}\\\frac{1}{2}\\-\frac{1}{2}\end{bmatrix}=\dfrac{1}{2}\ket{00}-\dfrac{1}{2}\ket{01}-\dfrac{1}{2}\ket{10}+\dfrac{1}{2}\ket{11}$ (12)
By applying the CNot gate, the probabilities have not changed, and if we were to measure the states now, at this moment, nothing extraordinary would happen. But if we observe the mixed state more closely, a change of phase has happened, and this is exactly what we wanted; the target has influenced the control, and thus we have $\ket{\psi_3}=\frac{1}{\sqrt{2}}\ket{0}-\frac{1}{\sqrt{2}}\ket{1}$. This brings us to the last step, just before we are ready to measure the result. If we apply the H gate one more time, we will reverse the superposition, yet as we have changed the phase of certain qubits, these will no longer collapse to their original state but to the opposite one. Let us collapse $\ket{\psi_3}$.
$\dfrac{1}{\sqrt{2}}\begin{bmatrix}1&1\\1&-1\end{bmatrix}\begin{bmatrix}\frac{1}{\sqrt{2}}\\-\frac{1}{\sqrt{2}}\end{bmatrix}=\begin{bmatrix}0\\1\end{bmatrix}$ (13)
This makes the final state $\ket{\psi_3}=0\ket{0}+1\ket{1}$, a 100% chance of observing 1, by which the desired result was obtained; the binary string was decoded, and the information now only needs to be recorded, an operation conducted via a classical storage device. The string $1100$ was the one to find, and the string $1100$ was the one found; the algorithm works as intended.
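The whole four-bit walkthrough can be reproduced as a small statevector simulation. The sketch below assumes numpy; for brevity, the kickback step is applied directly as the phase $(-1)^{s\cdot x}$ it imprints on each basis state, rather than via explicit 32-by-32 CNOT matrices, and all names are illustrative:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
I = np.eye(2)

def kron_all(ops):
    """Tensor a list of single-qubit operators into one big matrix."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

secret, n = "1100", 4
state = np.zeros(2 ** (n + 1))
state[0] = 1  # |00000>: four data qubits plus the ancilla (last position)

# Step 1: H on the data qubits; X then H on the ancilla.
state = kron_all([H] * n + [H @ X]) @ state

# Step 2: phase kickback -- each CNot from a '1' position of the secret
# to the |-> ancilla multiplies the basis state |x> by (-1)^(s.x).
for idx in range(2 ** (n + 1)):
    data_bits = format(idx >> 1, f"0{n}b")  # the data-qubit part of |x>
    parity = sum(int(s) & int(b) for s, b in zip(secret, data_bits)) % 2
    state[idx] *= (-1) ** parity

# Step 3: H on the data qubits again, then read them off.
state = kron_all([H] * n + [I]) @ state
probs = np.abs(state) ** 2
print(format(np.argmax(probs) >> 1, f"0{n}b"))  # "1100" on the data qubits
```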
Phase kickback is a mechanism often found in quantum algorithms, for example in Grover's [87, 93] and in Deutsch-Jozsa [61, 189], and it is therefore important to conquer this design pattern. The basic idea behind such algorithms is to develop a quantum "oracle" that will apply the negative phase only to a state one is looking for, which is by no means an easy task; when that is achieved, we can perform, as necessary, amplitude amplification, diminishing the undesirable amplitudes and increasing the desirable ones. This is the way a quantum computer increases the probability of success, ensuring a sought-after result is measured with high probability. [87, 93, 61, 189, 232]
6 Questions that Puzzle the Mind
Among the many intriguing problems in quantum computing that are in need of solving, there are some that are of special interest for us here. In spite of all the accomplishments, a more reliable and broader reality of quantum computers is still a dream. The main issues standing in the way of quantum computer construction are the following:
I. "The possibility in principle to construct a scalable quantum computer." [222]

II. "Instability (decoherence) because of the influence of external environment." [222]

III. "A physical implementation of a scalable quantum computer with a sufficient (for practical problems) number of jointly operating qubits." [222]

IV. "The uncertainty of the degree of dependence of errors since a very fast accumulation of errors with increasing the number of qubits will give no way to obtain the sought-for result when executing computations with an acceptable number of repetitions." [222]

V. "The construction of new mathematical algorithms that will allow to considerably accelerate computations and the search for solutions for a wide class of problems." [222]
Another area of research that is quite significant is finding the position of quantum computation with regard to classical computation in terms of computational cost and complexity classes, as well as exploring the limitations of models of computation. [207] As such, there exists a complexity class BQP (bounded-error quantum polynomial time) that consists of decision problems that can be solved by a quantum machine in polynomial time, with the probability of a correct answer being $\geq\frac{2}{3}$. [30, 277] This complexity class is a quantum analogue of the classical BPP (bounded-error probabilistic polynomial time), which "consists of problems for which there exists a polynomial-time Atlantic City algorithm with a two-sided error." [124] These classes are related in the following way, $BPP\subseteq BQP$, with both classes belonging to PSPACE and needing a polynomial amount of space. [124] The question of BQP and its relation to NP is a more difficult matter. There are indications for NP perhaps being contained in BQP, as well as results suggesting the opposite; this still represents a question that is considered unresolved in science. [277]
Quantum computation is fascinating from yet another perspective, namely its very basis, quantum mechanics. Quantum mechanics is the fundamental theory in physics describing nature at the smallest of scales, at the atomic and subatomic level [71], yet it seems that even quantum mechanics does not give all the answers, and not only by reason of Gödel's incompleteness theorems [242]. There are certain aspects that escape us, at least for the time being, with entanglement and non-locality representing parts of the picture. [194] The issue is, however, broader, as the theory of quantum mechanics fails to address the question of "how even a single particle, by being in a given quantum state, causes the frequency distribution of measurement values specified by the state." [194] And so the never-ending pursuit in science, for new knowledge and discoveries, continues.
7 Moving Forward
It is tempting to think that one should use a quantum computer for every problem and every task; quantum computers, however, are not a key that fits every lock. There are problems that naturally fit quantum computing and those that do not. The most obvious application of a quantum computer is naturally quantum simulation [78]. By using a quantum computer, one can cope well with complexity that overwhelms a classical machine. Examples of such modeling include superconductivity [107], chemical processes [13], photosynthesis [262], physics processes [78], cosmology [137], etc. Other, more classical examples, so to speak, are cryptography [195], optimization [134], search [276], and also machine learning and artificial intelligence [46, 65].
There are two main types of quantum computer implementation: universal [126], and non-universal [222]. "The main distinction is that universal quantum computing devices are developed with a view to executing arbitrary allowed operations and solving arbitrary problems; while non-universal computing devices are created to solve some limited class of problems, for example, to optimize definite machine learning algorithms." [222]
These quantum machines can be implemented in various ways, with different physical technologies in mind, like trapped ions, superconductors, or photons. [213] Each individual technology has its ups and downs; in each case, however, quantum computers "are very hard to build"; with the thread that permeates all implementations being quantum noise. [213] "Quantum mechanical states are extremely fragile and require near-absolute isolation from the environment; such conditions are hard to create and typically require temperatures near absolute zero and shielding from radiation." [213] Which makes quantum computers expensive to build and difficult to operate. [213] As the size of a quantum computer increases, so do the challenges, which get mounted one upon the other (in terms of the "number of qubits and the length of time they must be coherent"). [213]
When computation is being done on a quantum machine, that is, on encoded states, "qubits interact with each other through the gates, and this way errors can propagate through the gates, from one qubit to another." [4] In such a manner, an error can quickly spread to all of the qubits. [4] To solve this problem, computation and error correction can be performed in a distributed way so that "each qubit can effect only a small number of other qubits." [4] An estimate was made that "more than 99% of the computation performed by a quantum computer will be for error correction." [213, 122] If that is the case, then the calculations that a quantum computer should perform become of secondary nature, thus making the goal of fault-tolerant quantum operations extremely important. [122] Taking that fact into account, quantum practicality will be a difficult goal to achieve, as a commercial quantum computer would need thousands to millions of qubits; efforts are, however, being made to solve the issue. [100, 35]
Quantum computers have limitations that go beyond their applicability. In spite of having a general scheme for speeding up computation, a quantum computer is not expected to solve NP-hard optimization problems efficiently and in an exact manner. [204, 96] In order to make quantum practicality a reality, significant algorithmic improvements are yet to be achieved, while "due to limitations of input and output bandwidth, quantum computers will be practical for "big compute" problems on small data, not big data problems." [100] Nevertheless, through continuing progress and innovation, it is expected that a quantum computer able to break RSA-4096, with a probability of $\frac{1}{2}$, will be constructed within the next 10-15 years. [222] With that in mind, it is necessary to already prepare options for replacement so as to ensure post-quantum cryptography viability. [222]
In the meantime, until commercial quantum computers are a reality, it is possible to create variational quantum algorithms that try to merge the classical and quantum approaches to problems. [36] In order to deal with the limitations of quantum computers, such as the limit on the number of qubits and the limit on circuit depth imposed by noise, a variational quantum algorithm can be used instead. [36] Such an algorithm uses "a classical optimizer to train a parameterized quantum circuit." [36] In spite of the challenges of these algorithms as well, like trainability, accuracy, and efficiency, they are, for the short term at least, perhaps the best option for making the quantum dream a reality in the here and now. [36]
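To make the loop concrete, here is a toy sketch of the classical-quantum interplay, assuming numpy: a one-parameter circuit $R_y(\theta)\ket{0}$ is simulated, and a classical optimizer tunes $\theta$ to minimize the expectation value of $Z$. Everything here is illustrative; a real variational algorithm would evaluate the circuit on quantum hardware and optimize many parameters:

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]])  # the observable whose expectation we minimize

def Ry(theta):
    """Single-qubit rotation around the Y axis by angle theta."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

def energy(theta):
    psi = Ry(theta) @ np.array([1, 0])  # "run" the parameterized circuit
    return psi @ Z @ psi                # expectation value <psi|Z|psi>

theta, lr = 0.1, 0.4
for _ in range(100):
    # Parameter-shift rule: the exact gradient from two circuit evaluations.
    grad = (energy(theta + np.pi / 2) - energy(theta - np.pi / 2)) / 2
    theta -= lr * grad

print(theta, energy(theta))  # theta -> pi, <Z> -> -1, i.e. the state |1>
```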
In order to start building quantum algorithms now, the following resources represent possible starting points. In [142] one can read about the quantum singular value transformation (QSVT), which represents a general framework for a number of quantum algorithms, with the possibility of suggesting a unification of quantum algorithms. [142] The following materials represent practical and hands-on foundational experience in quantum computing: [248], [97], [143], [108], [106], [147], [84].
8 Few Last Words
It was the goal of this research to present to the scientific community an in-depth historical and current survey of quantum computing, with a special emphasis on foundational concepts that are difficult to grasp, while also gazing into the future; and almost all of it has been done, from history to terminology, from quantum effects to quantum computation, and from the standard model algorithmics to the related literature. It is therefore left for us to touch upon closing issues, consider open questions, and draw conclusions.
Even though, at times, it might seem hopeless that a true, large-scale quantum computer will someday be a reality, science is advancing, and every year brings some new experimental success; this ambitious dream of quantum computation might yet be possible. [4, 35, 260]
Quantum entanglement is of special interest as it allows for the teleportation of quantum states, and as far as is currently known, there is no limit on the distance, which could perhaps enable a large-scale network, a marvel that would be the quantum internet. [213] Considering that quantum encryption cannot be broken, even in theory, such a communication network is of great interest and would be of incredible value; it would be absolute security realized. [213, 275]
If we have learned anything thus far, it is the fact that realizing a quantum computer, even of any kind, is not an easy task; however, Quantum David just might overpower Classical Goliath. [204] With superconducting quantum technology, Google was successful in constructing Sycamore, a programmable quantum machine that has 53 qubits. [204] Because of errors, "the final measurement yields the correct output only once in 500 runs", yet if one makes repeated calculations "millions of times in just a few minutes", a statistically useful result can be obtained. [204] The Sycamore quantum computer is only a single chip, compared to a classical computer that spans tennis courts and uses megawatts of power. [204] And Google is not the only one; IBM, for example, paves the way for an error-resilient quantum computer with thousands of qubits. [35] Indeed, sufficient progress has still not been achieved in realizing a scalable quantum device; it is nevertheless perceived that, with the developments at hand, "a full-fledged quantum computer will be created in the next 10-15 years." [222]
At present, quantum mechanics is "considered the most accurate description of the Universe", although the theory might need modifications in the future. [4, 194] If and when such a scenario becomes a reality, it is unclear how that change in the theory of quantum mechanics will reflect on quantum computing and quantum information; however, "the novel physical theory that will emerge may give rise to a new computational paradigm, maybe even more powerful than quantum computing." [4] There is a possibility that large-scale commercial quantum devices won't be feasible, perhaps because of a currently unknown or unsolvable issue; in such a case, a quantum computer can still be useful, e.g. for being "the simulator Feynman first envisaged", or for allowing experimental research in physics, and thus, by manipulating a small number of qubits, physicists will be performing tests and validating predictions of quantum theory. [4]
Even though it is not expected that quantum computers, via quantum algorithms, will be able to solve NP-complete problems both exactly and efficiently, there is a possibility of finding efficient algorithms for problems that are not known to be NP-complete and that lack known efficient classical algorithms, such as the problem of "checking whether two graphs are isomorphic, known as Graph Isomorphism." [4, 204, 96]
In spite of all of its marvels and all of the scientific contributions, there are many unsolved/partially solved open problems in the realm of quantum computing and quantum mechanics. Here we will list just a small fraction of those, which are likely also the most pressing and fascinating.
∙ Reduction of quantum error rates. [91, 123, 42]
∙ Suppression of quantum decoherence. [234, 269]
∙ Finding a type of technology best suited for quantum computation, and an implementation thereof. [112, 278, 282, 95, 90, 205]
∙ The relationship between NP and BQP. [19, 52]
∙ Scalability of a quantum computer. [69, 240]
∙ Verification of a quantum system. [80, 229]
∙ Separation of BQP and PH outside of a black-box model. [210]
∙ Efficient quantum memory. [216, 118]
∙ Networking protocols and devices for the quantum internet. [31, 11]
∙ Balance of connectivity between qubits. [51, 279]
∙ Performance of a quantum gate set. [51, 125]
∙ Compilers and software stack performance. [51, 138, 55]
∙ Materials challenges in quantum computing.
[Source: Igor Pak's blog, posts tagged "Peter Sarnak", https://igorpak.wordpress.com/tag/peter-sarnak/]
Conjectures are a staple of mathematics. They are everywhere, permeating every area, subarea and subsubarea. They are diverse enough to avoid a single general adjective. They come in all shapes and sizes. Some of them are famous, classical, general, important, inspirational, far-reaching, audacious, exciting or popular, while others are speculative, narrow, technical, imprecise, far-fetched, misleading or recreational. That’s a lot of beliefs about unproven claims, yet we persist in dispensing them, inadvertently revealing our experience, intuition and biases.
The conjectures also vary in attitude. Like a finish line ribbon they all appear equally vulnerable to an outsider, but in fact differ widely from race to race. Some are eminently reachable, the only question being who will get there first (think 100 meter dash). Others are barely on the horizon, requiring great effort, a variety of tools, and an extended time commitment (think ironman triathlon). The most celebrated, third type are like those Sci-Fi space expeditions, requiring multigenerational commitments spanning hundreds of years, often losing contact with the civilization left behind. And we can’t forget the romantic fourth type: like the North Star, no one actually wants to reach them, as they are largely used for navigation, to find a direction in uncharted waters.
Now, conjectures famously provide a foundation of the scientific method, but that’s not at all how we actually think of them in mathematics. I argued back in this pointed blog post that citations are what is most crucial for day-to-day math development, so one should take utmost care in making references. While this claim is largely uncontroversial and serves as a raison d’être for most GoogleScholar profiles, conjectures provide a convenient idealistic way out. Thus, it’s much more noble and virtuous to say “I dedicated my life to the study of the XYZ Conjecture” (even if they never publish anything), than “I am working hard writing so many papers to gain respect of my peers, get a promotion, and provide for my family“. Right. Obviously…
But given this apparent (true or perceived) importance of conjectures, are you sure you are using them right? What if some or many of these conjectures are actually wrong, what then? Should you be flying that starship if there is no there there? An idealist would argue something like “it’s a journey, not a destination“, but I strongly disagree. Getting closer to the truth is actually kind of important, both as a public policy and on an individual level. It is thus pretty important to get right where we are going.
What are conjectures in mathematics?
That’s a stupid question, right? Conjectures are mathematical claims whose validity we are trying to ascertain. Is that all? Well, yes, if you don’t care whether anyone will actually work on the conjecture. In other words, something about the conjecture needs to be interesting and inspiring.
What makes a conjecture interesting?
This is a hard question to answer because it is as much psychological as it is mathematical. A typical answer would be “oh, because it’s old/famous/beautiful/etc.” Uhm, ok, but let’s try to be a little more formal.
One typically argues “oh, that’s because this conjecture would imply [a list of interesting claims and known results]”. Well, ok, but this is self-referential. We already know all those “known results”, so no need to prove them again. And these “claims” are simply other conjectures, so this is really an argument of the type “this conjecture would imply that conjecture”, so not universally convincing. One can argue: “look, this conjecture has so many interesting consequences”. But this is both subjective and unintuitive. Shouldn’t having so many interesting conjectural consequences suggest that perhaps the conjecture is too strong and likely false? And if the conjecture is likely to be false, shouldn’t this make it uninteresting?
Also, wouldn’t it be interesting if you disprove a conjecture everyone believes to be true? In some sense, wouldn’t it be even more interesting if until now everyone one was simply wrong?
None of this are new ideas, of course. For example, faced with the need to justify the “great” BC conjecture, or rather 123 pages of survey on the subject (which is quite interesting and doesn’t really need to be justified), the authors suddenly turned reflective. Mindful of self-referential approach which they quickly discard, they chose a different tactic:
We believe that the interest of a conjecture lies in the feeling of unity of mathematics that it entails. [M.P. Gomez Aparicio, P. Julg and A. Valette, “The Baum-Connes conjecture“, 2019]
Huh? Shouldn’t math be about absolute truths, not feelings? Also, in my previous blog post, I mentioned Noga Alon‘s quote that Mathematics is already “one unit“. If it is, why does it need a new “feeling of unity“? Or is that like one of those new age ideas which stop being true if you don’t reinforce them at every occasion?
If you are confused at this point, welcome to the club! There is no objective way to argue what makes certain conjectures interesting. It’s all in our imagination. Nikolay Konstantinov once told me that “mathematics is a boring subject because every statement is equivalent to saying that some set is empty.” He meant to be provocative rather than uninspiring. But the problem he is underlying is quite serious.
What makes us believe a conjecture is true?
We already established that in order to argue that a conjecture is interesting we need to argue it’s also true, or at least we want to believe it to be true to have all those consequences. Note, however, that we argue that a conjecture is true in exactly the same way we argue it’s interesting: by showing that it holds in some special cases, and that it would imply other conjectures which are believed to be true because they are also checked in various special cases. So in essence, this gives “true = interesting” in most cases. Right?
This is where it gets complicated. Say, you are working on the “abc conjecture” which may or may not be open. You claim that it has many consequences, which makes it both likely true and interesting. One of them is the negative solution to the Erdős–Ulam problem about existence of a dense set in the plane with rational pairwise distances. But a positive solution to the E-U problem implies Harborth’s conjecture (aka the “integral Fáry problem“) that every graph can be drawn in the plane with rational edge lengths. So, counterintuitively, if you follow the logic above, shouldn’t you be working on a positive solution to Erdős–Ulam, since it would both imply one conjecture and give a counterexample to another? For the record, I wouldn’t do that, just making a polemical point.
I am really hoping you see where I am going. Since there is no objective way to tell if a conjecture is true or not, and what exactly is so interesting about it, shouldn’t we discard our biases and also work towards disproving the conjecture just as hard as trying to prove it?
What do people say?
It’s worth starting with a general (if slightly poetic) modern description:
In mathematics, [..] great conjectures [are] sharply formulated statements that are most likely true but for which no conclusive proof has yet been found. These conjectures have deep roots and wide ramifications. The search for their solution guides a large part of mathematics. Eternal fame awaits those who conquer them first. Remarkably, mathematics has elevated the formulation of a conjecture into high art. [..] A well-chosen but unproven statement can make its author world-famous, sometimes even more so than the person providing the ultimate proof. [Robbert Dijkgraaf, The Subtle Art of the Mathematical Conjecture, 2019]
Karl Popper thought that conjectures are foundational to science, even if somewhat idealized the efforts to disprove them:
[Great scientists] are men of bold ideas, but highly critical of their own ideas: they try to find whether their ideas are right by trying first to find whether they are not perhaps wrong. They work with bold conjectures and severe attempts at refuting their own conjectures. [Karl Popper, Heroic Science, 1974]
Here is how he reconciled somewhat the apparent contradiction:
On the pre-scientific level we hate the very idea that we may be mistaken. So we cling dogmatically to our conjectures, as long as possible. On the scientific level, we systematically search for our mistakes. [Karl Popper, quoted by Bryan Magee, 1971]
Paul Erdős was, of course, a champion of conjectures and open problems. He joked that the purpose of life is “proof and conjecture” and this theme is repeatedly echoed when people write about him. It is hard to overestimate his output, which included hundreds of talks titled “My favorite problems“. He wrote over 180 papers with collections of conjectures and open problems (nicely assembled by Zbl. Math.)
Peter Sarnak has a somewhat opposite point of view, as he believes one should be extremely cautious about stating a conjecture so people don’t waste time working on it. He said once, only half-jokingly:
Since we reward people for making a right conjecture, maybe we should punish those who make a wrong conjecture. Say, cut off their fingers. [Peter Sarnak, UCLA, c. 2012]
This is not an exact quote — I am paraphrasing from memory. Needless to say, I disagree. I don’t know how many fingers he wished Erdős should lose, since some of his conjectures were definitely disproved: one, two, three, four, five, and six. This is not me gloating, the opposite in fact. When you are stating hundreds of conjectures in the span of almost 50 years, having only a handful to be disproved is an amazing batting average. It would, however, make me happy if Sarnak’s conjecture is disproved someday.
Finally, there is a bit of a controversy whether conjectures are worth as much as theorems. This is aptly summarized in this quote about yet another champion of conjectures:
Louis J. Mordell [in his book review] questioned Hardy‘s assessment that Ramanujan was a man whose native talent was equal to that of Euler or Jacobi. Mordell [..] claims that one should judge a mathematician by what he has actually done, by which Mordell seems to mean, the theorems he has proved. Mordell’s assessment seems quite wrong to me. I think that a felicitous but unproved conjecture may be of much more consequence for mathematics than the proof of many a respectable theorem. [Atle Selberg, “Reflections Around the Ramanujan Centenary“, 1988]
So, what’s the problem?
Well, the way I see it, the efforts made towards proving vs. disproving conjectures are greatly out of balance. Despite all of Popper’s high-minded claims about “severe attempts at refuting their own conjectures“, I don’t think there is much truth to that in the modern mathematical sciences. This does not mean that disproofs of famous conjectures aren’t celebrated. Sometimes they are, see below. But it’s clear to me that proofs are celebrated more frequently, and to a much greater degree. I have only anecdotal evidence to support my claim, but bear with me.
Take prizes. Famously, Clay Math Institute gives $1 million for a solution of any of these major open problems. But look closely at the rules. According to the item 5b, except for the P vs. NP problem and the Navier–Stokes Equation problem, it gives nothing ($0) for a disproof of these problems. Why, oh why?? Let’s look into CMI’s “primary objectives and purposes“:
To recognize extraordinary achievements and advances in mathematical research.
So it sounds like CMI does not think that disproving the Riemann Hypothesis needs to be rewarded because this wouldn’t “advance mathematical research”. Surely, you are joking? Whatever happened to “the opposite of a profound truth may well be another profound truth“? Why does the CMI want to put its thumb on the scale and support only one side? Do they not want to find out the solution, whatever it is? Shouldn’t they be eager to dispense with the “wrong conjecture” so as to save numerous researchers from “advances to nowhere“?
I am sure you can see that my blood is boiling, but let’s proceed to the P vs. NP problem. What if it’s independent of ZFC? Clearly, CMI wouldn’t pay for proving that. Why not? It’s not like this kind of thing never happened before (see obligatory link to CH). Some people believe that (or at least they did in 2012), and some people like Scott Aaronson take this seriously enough. Wouldn’t this be a great result worthy of an award as much as the proof that P=NP, or at least a nonconstructive proof that P=NP?
If your head is not spinning hard enough, here is another amusing quote:
Of course, it’s possible that P vs. NP is unprovable, but that that fact itself will forever elude proof: indeed, maybe the question of the independence of P vs. NP is itself independent of set theory, and so on ad infinitum! But one can at least say that, if P vs. NP (or for that matter, the Riemann hypothesis, Goldbach’s conjecture, etc.) were proven independent of ZF, it would be an unprecedented development. [Scott Aaronson, P vs. NP, 2016].
Speaking of Goldbach’s Conjecture, the most talked about and the most intuitively correct statement in Number Theory that I know. In a publicity stunt, for two years there was a $1 million prize by a publishing house for the proof of the conjecture. Why just for the proof? I never heard of anyone not believing the conjecture. If I was the insurance underwriter for the prize (I bet they had one), I would allow them to use “for the proof or disproof” for a mere extra $100 in premium. For another $50 I would let them use “or independent of ZF” — it’s a free money, so why not? It’s such a pernicious idea of rewarding only one kind of research outcome!
Curiously, even for Goldbach’s Conjecture, there is a mild divergence of POVs on what the future holds. For example, Popper writes (twice in the same book!) that:
[On whether Goldbach’s Conjecture is ‘demonstrable’] We don’t know: perhaps we may never know, and perhaps we can never know. [Karl Popper, Conjectures and Refutations, 1963]
Ugh. Perhaps. I suppose anything can happen… For example, our civilizations can “perhaps” die out in the next 200 years. But is that likely? Shouldn’t the gloomy past be a warning, not a prediction of the future? The only thing more outrageously pessimistic is this theological gem of a quote:
Not even God knows the number of permutations of 1000 avoiding the 1324 pattern. [Doron Zeilberger, quoted here, 2005]
Thanks, Doron! What a way to encourage everyone! Since we know from numerical estimates that this number is ≈ 3.7 × 10^1017 (see this paper and this follow up), Zeilberger is suggesting that large pattern avoidance numbers are impossibly hard to compute precisely, already in the range of only about 1018 digits. I really hope he is proved wrong in his lifetime.
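For the curious, counting pattern-avoiding permutations by brute force is a one-screen exercise, and it shows why exact values at n = 1000 are so far out of reach: the search space is n!. This throwaway sketch (mine, not how the cited estimates were obtained) reproduces the known initial values 1, 2, 6, 23, 103, 513, 2762 for n ≤ 7.

```python
from itertools import combinations, permutations

def contains_1324(p):
    # An occurrence of the pattern 1324 is a quadruple of positions
    # i < j < k < l whose values satisfy p[i] < p[k] < p[j] < p[l].
    return any(p[i] < p[k] < p[j] < p[l]
               for i, j, k, l in combinations(range(len(p)), 4))

for n in range(1, 8):
    count = sum(1 for p in permutations(range(n)) if not contains_1324(p))
    print(n, count)
```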
But I digress. What I mean to emphasize is that there are many ways a problem can be resolved. Yet some outcomes are considered more valuable than others. Shouldn’t the research achievements be rewarded, not the desired outcome? Here is yet another colorful opinion on this:
Given a conjecture, the best thing is to prove it. The second best thing is to disprove it. The third best thing is to prove that it is not possible to disprove it, since it will tell you not to waste your time trying to disprove it. That’s what Gödel did for the Continuum Hypothesis. [Saharon Shelah, Rutgers Univ. Colloquium, 2001]
Why do I care?
For one thing, disproving conjectures is part of what I do. Sometimes people are a little shy to unambiguously state them as formal conjectures, so they phrase them as questions or open problems, but then clarify that they believe the answer is positive. This is a distinction without a difference, or at least I don’t see any (maybe they are afraid of Sarnak’s wrath?) Regardless, proving their beliefs wrong is still what I do.
For example, here is my old blog post on my disproof of the Noonan–Zeilberger Conjecture (joint with Scott Garrabrant). And in this recent paper (joint with Danny Nguyen), we disprove in one big swoosh Barvinok’s Problem, Kannan’s Problem, and Woods’ Conjecture. Just this year I disproved three conjectures:
The Kirillov–Klyachko Conjecture (2004) that the reduced Kronecker coefficients satisfy the saturation property (this paper, joint with Greta Panova).
The Brandolini et al. Conjecture (2019) that concrete lattice polytopes can multitile the space (this paper, joint with Alexey Garber).
Kenyon’s Problem (c. 2005) that every integral curve in R^3 is a boundary of a PL surface comprised of unit triangles (this paper, joint with Alexey Glazyrin).
On top of that, just two months ago in this paper (joint with Han Lyu), we showed that the remarkable independence heuristic by I. J. Good for the number of contingency tables fails badly even for nearly all uniform marginals. This is not exactly a disproof of a conjecture, but it’s close, since the heuristic was introduced back in 1950 and continues to work well in practice.
In addition, I am currently working on disproving two more old conjectures, which will remain unnamed until the time we actually resolve them (which might never happen, of course). In summary, I am deeply vested in disproving conjectures. The reasons why are somewhat complicated (see some of them below). But whatever my reasons, I demand and naively fully expect that my disproofs be treated on par with proofs, regardless of whether this expectation bears any relation to reality.
My favorite disproofs and counterexamples:
There are many. Here are just a few, some famous and some not-so-famous, in historical order:
Fermat‘s conjecture (letter to Pascal, 1640) on primality of Fermat numbers, disproved by Euler (1747)
Tait’s conjecture (1884) on hamiltonicity of graphs of simple 3-polytopes, disproved by W.T. Tutte (1946)
General Burnside Problem (1902) on finiteness of periodic groups, resolved negatively by E.S. Golod (1964)
Keller’s conjecture (1930) on tilings with unit hypercubes, disproved by Jeff Lagarias and Peter Shor (1992)
Borsuk’s Conjecture (1932) on partitions of convex sets into parts of smaller diameter, disproved by Jeff Kahn and Gil Kalai (1993)
Hirsch Conjecture (1957) on the diameter of graphs of convex polytopes, disproved by Paco Santos (2010)
Woods’s conjecture (1972) on the covering radius of certain lattices, disproved by Oded Regev, Uri Shapira and Barak Weiss (2017)
Connes embedding problem (1976), resolved negatively by Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright and Henry Yuen (2020)
In all these cases, the disproofs and counterexamples didn’t stop the research. On the contrary, they gave a push to further (sometimes numerous) developments in the area.
Why should you disprove conjectures?
There are three reasons, of different nature and importance.
First, disproving conjectures is opportunistic. As mentioned above, people seem to try proving much harder than they try disproving. This creates niches of opportunity for an open-minded mathematician.
Second, disproving conjectures is beautiful. Let me explain. Conjectures tend to be rigid, as in “objects of the type pqr satisfy property abc.” People like me believe in the idea of “universality“. Some might call it “completeness” or even “Murphy’s law“, but the general principle is always the same. Namely: it is not sufficient that one wishes that all pqr satisfy abc to actually believe in the implication; rather, there has to be a strong reason why abc should hold. Barring that, pqr can possibly be almost anything, so in particular non-abc. While some would argue that non-abc objects are “ugly” or at least “not as nice” as abc, the idea of universality means that your objects can be of every color of the rainbow — nice color, ugly color, startling color, quiet color, etc. That kind of palette has its own sense of beauty, but it’s an acquired taste I suppose.
Third, disproving conjectures is constructive. It depends on the nature of the conjecture, of course, but one is often faced with the necessity of constructing a counterexample. Think of this as an engineering problem of building some pqr which at the same time is not abc. Such a construction, if at all possible, might be difficult, time consuming and computer assisted. But so what? What would you rather do: build a mile-high skyscraper (none exist yet) or prove that this is impossible? Curiously, in CS Theory both algorithms and (many) complexity results are constructive (you need gadgets). Even the GCT is partially constructive, although explaining that would take us a while.
What should the institutions do?
If you are an institution which awards prizes, stop with the legal nonsense: “We award […] only for a publication of a proof in a top journal”. You need to set up a scientific committee anyway, since otherwise it’s hard to tell sometimes if someone deserves a prize. With mathematicians you can expect anything anyway. Some would post two arXiv preprints, give a few lectures and then stop answering emails. Others would publish only in a journal where they are Editor-in-Chief. It’s stranger than fiction, really.
What you should do is say in the official rules: “We have [this much money] and an independent scientific committee which will award any progress on [this problem] partially or in full as they see fit.” Then a disproof or an independence result will receive just as much as the proof (what’s done is done, what else are you going to do with the money?) This would also allow some flexibility for partial solutions. Say, somebody proves Goldbach’s Conjecture for integers > exp(exp(10^100000)), way way beyond computational powers for the remaining integers to be checked. I would give this person at least 50% of the prize money, leaving the rest for future developments of possibly many people improving on the bound. However, under the old prize rules such a person gets bupkes for their breakthrough.
What should the journals do?
In short, become more open to results of computational and experimental nature. If this sounds familiar, that’s because it’s a summary of Zeilberger’s Opinions, viewed charitably. He is correct on this. This includes publishing results of the type “Based on computational evidence we believe in the following UVW conjecture” or “We develop a new algorithm which confirms the UVW conjecture for n<13″. These are still contributions to mathematics, and the journals should learn to recognize them as such.
To put this in the context of our theme, it is clear that a lot more effort has been placed on proofs than on finding counterexamples. However, in many areas of mathematics there are no small counterexamples, so a heavy computational effort is crucial for any hope of finding one. Such work is not as glamorous as traditional papers. But really, when it comes to standards, if a journal is willing to publish the study of something like the “null graphs“, the ship has sailed for you…
Let me give you a concrete example where a computational effort is indispensable. The curious Lovász conjecture states that every finite connected vertex-transitive graph contains a Hamiltonian path. This conjecture got to be false. It hits every red flag — there is really no reason why pqr = “vertex transitive” should imply abc = “Hamiltonian”. The best lower bound for the length of the longest (self-avoiding) path is only about square root of the number of vertices. In fact, even the original wording by Lovász shows he didn’t believe the conjecture is true (also, I asked him and he confirmed).
Unfortunately, proving that some potential counterexample is not Hamiltonian is computationally difficult. I once had an idea of one (a nice cubic Cayley graph on “only” 3600 vertices), but Bill Cook quickly found a Hamiltonian cycle dashing my hopes (it was kind of him to look into this problem). Maybe someday, when the TSP solvers are fast enough on much larger graphs, it will be time to return to this problem and thoroughly test it on large Cayley graphs. But say, despite long odds, I succeed and find a counterexample. Would a top journal publish such a paper?
Editor’s dilemma
There are three real criteria for evaluating a solution of an open problem by a journal:
Is this an old, famous, or well-studied problem?
Are the tools interesting or innovative enough to be helpful in future studies?
Are the implications of the solution to other problems important enough?
Now let’s make a hypothetical experiment. Let’s say a paper is submitted to a top math journal which solves a famous open problem in Combinatorics. Further, let’s say somebody already proved it is equivalent to a major problem in TCS. This checks criteria 1 and 3. Until not long ago it would be rejected regardless, so let’s assume this is happening relatively recently.
Now imagine two parallel worlds, where in the first world the conjecture is proved on 2 pages using beautiful but elementary linear algebra, and in the second world the conjecture is disproved in a 2 page long summary of a detailed computational search. So in neither world do we have much to satisfy criterion 2. Now, a quiz: in which world will the paper be published?
You may have recognized the first world as the story of Hao Huang‘s elegant proof of the induced subgraphs of hypercubes conjecture, which implies the sensitivity conjecture. The Annals published it, I am happy to learn, in a welcome break with the past. But unless we are talking about some 200 year old famous conjecture, I can’t imagine the Annals accepting a short computational paper in the second world. Indeed, it took a bit of a scandal to accept even the 400 year old Kepler’s conjecture, which was proved in a remarkable computational work.
Now think about this. Is any of that fair? Shouldn’t we do better as a community on this issue?
What do other people do?
Over the years I asked a number of people about the uncertainty created by conjectures and what they do about it. The answers surprised me. Here I am paraphrasing them:
Some were dumbfounded: “What do you mean this conjecture could be false? It has to be true, otherwise nothing I am doing makes much sense.”
Others were simplistic: “It’s an important conjecture. Famous people said it’s true. It’s my job to prove it.”
Third were defensive: “Do you really think this conjecture could be wrong? Why don’t you try to disprove it then? We’ll see who is right.”
Fourth were biblical: “I tend to work 6 days a week towards the proof and one day towards the disproof.”
Fifth were practical: “I work on the proof until I hit a wall. I use the idea of this obstacle to try constructing potential counterexamples. When I find an approach to discard such counterexamples, I try to generalize the approach to continue working on the proof. Continue until either side wins.”
If the last two seem sensible to you, that’s because they are. However, I bet the fourth are just grandstanding — no way they actually do that. The fifth sounds great when it is possible, but that’s exceedingly rare, in my opinion. We live in a technical age when proving new results often requires a great deal of effort and technology. You likely have tools and intuition to work in only one direction. Why would you want to waste time working in the other?
What should you do?
First, remember to make conjectures. Every time you write a paper, tell a story of what you proved. Then tell a story of what you wanted to prove but couldn’t. State it in the form of a conjecture. Don’t be afraid of being wrong, or of being right but oversharing your ideas. It’s a downside, sure. But the upside is that your conjecture might prove very useful to others, especially young researchers. It might advance the area, or help you find a collaborator to resolve it.
Second, learn to check your conjectures computationally in many small cases (see the sketch after this list). It’s important to give supporting evidence so that others take your conjectures seriously.
Third, learn to make experiments, explore the area computationally. That’s how you make new conjectures.
Fourth, understand yourself. Your skill, your tools. Your abilities like problem solving, absorbing information from the literature, or making bridges to other fields. Faced with a conjecture, use this knowledge to understand whether at least in principle you might be able to prove or disprove a conjecture.
Fifth, actively look for collaborators. Those who have skills, tools, or abilities you are missing. More importantly, they might have a different POV on the validity of the conjecture and how one might want to attack it. Argue with them and learn from them. | |||||
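To make the second and third points concrete, here is the kind of throwaway verification script I have in mind, using Goldbach’s Conjecture (discussed above) as the example; the bound and the code are purely illustrative.

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_witness(n):
    # Return a prime p such that both p and n - p are prime, or None.
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p
    return None

for n in range(4, 10_000, 2):
    assert goldbach_witness(n) is not None, f"counterexample: {n}"
print("Goldbach's Conjecture holds for all even n < 10000")
```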
[Source: Intractability, by Robert Sedgewick and Kevin Wayne, https://algs4.cs.princeton.edu/66intractability/]

6.6 Intractability
This section under construction. The goal of complexity theory is to understand the nature of efficient computation. We have learned about analysis of algorithms, which enables us to classify algorithms according to the amount of resources they will consume. In this section, we will learn about a rich class of problems for which nobody has been able to devise an efficient algorithm.
Computational complexity.
As digital computers were developed in the 1940s and 1950s, the Turing machine served as the theoretical model of computation. In the 1960s Hartmanis and Stearns proposed measuring the time and memory needed by a computer as a function of the input size. They defined complexity classes in terms of Turing machines and proved that some problems have "an inherent complexity that cannot be circumvented by clever programming." They also proved a formal version (time hierarchy theorem) of the intuitive idea that if given more time or space, Turing machines can compute more things. In other words, no matter how hard a problem is (time and space requirements), there are always harder problems.
Computational complexity is the art and science of determining resource requirements for different problems. Computational complexity deals with assertions about any conceivable algorithm for a problem. Making such statements is significantly more challenging than understanding the running time of one particular algorithm for the problem since we must reason about all possible algorithms (even those not yet discovered). This makes computational complexity an exciting, yet daunting, field of study. We will survey some of its most important ideas and practical outgrowths.
Polynomial time.
We have analyzed the running time of an algorithm as a function of its input size. When solving a given problem, we prefer an algorithm that takes 8 N log N steps to one that takes 3 N^2 steps, since when N is large, the first algorithm is significantly faster than the second. The second algorithm will ultimately solve the same problem (but it might take hours instead of seconds). In contrast, an exponential-time algorithm has a different qualitative behavior. For example, a brute-force algorithm for the TSP might take N! steps. Even if each electron in the universe (10^79) had the power of today's fastest supercomputer (10^12 instructions per second), and each worked for the life of the universe (10^17 seconds) on solving the problem, it would barely make a dent in solving a problem with N = 1,000, since 1000! >> 10^1000 >> 10^79 × 10^12 × 10^17 = 10^108. Exponential growth dwarfs technological change. We refer to any algorithm whose running time is bounded by a polynomial in the input size (e.g., N log N or N^2) as a polynomial-time algorithm. We say that a problem is intractable if there is no polynomial-time algorithm for the problem.
Create log-log scale plot of N, N^3, N^5, N^10, 1.1^N, 2^N, N! as in Harel p. 74.
As programmers gained more experience with computation, it became evident that polynomial-time algorithms were useful and exponential-time algorithms were not. In a very influential paper, Jack Edmonds referred to polynomial algorithms as "good algorithms" and argued that polynomial time is a good surrogate for efficient computation. Kurt Godel wrote a letter to von Neumann (p. 9) in 1956 that contains the (implicit) notion that polynomiality is a desirable feature. Earlier (1953), von Neumann recognized the qualitative difference between polynomial and exponential algorithms. The idea of classifying problems according to polynomial and exponential time profoundly changed the way people thought about computational problems.
NP.
Informally we define a search problem as a computational problem where we are looking for a solution among a (potentially huge) number of possibilities, but such that when we find a solution, we can easily check that it solves our problem. Given an instance I of a search problem (some input data specifying the problem), our goal is to find a solution S (an entity that meets some pre-specified criterion) or report that no such solution exists. To be a search problem, we require that it be easy to check that S is indeed a solution. By easy, we mean polynomial-time in the size of the input I. The complexity class NP is the set of all search problems. Here are a few examples.
Linear systems of equations. Given a system of linear equations Ax = b, find a solution x that satisfies the equations (if one exists). The problem is in NP because if we are given a purported solution x, we can check that Ax = b by plugging in x and verifying each equation.
Linear programming. Given a system of linear inequalities Ax ≤ b, find a solution x that satisfies the inequalities (if one exists). The problem is in NP because if we are given a purported solution x, we can check that Ax ≤ b by plugging in x and verifying each inequality.
Integer linear programming. Given a system of linear inequalities Ax ≤ b, find a binary (0/1) solution x that satisfies the inequalities (if one exists). The problem is in NP because if we are given a purported solution x, we can check that Ax ≤ b by plugging in x and verifying each inequality.
While it is easy to check a proposed solution to all three problems, how difficult is it to find a solution from scratch?
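The checking half is genuinely easy. Here is a tiny sketch (ours, not part of the booksite's Java code) that verifies a purported solution to the first problem in time polynomial in the input size; finding x from scratch is the separate, harder question.

```python
import numpy as np

def is_solution(A, b, x, tol=1e-9):
    # Certificate check for a linear system: does x satisfy Ax = b?
    return np.allclose(A @ x, b, atol=tol)

A = np.array([[1.0, 1.0],
              [2.0, 4.0]])
b = np.array([1.0, 3.0])
x = np.array([0.5, 0.5])     # purported solution
print(is_solution(A, b, x))  # True: checking took one matrix-vector product
```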
Remark: our definition of NP is slightly non-standard. Historically, complexity classes were defined in terms of decision problems (yes-no problems). For example, given a matrix A and a vector b, does there exist a solution x such that Ax = b?
P.
The complexity class P is the set of all search problems solvable in polynomial-time (on a deterministic Turing machine). As before, we define P in terms of search problems (instead of decision problems). It captures most of the problems that we can solve in practice on real machines. We list a few examples below:
Problem     | Description                                                        | Algorithm                             | Instance                | Solution
GCD         | Find the greatest common divisor of two integers x and y.          | Euclid's algorithm (Euclid, 300 BCE)  | 34, 51                  | 17
STCONN      | Given a graph G and two vertices s and t, find a path from s to t. | BFS or DFS (Theseus)                  |                         |
SORT        | Find the permutation that puts the elements in ascending order.    | Mergesort (von Neumann, 1945)         | 2.3 8.5 1.2 9.1 2.2 0.3 | 5 2 4 0 1 3
PLANARITY   | Given a graph G, draw it in the plane so that no two edges cross.  | (Hopcroft-Tarjan, 1974)               |                         |
LSOLVE      | Given a matrix A and a vector b, find a vector x such that Ax = b. | Gaussian elimination (Edmonds, 1967)  | x+y=1, 2x+4y=3          | x = 1/2, y = 1/2
LP          | Given a matrix A and a vector b, find a vector x such that Ax ≤ b. | Ellipsoid algorithm (Khachiyan, 1979) | x+y≤1, 2x+4y≤3          | x = 0, y = 0
DIOPHANTINE | Given a (sparse) polynomial of one variable with integer coefficients, find an integral root. | (Smale et al., 1999) | x^5 - 32 | x = 2
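For instance, the first row of the table takes only a few lines of code (a quick Python sketch for illustration; the booksite's own implementations are in Java). The number of iterations of Euclid's algorithm is bounded by a polynomial in the number of digits of the inputs, so it is a polynomial-time algorithm.

```python
def gcd(x, y):
    # Euclid's algorithm: repeatedly replace (x, y) by (y, x mod y).
    while y != 0:
        x, y = y, x % y
    return x

print(gcd(34, 51))  # 17, as in the table above
```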
Extended Church-Turing Thesis.
In the mid 1960s Cobham and Edmonds independently observed that the set of problems solvable in a polynomial number of steps remains invariant over a very wide range of computational models, from deterministic Turing machines to RAM machines. The extended Church-Turing thesis asserts that the Turing machine is as efficient as any physical computing device. That is, P is the set of search problems solvable in polynomial-time in this universe. If some piece of hardware solves a problem of size N in time T(N), the extended Church-Turing thesis asserts that a deterministic Turing machine can do it in time T(N)^k for some fixed constant k, where k depends on the particular problem. Andy Yao expresses the broad implications of this thesis:
They imply that at least in principle, to make future computers more efficient, one only needs to focus on improving the implementation technology of present-day computer designs.
In other words, any reasonable model of computation can be efficiently simulated on a (probabilistic) Turing machine. The extended Church-Turing thesis is true for all known physical general purpose computers. For random access machines (e.g., your PC or Mac) the constant k = 2. So, for example, if a random access machine can perform a computation in time N^(3/2), then a Turing machine can do the same computation in time N^3.
Does P = NP?
One of the most profound scientific questions of our time is whether P = NP. That is, can all search problems be solved in polynomial time? Clay Foundation offers a 1 million dollar millennium prize for solving it. Here are some speculations on when the question will be resolved. The overwhelming consensus is that P != NP, but nobody has been able to prove it.
Video of Homer Simpson pontificating over P = NP, with accompanying music Erased by Paradise Lost.
Godel's letter to von Neumann anticipated the P = NP question. He recognized that if P = NP (satisfiability is in P), it "would have consequences of the greatest importance" since then "the mental work of a mathematician concerning Yes-or-No questions could be completely replaced by a machine." He asked for which combinatorial problems was there a more efficient alternative to exhaustive search.
NP-completeness.
Informally, NP-complete problems are the "hardest" problems in NP; they are the ones most likely to not be in P. Define: a problem is NP-complete if (i) it is in NP and (ii) every problem in NP polynomial-time reduces to it. Defining the concept of NP-completeness does not mean that such problems exist. In fact, the existence of NP-complete problems is an amazing thing. We cannot prove a problem is NP-complete by presenting a reduction from each NP problem since there are infinitely many of them. In the early 1970s, Cook and Levin independently proved that SAT is NP-complete.
This is an example of universality: if we can solve any one NP-complete problem, then we can solve every problem in NP. It is a unique scientific discovery, giving a common explanation to all sorts of problems. It is even more amazing that there exist "natural" problems that are NP-complete.
The impact of NP-completeness on the natural sciences has been undeniable. Once the first NP-complete problems were discovered, intractability "spread like a shockwave through the space of problems", first in computer science, and then to other scientific disciplines. Papadimitriou lists 20 diverse scientific disciplines that were coping with internal questions. Ultimately, scientists discovered their inherent complexity after realizing that their core problems were NP-complete. NP-completeness is mentioned as a keyword in 6,000 scientific papers per year. "Captures vast domains of computational, scientific, mathematical endeavors, and seems to roughly delimit what mathematicians and scientists had been aspiring to compute feasibly." [Papadimitriou] Few scientific theories have had such a breadth and depth of influence.
Some NP-complete problems. Since the discovery that SAT is NP-complete, tens of thousands of problems have been identified as NP-complete. In 1972, Karp showed that 21 of the most infamous problems in discrete mathematics were NP-complete, including TSP, Knapsack, 3Color, and Clique. The failure of scientists to find an efficient algorithm for these 21 problems, despite being unaware that they were NP-complete, was among the first evidence suggesting that P != NP. Below we list a sampling of some NP-complete problems. (Here are some more NP-complete problems.) This is only meant to illustrate their diversity and pervasiveness.
Bin Packing. You have n items and m bins. Item i weighs w[i] pounds. Each bin can hold at most W pounds. Can you pack all n items into the m bins without violating the given weight limit?
This problem has many industrial applications. For example, UPS may need to ship a large number of packages (items) from one distribution center to another. It wants to put them into trucks (bins), and use as few trucks as possible. Other NP-complete variants allow volume requirements: each 3-dimensional package takes up space and you also have to worry about arranging the packages within the truck.
Knapsack. You have a set of n items. Item i weighs w[i] pounds and has benefit b[i]. Can you select a subset of the items that have total weight less than or equal to W and total benefit greater than or equal to B? For example, when you go camping, you must select items to bring based on their weight and utility. Or, suppose you are burglarizing a home and can only carry W pounds of loot in your knapsack. Each item i weighs w[i] pounds and has a street value of b[i] dollars. Which items should you steal?
Subset Sum. Given n integers, does there exist a subset of them that sums exactly to B? For example, suppose the integers are {4, 5, 8, 13, 15, 24, 33}. If B = 36 then the answer is yes (and 4, 8, 24 is a certificate). If B = 14 the answer is no. (A brute-force sketch for this problem appears after this list.)
Partition. Given n integers, can you divide them into two subsets so that each subset sums to the same number? For example, suppose the integers are {4, 5, 8, 13, 15, 24, 33}. Then the answer is yes, and {5, 13, 33} is a certificate. Load balancing for dual processors.
Integer linear programming. Given an integer matrix A and an integer vector b, does there exist an integer vector x such that Ax ≤ b? This is a central problem in operations research since many optimization problems can be formulated in this way. Note the contrast to the linear programming problem presented above where we are looking for a rational vector instead of an integer vector. The line between problems which are tractable and problems which are intractable can be very subtle.
SAT. Given n Boolean variables x1, x2, ..., xN and a logical formula, is there an assignment of truth variables that makes the formula satisfiable, i.e., true? For example, suppose the formula is
(x1' + x2 + x3) (x1 + x2' + x3) (x2 + x3) (x1' + x2' + x3')
Then, the answer is yes and (x1, x2, x3) = (true, true, false) is a certificate. Many applications to electronic design automation (EDA), including testing and verification, logic synthesis, FPGA routing, and path delay analysis. Application to AI, including knowledge base deduction and automatic theorem proving.
Exercise: given two circuits C1 and C2, design a new circuit C such that some setting of input values makes C output true if and only if C1 and C2 are not equivalent.
3-SAT. Given n Boolean variables x1, x2, ..., xN and a logical formula in conjunction normal form (product-of-sums) with exactly 3 distinct literals per clause, is there an assignment of truth variables that makes the formula satisfiable?
Clique. Given n people and a list of pairwise friendships. Is there a group or clique of k people such that every possible pair of people within the group are friends? It is convenient to draw the friendship graph, where we include a node for each person and an edge connecting each pair of friends. In the following example with n = 11 and k = 4, the answer is yes, and {2, 4, 8, 9} is a certificate.
Longest path. Given a set of nodes and pairwise distances between nodes, does there exists a simple path of length at least L connecting some pair of nodes?
Machine Scheduling. Your goal is to process n jobs on m machines. For simplicity, assume each machine can process any one job in 1 time unit. Also, there can be precedence constraints: perhaps job j must finish before job k can start. Can you schedule all of the jobs to finish in L time units?
Scheduling problems have a huge number of applications. Jobs and machines can be quite abstract: to graduate Princeton you need to take n different courses, but are unwilling to take more than m courses in any one semester. Also, many courses have prerequisites (you can't take COS 226 or 217 before taking 126, but it is fine to take 226 and 217 at the same time). Can you graduate in L semesters?
Shortest Common Superstring. Given the genetic alphabet { a, t, g, c } and N DNA fragments (e.g., ttt, atggtg, gatgg, tgat, atttg) is there a DNA sequence with K or fewer characters that contains every DNA fragment? Suppose K = 11 in the above example; then the answer is yes and atttgatggtg is a certificate. Applications to computational biology.
Protein folding. Proteins in an organism fold in three-dimensional space in a very specific way, to their native state. This geometric pattern determines the behavior and function of a protein. One of the most widely used folding models is the two-dimensional hydrophilic-hydrophobic (H-P) model. In this model, a protein is a sequence of 0s and 1s, and the problem is to embed it into a 2-d lattice such that the number of pairs of adjacent 1s in the lattice, but not in the sequence (its energy), is minimized. For example, the sequence 011001001110010 is embedded in the figure below in such a way that there are 5 new adjacent pairs of 1s (denoted by asterisks).
Minimizing the H-P energy of a protein is NP-hard. (Papadimitriou, et al.) It is well accepted by biologists that proteins fold to minimize their energies. A version of Levinthal's paradox asks how it is possible that proteins are able to efficiently solve apparently intractable problems.
Integration. Given integers a1, a2, ..., aN, does the following integral equal 0? If you see this integral in your next Physics course, you should not expect to be able to solve it. This should not come as a big surprise because in Section 7.4 we consider a version of integration that is undecidable.
Crossword puzzle. Given an integer N, and a list of valid words, is it possible to assign letters to the cells of an N-by-N grid so that all horizontal and vertical words are valid? No easier if some of the squares are black as in a crossword puzzle.
Theorem. Given a purported theorem (such as one for the Riemann Hypothesis), can you prove it is true using at most n symbols in some formal system such as Zermelo-Fraenkel set theory?
Tetris.
Minesweeper.
Regular expressions. Given two regular expressions over the unary alphabet { 1 }, do they represent different languages? Given two NFAs, do they represent different languages? It may not be apparent that either problem is even decidable, since we don't have an obvious bound on the size of the smallest string that is in one language but not the other. [Note that the corresponding inequivalence problem for DFAs is polynomial-time solvable.] The reason why we phrase the problem as inequivalence instead of equivalence is that it is easy to check that the two entities are non-equivalent by demonstrating a string s. In fact, if the two languages are different, then the smallest such string is polynomial in the size of the input. Thus, we can use the efficient algorithms from Section 7.xyz to check whether s is recognized by an RE or accepted by an NFA. However, to argue that two REs are equivalent, we would need an argument that guarantees that all strings in one are in the other, and vice versa. [It is possible to devise an (exponential) algorithm to test whether two REs or NFAs are equivalent, although this should not be obvious.]
Lemmings. Is it possible to guide a tribe of green-haired lemming creatures to safety in a level of the game Lemmings?
Multinomial minimization over the unit hypercube. Given a multinomial in N variables, is the minimum ≤ C, assuming all variables are bounded between 0 and 1? Classic calculus problem: min f(x) = ax^2 + bx + c over [0, 1]. The derivative is 0 at x = -b/(2a), but the minimum may occur at the boundary.
Quadratic Diophantine equations. Given positive integers a, b, and c, are there positive integers x and y such that ax^2 + by = c?
Knot theory. Which knots on a 3-dimensional manifold bound a surface of genus ≤ g?
Bounded Post Correspondence Problem. Given a post correspondence problem with N cards and an integer K ≤ N, is there a solution that uses at most K cards? Recall it is undecidable if there is no limit on K.
Nash equilibria. Cooperative game theory. Given a 2-player game, find a Nash equilibrium that maximizes the payoff to player 1. Does more than one NE exist? Is there an NE that is Pareto optimal? An NE that maximizes social welfare?
Quadratic congruence. Given positive integers a, b, and c, is there a positive integer x < c such that x^2 ≡ a (mod b)?
Ising model in 3d. Simple mathematical model of phase transitions, e.g., when water freezes or when cooling iron becomes magnetic. Computing the lowest energy state is NP-hard. Solvable in polynomial time if the graph is planar, but the 3d lattice is nonplanar. Holy grail of statistical mechanics for 75 years before being proved NP-hard. Establishing NP-completeness means that physicists won't spend another 75 years attempting to solve the unsolvable.
Bandwidth minimization. Given an N-by-N matrix A and an integer B, is it possible to permute the rows and columns of A such that A[i][j] = 0 whenever |i - j| > B? Useful for numerical linear algebra.
Voting and social choice. NP-hard for an individual to manipulate a voting scheme known as single transferable vote. NP-hard to determine who has won an election in a scheme seriously proposed by Lewis Carroll (Charles Dodgson) in 1876. In Carroll's scheme, the winner is the candidate who with the fewest pairwise adjacent changes in voters' preference rankings becomes the Condorcet winner (a candidate who would beat all other candidates in a pairwise election). Shapley-Shubik voting power. Computing the Kemeny optimal aggregation.
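Returning to the Subset Sum item above, the following brute-force sketch (illustrative only) makes the contrast explicit: the search tries up to 2^n subsets, an exponential amount of work, while checking the certificate it returns is a single sum.

```python
from itertools import combinations

def subset_sum(nums, target):
    # Exponential-time search over all subsets, smallest first.
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return subset  # a certificate, checkable in linear time
    return None

nums = [4, 5, 8, 13, 15, 24, 33]
print(subset_sum(nums, 36))  # (4, 8, 24)
print(subset_sum(nums, 14))  # None
```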
Coping with intractability.
The theory of NP-completeness says that unless P = NP, there are some important problems for which we can't create an algorithm that simultaneously achieves the following three properties:
Guarantee to solve the problem in polynomial-time.
Guarantee to solve the problem to optimality.
Guarantee to solve arbitrary instances of the problem.
When we encounter an NP-complete problem, we must relax one of the three requirements. We will consider solutions to the TSP problem that relax one of the three goals.
Complexity theory deals with worst-case behavior. This leaves open the possibility of designing algorithms that run quickly on some instances, but take a prohibitive amount of time on others. For example, Chaff is a program that can solve many real-world SAT instances with 10,000 variables. Remarkably, it was developed by two undergraduates at Princeton. The algorithm does not guarantee to run in polynomial time, but the instances we're interested in may be "easy."
Sometimes we may be willing to sacrifice the guarantee of finding the optimal solution. Many heuristic techniques (simulated annealing, genetic algorithms, the Metropolis algorithm) have been designed to find "nearly optimal" solutions to the TSP problem. Sometimes it is even possible to prove how good the resulting solution will be. For example, Sanjeev Arora designed an approximation algorithm for the Euclidean TSP problem that guarantees to find a solution that costs at most, say, 1% above the optimum. Designing approximation algorithms is an active area of research. Unfortunately, there are also non-approximability results of the form: if you can find an approximation algorithm for problem X that guarantees to get within a factor of 2 of the optimum, then P = NP. Thus, designing approximation algorithms for some NP-complete problems is not possible.
If we are trying to solve a special class of TSP problems, e.g., where the points lie on the boundary of a circle or the vertices of an M-by-N lattice, then we can design efficient (and trivial) algorithms to solve the problem.
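As a concrete illustration of giving up the optimality guarantee, here is the classic nearest-neighbor heuristic for Euclidean TSP (a sketch on made-up random points; it is not the Arora scheme mentioned above and carries no approximation guarantee, yet it runs in polynomial time and often produces a respectable tour).

```python
import math
import random

random.seed(0)
points = [(random.random(), random.random()) for _ in range(100)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearest_neighbor_tour(pts):
    # Greedy: from the current city, always visit the closest unvisited one.
    unvisited = set(range(1, len(pts)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda i: dist(pts[last], pts[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

tour = nearest_neighbor_tour(points)
length = sum(dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
             for i in range(len(tour)))
print(f"heuristic tour length: {length:.3f}")
```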
Exploiting intractability. Having intractable problems is occasionally a good thing. In Section XYZ, we will exploit intractable problems to design cryptographic systems.
Between P and NP-complete. Most natural problems in NP are now known to be in P or NP-complete. If P != NP, then there are provably some NP problems that are neither in P nor NP-complete. Like "dark matter we have not developed means of observing." A few notable unclassified problems in the netherworld: factoring, and graph isomorphism.
Factoring. The best known algorithm, the number field sieve, runs in time 2^(O(n^(1/3) polylog(n))). Believed by experts not to be in P.
Precedence constrained 3-processor scheduling. Given a set of unit length tasks, and a precedence order, find the shortest schedule on 3 parallel machines.
Turnpike problem. Given N(N-1)/2 positive numbers (not necessarily distinct), does there exist a set of N points on the line such that these numbers are the pairwise distances of the N points. Intuition: points are exits on I-95. Problem first arose in 1930s in the context of x-ray crystallography. Also known as the partial digest problem in molecular biology.
Boolean formula dualization. Given a monotone CNF formula and a monotone DNF formula, are they equivalent? (a + b)(c + d) = ac + ad + bc + bd. Naively applying De Morgan's law leads to exponential algorithm because of redundancy. Best algorithm O(n^(log n / log log n)).
Stochastic games. White, Black and Nature alternate moving a token on the edges of a directed graph, starting at the start state s. White's goal is to move the token to a goal state t. Black's goal is to prevent the token from ever reaching t. Nature moves the token at random. Given a digraph, a start state s, and a goal state t, does White have a strategy which will make the token reach t with probability ≥ 1/2? The problem is in NP ∩ co-NP, but not known to be in P. Believed to be in P; we just haven't found a polynomial-time algorithm.
Other complexity classes.
The complexity classes P, NP, and NP-complete are the three most famous complexity classes. Scott Aaronson's website The Complexity Zoo contains a comprehensive list of other complexity classes that are useful in classifying problems according to their computational resources (time, space, parallelizability, use of randomness, quantum computing). We describe a few of the most important ones below.
PSPACE. The complexity class PSPACE = problems solvable by a Turing machine using polynomial space. PSPACE-complete = in PSPACE and every other problem in PSPACE can be reduced to it in polynomial time.
Here is a complexity version of the halting problem. Given a Turing machine that is limited to n tape cells, does it halt in at most k steps? The problem is PSPACE-complete, where n is encoded in unary. This means that unless P = PSPACE, we are unlikely to be able to tell whether a given program, running on a computer with n units of memory, will terminate before k steps substantially faster than the trivial method of running it for k steps and seeing what happens.
Bodlaender: given a graph with vertices 1, ..., N, two players alternate in labeling the vertices red, green, or blue. The first player to label a vertex the same color as one of its neighbors loses. Determining whether there is a winning strategy for the first player is PSPACE-complete.
Versions of many conventional games are provably intractable; this partially explains their appeal. Also natural generalizations of Othello, Hex, Geography, Shanghai, Rush Hour, go-moku, Instant Insanity, and Sokoban are PSPACE-complete.
Eppstein's list of hard games.
Is a given string a member of the language generated by a given context-sensitive grammar?
Do two regular expressions describe different languages? PSPACE-complete even over the binary alphabet and if one of the regular expressions is .*.
Another example that can be made rigorous is the problem of moving a complicated object (e.g., furniture) with attachments that can move and rotate through an irregularly shaped corridor.
Another example arises in parallel computing when the challenge is to determine whether a deadlock state is possible within a system of communicating processors.
Note PSPACE = NPSPACE (Savitch's theorem).
EXPTIME. The complexity class EXPTIME = all decision problems solvable in exponential time on a deterministic Turing machine. Note P ⊆ NP ⊆ PSPACE ⊆ EXPTIME, and, by the time hierarchy theorem, at least one inclusion is strict, but it is unknown which one (or more). It is conjectured that all inclusions are strict.
Roadblock from Harel p. 85.
Natural generalizations of chess, checkers, Go (with the Japanese-style ko termination rule), and shogi are EXPTIME-complete. Given a board position, can the first player force a win? Here N is the size of the board, and the running time is exponential in N. One reason that these problems are harder from a theoretical standpoint than Othello (and other PSPACE-complete games) is that they can take an exponential number of moves. Checkers (aka English draughts on an N-by-N board): a player can have an exponential number of moves at a given turn because of jump sequences. Note: depending on the termination rules, checkers can be either PSPACE-complete or EXPTIME-complete. For EXPTIME-completeness, we assume the "forced capture rule," where a player must make a jump (or sequence of jumps) if one is available.
Here is a complexity version of the halting problem. Given a Turing machine, does it halt in at most k steps? Alternatively, given a fixed Java program and a fixed input, does it terminate in at most k steps? The problem is EXPTIME-complete. Here the running time is exponential in the length of the binary representation of k. In fact, no Turing machine can be guaranteed to solve it in, say, O(k / log k) steps. Thus, brute-force simulation is essentially best possible: provably, the problem cannot be solved substantially faster than the trivial method of running the Turing machine for the first k steps and seeing what happens.
An EXPTIME-complete problem cannot be solved in polynomial time on a deterministic Turing machine; unlike for NP-complete problems, this fact does not depend on the P ≠ NP conjecture.
EXPSPACE. EXPSPACE-complete: given two "extended" regular expressions, do they represent different languages? By extended, we allow a squaring operation (two copies of an expression). Stockmeyer and Meyer (1973). Or, more simply, set intersection (Hunt, 1973). Also: the word problem for commutative semigroups and reachability in vector addition systems (Cardoza, Lipton, Meyer, 1976).
Reachability in vector addition systems is EXPSPACE-hard: given a nonnegative vector s and a set of integer vectors v1, v2, ..., vn, a vector x is reachable from s if it is either (i) the vector s itself or (ii) a nonnegative vector of the form y + vi, where y is reachable. The reachability problem is to determine whether a given vector x is reachable.
DOUBLE-EXPTIME. The class DOUBLE-EXPTIME is the set of all decision problems solvable in doubly exponential time. A remarkable example is determining whether a formula in first-order Presburger arithmetic is true. Presburger arithmetic consists of statements involving integers with + as the only operation (no multiplication or division). It can model statements like the following: if x and y are integers such that x ≤ y + 2, then y + 3 > x. In 1929 Presburger proved that his system is consistent (you can't prove a contradiction like 1 > 2) and complete (every statement can be proven true or false). In 1974, Fischer and Rabin proved that any algorithm that decides the truth of a Presburger formula requires at least 2^(2^(cN)) time for some constant c, where N is the length of the formula.
Non-elementary. More than 2^2^2^...^2^N for any finite tower. Given two regular expressions that allow squaring and complementation, do they describe different languages?
Other types of computational problems.
We focus on search problems since this is a very rich and important class of problems for scientists and engineers.
Search problems. This is the version we have considered in detail. Technically, FP = polynomial-time function problems, and FNP = polynomial-time function problems on a nondeterministic Turing machine. FP problems can have any output that can be computed in polynomial time (e.g., multiplying two numbers or finding the solution to Ax = b).
Decision problems. Traditionally, complexity theory is defined in terms of yes/no problems, e.g., does there exist a solution to Ax ≤ b? The definition of reduction is cleaner (no need to deal with output). The classes P and NP are traditionally defined in terms of decision problems. Typically the search problem reduces to the decision problem (and this is known to be true for all NP-complete problems). Such search problems are referred to as self-reducible. The P = NP question is equivalent to the FP = FNP question.
Total functions. Occasionally, a decision problem is easy, while the corresponding search problem is (believed to be) hard. For example, there may be a theorem asserting that a solution is guaranteed to exist, but the theorem does not provide any hint as to how to find one efficiently.
Subset sum example. Given N numbers, find two (disjoint) subsets of these N numbers that sum to exactly the same value. If N = 77 and all the numbers are at most twenty-one decimal digits long, then by the pigeonhole principle, at least two subsets must sum to the same value. This is because there are 2^77 subsets but at most 1 + 77 × 10^21 < 2^77 possible sums. Another example: the decision problem COMPOSITE is easy, while the search problem FACTOR is believed to be hard.
John Nash proved that Nash equilibria always exist in a finite normal-form game of two or more players with specified utilities. The proof was nonconstructive, so it was unclear how to find such an equilibrium. The problem was later proved to be PPAD-complete, the analog of NP-complete for problems known to have solutions.
General equilibrium theory is the foundation of microeconomics. Given an economy with k commodities, each of N agents has an initial endowment of the commodities. Each agent also has a utility function for each commodity. The Arrow-Debreu theorem asserts that under suitable technical conditions (e.g., utility functions are continuous, monotonic, and strictly concave) there exists a set of market prices such that each agent sells all their goods and buys the optimal bundle using this money (i.e., supply equals demand for every commodity). But how does the market compute it? The proof relies on a deep theorem from topology (Kakutani's fixed point theorem), and no efficient algorithm is currently known. Economists assume that the market finds the equilibrium prices; Adam Smith used the metaphor of the invisible hand to describe this social mechanism.
Generalization of the 15-slider puzzle. Testing whether a solution exists is in P, but finding a shortest solution is intractable. [Ratner-Warmuth, 1990]
Optimization problems. Sometimes we have optimization problems, e.g., TSP. Given an NP problem and a cost function on solutions, the goal for a given instance is to find the best solution for it (e.g., find the shortest TSP tour, the minimum-energy configuration, etc.). Such a problem is sometimes hard to formulate as a search problem (find the shortest TSP tour), since it is not clear how to efficiently check that you have an optimal tour. Instead, we rephrase it as: given a length L, find a tour of length at most L. Then binary search for the optimal L, as in the sketch below.
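Here is a minimal Java sketch of that binary search. It assumes a hypothetical search oracle tourOfLengthAtMost (not part of the course code) that returns some tour of length at most L, or null if none exists, and that tour lengths are integers in [0, maxLen].

```java
import java.util.function.IntFunction;

public class ShortestTour {
    // Binary search for the optimal tour length using ~log2(maxLen) oracle calls.
    // tourOfLengthAtMost is a hypothetical oracle: it returns a tour of length
    // at most L, or null if no such tour exists.
    public static int[] shortestTour(int maxLen, IntFunction<int[]> tourOfLengthAtMost) {
        int[] best = tourOfLengthAtMost.apply(maxLen);
        if (best == null) return null;                    // no tour at all
        int lo = 0, hi = maxLen;                          // optimum lies in [lo, hi]
        while (lo < hi) {
            int mid = lo + (hi - lo) / 2;
            int[] tour = tourOfLengthAtMost.apply(mid);
            if (tour != null) { best = tour; hi = mid; }  // optimum <= mid
            else              { lo = mid + 1; }           // optimum >  mid
        }
        return best;                                      // a tour of optimal length lo
    }
}
```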
Counting problems. Given an NP problem, find the number of solutions for it. For example, given a CNF formula, how many satisfying assignments does it have? Includes many problems in statistical physics and combinatorics. Formally, the class of problems is known as #P.
Strategic problems. Given a game, find an optimal strategy (or best move) for a player. Includes many problems in economics and board games (e.g., chess, go).
Output polynomial time.
Some problems involve more output than a single bit of information. For example, outputting a solution to the Towers of Hanoi problem requires at least 2^N steps. This requirement is not because the solution is inherently hard to compute, but rather because there are 2^N symbols of output, and it takes one unit of time to write each output symbol. Perhaps a more natural way to measure efficiency is as a function of both the input size and the output size. A classic electrical engineering problem with DFAs is to build, from an RE, a DFA that uses the minimum number of states. We would like an algorithm that is polynomial in the size of the input RE (number of symbols) and also in the size of the output DFA (number of states). Unless P = NP, designing such an algorithm is impossible. In fact, it's not even possible to design a polynomial algorithm that gets within a constant factor (or even a polynomial factor) of the minimum number of states! Without the theory of NP-completeness, researchers would waste time following unpromising research directions.
Other lower bounds.
Information theoretic. In Section X.Y we saw that insertion sort uses at most N^2 compares to sort N items, and mergesort uses at most N log N compares. A natural question to ask is whether we can do better, perhaps with an algorithm that uses at most 5N compares or even 1/2 N log N compares. To make the question precise, we must explicitly state our computational model (the decision tree). Here, we assume that we only access the data through the less() function. A remarkable theorem due to X says that no (comparison-based) sorting algorithm can guarantee to sort every input of N distinct elements in fewer than ~ N log N compares. To see why, observe that each compare (call to less()) provides one bit of information. In order to identify the correct permutation, you need log N! bits, and log N! ~ N log N. This tells us that mergesort is (asymptotically) the best possible sorting algorithm. No sorting algorithm in existence (or even one not yet imagined) will use substantially fewer compares.
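As a worked check of the last step, here is one standard way to pin down the growth of log2 N! without invoking the full Stirling formula:

```latex
\log_2 N! \;=\; \sum_{i=1}^{N}\log_2 i \;\le\; N\log_2 N,
\qquad
\log_2 N! \;\ge\; \sum_{i=N/2}^{N}\log_2 i \;\ge\; \frac{N}{2}\,\log_2\frac{N}{2},
```

so log2 N! = Θ(N log N); Stirling's approximation sharpens this to log2 N! ~ N log2 N, matching mergesort's upper bound.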
3-SUM hard. Given a set of N integers, do any three of them sum to 0? A quadratic algorithm exists (see exercise xyz and the sketch below), but no subquadratic algorithm is known. 3-SUM linear-time reduces to many problems in computational geometry: deciding whether a set of points in the plane contains 3 collinear points, deciding whether a set of line segments in the plane can be split into two subsets by a line, determining whether a set of triangles covers the unit square, deciding whether a polygon P can be translated to lie completely inside another polygon Q, and robot motion planning.
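For reference, a minimal sketch of the quadratic algorithm (sort once, then do a two-pointer scan from each anchor element):

```java
import java.util.Arrays;

public class ThreeSum {
    // Returns true if some triple a[i] + a[j] + a[k] (i < j < k) sums to 0.
    // Sorting costs O(N log N); the two-pointer scan costs O(N) per anchor,
    // for O(N^2) total. No subquadratic algorithm is known in general.
    public static boolean hasTriple(long[] a) {
        long[] s = a.clone();
        Arrays.sort(s);
        for (int i = 0; i < s.length - 2; i++) {
            int lo = i + 1, hi = s.length - 1;
            while (lo < hi) {
                long sum = s[i] + s[lo] + s[hi];
                if      (sum == 0) return true;  // found a zero-sum triple
                else if (sum < 0)  lo++;         // need a larger middle element
                else               hi--;         // need a smaller last element
            }
        }
        return false;
    }
}
```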
Brute-force TSP takes ~ N! steps. Using dynamic programming, this can be brought down to ~ 2^N. The best known lower bound is only linear in N. The essence of computational complexity is trying to find matching upper and lower bounds.
Circuit complexity.
There are other ways to define and measure computational complexity. A Boolean circuit with n inputs can compute any Boolean function of n variables. We associate with the circuit the set of binary strings of length n for which it outputs 1; these are the strings of that length in the language. We need one circuit for each input size n. Shannon (1949) proposed the size of the circuit as a measure of complexity. It is known that a language has uniformly polynomial circuits if and only if the language is in P.
Physical and analog computation.
The P = NP question is a mathematical question regarding the capabilities of Turing machines and classical digital computers. We might also wonder whether the same is true for analog computers. By analog, we mean any "deterministic physical device that uses a fixed number of physical variables to represent each problem variable." Internal state represented by continuous variables instead of discrete. E.g., soap bubbles, protein folding, quantum computing, gears, time travel, black holes, etc.
Vergis, Steiglitz, and Dickinson proposed an analog form of the Strong Church-Turing thesis:
Any finite analog computer can be simulated efficiently by a digital computer, in the sense that the time required by the digital computer to simulate the analog computer is bounded by a polynomial function of the resources used by the analog computer.
The resources of the analog computer could be time, volume, mass, energy, torque, or angular momentum. Reference: The Physics of Analog Computation
Any reasonable model of computation (e.g., not involving exponential parallelism) can be simulated in polynomial time by a Turing machine (supplemented by a hardware random number generator).
Reference: Scott Aaronson. This perspective can yield new insights into physics. One day "the presumed intractability of NP-complete problems might be taken as a useful constraint in the search for new physical theories," just like the second law of thermodynamics: it could still be falsified by experiment, but don't waste time looking...
Soap bubbles. Folklore says that soap bubbles can solve the Steiner tree problem. In reality, a soap film only finds a local minimum, and may take a while to do so.
Quantum computing. One speculative model of computation - quantum computers - might be capable of solving some problems in polynomial time that a deterministic Turing machine (as far as we know) cannot. Peter Shor discovered an ~ N^3 algorithm for factoring N-digit integers, while the best known algorithm on a classical computer takes time exponential in N^(1/3). The same idea could lead to a comparable speedup in simulating quantum mechanical systems. This explains the recent excitement in quantum computation, as it could result in a paradigm shift for computing. However, quantum computers do not yet violate the extended Church-Turing thesis since we don't yet know how to build them. (They are difficult to harness because much of the quantum information seems to be easily destroyed by its interactions with the outside world, i.e., decoherence.) Moreover, it is still possible that someone might discover a polynomial-time algorithm for factoring on a classical computer, although most experts suspect that this is not possible. Grover's algorithm: search in ~ sqrt(N) steps instead of N.
Richard Feynman argued in 1982 that classical computers cannot simulate quantum mechanical systems without slowing down exponentially (the crux of the argument is that Turing machines have locality of reference, whereas quantum mechanics can exploit "spooky action at a distance"). A quantum computer might be able to get around this problem. Feynman's quote with respect to building a computer to simulate physics:
"The rule of simulation that I would like to have is that the number of computer elements required to simulate a large physical system is only to be proportional to the space-time volume of the physical system. I don't want to have an explosion."
Rephrase in terms of modern complexity theory by replacing "proportional to" by "bounded by a polynomial function of".
Deutsch and Jozsa gave an algorithm that is provably exponentially faster on a quantum computer than on a deterministic Turing machine. (Though the exponential gap disappears if the Turing machine has access to a hardware random number generator and is allowed to be wrong with negligible probability. Quantum computers can generate true randomness.)
PRIMES and COMPOSITE.
It is easy to convince someone that a number is composite by producing a factor. Then, the person just has to check (by long division) that you did not lie to them. Marin Mersenne conjectured that numbers of the form 2^p - 1 are prime for p = 2, 3, 5, 7, 13, 17, 19, 31, 67, 127, and 257. His conjecture for p = 67 was disproved by F. N. Cole over two hundred and fifty years later, in 1903. According to E. T. Bell's book Mathematics: Queen and Servant of Science:
At the October meeting of the AMS, Cole announced a talk "On the Factorisation of Large Numbers." He walked up to the blackboard without saying a word, calculated by hand the value of 2^67, and carefully subtracted 1. Then he multiplied two numbers (which were 193707721 and 761838257287). Both results written on the blackboard were equal. Cole silently walked back to his seat, and this is said to be the first and only talk held during an AMS meeting where the audience applauded. There were no questions. According to Cole, finding the factorization took him about three years of Sundays.
For the record, 2^67 - 1 = 147573952589676412927 = 193707721 × 761838257287.
Q + A
Q. Are polynomial algorithms always useful?
A. No, algorithms that take N^100 or 10^100 N^2 steps are as useless in practice as exponential ones. But the constants that arise in practice are usually sufficiently small that polynomial-time algorithms scale to huge problems, so polynomiality often serves as a surrogate for "useful in practice."
Q. Why is the class of all search problems named NP?
A. The original definition of NP was in terms of nondeterministic Turing machines: NP is the set of all decision problems that can be solved in polynomial time on a nondeterministic Turing machine. Roughly speaking, the difference between a deterministic and a nondeterministic Turing machine is that the former operates like a conventional computer, performing each instruction in sequence, forming a computational path; a nondeterministic Turing machine can "branch off," where each branch can execute a different statement in parallel, forming a computational tree (if any path in the tree leads to YES, then we accept; if all paths lead to NO, we reject). This is where the N in NP comes from. It turns out the two definitions are equivalent, but the certificate one is now more widely used. (Also, Karp's 1972 paper uses the polynomial-time verifiability definition.)
Q. What is the complexity class NP-hard?
A. There are several competing definitions. We define a problem (decision, search, or optimization) to be NP-hard if solving it in polynomial time would imply P = NP. The definition implicitly uses Turing reductions (extended to search problems).
Q. What's so hard about factoring an integer N in polynomial time - can't I just divide all potential factors less than N (or √N) into N and see if any have a remainder of zero?
A. The algorithm is correct, but remember it takes only lg N bits to represent the integer N. Thus, for an algorithm to be polynomial in the input size, it must be polynomial in lg N, and not N.
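To make that concrete, here is trial division in Java; the loop body is cheap, but it executes ~√N times, which is 2^(lg(N)/2), i.e., exponential in the bit length of the input:

```java
public class TrialDivision {
    // Trial division: correct, but the loop runs ~sqrt(N) times, which is
    // exponential in the number of bits lg N needed to write down the input.
    public static boolean hasNontrivialFactor(long n) {
        for (long d = 2; d * d <= n; d++)
            if (n % d == 0) return true;   // d divides n
        return false;                      // n is prime (for n >= 2)
    }
}
```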
Q. How is it possible that checking whether an integer is composite is solvable in polynomial time, yet finding its factors is not known (or believed) to be?
A. There are ways to prove a number is composite without getting your hands on any of its factors. A famous theorem from number theory (Fermat's little theorem) implies that if you have two integers a and p such that (i) a is not a multiple of p and (ii) a^(p-1) != 1 (mod p), then p is not prime.
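A minimal sketch of that compositeness test in Java. Note that it certifies compositeness without producing a factor (and the converse direction can fail, e.g., for Carmichael numbers):

```java
import java.math.BigInteger;

public class FermatWitness {
    // If a is not a multiple of p and a^(p-1) != 1 (mod p), then p is
    // certainly composite, even though no factor of p is produced.
    public static boolean provablyComposite(BigInteger p, BigInteger a) {
        return !a.modPow(p.subtract(BigInteger.ONE), p).equals(BigInteger.ONE);
    }
}
```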
Q. Is there a decision problem that is polynomial-time solvable on a quantum computer, but provably not in P?
A. This is an open research problem. FACTOR is a candidate, but there is no proof that FACTOR is not in P, although it is widely believed to be outside P.
Q. Does NP = EXPTIME?
A. The experts believe no, but have been unable to prove it.
Q. Suppose someone proves P = NP. What would be the practical consequences?
A. It depends on how the question is resolved. Obviously, it would be a remarkable theoretical breakthrough. In practice, it might have dramatic significance if the proof of P = NP established a fast algorithm for an important NP-complete problem. But if the proof resulted in a 2^100 N^117 algorithm for the TSP (and the constant and exponent could not be reduced), it would have little practical impact. It could also be that someone proves P = NP by indirect means, thereby yielding no algorithm at all!
Q. Suppose someone proves P != NP. What would be the practical consequences?
A. It would be a remarkable theoretical breakthrough and solidify the foundation of much of computational complexity.
Q. Suppose P = NP. Does that mean deterministic TM are the same as non-deterministic TM?
A. Not quite. For example, even if P = NP, a non-deterministic TM may be able to solve a problem in time proportional to N^2, where the best deterministic one would take N^3. If P = NP, it just means that the two types of machines solve the same set of decision problems in polynomial time, but it says nothing about the degree of the polynomial.
Q. Where can I learn more about NP-completeness?
A. The authoritative reference remains Garey and Johnson Computers and Intractability: A Guide to the Theory of NP-completeness. Many of the most important subsequent discoveries are documented in David Johnson's NP-completeness column.
Exercises
Suppose that X is NP-complete, X poly-time reduces to Y, and Y poly-time reduces to X. Is Y necessarily NP-complete?
Answer: No, since Y may not be in NP. For example, if X = CIRCUIT-SAT and Y = CO-CIRCUIT-SAT, then X and Y satisfy the conditions, but it is unknown whether Y is in NP. Note that the answer depends on our definition of poly-time reduction (here we take reductions to be Turing reductions, not Karp reductions).
Explain why the optimization version of the vertex cover problem is not necessarily a search problem.
Answer: There does not appear to be an efficient way to certify that a purported solution is the best possible (even though we could use binary search on the search version of the problem to find the best solution).
Web Exercises
Subset sum. Given N positive integers and a target value V, determine if there is a subset whose sum is exactly V. Divide the integers into 4 equal groups. Enumerate and store all of the subset sums in each group by brute force. Let A, B, C, and D denote the subset sums of the four groups. The goal is to find integers a, b, c, and d such that a + b + c + d = V, where a is in A, b is in B, c is in C, and d is in D. Now, use a heap to enumerate the sums a + b in increasing order, where a is in A and b is in B. Simultaneously, use another heap to enumerate the sums c + d in decreasing order, where c is in C and d is in D. Advance one enumeration or the other depending on how a + b + c + d compares with V (see the sketch below).
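A minimal Java sketch of this scheme, assuming the inputs fit in a long. The heaps stream the 2^(N/2) pairwise sums in order without storing them, so the space is dominated by the 2^(N/4)-size group tables:

```java
import java.util.Arrays;
import java.util.PriorityQueue;

public class SubsetSumMIM {
    // All 2^len subset sums of a[from .. from+len).
    static long[] sums(long[] a, int from, int len) {
        long[] s = new long[1 << len];
        for (int mask = 0; mask < (1 << len); mask++)
            for (int b = 0; b < len; b++)
                if ((mask & (1 << b)) != 0) s[mask] += a[from + b];
        return s;
    }

    // Does some subset of a[] sum to exactly V?  Time ~2^(N/2), space ~2^(N/4).
    public static boolean subsetSum(long[] a, long V) {
        int n = a.length, q = n / 4;
        long[] A = sums(a, 0, q),     B = sums(a, q, q),
               C = sums(a, 2 * q, q), D = sums(a, 3 * q, n - 3 * q);
        Arrays.sort(B); Arrays.sort(D);
        // Heap entries are {sum, index into A or C, index into B or D}.
        PriorityQueue<long[]> asc  = new PriorityQueue<>((x, y) -> Long.compare(x[0], y[0]));
        PriorityQueue<long[]> desc = new PriorityQueue<>((x, y) -> Long.compare(y[0], x[0]));
        for (int i = 0; i < A.length; i++) asc.add(new long[]{A[i] + B[0], i, 0});
        for (int i = 0; i < C.length; i++) desc.add(new long[]{C[i] + D[D.length - 1], i, D.length - 1});
        while (!asc.isEmpty() && !desc.isEmpty()) {
            long ab = asc.peek()[0], cd = desc.peek()[0];
            if (ab + cd == V) return true;
            if (ab + cd < V) {                         // need a larger a + b
                long[] t = asc.poll();
                int i = (int) t[1], j = (int) t[2] + 1;
                if (j < B.length) asc.add(new long[]{A[i] + B[j], i, j});
            } else {                                   // need a smaller c + d
                long[] t = desc.poll();
                int i = (int) t[1], j = (int) t[2] - 1;
                if (j >= 0) desc.add(new long[]{C[i] + D[j], i, j});
            }
        }
        return false;
    }
}
```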
Sum of square roots. What is the minimum nonzero difference between two sums of square roots of integers? Given n and k, find the minimum positive value of
r(n, k) = (√a1 + √a2 + ... + √ak) - (√b1 + √b2 + ... + √bk),
where the ai and bi are integers between 0 and n. For example, r(20, 2) = √10 + √11 - √5 - √18 and r(20, 3) = √5 + √6 + √18 - √4 - √12 - √12. Hint: enumerate all 2^(n/2) sums of square roots of the first n/2 integers and let that set be A; enumerate all 2^(n/2) sums of square roots of the last n/2 integers and let that set be B. Now enumerate the sums a + b in sorted order, where a is in A and b is in B. Look for sums whose difference is very tiny.
Dividing diamonds. Given N (around 36) class D diamonds, divide them into two groups so that they are as close in total weight to each other as possible. Assume the weights are real numbers (measured in carats).
Hamilton path in DAG. Given a directed acyclic graph G, give an O(n+m)-time algorithm to test whether or not it has a Hamiltonian path. Hint: topological sort.
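A minimal sketch, assuming the graph is given as adjacency lists. A DAG has a Hamiltonian path if and only if its topological order has an edge between every pair of consecutive vertices (a Hamiltonian path forces the topological order to be unique):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class HamiltonianDag {
    // adj[v] = list of w with edge v -> w; assumes the input is a DAG.
    public static boolean hasHamiltonianPath(List<Integer>[] adj) {
        int n = adj.length;
        int[] indeg = new int[n];
        for (int v = 0; v < n; v++)
            for (int w : adj[v]) indeg[w]++;
        Deque<Integer> zero = new ArrayDeque<>();
        for (int v = 0; v < n; v++) if (indeg[v] == 0) zero.add(v);
        int[] order = new int[n];
        for (int k = 0; k < n; k++) {                 // Kahn's topological sort
            if (zero.isEmpty()) return false;         // input was not a DAG
            order[k] = zero.poll();
            for (int w : adj[order[k]])
                if (--indeg[w] == 0) zero.add(w);
        }
        for (int k = 0; k + 1 < n; k++)               // consecutive vertices adjacent?
            if (!adj[order[k]].contains(order[k + 1])) return false;
        return true;
    }
}
```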
Which of the following can we infer from the fact that the traveling salesperson problem is NP-complete, if we assume that P is not equal to NP?
There does not exist an algorithm that solves arbitrary instances of the TSP problem.
There does not exist an algorithm that efficiently solves arbitrary instances of the TSP problem.
There exists an algorithm that efficiently solves arbitrary instances of the TSP problem, but no one has been able to find it.
The TSP is not in P.
All algorithms that are guaranteed to solve the TSP run in polynomial time for some family of input points.
All algorithms that are guaranteed to solve the TSP run in exponential time for all families of input points.
Answer: (b) and (d) only.
Which of the following can we infer from the fact that PRIMALITY is in NP but not known to be NP-complete, if we assume that P is not equal to NP?
There exists an algorithm that solves arbitrary instances of PRIMALITY.
There exists an algorithm that efficiently solves arbitrary instances of PRIMALITY.
If we found an efficient algorithm for PRIMALITY, we could immediately use it as a black box to solve TSP.
Answer: We can infer only (a), since all problems in NP are decidable. If P != NP, then there are problems in NP that are neither in P nor NP-complete. PRIMALITY could have been one of them (although the AKS algorithm has since placed PRIMALITY in P). Part (c) cannot be inferred since we don't know whether PRIMALITY is NP-complete.
Which of the following are NP-complete?
The brute force TSP algorithm.
The quicksort algorithm for sorting.
The Halting problem.
Hilbert's 10th problem.
Answer: None. NP-completeness deals with *problems*, not specific algorithms for problems. The Halting problem and Hilbert's 10th problem are undecidable, so they are not in NP (and all NP-complete problems are in NP).
Let X and Y be two decision problems. Suppose we know that X reduces to Y. Which of the following can we infer?
If Y is NP-complete then so is X.
If X is NP-complete then so is Y.
If Y is NP-complete and X is in NP then X is NP-complete.
If X is NP-complete and Y is in NP then Y is NP-complete.
X and Y can't both be NP-complete.
If X is in P, then Y is in P.
If Y is in P, then X is in P.
Answer: (d) and (g) only. X reduces to Y means that if you had a black box to solve Y efficiently, you could use it to solve X efficiently. X is no harder than Y.
Show that CIRCUIT-SAT reduces to CIRCUIT-DIFF. Hint: create a circuit with N inputs that always outputs 0.
Show that CIRCUIT-DIFF reduces to CIRCUIT-SAT.
Show that DETERMINANT is in NP: given an N-by-N integer matrix A, is det(A) = 0?
Solution: certificate is a nonzero vector x such that Ax = 0.
Show that FULL-RANK is in NP: given an N-by-N integer matrix A, is det(A) ≠ 0?
Solution: certificate is an N-by-N inverse matrix B such that AB = I.
Search problems vs. decision problems. We can formulate a search problem using a corresponding decision problem. For example, the problem of finding the prime factorization of an integer N can be formulated using the decision problem: given two integers N and L, does N have a nontrivial factor strictly less than L? The search problem is solvable in polynomial time if and only if the corresponding decision problem is. To see why, note that we can efficiently find the smallest factor p of N by using different values of L along with binary search, as in the sketch below. Once we have the factor p, we can repeat the process on N/p.
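A minimal sketch of that reduction, with the decision problem supplied as a hypothetical oracle hasFactorBelow(N, L), answering: does N have a nontrivial factor strictly less than L?

```java
import java.math.BigInteger;
import java.util.function.BiPredicate;

public class FactorViaDecision {
    // Finds the smallest nontrivial factor of N (or returns N itself if N is
    // prime) using ~lg N calls to the hypothetical decision oracle.
    public static BigInteger smallestFactor(BigInteger N,
            BiPredicate<BigInteger, BigInteger> hasFactorBelow) {
        if (!hasFactorBelow.test(N, N)) return N;       // no factor < N: N is prime
        BigInteger lo = BigInteger.TWO, hi = N;         // smallest factor lies in [lo, hi]
        while (lo.compareTo(hi) < 0) {
            BigInteger mid = lo.add(hi).shiftRight(1);
            if (hasFactorBelow.test(N, mid.add(BigInteger.ONE)))
                hi = mid;                               // some factor <= mid
            else
                lo = mid.add(BigInteger.ONE);           // all factors > mid
        }
        return lo;                                      // recurse on N / lo to factor fully
    }
}
```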
Usually we can show that the search problem and the decision problem are equivalent up to polynomial factors in running time. Papadimitriou (Example 10.8) gives an interesting counterexample to the rule. Given N positive integers such that their sum is less than 2^N - 1, find two subsets whose sum is equal. For example, the 10 numbers below sum to 1014 < 1023.
23 47 59 88 91 100 111 133 157 205
Since there are more subsets of the N integers (2^N) than possible sums (each between 0 and 1014), there must be two different subsets with the same sum. But nobody knows a polynomial-time algorithm for finding such a pair of subsets. On the other hand, the natural decision problem is trivially solvable in constant time: are there two subsets of the numbers that sum to the same value?
Pratt's primality certificate. Show that PRIMES is in NP. Use Lehmer's theorem (the converse of Fermat's little theorem), which asserts that an integer p > 1 is prime if and only if there exists an integer x such that x^(p-1) = 1 (mod p) and x^((p-1)/d) ≠ 1 (mod p) for all prime divisors d of p-1. For example, if p = 7919, then the prime factorization of p-1 = 7918 is 2 × 37 × 107. Now x = 7 satisfies 7^7918 = 1 (mod 7919), but 7^(7918/2) ≠ 1 (mod 7919), 7^(7918/37) ≠ 1 (mod 7919), and 7^(7918/107) ≠ 1 (mod 7919). This proves that 7919 is prime (assuming that you recursively certify that 2, 37, and 107 are prime).
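The certificate can be checked with a few modular exponentiations; a minimal sketch for the p = 7919 example:

```java
import java.math.BigInteger;

public class LehmerCheck {
    public static void main(String[] args) {
        BigInteger p = BigInteger.valueOf(7919), x = BigInteger.valueOf(7);
        BigInteger pm1 = p.subtract(BigInteger.ONE);   // 7918 = 2 * 37 * 107
        long[] primeDivisors = {2, 37, 107};
        System.out.println(x.modPow(pm1, p));          // prints 1, as required
        for (long d : primeDivisors)                   // each must print something != 1
            System.out.println(x.modPow(pm1.divide(BigInteger.valueOf(d)), p));
    }
}
```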
Pell's equation. Find all positive integer solutions to Pell's equation: x^2 - 92y^2 = 1. Solution: (1151, 120), (2649601, 276240), etc. There are infinitely many solutions, but each successive one is about 2300 times the previous one.
Pell's equation. In 1657, Pierre de Fermat challenged his colleagues with the following problem: given a positive integer c, find a positive integer y such that cy^2 + 1 is a perfect square. Fermat used c = 109. It turns out the smallest solution is (x, y) = (158,070,671,986,249, 15,140,424,455,100). Write a program Pell.java that reads in an integer c and finds the smallest solution to Pell's equation: x^2 - cy^2 = 1. Try c = 61. The smallest solution is (1,766,319,049, 226,153,980). For c = 313, the smallest solution is (32,188,120,829,134,849, 1,819,380,158,564,160). The problem is provably unsolvable in a polynomial number of steps (as a function of the number of bits in the input c) because the output may require exponentially many bits!
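One naive way to start Pell.java is brute force over y, checking whether cy^2 + 1 is a perfect square. This is only a sketch: it is fine for tiny answers such as c = 92 above, but the solutions quoted for c = 61, 109, and 313 are far beyond its reach, and the continued fraction expansion of √c is the right tool there:

```java
import java.math.BigInteger;

public class Pell {
    // Brute force: try y = 1, 2, 3, ... until c*y^2 + 1 is a perfect square.
    // Only practical when the smallest solution is small (e.g., c = 92).
    public static void main(String[] args) {
        BigInteger c = new BigInteger(args[0]);
        for (BigInteger y = BigInteger.ONE; ; y = y.add(BigInteger.ONE)) {
            BigInteger t = c.multiply(y.multiply(y)).add(BigInteger.ONE);
            BigInteger x = t.sqrt();               // floor(sqrt(t)), Java 9+
            if (x.multiply(x).equals(t)) {         // t is a perfect square
                System.out.println("(" + x + ", " + y + ")");
                return;
            }
        }
    }
}
```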
3-COLOR reduced to 4-COLOR. Show that 3-COLOR polynomial reduces to 4-COLOR. Hint: given an instance G of 3-COLOR, create an instance G' of 4-COLOR by adding a special vertex x to G and connecting it to all of the vertices in G.
3-SAT is self-reducible. Show that 3-SAT is self-reducible. That is, given an oracle that answers whether or not any 3-SAT formula is satisfiable, design an algorithm that can find a satisfying assignment to a 3-SAT formula (assuming it is satisfiable). Your algorithm should run in polynomial time plus a polynomial number of calls to the oracle.
3-COLOR is self-reducible. Show that 3-COLOR is self-reducible. That is, given an oracle that answers whether or not any graph G is 3-colorable, design an algorithm that can 3-color a graph (assuming it is 3-colorable). Your algorithm should run in polynomial time plus a polynomial number of calls to the oracle.
Source: The Quantum Pontiff blog, "Threading the needle of mathematical consistency" (June 22, 2012), https://dabacon.org/pontiff/2012/06/22/threading-the-needle-of-mathematical-consistency/. Only the page metadata survived extraction; it begins: "The latest round of the debate between Aram and Gil Kalai is now up over at Goedel's Lost Letter."
Source: Quanta Magazine, "What Makes Quantum Computing So Hard to Explain?" by Scott Aaronson (June 8, 2021), https://www.quantamagazine.org/why-is-quantum-computing-so-hard-to-explain-20210608/.
In reflective moments, though, I get it. The reality is that even if you removed all the bad incentives and the greed, quantum computing would still be hard to explain briefly and honestly without math. As the quantum computing pioneer Richard Feynman once said about the quantum electrodynamics work that won him the Nobel Prize, if it were possible to describe it in a few sentences, it wouldn’t have been worth a Nobel Prize.
Not that that’s stopped people from trying. Ever since Peter Shor discovered in 1994 that a quantum computer could break most of the encryption that protects transactions on the internet, excitement about the technology has been driven by more than just intellectual curiosity. Indeed, developments in the field typically get covered as business or technology stories rather than as science ones.
That would be fine if a business or technology reporter could truthfully tell readers, “Look, there’s all this deep quantum stuff under the hood, but all you need to understand is the bottom line: Physicists are on the verge of building faster computers that will revolutionize everything.”
The trouble is that quantum computers will not revolutionize everything.
Yes, they might someday solve a few specific problems in minutes that (we think) would take longer than the age of the universe on classical computers. But there are many other important problems for which most experts think quantum computers will help only modestly, if at all. Also, while Google and others recently made credible claims that they had achieved contrived quantum speedups, this was only for specific, esoteric benchmarks (ones that I helped develop). A quantum computer that’s big and reliable enough to outperform classical computers at practical applications like breaking cryptographic codes and simulating chemistry is likely still a long way off.
But how could a programmable computer be faster for only some problems? Do we know which ones? And what does a “big and reliable” quantum computer even mean in this context? To answer these questions we have to get into the deep stuff.
Let’s start with quantum mechanics. (What could be deeper?) The concept of superposition is infamously hard to render in everyday words. So, not surprisingly, many writers opt for an easy way out: They say that superposition means “both at once,” so that a quantum bit, or qubit, is just a bit that can be “both 0 and 1 at the same time,” while a classical bit can be only one or the other. They go on to say that a quantum computer would achieve its speed by using qubits to try all possible solutions in superposition — that is, at the same time, or in parallel.
This is what I’ve come to think of as the fundamental misstep of quantum computing popularization, the one that leads to all the rest. From here it’s just a short hop to quantum computers quickly solving something like the traveling salesperson problem by trying all possible answers at once — something almost all experts believe they won’t be able to do.
The thing is, for a computer to be useful, at some point you need to look at it and read an output. But if you look at an equal superposition of all possible answers, the rules of quantum mechanics say you’ll just see and read a random answer. And if that’s all you wanted, you could’ve picked one yourself.
What superposition really means is “complex linear combination.” Here, we mean “complex” not in the sense of “complicated” but in the sense of a real plus an imaginary number, while “linear combination” means we add together different multiples of states. So a qubit is a bit that has a complex number called an amplitude attached to the possibility that it’s 0, and a different amplitude attached to the possibility that it’s 1. These amplitudes are closely related to probabilities, in that the further some outcome’s amplitude is from zero, the larger the chance of seeing that outcome; more precisely, the probability equals the distance squared.
But amplitudes are not probabilities. They follow different rules. For example, if some contributions to an amplitude are positive and others are negative, then the contributions can interfere destructively and cancel each other out, so that the amplitude is zero and the corresponding outcome is never observed; likewise, they can interfere constructively and increase the likelihood of a given outcome. The goal in devising an algorithm for a quantum computer is to choreograph a pattern of constructive and destructive interference so that for each wrong answer the contributions to its amplitude cancel each other out, whereas for the right answer the contributions reinforce each other. If, and only if, you can arrange that, you’ll see the right answer with a large probability when you look. The tricky part is to do this without knowing the answer in advance, and faster than you could do it with a classical computer.
Twenty-seven years ago, Shor showed how to do all this for the problem of factoring integers, which breaks the widely used cryptographic codes underlying much of online commerce. We now know how to do it for some other problems, too, but only by exploiting the special mathematical structures in those problems. It’s not just a matter of trying all possible answers at once.
Compounding the difficulty is that, if you want to talk honestly about quantum computing, then you also need the conceptual vocabulary of theoretical computer science. I’m often asked how many times faster a quantum computer will be than today’s computers. A million times? A billion?
This question misses the point of quantum computers, which is to achieve better “scaling behavior,” or running time as a function of n, the number of bits of input data. This could mean taking a problem where the best classical algorithm needs a number of steps that grows exponentially with n, and solving it using a number of steps that grows only as n2. In such cases, for small n, solving the problem with a quantum computer will actually be slower and more expensive than solving it classically. It’s only as n grows that the quantum speedup first appears and then eventually comes to dominate.
But how can we know that there’s no classical shortcut — a conventional algorithm that would have similar scaling behavior to the quantum algorithm’s? Though typically ignored in popular accounts, this question is central to quantum algorithms research, where often the difficulty is not so much proving that a quantum computer can do something quickly, but convincingly arguing that a classical computer can’t. Alas, it turns out to be staggeringly hard to prove that problems are hard, as illustrated by the famous P versus NP problem (which asks, roughly, whether every problem with quickly checkable solutions can also be quickly solved). This is not just an academic issue, a matter of dotting i’s: Over the past few decades, conjectured quantum speedups have repeatedly gone away when classical algorithms were found with similar performance.
Note that, after explaining all this, I still haven’t said a word about the practical difficulty of building quantum computers. The problem, in a word, is decoherence, which means unwanted interaction between a quantum computer and its environment — nearby electric fields, warm objects, and other things that can record information about the qubits. This can result in premature “measurement” of the qubits, which collapses them down to classical bits that are either definitely 0 or definitely 1. The only known solution to this problem is quantum error correction: a scheme, proposed in the mid-1990s, that cleverly encodes each qubit of the quantum computation into the collective state of dozens or even thousands of physical qubits. But researchers are only now starting to make such error correction work in the real world, and actually putting it to use will take much longer. When you read about the latest experiment with 50 or 60 physical qubits, it’s important to understand that the qubits aren’t error-corrected. Until they are, we don’t expect to be able to scale beyond a few hundred qubits.
Once someone understands these concepts, I’d say they’re ready to start reading — or possibly even writing — an article on the latest claimed advance in quantum computing. They’ll know which questions to ask in the constant struggle to distinguish reality from hype. Understanding this stuff really is possible — after all, it isn’t rocket science; it’s just quantum computing! | ||||
Source: MathOverflow, "Which mathematical ideas have done most to change history?" (February 1, 2010), https://mathoverflow.net/questions/13682/which-mathematical-ideas-have-done-most-to-change-history. The asker writes: "I'm planning a course for the general public with the general theme of 'Mathematical ideas that have changed history' and I would welcome people's opinions on this topic." The answers below respond to that question.
The work of Oliver Heaviside and Laplace put the electrical theories in a firm footing.
Heaviside invented an operational calculus for solving differential equations arising out of electrical network analysis, which was justified rigorously later by Laplace Transforms(but which makes full sense only incorporating the theory of distributions).
This might not seem important enough historically. But, all power generation, motors, the light you have in your room, and indeed all uses of electricity were able to be set up properly thanks to the work of these people, and the midnight oil they burned. We wouldn't have computers or MO without electricity distribution everywhere, for instance.
Every time I see a question like this I am reminded of something V.I. Arnold wrote, which I take the liberty of quoting here:
All mathematics is divided into three parts: cryptography (paid for by CIA, KGB and the like), hydrodynamics (supported by manufacturers of atomic submarines) and celestial mechanics (financed by military and by other institutions dealing with missiles, such as NASA.).
Cryptography has generated number theory, algebraic geometry over finite fields, algebra (the creator of modern algebra, Viète, was the cryptographer of King Henry IV of France), combinatorics and computers.
Hydrodynamics procreated complex analysis, partial derivative equations, Lie groups and algebra theory, cohomology theory and scientific computing.
Celestial mechanics is the origin of dynamical systems, linear algebra, topology, variational calculus and symplectic geometry.
The existence of mysterious relations between all these different domains is the most striking and delightful feature of mathematics (having no rational explanation).
The idea that new knowledge can be obtained by careful deduction from previous truths has in my opinion had an enormous impact on european history and is certainly not a trivial one. Be it found in the work of Plato (think of the Meno, the Theaetetus or the famous warning sign in the Academy), Aristotle, Descartes (whose prime example of analysis in the philosophical sense was the derivation of the equation of the tangent to a curve), Spinoza (Ethica Ordine Geometrico Demonstrata), Kant (with his discussion of analytic and synthetic knowledge) or even arguably in modern guise, this idea has been tied to mathematics.
Consequently, if I were to teach such a class, I would first try to convey how crucial the ideas of Plato, Bacon, Descartes, Galileo, Newton, Kant (and so on...) have been in shaping the way we think about society, politics, moral, history, even religion. Then I would try to convince my audience that these ideas have been intrinsically linked with contemporary mathematical thoughts, and ultimately with the concept of proof and reasoning as understood in mathematics.
So perhaps my suggestion for the most influential mathematical idea in terms of what has influenced science/history or changed the way humans think would be the idea that mathematics is possible, and that playing this game of proving theorems is in fact a deeply worthy activity.
Structuralism in mathematics. It may have started in linguistics, but it reached mathematics next, promoted largely through Weil and Bourbaki, category theory, and then the grand vision of Grothendieck. Structuralism is not so much a single mathematical idea as a way of thinking about properties and definitions, what mathematical objects are, and how we should study them. The ideas expanded out from mathematics swiftly, and in the course of 20th century intellectual development, it is hard to find an idea as pervasive and influential as the structuralist approach.
(There is a book by Amir Aczel on Bourbaki that tells some of the story. I found the book to be unfortunately rather poorly written, but informative nonetheless.)
Structuralism is literally everywhere. It contains the idea the objects are characterised by their relationships relative to all other objects, rather than having an inherent identity of their own. For example, one sees an element of this in passing from old notions of groups and collections of transformations of something to the more abstract notion of a set equipped with the structure of a group multiplication law. Through Levi-Strauss, structuralism was introduced into anthropology. It created a large school of thought in history, sociology, political science, and so on.
Up above, I see that the Google PageRank algorithm was mentioned. One can view this as an example of structuralism in action - the rank of a website is computed by the algorithm as a certain function of its relationship to all other websites rather than as a function of the content of the site itself.
Analytic geometry, both in the sense of Fermat and Descartes, and in the modern sense of "Feynman diagrams" encrypting algebraic axioms. Certainly the former precedes Wallis, Newton, and Leibniz, and from a modern perspective, it seems trivial, too trivial to mention. But that geometric problems can be dealt with analytically (algebraically), and vice versa, helped formulate and inform the revolutions of science.
I agree that we have not yet understood the role that algebraic diagrammatics play in our understanding of mathematics, physics, or even how they will affect the average person in the street. However, I will be surprised if they are not at least as important as the use of arrows to indicate functions. (They are a generalization thereof).
Can you say something about the audience of this course? Popular math? Undergrads? grads? That might set some appropriate response parameters.
If this were a graduate-level course (I suspect not, but I feel like addressing this option anyway :)), I'd probably point to categories, sheaves, and cohomology -- and maybe just "cohomology" as a general concept, if I had to pick one. Also, the link provided by algebraic geometry between manifolds, varieties, and commutative algebra.
For an undergraduate non-major course, I don't think there's any way of overstating the historical significance of calculus. The scope of problems, both mathematical and physical, that were instantaneously solvable by mathematicians all over the world after its development and deployment, was mind-boggling.
I think there are probably more important ideas than those above if the scope of the question is how much impact they've had on humanity's development (e.g., the development of serious linear algebra would certainly go in there for its applications to just about everything, and someone else mentioned RSA), but the above are my votes for ideas that have changed the way that people (or at least mathematicians) have thought about mathematics.
A couple of years ago, I saw a talk by Keith Devlin around his book The Unfinished Game. In his talk, the three revolutions were (and excuse me as I butcher this a little bit, since this is from memory):
numbering systems
measurements (Galileo)
probability theory
So where's calculus and algebra and geometry? The argument was that these three have entered everyone's life to stay. Everyone uses numbers daily, measures things (temperature, speed), and talks about probabilities (chances of rain and so on).
Of course, that doesn't mean that people do any of this well, are aware of the intricacies involved, or, for probabilities, have a good intuition. But the point is that these revolutions now completely permeate everyday life (unlike calculus!) to the extent that it is very difficult to imagine what went on in people's minds before these inventions came on the scene. (If you've ever tried to do euclidean geometry by requiring that numbers can only be described as proportions of physical magnitudes, you know what I mean.)
The thought-provoking part of course is that the first two items don't seem to belong at all in the same order of mathematics as probability. | ||||
Source: Computational Complexity blog ("Computational complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarch"), January 2006 archive, https://blog.computationalcomplexity.org/2006/01/.
GUEST BLOGGER BILL GASARCH TOPIC: How much is what we do influenced by non-scientific criteria? (BEFORE I START TODAYS BLOG- A REQUEST. EMAIL ME OTHER LUDDITE QUESTIONS- I WILL POST THE BEST ONES ON FRIDAY) I) AN INCOMPLETE SUMMARY OF Thomas Kuhn's book The Structure of Scientific Revolution: For long periods of time a field of science will agree on the basic terms and problems of the field and will all work with that worldview (also called a paradigm). This is called Normal Science. This is GOOD since if people were working with different paradigms progress would be hard. BUT there comes a time when some problems just cannot be solved using the usual techniques. There will be an effort to jam this problem and some approaches to it into the current paradigm, but eventually, the old paradigm will fall and a new one will take its place. The new one will help to answer some old questions, and pose new ones that could not have even been asked in the old one. Newtonian Phy vs Einstein is the usual example, though there are others on a much less cosmic scale. II) People after him have misconstrued his work to saying that science has NO objective truth, that it ALL depends on the Paradigm. This is, of course, hogwash. More so when they claim that its a tool by the elite to dominate the masses, or some such (look up SOKAL HOAX on google for one view of this view). III) But a fair question CAN be raised along these lines: How MUCH of what scientists do depends on political or personality or other factors VERSUS how much is driven by objective scientific principles? A few examples a) What if in response to Russell's paradox the math world essentially axiomized what set theorist now call V=L (every object is constructable). Then we would know LOTs more about L, we would KNOW that the Axiom of Choice is true, and we would know that Cont Hyp is true. We might know that there were these weird other models that are unnatural where CH is false, but we wouldn't care. (Some Set Theorists tell me this could never happen- that people would be interested in other models. They are wrong.) b) What if in response to the Banach Tarski paradox mathematicians rejected some version of the axiom of choice? This would have been quite possible before AC began being used in so many places. c) The people who believe in constructive methods only (e.g, Brouwer) are portrayed as cranky old men holding onto an old paradigm that no longer worked. But if they had won then people like Hilbert would be viewed as crazy rebels who fortunately were never taken seriously. (This one I am less sure of- nonconstructive techniques are SO powerful that I think they may be inevitable.) d) If Computing Devices were invented either earlier or later then they were would have a drastic effect on Theory. While we think that P vs NP is a natural problem, it only came out once the technology was in place. Was it inevitable that it arise? Probably Was it inevitable that it be considered important? Hard to say. e) There is ALOT of work in Quantum Computing because (i) Peter Shor proved FACTORING in Quantum P hence giving the problem new interest, or (ii) There is (or actually was) lots of Grant money in it. (of course these two are linked) f) Do schools like MIT have too big an influence on what gets studied? (They have less influence now than the used to.) MORE GENERALLY, if I had the time and the energy I would do research on history/phil of math asking the question HOW MUCH DO EXTERNAL FORCES EFFECT WHAT IS STUDIED ? 
and I would do it WITHOUT an ax to grind.
GUEST BLOGGER: Bill Gasarch (I will be guest blogging this week while Lance is on Vacation.) Are you a Luddite? The original Luddites were workers who, scared of lower wages via technology, destroyed factory machines. This was around 1811. Their leader was General Ned Ludd. (Not sure if General was an honorary title) TODAY the term has come to mean someone who does not adapt to technology or does not like technology. If you are NOT one, you can use Google to find out more about them. Are you a Luddite? I offer the following questions and let you score yourselves. 1) At a conference do you use a) Powerpoint with fancy animation and pictures off the web. b) Powerpoint with nice backgrounds, but nothing much else c) pdf files d) physical slides made using latex e) physical slides made using magic markers and overlays f) physical slides without overlays g) chalk h) draw diagrams in the sand with a twig 2) Same as question 1 but for large classroom lecture (over 50), small classroom lectures (under 10), seminars (8 people who actually know something). 3) For writing papers do you use a) LaTeX (or some other package) b) Typewriter (YOU HAVE A TYPEWRITER? MIGHT BE WORTH SOMETHING ON EBAY! c) Handwritten and give to your secretary to type (YOU HAVE A SECRETARY? MIGHT BE WORTH SOMETHING ON EBAY!) d) Quill pen and inkwell on parchment. 4) When listening to talks do you a) Take notes with an e-pen that automatically puts it online b) Take notes in an e-notebook c) Take notes in a p-notebook (thats paper) c) Not take notes at all d) Fall asleep 5) When you applied to grad school did you a) Check out the website of the school b) Ask your roomate who also majored in CS and Math c) Ask your roomate who majored in Political Science d) Apply to schools you heard were good e) Apply to schools randomly (time bounded Kolmogorov Random) 6) If you need a result that is already known do you a) Goto Google b) Goto the library c) Goto your own file cabinet d) Rederive the result by yourself 7) Which of these might you most likely say? a) When is the next version coming out so I can update? b) I'll update in 2 years (and you do) c) I'll update in 2 years (but you don't) d) You can have my chalk when you pry it from my cold dead hands. 8) Do you play music on a) MP3's b) CD's c) LP's d) 78's e) Wax Cylinders (WAX CYLINDERS! MIGHT BE WORTH SOMETHING ON EBAY!) bill g. Postscript: Thanks to my collegue Jack Lutz for catching that I spelled Luddite wrong originally. I used him instead of a spell checker, and note that the error he found would not have been discovered with a spell checker.
I am on vacation next week and I've lined up Bill Gasarch as a guest Blogger in my absence. But today we have a guest post from Kamal Jain. This is a long post but well worth reading through.
This post is prompted by recent development and discussions on electronic publishing, which themselves are prompted by book scanning initiative of Google and Open Content Alliance. Although, I am not talking about paper books being converted into electronic format, I like the idea of having the books available in a searchable electronic format. And certainly this is a must have feature for any newly written book.
Recently, I got two invitations to write for books. The first was to write a book on Network Coding. I felt that I was not the best person so I did not accept. If I had, then I would have insisted on a free electronic copy. Second, I got an invitation to co-write a chapter on Cost Sharing with Mohammad Mahdian for a book, Algorithmic Game Theory, edited by, Noam Nisan, Tim Roughgarden, Eva Tardos and Vijay Vazirani. I agreed to this because I felt that such a book is a great idea and I could make a positive contribution. My selfish motive was to spread knowledge of the subject to which I have contributed. And, I guess that was also the expected motive of the other contributors. This I could say because the explicit incentive offered in the invitation to the contributors was that the editors (originally Eva and Vijay only) have made an excellent deal with a publisher, Springer Verlag. The deal they have is $40 for up to six hundred pages. I am not sure whether it is a paper back or hard-cover. But that was not my focus anyway. My focus is the absence of any electronic publishing component in the deal. Because of that, I felt this is not such a good deal in today's electronic age. On one side we are talking about scanning paper books, starting electronic journals, writing wikis, blogs and on the other we do not even make a deal on electronic publishing of newly written books. I wrote an email back to the editors that I do not think Springer deal is a good one. I was hoping to get back a response and start a discussion with them on this, which IMO, was obligatory for them because I point blank disagreed with the incentive they explicitly offered. At this point I am assuming that there is no electronic publishing agreement with the publisher. This was the background.
Now, I realize that this is not something to discuss with the editors in private. This is an important issue which is likely to reoccur in other situations. So I requested this space from Lance so that I could discuss with the whole community. Following are some of my random thoughts and I like to hear everybody's thoughts too, random or not :-) Please press the comment button and put your thoughts in writing so that Springer and other publishers would know what we want from them.
There are at least two kinds of books. First kind, written by individual authors. Second kind, written collaboratively by the community like the above proposed Algorithmic Game Theory. Individual authors write books for various reasons and it is up to them what kind of deal they lock with the publishers. The books written by a community has a predetermined goal and that is to spread the knowledge of the subject. It is not up to one or two persons to lock whatever deal they think is great. So the community must form unspoken guidelines to facilitate the negotiation between editors and publishers. These unspoken guidelines must include minimum desires of the community. Such a set of guidelines would have resolved the prisoner's dilemma for me. I did not like the absence of electronic publishing agreement. If I decline the invitation then the book still has gone ahead without my contribution and if I accept the invitation, which I did, then I know that my efforts are not optimally used. But in case it were a common expectation from the editors to negotiate an electronic publishing agreement, then I know that I could reject the invitation because others invitee would also do the same, thereby insisting that the editors go back to the publisher and make an electronic publishing agreement. One would ask why publishers have any electronic publishing agreement. For information, Reinhard Diestel's book, Graph Theory, has a free searchable and hyperlinked electronic edition and further this book is published by Springer Verlag. Let us first discuss what Springer provides to us and what we provide to Springer. Then we should discuss whether we are getting the optimal deal.
Springer does the marketing which sells the book.
Springer provides the brand name which sells the book.
Springer provides the brand name which makes the line in our resume about the book a bit bolder.
Springer prints and binds the book, for which the buyer pays.
Springer gave peanut financial support ($2000) to pay to students to draw pictures. This fund is for those contributors who do not have their own funds.
We give to Springer
Free content and transfer copyright so that they can legally publish the content. I am assuming there is no royalties involved in a community written book.
Word of mouth marketing.
Use our own funds for other expenses.
Our university or companies resources.
What are the possible deals we could have:
Status Quo. Springer publishes the book and sells them. Takes the copyright and does not provide free electronic copy. In future, if Springer wants, makes more money from electronic copy too.
Reinhard Diestel model. Provides free searchable and hyperlinked electronic edition. A user can't conveniently print the pages.
Springer publishes the book and sells them. Takes an exclusive time bound license, say one year. After one year, Springer still keeps the exclusive license on the paper publishing, but we could put the free electronic copies on our webpages.
Springer publishes the book and sells them. Takes the exclusive right to publish the book in paper format — that's all it needs to legally publish the book. We keep all other rights. We put the book in electronic format on our webpages or at some cheap servers.
Note that in all the above 4 options Springer is still getting something for free — the content. So it still is a good deal for Springer. 1. is the best deal for Springer. The only reason Springer could insist on 1. is because we do not insist with unity (Reinhard probably insisted very hard). If we insist then we could possibly get them to agree on 4. It is an irony that this book is about Game Theory, and the game theory principles are not used to get a better deal. Mohammad suggested that even if Springer wins on getting the first deal, we could still put our chapters on our webpages. This does not make sense because of three reasons. First, there are going to be cross-references. Second, the chapters together provide a synergy and that's the reason we all agreed to put our chapters together. Third, if we could all put chapters on our webpages then why can't we compile them together and put on a single webpage. A book is more than the sum of its chapters. A question which is typically raised about free electronic version is the following. If people could download the book for free then why would they buy from Springer? I think people would still buy, libraries would buy, professors would buy and anybody who needs to read a significant part of the book would buy. Still, for a moment let us assume that people won't buy the paper book in the presence of a free electronic version. In this case, it simply means people want only the free electronic version and not the paid paper version. That is having only the electronic version is what everybody desires. Then, under this assumption, why even deal with Springer?
Because, as mentioned above, Springer provides some value. We could still avoid Springer and create these values ourselves. We anyway will be spending couple of thousand hours on this book (my experience on working with Vijay is that it takes at least few hours per page). There are at least two ways to avoid Springer.
We go to a small publisher and get the book published. Transfer the exclusive right to publish the book in paper format. We keep all other rights.
We publish only the electronic version.
What role would Springer play?
Springer does the marketing. We will discuss this later to see how we could do the marketing ourselves.
Springer provides the brand name to sell the book. I think the brand name of the editors and the authors is much more in this case. This is also the case with any good book written by a community.
Springer provides the brand name to make the line related to this book in our resume a bit bolder. First, most authors contributing in the book already have enough lines in their resume that they can do with one fewer line. Second, this line is minor for a community written book. Each person contributes a chapter, may be equivalent to writing one or two journal papers.
Springer prints and binds the book. I do not know how much it costs to print and bind the book. "The Search" by John Battelle is a three hundred page hard-bound book and available at 16 bucks at Amazon. Well The Search probably will sell more than this technical book. But it shows that $40 for Algorithmic Game Theory could very well be an optimum profit making point for Springer rather than a favor as they want to portray to us. A small publisher would be able to beat that even in the presence of competing free electronic version.
The last is the peanut financial support. I am sure we could arrange $2000 bucks without Springer. Even if we fail, grad student would be happy to contribute this for a credit. If I do not personally have time to draw pictures, then I do not mind having a co-author who does that for me. A picture is worth thousand words. If I am claiming authorship for writing thousand words then anybody who draws pictures deserves the equal credit.
So the only value Springer provides is marketing. There are various ways we could do that too.
We create a pamphlet and a poster which we distribute to the program chair of various conferences.
Put the electronic version at one place. Let each of the contributor links to it. If there are fifty links from places like, Cornell, Georgia Tech, Stanford then on searches related to the keyword in the book, the book should show up at the top.
Let Citeseer crawl the book, let Google crawl the book, let us upload it on Wikipedia.
Even if it is not sufficient then we could market for money via search engine paid listing. We could raise the money by having only one or two ads in the book, let us say in the content and index pages. If we have an electronic version we could even have Google Adsense ads at the book download page. Certainly Google Adsense would put ads for academic people. In this case, if we are anyway buying something we could buy through those ads.
One question which one could raise is that many people in the world still live on the other side of the digital divide. But such people do not have $40 bucks either. The solution for them is to have a publisher in India or China to publish this book and sells to these people.
Pre-bottom line is we give more to Springer than it is giving back in return. Game theoretically it is not a fair solution and we could do better. I am not sure whether there is any electronic publishing deal which the editors of this book have with the publisher, if they had then they probably would have told me. In any case this posting is about many others future books which will be written co-operatively. Bottom line is, any book which is not written for money must be available free of charge in an electronic format.
The Internet has led to a complete shifts in how we deal with storing and sharing information, but when it comes to academic papers the changes we see are ad hoc and added in a piecemeal basis.
Suppose we could start from scratch and create a proper system for research papers. Here is how I would envision such a system.
XML has become the standard for storing information on the internet; it gives a simple machine-readable method for creating tree structures. Academic papers have such a tree structure (Sections, subsections, theorems, proofs, etc.) that would lend it itself well to XML. Mathematical equations should also be written using XML, we already have a MathML specification for doing this.
A academic paper XML file would only have content information, not any formatting information. For this we would use XSL files, themselves XML files that describe how to format the document. You would use different XSL files depending on whether the paper is viewed on the screen or printed, and different publishers can develop their own XSL files to have consistent looking papers. LaTeX, the system used by most theoretical computer scientists, has similar capabilities but because LaTeX does not enforce any standards, changing style files often requires considerable editing.
Researchers will not have to create these XML files directly (unless they want to) but can use word processors that will save the documents according to those standards.
For citations we should just point to a unique identifier for a paper, no longer should we need to cut and paste bibliographic information. The formatting program can go online based on the identifier to get the information to create a human readable bibliography with web links if appropriate. Most publishers already use Digital Object Identifiers (DOI), we just need DOIs to point to an XML file giving bibliographic information, have DOIs for unpublished papers and have a method for DOIs to point to a later version of a paper.
The author information on academic papers are often useless (like my postal address) or out of date as academics change locations. Each academic research should get their own DOI-like number that points to an XML file giving personal and contact information and then we only need add these DOIs to the academic papers.
Most importantly we need to have enforced standards for each of these XML documents (via XML schemas). If we can truly separate the content from the formatting of documents, and make that content available in an easy machine-readable forms, not only can researchers focus more on the writing and less on the style but will also open the door to applications that we cannot even imagine today. | |||
Evaluation and Comparison of Lattice-Based Cryptosystems for a Secure Quantum Computing Era
https://www.mdpi.com/2079-9292/12/12/2643
by Maria E. Sabani 1,*,†, Ilias K. Savvas 1,†, Dimitrios Poulakis 2, Georgia Garani 1 and Georgios C. Makris 1
1 Department of Digital Systems, University of Thessaly, Geopolis Campus, Larissa-Trikala Ring-Road, 415 00 Larissa, Greece
2 Department of Mathematics, Aristotle University of Thessaloniki, 541 24 Thessaloniki, Greece
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Electronics 2023, 12(12), 2643; https://doi.org/10.3390/electronics12122643
Submission received: 5 May 2023 / Revised: 6 June 2023 / Accepted: 7 June 2023 / Published: 12 June 2023
(This article belongs to the Special Issue Quantum Computation and Its Applications)
Abstract:
The rapid development of quantum computing devices promises powerful machines with the potential to confront a variety of problems that conventional computers cannot. Therefore, quantum computers generate new threats at unprecedented speed and scale and specifically pose an enormous threat to encryption. Lattice-based cryptography is regarded as the rival to a quantum computer attack and the future of post-quantum cryptography. So, cryptographic protocols based on lattices have a variety of benefits, such as security, efficiency, lower energy consumption, and speed. In this work, we study the most well-known lattice-based cryptosystems while a systematic evaluation and comparison is also presented.
1. Introduction
Quantum computing constitutes a critical issue as the impact of its advent and development will be present in every cell of our technology and therefore, our life. Quantum computational systems use the qubit (QUantum BIT) instead of the typical bit, which has a unique property; it can be in the basis states ∣0〉 and ∣1〉, but it can also be in a state that is a linear combination of these two states, such as a∣0〉 + b∣1〉 with a, b ∈ C and ∣a∣^2 + ∣b∣^2 = 1 [1]. This is an algebraic-mathematical expression of quantum superposition, which claims that two quantum states can be added and their sum can also be a valid quantum state [2]. Beyond superposition, quantum computers' power and capability are based on quantum physics and specifically on the phenomenon of quantum entanglement and the no-cloning theorem. The odd phenomenon of quantum entanglement states that there are particles that are generated, interact, and remain connected, regardless of the distance or the obstacles that separate them [3]. This fundamental law of quantum physics allows us to know or to measure the state of one particle if we know or measure the other particles.
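As a simple illustration, the equal superposition (1/√2)∣0〉 + (1/√2)∣1〉 satisfies this normalization condition, since ∣1/√2∣^2 + ∣1/√2∣^2 = 1/2 + 1/2 = 1.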
Programmable quantum devices are capable of solving and overcoming problems that typical computers cannot solve in reasonable time. A quantum computer can perform operations with enormous speed and, in the blink of an eye, can process and store an extensive amount of information. This huge computational power, which makes quantum computers superior to classical computers, was described in 2012 by John Preskill with the term quantum supremacy [4]. Quantum mechanics also provides a fascinating result, the no-cloning theorem, which evolved from the no-go theorem of James Park and is fundamental to quantum physics and quantum cryptography. According to this theorem, the independent and identical replication of an arbitrary unknown quantum state is impossible [2].
Cryptography is one of the oldest sciences and was developed out of the human necessity for secure communication [5]. Cryptographic protocols and algorithms are based on complex mathematics and cryptosystems appear in every electronic transaction and communication in our everyday life. The security, efficiency, and speed of these cryptographic methods and schemes are a main issue of interest and study. Contemporary cryptosystems are considered to be vulnerable to a quantum computer attack. In 1994, the American mathematician and cryptography professor Peter Shor presented an algorithm [6], which dumbfounded scientists. Shor in his work argued that with the implementation of the proposed algorithm in a quantum device, there would be no more security in current computational systems. This was a real revolution for the science of computing and a great motivator for the design and construction of quantum computational devices. The science that studies and develops cryptographic algorithms resistant to attacks by quantum computers is well known as post-quantum cryptography [7]. By bringing up to date mathematically based algorithms and standards, post-quantum cryptography examines and studies how to prepare the world for the era of quantum computing. [8,9].
Lattice-based cryptographic protocols attract the interest of researchers for a number of reasons. Firstly, the algorithms that are applied to lattice-based protocols are simple and efficient. Additionally, they have proven to be secure protocols and create a multitude of applications.
In this review, we examine cryptographic schemes that have been developed to remain secure in the presence of quantum computers. The following research questions are answered:
How much is the science of cryptography affected by quantum computers?
Which cryptosystems are efficient and secure for the quantum era?
What are the most well-known lattice-based cryptographic schemes and how do they function?
How can we evaluate the NTRU, LWE, and GGH cryptosystems?
What are their strengths and weaknesses?
The rest of the paper is organized as follows. In Section 2, we present the changes and challenges that quantum devices bring to cryptography, and in Section 3, cryptographic schemes for the quantum era are described. In Section 4, we present some basic notions of lattice theory. In Sections 5 and 6, we present the lattice-based cryptographic schemes NTRU and LWE, respectively, together with a discrete implementation of each, while the GGH cryptosystem is described in Section 7. Results and comparisons are given in Section 8, while some future work directions are presented in Section 9. Finally, Section 10 concludes this work.
2. The Evolution of Quantum Computing in Cryptography
Cryptography is an indispensable tool for protecting information in computer systems, and difficult mathematical problems, such as the discrete logarithm problem and the factorization of large numbers into primes, are the basis of current cryptographic protocols. We can divide cryptographic protocols into two broad categories: symmetric cryptosystems and asymmetric (public key) cryptosystems [5].
The same key for both encryption and decryption is being used in symmetric cryptosystems, and despite their speed and their easy implementation, they have certain disadvantages. One main issue of this type of cryptosystem is the secret key distribution between two parties that want to communicate safely. Another drawback of symmetric cryptographic schemes is that the private keys which are being used must be changed frequently in order not to be known by a fraudulent user. If we can ensure the existence of an efficient method to generate and exchange keys, symmetric encryption and decryption methods are considered to be secure [10,11].
Asymmetric cryptographic schemes use a pair of keys, a private and a public key, for encryption and decryption. This type of cryptosystem relies on mathematical problems that are characterized as hard to solve [12]. Some of the most widely known and implemented public key cryptosystems are RSA [13], the Diffie–Hellman protocol, ECDSA, and others. For a long time, all these cryptographic schemes were believed to be effective and secure, but Shor's algorithm changed things.
Peter Shor proved with his algorithm that a quantum computer could quickly and easily compute the period of a periodic function in polynomial time [14]. Since 1994, when Shor's protocol was presented, there has been a great amount of study, analysis, and implementation of the algorithm on both classical and quantum computing devices. Shor's method solves both the discrete logarithm problem and the factorization problem that are the basis of current cryptographic schemes; therefore, the public key cryptosystems in use today are insecure and vulnerable to a quantum attack [6].
2.1. Quantum Cryptography
The term "Quantum Cryptography" was proposed for the first time in 1982, but the idea of quantum information had already appeared in the 1970s, in Stephen Wiesner's work on quantum money [15]. The science of quantum cryptography uses the fundamental laws of quantum physics to securely transfer or store data. In general, in quantum cryptography, the transmission and the encryption procedure are performed with the aid of quantum mechanics [16]. Quantum cryptography exploits the fundamental laws of quantum mechanics, such as superposition and quantum entanglement, and constructs cryptographic protocols in a more advanced and efficient way.
A basic problem in classical cryptographic schemes is the key generation and exchange, as this process is endangered and unsafe when it takes place in an insecure environment [17]. When two different parties want to communicate and transfer data, they exchange information (i.e., key, message) and this procedure occurs in a public channel, so their communication could be vulnerable to an attack by a third party [18]. The most fascinating and also the most useful discovery and a widely used method of quantum cryptography is quantum key distribution.
2.2. Quantum Key Distribution
Quantum key distribution (QKD) utilizes the laws of quantum physics in the creation of a secret key through a quantum channel. With the principles of quantum physics, a secret key is generated in QKD and secure communication between two (or more) parties is established. The inherent randomness of quantum states, and of the results that accrue from measuring them, results in total randomness in the generation of the key. Quantum mechanics solves the problem of key distribution, the main challenge in cryptographic schemes, with the aid of quantum superposition, quantum entanglement, and the Uncertainty Principle of Heisenberg. Heisenberg's principle states that certain pairs of properties of a quantum system cannot both be measured precisely at the same time [3]. A consequence of this principle is that anyone who tries to eavesdrop on the communication between two parties can be detected. If a fraudulent user tries to tamper with the quantum system, he will be detected and the users abort the protocol.
Let us suppose that we have two parties that want to communicate and use a quantum key distribution protocol to generate a secret key. A quantum key distribution scheme has two phases, and for its implementation the existence of a classical and a quantum channel is necessary. In the quantum channel, the private key is generated and reproduced, and in the classical channel, the communication of the two parties takes place. Polarized photons are sent into the quantum channel, and each one of the photons has a random quantum state [17]. Both parties have in their possession a device that collects and measures the polarization of these photons. Due to Heisenberg's principle, the measurement of the polarized photons can reveal a possible eavesdropper: in his effort to elicit information, the state of the quantum system changes and the fraudulent user is detected [19].
The BB84 protocol, named after its creators and the year it was published, was the first quantum key distribution protocol and was proposed in 1984 by Charles Bennett and Gilles Brassard [20]. BB84 is the most studied, analyzed, and implemented QKD protocol, and since then, various other QKD protocols have been proposed. B92 and SARG04, which are known as variants of BB84, and E91, which exploits the phenomenon of quantum entanglement, are a few of the widely known quantum key distribution protocols [1]. All these QKD protocols are in theory well designed and structured and are proven to be secure, but in practice, there are imperfections in their implementation. Loopholes, such as poorly constructed detectors or defective optical fibers, and general imperfections in devices and the practical QKD system make the QKD protocols vulnerable to attacks. By exploiting these weaknesses of the system, one can perform certain types of attacks, and this is why QKD security is the basic issue of research and study.
Significant progress has been made in the implementation of the quantum phase of communication and the development of quantum systems. Entanglement dynamics in CV quantum channels for both common and independent reservoirs have received a lot of attention recently [21]. As the security of QKD is the main goal, interesting experiments have shown that non-Markovian features can be used to improve security and/or locate an eavesdropper along the transmission line and determine their location [22]. Additionally, the entanglement dynamics have been studied and recent experiments have shown that photonic band gap media are promising to acquire non-Markovian behaviour and that materials with a photonic bandgap may be able to transmit entanglement reliably over long distances [23]. Moreover, the study of the phase modulation of coherent states in channels where the quantum communication phase takes place has turned into a subject of interest. Very interesting and useful studies and experiments have proven that phase diffusion is the most damaging kind of noise in a phase modulation scheme, where the information is encoded in the phase of a quantum seed signal [24]. Additionally, time-independent Markovian noise, specifically when the seed state is coherent, has been shown to be detrimental to information transfer and may compromise the channel’s overall performance [25,26]. The environment’s spectral structure, on the other hand, may lead to non-Markovian damping or diffusion channels in quantum optical communications [22,26]. It has also demonstrated that phase channels better preserve the transfer of information above a threshold on the loss and phase noise parameters, which is compared to the lossy coherent states amplitude-based scheme. So, in the presence of time-correlated noise, which results in dynamical non-Markovian phase diffusion, the interaction between the use of NLA and the memory effects results in a pronounced rise in performance [24].
3. Cryptographic Schemes in Quantum Era
The advances in computer processing power and the evolution of quantum computers seem for many people to be a threat in the distant future. On the other hand, researchers and security technologists are anxious about the capabilities of a quantum computational device to threaten the security of contemporary cryptographic algorithms. Shor’s algorithm consists of two parts, a classical part and a quantum part, and with the aid of a quantum routine could break modern cryptographic schemes, such as RSA and the Diffie–Hellman cryptosystem [27]. The factorization problem and the discrete logarithm problem are the fundamental basis for modern cryptographic schemes and serve as the foundation for these kinds of cryptosystems.
From that moment and after, it has been widely known in the scientific and technological community that with the arrival of a sufficiently large quantum computer, there is no more security in our encryption schemes. Therefore, post-quantum data encryption protocols are a basic topic of research and work, with the main goal being to construct cryptosystems resistant to quantum computers’ attacks [7,8]. Subsequently, we present certain cryptographic schemes that have been developed and that are secure under an attack of a quantum computer.
3.1. Code-Based Cryptosystems
Coding Theory is an important scientific field which studies and analyzes linear codes that are being used for digital communication. The main subject of research in coding theory is finding a secure and efficient data transmission method. In the process of data transmission, data are often lost due to errors owing to noise, interference, or other reasons, and the main subject of study of coding theory is to minimize this data loss [28]. When two discrete parties want to communicate and transfer data, they add extra information to each message which is transferred to enable the message to be decoded despite the existing errors.
Code-based cryptographic schemes are based on the theory of error-correcting codes and are considered to be prominent for the quantum computing era. These cryptosystems are considered to be reliable and their hardness relies on hard problems of coding theory, such as syndrome decoding (SN) and learning parity with noise (LPN).
The first code-based cryptosystem was proposed by Robert McEliece in 1978. It was based on the difficulty of decoding random linear codes, a problem which is considered to be NP-hard [29]. The main idea of McEliece is to use, as the secret key, an error-correcting code for which an efficient decoding algorithm is known and which is capable of correcting up to t errors. The public key is constructed from the private key by disguising the selected code as a general linear code. The sender creates a codeword using the public key and disturbs it with up to t errors. The receiver performs error correction and efficient decoding of the codeword and decrypts the message.
McEliece’s cryptosystem and the Niederreiter cryptosystem that was proposed by Harald Niederreiter in 1986 [30] can be suitable and efficient for encryption, hashing, and signature generation. The McEliece cryptosystem has a basic disadvantage, which is the large size of the keys and ciphertexts. In modern variants of the McEliece cryptosystem, there has been an effort to reduce the size of the keys. However, these types of cryptographic schemes are considered to withstand attacks by quantum computers and this makes them prominent for post-quantum cryptography.
3.2. Hash-Based Cryptosystems
Hash-based cryptographic schemes in general generate digital signatures and rely on cryptographic hash functions’ security, such as SHA-3. In 1979, Ralph Merkle proposed an asymmetric signature scheme based on one-time signature (OTS) and the Merkle signature scheme is considered to be the simplest and the most widely known hash-based cryptosystem [31]. This digital signature cryptographic scheme converts a weak signature with the aid of a hash function to a strong one.
The Merkle signature scheme is a practical development of Leslie Lamport’s idea of OTS that turn it into a many-times signature scheme, a signature process that could be used multiple times. The generated signatures are based on hash functions and their security is guaranteed even against quantum attacks.
Many of the reliable signature schemes based on hash functions have the drawback that the person who signs must keep record of the precise number of messages that have been signed before, and any error in this record will create a gap in their security [32]. Another disadvantage of these schemes is that a certain number of digital signatures can be generated and if this number increases indefinitely, then the size of the digital signatures is exceptionally big. However, hash-based algorithms for digital signatures are regarded as safe and strong against a quantum attack and can be used for post-quantum cryptography.
3.3. Multivariate Cryptosystems
In 1988, T. Matsumoto and H. Imai [33] presented a cryptographic scheme which relied on two-degree multivariate polynomials over a finite field for encryption and for signature verification. In 1996, J. Patarin [34] implemented a cryptosystem, the security of which relied on the fact that multivariate polynomial systems in finite fields are difficult to solve.
The multivariate quadratic polynomial problem states that, given m quadratic polynomials f_1, …, f_m in n variables x_1, …, x_n with coefficients chosen from a field F, one is asked to find a solution z ∈ F^n such that f_i(z) = 0 for every i ∈ [m]. This problem is considered to be NP-hard, and a suitable choice of the parameters makes the resulting cryptosystems reliable and safe against attacks.
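As a toy illustration of the problem (a sketch added here for clarity; the specific polynomials and the field GF(2) are arbitrary choices, not taken from any cited scheme), the following Python snippet brute-forces a tiny quadratic system. The hardness only appears when the number of variables is large, where exhaustive search becomes infeasible.

# Toy multivariate quadratic (MQ) system over GF(2): find z with f_i(z) = 0 for all i.
from itertools import product

def f1(x1, x2, x3):
    return (x1 * x2 + x2 * x3 + x1 + 1) % 2   # a quadratic polynomial over GF(2)

def f2(x1, x2, x3):
    return (x1 * x3 + x2 + x3) % 2            # another quadratic polynomial over GF(2)

# Exhaustive search over F^n is only feasible here because n = 3.
solutions = [z for z in product((0, 1), repeat=3) if f1(*z) == 0 and f2(*z) == 0]
print(solutions)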
These types of cryptographic schemes are believed to be efficient and fast, with high-speed computation processes, and suitable for implementation on smaller devices. The need for new, stronger cryptosystems with the evolution of quantum computers created various candidates for secure cryptographic schemes based on the multivariate quadratic polynomial problem [8]. These types of cryptosystems are considered to be an active issue of research due to their quantum resilience.
3.4. Lattice-Based Cryptosystems
Cryptographic algorithms that are based on lattice theory have gained the interest of researchers and are perhaps the most famous of all candidates for post-quantum cryptography. Imagine a lattice like a set of points in an n dimensional space with periodic structure. The algorithms which are implemented in lattice-based cryptosystems are characterized by simplicity and efficiency and are highly parallelizable [35].
Lattice-based cryptographic protocols are proven to be secure, as their strong security relies on well-known lattice problems such as the Shortest Vector Problem (SVP) and the Learning with Errors problem (LWE) [36]. Additionally, they create powerful and efficient cryptographic primitives, such as functional encryption and fully homomorphic encryption [37]. Moreover, lattice-based cryptosystems create several applications, such as key exchange protocols and digital signature schemes. For all these reasons, lattice-based cryptographic schemes are believed to be the most dynamic field of exploration in post-quantum cryptography and the most prominent and promising one.
4. Lattices
Lattices are considered to be a typical subject in both cryptography and cryptanalysis and an essential tool for future cryptography, especially with the transition to the quantum computing era. The study and the analysis of the lattices goes back to the 18th century, when C.F. Gauss and J.L. Lagrange used lattices in number theory and H. Minkowski with his great work “geometry of numbers” sparked the study of lattice theory [38]. In the late 1990s, a lattice was used for the first time in a cryptographic scheme, and in recent years the evolution in this scientific field has been enormous, as there are lattice-based cryptographic schemes for encryption, digital signatures, trapdoor functions, and much more.
A lattice is a discrete subgroup of points in n-dimensional space with periodic structure. Any subgroup of Z n is a lattice, which is called integer lattice. It is appropriate to describe a lattice using its basis [35]. The basis of a lattice is a set of independent vectors in R n and by combining them, the lattice can be generated.
Definition 1.
A set of vectors { b 1 , b 2 , … , b n } ⊂ R m is linearly independent if the equation
c 1 b 1 + c 2 b 2 + ⋯ + c n b n = 0 , where c i ∈ R ( i = 1 , … , n )
accepts only the trivial solution c 1 = c 2 = ⋯ = c n = 0 .
Definition 2.
Given n linearly independent vectors b 1 , b 2 , … , b n ∈ R m , the lattice generated by them is defined as
L ( b 1 , b 2 , … , b n ) = { ∑ x i b i / x i ∈ Z } .
Therefore, a lattice consists of all integral linear combinations of a set of linearly independent vectors and this set of vectors { b 1 , b 2 , … , b n } is called a lattice basis. So, a lattice can be generated by different bases as can be seen in Figure 1.
Definition 3.
All the bases of a lattice L have the same number of elements; this number, denoted dim(L), is called the dimension (or rank) of the lattice, since it matches the dimension of the vector subspace span(L) spanned by L.
Definition 4.
Let L be a lattice with dimension n and B = { b 1 , b 2 , … , b n } a basis of the lattice. We define as fundamental parallelepiped the set:
P(b_1, b_2, …, b_n) = { t_1 b_1 + t_2 b_2 + ⋯ + t_n b_n : 0 ≤ t_i < 1 } = Σ_{j=1}^{n} [0, 1) b_j
Not every given set of vectors forms a basis of a lattice and the following theorem gives us a criterion.
Theorem 1.
Let L be a lattice with rank n and { b 1 , b 2 , … , b n } ∈ L , n linearly independent lattice vectors. The vectors { b 1 , b 2 , … , b n } form a basis of L if and only if P ( b 1 , b 2 , … , b n ) ∩ L = { 0 } .
Definition 5.
A matrix U ∈ Z n × n is called unimodular if d e t U = ± 1 .
For example, the matrix U with rows (4, 5) and (13, 16) is unimodular, since det(U) = 4·16 − 5·13 = −1.
Theorem 2.
Two bases B_1, B_2 ∈ R^{m×n} generate the same lattice if and only if there is a unimodular matrix U ∈ Z^{n×n} such that B_2 = B_1 U.
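Theorem 2 is easy to check numerically; the following numpy sketch (an illustration added here, with arbitrarily chosen matrices) multiplies a basis by a unimodular matrix and verifies that a point of the first lattice has integer coordinates with respect to the second basis.

import numpy as np

B1 = np.array([[1, 2],
               [0, 3]])            # columns of B1 are the basis vectors of a rank-2 lattice
U = np.array([[2, 1],
              [1, 1]])             # integer matrix with determinant 1, hence unimodular
print(round(np.linalg.det(U)))     # 1
B2 = B1 @ U                        # a second basis of the same lattice (Theorem 2)

x = np.array([4, -7])              # arbitrary integer coordinates
p = B1 @ x                         # a point of the lattice generated by B1
y = np.linalg.solve(B2, p)         # coordinates of p with respect to B2
print(np.allclose(y, np.round(y))) # True: p is also an integer combination of B2's columns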
Definition 6.
Let L = L(B) be a lattice of rank n and let B be a basis of L. We define the determinant of L, denoted det(L), as the n-dimensional volume of P(B).
We can write
det(L(B)) = vol(P(B)) and also
det(L) = √(det(B^T B)).
An interesting property of the lattices is that the smaller the determinant of the lattice is, the denser the lattice is.
Definition 7.
For any lattice L = L ( B ) , the minimum distance of L is the smallest distance between any two lattice points:
λ(L) = inf { ∥x − y∥ : x, y ∈ L, x ≠ y }
It is obvious that the minimum distance can be equivalently defined as the length of the shortest nonzero lattice vector:
λ(L) = inf { ∥v∥ : v ∈ L \ {0} }
4.1. Shortest Vector Problem (SVP)
The Shortest Vector Problem (SVP) is a very interesting and extensively studied computational problem on lattices. The Shortest Vector Problem states that given a lattice L , the shortest nonzero vector in L should be found.
That is to say, given a basis B = { b 1 , b 2 , … , b n } ∈ R m × n , the shortest vector problem is to find a vector v → satisfying
∥v∥ = min_{u ∈ L(B) \ {0}} ∥u∥ = λ(L(B))
A variant of the Shortest Vector Problem is computing the length of the shortest nonzero vector in L (i.e., λ(L)) without necessarily finding the vector itself.
Theorem 3.
Minkowski's first theorem. The shortest nonzero vector in any n-dimensional lattice L has length at most γ_n · det(L)^{1/n}, where γ_n is a constant that depends only on the dimension n (it is approximately equal to √n) and det(L) is the determinant of the lattice.
Two great mathematicians, J. Lagrange and C.F. Gauss, were the first ones to study the lattices and they knew an algorithm to find the shortest nonzero vector in two dimensional lattices. In 1773, Lagrange proposed an efficient algorithm to find a shortest vector of a lattice and Gauss, working independently, made a publication with his proposal for this algorithm in 1801 [38].
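The two-dimensional reduction idea of Lagrange and Gauss can be sketched in a few lines of Python (an illustration added here; the input basis below is arbitrary): the longer basis vector is repeatedly shortened by subtracting the best integer multiple of the shorter one, in the spirit of the Euclidean algorithm, and the first vector returned is a shortest nonzero vector of the planar lattice.

import numpy as np

def lagrange_gauss_reduce(u, v):
    """Reduce a basis (u, v) of a two-dimensional lattice; the returned u is a shortest nonzero vector."""
    u, v = np.array(u, dtype=np.int64), np.array(v, dtype=np.int64)
    while True:
        if np.dot(v, v) < np.dot(u, u):              # keep u as the shorter of the two vectors
            u, v = v, u
        m = int(round(np.dot(u, v) / np.dot(u, u)))  # best integer multiple of u to remove from v
        if m == 0:
            return u, v
        v = v - m * u

print(lagrange_gauss_reduce([31, 59], [37, 70]))     # returns a much shorter basis of the same lattice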
A g-approximation algorithm for SVP is an algorithm that, on input a lattice L, outputs a nonzero lattice vector of length at most g times the length of the shortest vector in the lattice. The LLL lattice reduction algorithm is capable of approximating SVP within a factor g = O((2/√3)^n), where n is the dimension of the lattice. Micciancio proved that the Shortest Vector Problem is NP-hard even to approximate within any factor less than √2 [39]. SVP is considered to be a hard mathematical problem and can be used as a cornerstone for the construction of provably secure cryptographic schemes, such as lattice-based cryptography.
4.2. Closest Vector Problem (CVP)
The Closest Vector Problem (CVP) is a computational problem on lattices that relates closely to the Shortest Vector Problem. CVP states that given a target point x → , the lattice point closest to the target should be found.
Let L be a lattice and a fixed point t ∈ R n ; we define the distance:
d(t, L) := min_{x ∈ L} ∥x − t∥.
CVP can be formulated as follows: given a basis matrix B for the lattice L and a target t ∈ R^n, compute a vector v ∈ L such that ∥t − v∥ is minimal. In other words, we search for a lattice vector v ∈ L such that ∥t − v∥ = d(t, L).
Another version of the CVP is computing the distance of the target from the lattice without finding the closest vector of the lattice, and many applications only demand to find a lattice vector that is not too far from the target, not necessarily the closest one [40].
The most famous polynomial-time algorithms for approximating the Closest Vector Problem are Babai's algorithm and Kannan's algorithm, which are based on lattice reduction. Below, in Algorithm 1, we present the first of these, proposed by Laszlo Babai in 1986 [41].
Algorithm 1 Babai’s Round-off Algorithm.
Input: basis B = {b_1, b_2, …, b_n} ⊂ Z^n, viewed as a matrix B ∈ Z^{n×n}; target vector c ∈ R^n
Output: approximate closest lattice point of c in L ( B )
1: procedure RoundOff
2: Compute the inverse of B: B^{−1} ∈ Q^{n×n}
3: v := B · ⌊B^{−1} c⌉ (round each coordinate to the nearest integer)
4: return v
5: end procedure
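Algorithm 1 translates almost line by line into Python with numpy; the sketch below is such a transcription (the basis and target used in the example are arbitrary choices for illustration):

import numpy as np

def babai_round_off(B, c):
    """Babai's round-off: approximate closest lattice point to c in the lattice spanned by the columns of B."""
    x = np.linalg.solve(B, c)     # real coordinates of c in the basis B, i.e., B^{-1} c
    return B @ np.round(x)        # round each coordinate and map back into the lattice

B = np.array([[2.0, 1.0],
              [1.0, 3.0]])        # columns are the basis vectors
c = np.array([3.3, 4.7])          # target point
print(babai_round_off(B, c))      # a lattice point near c

The quality of the returned point depends heavily on how orthogonal the basis is, which is why round-off is normally applied to a reduced basis.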
CVP is the foundation of many cryptographic schemes of lattice cryptography, where the decryption procedure corresponds to a CVP computation. It is regarded as NP-hard to solve approximately within any constant factor [42]. Besides cryptography, the problem of finding a good CVP approximation algorithm with approximation factors that grow as a polynomial in the dimension of a lattice has numerous applications in computer science and is an active open problem in lattice theory.
4.3. Lattice Reduction
Lattice reduction, or lattice basis reduction, is about finding an interesting, useful basis of a lattice. Such a requested useful basis, from a mathematical point of view, satisfies a few strong properties. A lattice reduction algorithm is an algorithm that takes as input a basis of the lattice and returns a simpler basis which generates the same lattice. For computing science, we are interested in computing such bases in a reasonable time, given an arbitrary basis. In general, a reduced basis is composed from vectors with good properties, such as being short or being orthogonal.
A polynomial-time basis reduction algorithm developed by Laszlo Lovasz, Arjen Lenstra, and Hendrik Lenstra was published in 1982, the LLL, which took its name from the initials of their surnames [43]. Basis reduction essentially solves the shortest vector problem in small dimensions; in particular, in two dimensions the shortest vector can be computed in polynomial time. On the contrary, in large dimensions there is no known algorithm which solves SVP exactly in polynomial time. With the aid of the Gram–Schmidt orthogonalization method, we define the basis reduction method LLL.
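To make the LLL procedure concrete, the following compact Python sketch (an illustration added here, deliberately unoptimized, with the usual parameter δ = 3/4; it recomputes the Gram–Schmidt data after every update instead of maintaining it incrementally) reduces the rows of an integer basis:

import numpy as np

def gram_schmidt(B):
    """Gram-Schmidt orthogonalization: returns the vectors B* and the coefficients mu for the rows of B."""
    n = B.shape[0]
    Bstar = np.zeros_like(B, dtype=float)
    mu = np.zeros((n, n))
    for i in range(n):
        Bstar[i] = B[i]
        for j in range(i):
            mu[i, j] = np.dot(B[i], Bstar[j]) / np.dot(Bstar[j], Bstar[j])
            Bstar[i] = Bstar[i] - mu[i, j] * Bstar[j]
    return Bstar, mu

def lll_reduce(basis, delta=0.75):
    """Textbook LLL reduction of the rows of an integer basis matrix."""
    B = np.array(basis, dtype=float)
    n = B.shape[0]
    Bstar, mu = gram_schmidt(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):                 # size reduction of b_k against b_j
            q = round(mu[k, j])
            if q != 0:
                B[k] = B[k] - q * B[j]
                Bstar, mu = gram_schmidt(B)
        if np.dot(Bstar[k], Bstar[k]) >= (delta - mu[k, k - 1] ** 2) * np.dot(Bstar[k - 1], Bstar[k - 1]):
            k += 1                                     # Lovasz condition holds, move on
        else:
            B[[k - 1, k]] = B[[k, k - 1]]              # swap the two vectors and step back
            Bstar, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return np.rint(B).astype(int)

print(lll_reduce([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))  # rows form a reduced basis of the same lattice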
5. The NTRU Cryptosystem
A public key cryptosystem known as NTRU was presented in 1996 by Joseph H. Silverman, Jill Pipher, and Jeffrey Hoffstein [44]. Until 2013, the NTRU cryptosystem was only commercially available, but after that it was released into the public domain for public use. NTRU is based on the shortest vector problem in a lattice and is one of the fastest public key cryptographic schemes. It encrypts and decrypts data using polynomial rings. NTRU is more efficient than other current cryptosystems such as RSA, it is believed to be resistant to quantum computer attacks, and this makes it a prominent post-quantum cryptosystem.
To describe the way the NTRU cryptographic scheme operates, we first have to give some definitions.
Definition 8.
Fix a positive integer N. The ring of convolution polynomials (of rank N) is the quotient ring
R = Z[X] / (X^N − 1).
(1)
Definition 9.
The ring of convolution polynomials (modulo q) is the quotient ring
R_q = (Z/qZ)[X] / (X^N − 1).
(2)
Definition 10.
We consider a polynomial a(x) as an element of R_q by reducing its coefficients modulo q. For any positive integers d_1 and d_2, we let
L(d_1, d_2) = { a(x) ∈ R : a(x) has d_1 coefficients equal to 1, d_2 coefficients equal to −1, and all other coefficients equal to 0 }
(3)
Polynomials in L ( d 1 , d 2 ) are called ternary (or trinary) polynomials. They are analogous to binary polynomials, which have only 0’s and 1’s as coefficients.
We assume we have two polynomials a ( x ) and b ( x ) . The product of these two polynomials is given by the formula
a(x) × b(x) = c(x) with c_k = Σ_{i=0}^{k} a_i b_{k−i} + Σ_{i=k+1}^{N−1} a_i b_{N+k−i} = Σ_{i+j ≡ k (mod N)} a_i b_j
(4)
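Equation (4) is simply a cyclic convolution of the two coefficient vectors; a small Python helper (an illustrative sketch, not code from the paper) makes this explicit and will be reused in the encryption sketch later in this section:

def convolve(a, b, N, q=None):
    """Cyclic convolution of coefficient lists a and b in Z[x]/(x^N - 1), optionally reduced modulo q."""
    c = [0] * N
    for i in range(N):
        for j in range(N):
            c[(i + j) % N] += a[i] * b[j]
    return [x % q for x in c] if q is not None else c

# Example: (1 + x) * (1 + x^2) = 1 + x + x^2 + x^3 in Z[x]/(x^5 - 1)
print(convolve([1, 1, 0, 0, 0], [1, 0, 1, 0, 0], N=5))   # [1, 1, 1, 1, 0]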
We will denote the inverses by F q and F p , such that
F q × f ≡ 1 ( mod q ) and F p × f ≡ 1 ( mod p )
(5)
5.1. Description
The NTRU cryptographic scheme is based, firstly, on three well-chosen parameters (N, p, q), such that N is a fixed large positive integer and p and q need not be prime but are relatively prime, i.e., gcd(p, q) = 1, with q always larger than p [44]. Secondly, NTRU depends on four sets of polynomials L_f, L_g, L_φ and L_m with integer coefficients of degree N − 1, and works in the ring R = Z[X]/(X^N − 1).
Every element f ∈ R is written as a polynomial or as a vector f = Σ_{i=0}^{N−1} f_i x^i = [f_0, f_1, …, f_{N−1}]. We make the assumption that Alice and Bob are two parties that want to transfer data and to communicate securely. A trusted party, or the first party, selects public parameters (N, p, q, d) such that N and p are prime numbers, gcd(p, q) = gcd(N, q) = 1 and q > (6d + 1)p.
Alice chooses randomly two polynomials f ( x ) ∈ L ( d + 1 , d ) and g ( x ) ∈ L ( d , d ) . These two polynomials are Alice’s private key.
Alice computes the inverse polynomials
F_q(x) = f(x)^{−1} ∈ R_q and F_p(x) = f(x)^{−1} ∈ R_p
(6)
Alice computes h(x) = F_q(x) × g(x) ∈ R_q, and the polynomial h(x) is Alice's public key. Alice's private key is the pair (f(x), F_p(x)), and only by using this key can she decrypt messages. Alternatively, she can store only f(x) and compute F_p(x) when she needs it.
Alice publishes her key h.
Bob wants to encrypt a message and chooses his plaintext m(x) ∈ R_p. The plaintext m(x) is a polynomial with coefficients m_i such that −p/2 ≤ m_i ≤ p/2.
Bob chooses a random polynomial r(x) ∈ L(d, d), which is called the ephemeral key, and computes
e ( x ) ≡ p h ( x ) × r ( x ) + m ( x ) ( mod q )
(7)
and this is the encrypted message that Bob sends to Alice.
Alice computes
a ( x ) ≡ f ( x ) × e ( x ) ( mod q )
(8)
Alice chooses the coefficients of a in the interval from − q / 2 to q / 2 (center lifts a ( x ) to an element of R).
Alice computes
b ( x ) ≡ F p ( x ) × a ( x ) ( mod p )
(9)
and she recovers the message m: if the parameters have been chosen correctly, the polynomial b(x) equals the plaintext m(x).
Depending on the choice of the ephemeral key r(x), the plaintext m(x) can be encrypted in many different ways, as its possible encryptions are p h(x) × r(x) + m(x). The ephemeral key should be used one time only, i.e., it should not be used to encrypt two different plaintexts. Additionally, Bob should not encrypt the same plaintext using two different ephemeral keys.
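The encryption and decryption steps above can be sketched directly on coefficient vectors. The following outline is an illustration added for this review and assumes that key generation has already been performed, i.e., that the coefficient lists of f, F_p and h are supplied (computing the inverses F_q and F_p is the nontrivial part and is omitted here); the cyclic-convolution helper from Equation (4) is repeated so the sketch is self-contained.

def convolve(a, b, N, q=None):
    """Cyclic convolution in Z[x]/(x^N - 1), optionally reduced modulo q (same helper as after Equation (4))."""
    c = [0] * N
    for i in range(N):
        for j in range(N):
            c[(i + j) % N] += a[i] * b[j]
    return [x % q for x in c] if q is not None else c

def center_lift(a, q):
    """Replace each coefficient by its centered representative modulo q."""
    return [((x + q // 2) % q) - q // 2 for x in a]

def ntru_encrypt(m, r, h, N, p, q):
    """e(x) = p * h(x) * r(x) + m(x) (mod q), all given as length-N coefficient lists."""
    hr = convolve(h, r, N, q)
    return [(p * x + y) % q for x, y in zip(hr, m)]

def ntru_decrypt(e, f, Fp, N, p, q):
    """a(x) = f(x) * e(x) (mod q), center-lift, then b(x) = Fp(x) * a(x) (mod p)."""
    a = center_lift(convolve(f, e, N, q), q)
    b = convolve(Fp, a, N, p)
    return center_lift(b, p)      # equals the plaintext when the parameters are chosen correctly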
5.2. Discrete Implementation
Assume the trusted party chooses the parameters (N, p, q, d) = (11, 3, 61, 2). As we can see, N = 11 and p = 3 are prime numbers, gcd(3, 61) = gcd(11, 61) = 1, and the condition q > (6d + 1)p is satisfied since 61 > (6 · 2 + 1) · 3 = 39.
Alice chooses the polynomials
f(x) = x^10 − x^8 − x^6 + x^4 + x^2 + x + 1 ∈ L(3, 2)
g(x) = x^9 − x^8 − x^6 + x^4 + x^2 + 1 ∈ L(2, 2)
These polynomials, f , g are the private key of Alice.
Alice computes the inverses
F_61(x) = f(x)^{−1} mod 61 = 45x^10 + 49x^9 + 26x^8 + 40x^7 + 53x^6 + 47x^5 + 21x^4 + 24x^3 + 60x^2 + 32x + 31 ∈ R_61
F_3(x) = f(x)^{−1} = x^9 + x^7 + x^5 + 2x^4 + 2x^3 + 2x^2 + x ∈ R_3
Alice can store ( f ( x ) , F 3 ( x ) ) as her private key.
Alice computes
h(x) = F_61(x) × g(x) = 11x^10 + 49x^9 + 26x^8 + 46x^7 + 28x^6 + 53x^5 + 31x^4 + 36x^3 + 30x^2 + 5x + 50
and publishes her public key h ( x ) .
Bob decides to encrypt the message m(x) = x^7 − x^4 + x^3 + x + 1 and uses the ephemeral key r(x) = x^9 + x^7 + x^4 − x^3 + 1.
Bob computes and sends to Alice the encrypted message
e ( x ) ≡ p h ( x ) × r ( x ) + m ( x ) ( mod q )
that is
e(x) = 11x^10 + 49x^9 + 52x^8 + 35x^7 + 30x^6 + 25x^5 + 35x^4 + 32x^3 + 18x^2 + 56x + 28 (mod 61).
Alice receives the ciphertext e ( x ) and computes
f(x) × e(x) = 58x^10 + 60x^9 + 60x^8 + 4x^7 + 56x^5 + 6x^4 + 55x^2 + 3x + 6 ∈ R_61
Therefore, Alice centerlifts modulo 61 to obtain
a(x) = −3x^10 − x^9 − x^8 + 4x^7 + 5x^5 + 6x^4 − 6x^2 + 3x + 6 ∈ R_61
She reduces a ( x ) modulo 3 and computes
F_3(x) × a(x) = x^7 + 2x^4 + x^3 + x + 1 ∈ R_3
and recovers Bob's message m(x) = x^7 − x^4 + x^3 + x + 1
5.3. Security
Lattice-based NTRU is one of the fastest public key cryptosystems and it is used for encryption (NTRU-Encrypt) and digital signatures (NTRUSign). From the moment that NTRU was presented in 1996, NTRU security has been a main issue of interest and research. NTRU hardness relies on the hard mathematical problems in a lattice, such as the Shortest Vector Problem [35].
The authors of NTRU in their paper [44] argue that the secret key can be recovered from the public key by finding a sufficiently short vector of the lattice that is generated in the NTRU algorithm. D. Coppersmith and A. Shamir proposed a simple attack against the NTRU cryptosystem. In their work, they argued that the target vector f || g ∈ Z^{2N} (the symbol || denotes vector concatenation) belongs to the natural lattice:
L C S = { F | | G ∈ Z 2 N | F ≡ h × G mod q where F , G ∈ R } .
It is obvious that L_CS is a full-dimension lattice in Z^{2N}, with volume q^N. The target vector is the shortest vector of L_CS, so an SVP oracle should heuristically output the private keys f and g. Hoffstein et al. claimed that if one chooses the number N reasonably, NTRU is sufficiently secure, as all these types of attacks are exponential in N. These types of attacks are based on the difficulty of solving certain lattice problems, such as SVP and CVP [45]. Lattice attacks can be used to recover the private key of an NTRU system, but they are generally considered to be infeasible for the current parameters of NTRU. It is important that the key size of the NTRU protocol is O(N log q), and this fact makes NTRU a promising cryptographic scheme for post-quantum cryptography [46].
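One standard way to hand L_CS to a lattice-reduction routine is to build an explicit 2N × 2N basis for it: N rows of the form (cyclic shift of h || unit vector), which encode the congruence F ≡ h × G (mod q) for G equal to a unit vector, plus N rows (q · unit vector || 0) that account for the reduction modulo q. The sketch below is an illustration written for this review, with a toy h that is not a real NTRU public key:

import numpy as np

def cs_lattice_basis(h, q):
    """Rows of the returned (2N x 2N) integer matrix generate the Coppersmith-Shamir lattice for public key h."""
    N = len(h)
    B = np.zeros((2 * N, 2 * N), dtype=int)
    for i in range(N):
        B[i, :N] = np.roll(h, i)   # coefficients of x^i * h(x) mod (x^N - 1)
        B[i, N + i] = 1            # the corresponding G = x^i
        B[N + i, i] = q            # adding multiples of q to the F-part stays in the lattice
    return B

B = cs_lattice_basis([3, 1, 4, 1, 5], q=7)   # toy values only
print(B.shape)                               # (10, 10)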
Furthermore, the cryptanalysis of NTRU is an active area of research, and other types of attacks against the NTRU cryptosystem have been developed [47,48,49]. We describe some of them below.
Brute-Force Attack. In this type of attack, all possible values of the private key are tested until the correct one is found. Brute-force attacks are generally not practical for NTRU, as the size of the key space is very large [50].
Key Recovery Attack. This type of attack relies on exploiting vulnerabilities in the key-generation process of NTRU. For example, if the random number generator used to create the secret key is weak, an attacker may be able to recover the private key [51].
Side-channel Attack. This type of attack takes advantage of weaknesses in the implementation of NTRU, such as timing attacks, power analysis attacks, and fault attacks. Side-channel attacks require physical access to the device running the implementation [52,53].
To protect NTRU against these types of attacks and avoid the leak of secret data and information, researchers use various techniques to ensure its security, such as parameter selection, randomization, and error-correcting codes.
6. The LWE Cryptosystem
In 2005, O. Regev presented a new public key cryptographic scheme, the Learning with Errors (LWE) cryptosystem, and for this work Regev won the Gödel Prize in 2018 [54]. LWE is one of the most famous lattice-based cryptosystems and one of the most widely studied in recent years. It is based on the Learning with Errors problem and the hardness of recovering a secret vector given noisy random linear functions of it modulo a prime number. The LWE public key cryptosystem is probabilistic, and its decryption algorithm succeeds with high probability. Since LWE proved to be secure and efficient, it has become one of the most contemporary and innovative research topics in both lattice-based cryptography and computer science.
6.1. The Learning with Errors Problem
Firstly, we have to introduce the Learning with Errors problem (LWE). Assume that we have a secret vector s = (s_1, s_2, …, s_n) ∈ Z^n with integer coefficients and m linear equations, such that
a_{11} s_1 + a_{12} s_2 + … + a_{1n} s_n ≈ b_1
a_{21} s_1 + a_{22} s_2 + … + a_{2n} s_n ≈ b_2
⋮
a_{m1} s_1 + a_{m2} s_2 + … + a_{mn} s_n ≈ b_m
We use the symbol "≈" to indicate that each equation holds only up to a small error. Recovering s is a difficult problem because adding and multiplying rows together amplifies the errors in each equation, so the final row produced by elimination is worthless and its value is far from the true one.
Definition 11.
Let s ∈ Z_q^n be a secret vector and χ be a given distribution on Z_q. An LWE distribution A_{s,n,q,χ} generates a sample (a, b) ∈ Z_q^n × Z_q (or, in matrix form, (A, b) ∈ Z_q^{m×n} × Z_q^m), where a ∈ Z_q^n is uniformly distributed and b = ⟨a, s⟩ + e, with e ← χ and ⟨a, s⟩ the inner product of a and s in Z_q.
We call A_{s,n,q,χ} = (a, b) ∈ Z_q^n × Z_q the LWE distribution; s is called the private key and χ the error distribution. If b ∈ Z_q is uniformly distributed, the distribution is called the uniform LWE distribution.
Definition 12.
Fix n ≥ 1, q ≥ 2 and an error probability distribution χ on Z_q. Let s be a vector with n coefficients in Z_q. Let A_{s,χ} on Z_q^n × Z_q be the probability distribution obtained by choosing a vector a ∈ Z_q^n uniformly at random, choosing e ∈ Z_q according to χ, and outputting (a, ⟨a, s⟩ + e), where additions are performed in Z_q. We say an algorithm solves LWE with modulus q and error distribution χ if, for any s ∈ Z_q^n, given enough samples from A_{s,χ} it outputs s with high probability.
Definition 13.
Suppose we have a way of generating samples from A_{s,χ} as above, and also uniformly distributed samples (a, b) from Z_q^n × Z_q; we call this uniform distribution U. The decision-LWE problem is to determine, after a polynomial number of samples, whether the samples come from A_{s,χ} or from U.
Simplifying the definition in more compact matrix notation: generate a uniformly random matrix A with coefficients between 0 and q − 1 and two secret vectors s, e with coefficients drawn from a distribution with small variance; the LWE sample is then (A, b = As + e mod q). According to the LWE problem, it is hard to recover the secret s from such a sample.
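A minimal sketch of sample generation in this matrix form (our own code; the helper name lwe_sample and the toy parameter sizes, far smaller than the real ones discussed below, are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def lwe_sample(s, q, m, sigma=1.0):
    """Return m LWE samples (A, b = A s + e mod q) for a secret s in Z_q^n."""
    n = len(s)
    A = rng.integers(0, q, size=(m, n))                    # uniform public matrix
    e = np.rint(rng.normal(0, sigma, size=m)).astype(int)  # small rounded-Gaussian error
    b = (A @ s + e) % q
    return A, b

q, n = 97, 8
s = rng.integers(0, q, size=n)      # secret vector
A, b = lwe_sample(s, q, m=16)
# Without e, the secret s could be recovered from n samples by Gaussian
# elimination; the error terms are exactly what makes recovering s hard.
```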
Definition 14.
For a > 0, the family Ψ_a is the (uncountable) set of all elliptical Gaussian distributions D_r over a number field K_R in which r ≥ a.
The choice of the parameters is crucial for the hardness of this problem. In practice, the distribution is a Gaussian distribution or a binomial distribution with variance 1 to 3; the length n of the secret vector satisfies 2^9 < n < 2^10, and the modulus q lies in the range 2^8 to 2^16.
6.2. Description
Assume n ≥ 1 and q ≥ 2 are positive integers and χ is a given probability distribution on Z_q. The LWE cryptographic scheme is based on the LWE distribution A_{s,χ} and is described below.
The parameters of the LWE cryptosystem are crucial to the protocol's security. So, let n be the security parameter of the system, let m, q be two integers, and let χ be a probability distribution on Z_q.
The security and the correctness of the cryptosystem are based on the following parameters, which must be chosen appropriately.
Choose q, a prime number between n^2 and 2n^2.
Let m = (1 + ϵ)(n + 1) log q for some arbitrary constant ϵ > 0.
The probability distribution is chosen to be χ = Ψ_{a(n)} for a(n) ∈ O(1/(√n log n)).
We suppose that there are two parties, Alice and Bob, who want to transfer information securely. The LWE cryptosystem has the typical structure of a cryptographic scheme and its steps are the following.
Alice chooses s ∈ Z_q^n uniformly at random; s is the private key.
Alice generates a public key by choosing m vectors a_1, a_2, …, a_m ∈ Z_q^n independently from the uniform distribution. She also chooses error offsets e_1, e_2, …, e_m ∈ Z_q independently according to χ. The public key is (a_i, b_i)_{i=1}^m, where b_i = ⟨a_i, s⟩ + e_i.
In matrix form, the public key is the LWE sample (A, b = As + e mod q), where s is the secret vector.
Bob, in order to encrypt a bit, chooses a random set S uniformly among all 2^m subsets of [m]. The encryption is (∑_{i∈S} a_i, ∑_{i∈S} b_i) if the bit is 0, and (∑_{i∈S} a_i, ⌊q/2⌋ + ∑_{i∈S} b_i) if the bit is 1.
In matrix form, Bob can encrypt a bit m by computing two LWE samples: one using A as the random public element, and one using b. Bob generates his own secret vectors s′, e′ and e″ and forms the LWE samples (A, b′ = A^T s′ + e′ mod q) and (b, v′ = b^T s′ + e″ mod q). Bob then adds the message he wants to encrypt to the second sample, so his encrypted message consists of the two samples (A, b′ = A^T s′ + e′ mod q) and (b, v′ = b^T s′ + e″ + ⌊q/2⌋ m mod q).
Alice wants to decrypt Bob's ciphertext. The decryption of a pair (a, b) is 0 if b − ⟨a, s⟩ is closer to 0 than to ⌊q/2⌋ modulo q; otherwise, the decryption is 1.
In matrix form, Alice first calculates Δv = v′ − b′^T s. As long as e^T s′ + e″ − s^T e′ is small enough, Alice recovers the message as mes = ⌊(2/q) Δv⌉.
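The matrix-form steps condense into a short sketch (our own; the modulus and the {−1, 0, 1} secret/error distribution are illustrative assumptions, and real parameters are much larger):

```python
import numpy as np

rng = np.random.default_rng(1)
q, n = 3329, 16                       # illustrative sizes only

def small(shape):                     # small secret/error entries in {-1, 0, 1}
    return rng.integers(-1, 2, size=shape)

# Key generation (Alice): public (A, b), secret s.
A = rng.integers(0, q, size=(n, n))
s, e = small(n), small(n)
b = (A @ s + e) % q

# Encryption of one bit m (Bob), following the matrix form above.
m = 1
s2, e1, e2 = small(n), small(n), int(small(1)[0])
b2 = (A.T @ s2 + e1) % q                   # b' = A^T s' + e'
v2 = (b @ s2 + e2 + (q // 2) * m) % q      # v' = b^T s' + e'' + floor(q/2) m

# Decryption (Alice): Delta v = v' - b'^T s, then round to a multiple of q/2.
dv = (v2 - b2 @ s) % q                     # = small noise + floor(q/2) m (mod q)
m_rec = int(round(2 * dv / q)) % 2
print(m_rec == m)                          # True: the noise is far below q/4
```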
6.3. Discrete Implementation
We choose n = 4 and q = 13.
Alice chooses the private key s = [2, 5, 0, 6].
Let m = 3, so Alice generates the public key with the aid of three vectors a_i, i = 1, 2, 3, and three error terms e_i, i = 1, 2, 3. She chooses a_1 = [1, 6, 2, 4] with e_1 = 1, a_2 = [0, 3, 5, 1] with e_2 = 0, and a_3 = [2, 1, 6, 3] with e_3 = −1. Since b_i = ⟨a_i, s⟩ + e_i (mod 13), for example b_1 = 56 + 1 ≡ 5 (mod 13), Alice's public key is:
{([1, 6, 2, 4], 5), ([0, 3, 5, 1], 8), ([2, 1, 6, 3], 0)}
Bob wants to encrypt 0, so he takes the subset S = {1, 2} and computes
(∑_{i∈S} a_i, ∑_{i∈S} b_i) = ([1, 6, 2, 4] + [0, 3, 5, 1], 5 + 8) = ([1, 9, 7, 5], 0), since 13 ≡ 0 (mod 13).
Alice performs the decryption algorithm by computing
b − ⟨a, s⟩ = 0 − ⟨[1, 9, 7, 5], [2, 5, 0, 6]⟩ = 0 − 77 ≡ 1 (mod 13)
and the decryption is 0, since the output value 1 is closer to 0 than to ⌊13/2⌋ = 6 modulo 13.
Therefore, the encryption scheme worked correctly.
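A sketch recomputing this toy example (our own code) follows; it reproduces the values above, including the residual noise term 1 that survives into the decryption step:

```python
import numpy as np

q = 13
s = np.array([2, 5, 0, 6])                               # Alice's private key
A = np.array([[1, 6, 2, 4], [0, 3, 5, 1], [2, 1, 6, 3]])
e = np.array([1, 0, -1])                                 # error offsets
b = (A @ s + e) % q                                      # b = [5, 8, 0]

S = [0, 1]                                               # Bob's subset {1, 2}
ct = (A[S].sum(axis=0) % q, int(b[S].sum() % q))         # ciphertext for bit 0

d = (ct[1] - ct[0] @ s) % q                              # b - <a, s> = 1 (mod 13)
bit = 0 if min(d, q - d) < abs(d - q // 2) else 1        # closer to 0 or to 6?
print(bit)                                               # 0, as expected
```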
6.4. Implementations and Variants
The Learning with Errors (LWE) cryptosystem is a popular post-quantum cryptographic scheme that relies on the hardness of particular computational problems on lattices. There are several variants of the LWE cryptosystem, including Ring-LWE, Dual LWE, Module-LWE, Binary-LWE, multilinear LWE, and others [55,56,57].
The RING-LWE Cryptosystem
This variant of LWE uses polynomial rings instead of the more general lattices used in standard LWE. Ring-LWE has a simpler structure, which improves execution speed and memory utilization. In 2013, Lyubashevsky et al. [46] presented a new public key cryptographic scheme based on the LWE problem over rings.
The Ring-LWE cryptosystem structure.
Lyubashevsky et al. proposed a well-analyzed cryptosystem whose public key and ciphertext each consist of two ring elements; it is the ring-based analogue of the plain lattice-based public key cryptographic system.
The two parties who want to communicate agree on the complexity parameter n, the highest coefficient power to be used. Let R = Z[X]/(X^n + 1) be the fixed ring, and choose an integer q, such as q = 2n − 1. The steps of the Ring-LWE protocol are described below.
A secret element s of length n is chosen with integer entries modulo q in the ring R_q, where q ∈ Z^+. This is the private key of the system.
An element a ∈ R_q is chosen together with a random small element e ∈ R from the error distribution, and we compute b = a · s + e.
The public key of the system is the pair (a, b).
Let m be the n-bit message that is meant for encryption.
The message m is considered an element of R, and its bits are used as the coefficients of a polynomial of degree less than n.
The elements e_1, e_2, r ∈ R are generated from the error distribution.
u = a · r + e_1 mod q is computed.
v = b · r + e_2 + ⌊q/2⌉ · m mod q is computed, and (u, v) ∈ R_q^2 is sent to the receiver.
The second party receives the payload (u, v) ∈ R_q^2 and computes d = v − u · s = (r · e − s · e_1 + e_2) + ⌊q/2⌉ · m mod q. Each coefficient d_i is evaluated; if d_i ≈ q/2, the corresponding bit is recovered as 1, and otherwise as 0.
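The only new ingredient relative to plain LWE is multiplication in R_q = Z_q[x]/(x^n + 1), where x^n wraps around with a sign flip (negacyclic convolution). A toy sketch of the full protocol follows (our own code; n and q are illustrative assumptions, and real schemes use e.g. n = 1024):

```python
import numpy as np

rng = np.random.default_rng(2)
n, q = 16, 257

def ring_mul(a, b):
    """Multiply in R_q = Z_q[x]/(x^n + 1): x^n wraps around with a sign flip."""
    c = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(n):
            k = i + j
            if k < n:
                c[k] += a[i] * b[j]
            else:
                c[k - n] -= a[i] * b[j]
    return c % q

small = lambda: rng.integers(-1, 2, size=n)    # error-distribution stand-in

# Key generation: private s, public (a, b = a*s + e).
s, e = small(), small()
a = rng.integers(0, q, size=n)
b = (ring_mul(a, s) + e) % q

m = rng.integers(0, 2, size=n)                 # n-bit message as 0/1 coefficients
r, e1, e2 = small(), small(), small()
u = (ring_mul(a, r) + e1) % q
v = (ring_mul(b, r) + e2 + (q // 2) * m) % q

d = (v - ring_mul(u, s)) % q                   # = small noise + floor(q/2)*m
m_rec = ((d > q // 4) & (d < 3 * q // 4)).astype(int)
print(np.array_equal(m_rec, m))                # True: coefficients near q/2 decode to 1
```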
The Ring-LWE cryptographic scheme is similar to the LWE cryptosystem proposed by Regev. The difference is that inner products are replaced with ring products, yielding a new ring structure and increasing the efficiency of the operations.
6.5. Security
Learning with Errors (LWE) is a computational problem that is the basis for cryptosystems and especially for cryptographic schemes of post-quantum cryptography. It is considered to be a hard mathematical problem and as a consequence, cryptosystems that are based on the LWE problem are of high security as well. LWE cryptographic protocols are a contemporary and active field of research and therefore their security is studied and analyzed continually and steadily.
There are various attacks that can be performed against cryptosystems based on the LWE problem. Broadly, these attacks either exploit weaknesses in the LWE problem itself or exploit weaknesses in the specific implementation of the cryptosystem. Below, we present some of the types of attacks that can be launched against LWE-based cryptographic schemes.
Dual Attack. This type of attack is based on the dual lattice and is most effective against LWE instances with small plaintext sizes.
Hybrid dual attacks are appropriate for sparse and small secrets: in a hybrid attack, one guesses part of the secret and performs a lattice attack on the leftover part [58]. The cost of attacking the remaining portion of the secret is decreased because guessing reduces the problem's size, and the lattice-attack component can be reused across multiple guesses. When the lattice-attack component is a primal (respectively, dual) attack, we call it a hybrid primal (respectively, hybrid dual) attack; the optimal attack is achieved when the cost of guessing equals the cost of the lattice attack.
Sieving Attack. This type of attack relies on the idea of sieving, which finds linear combinations of the LWE samples that reveal information about the secret. Sieving attacks can be used to solve the LWE problem with fewer samples than the original complexity analysis suggests.
Algebraic attack. This type of attack is based on the idea of finding algebraic relations between the LWE samples that leak secret information. Algebraic attacks can likewise be suitable for solving the LWE problem with fewer samples.
Side-channel attack. This type of attack exploits weaknesses in the implementation of the LWE-based scheme, such as timing attacks and others. Side-channel attacks are generally easier to mount than attacks against the LWE problem itself, but they require physical access to the device running the implementation.
Attacks that use the BKW algorithm. This is a classical attack; it runs in sub-exponential time and is most effective against LWE instances with small or highly structured parameters.
To mitigate these attacks, LWE-based schemes typically use various techniques such as parameter selection, randomization, and error-correcting codes. These techniques are designed to make the LWE problem harder to solve and to prevent attackers from taking advantage of vulnerabilities in the implementation [59,60].
7. The GGH Cryptosystem
In 1997, Oded Goldreich, Shafi Goldwasser, and Shai Halevi proposed a cryptosystem (GGH) [61] inspired by algebraic coding theory; it can be seen as a lattice analogue of the McEliece cryptosystem [29]. In both the GGH and McEliece schemes, the ciphertext is obtained by adding a random noise vector to the plaintext [35]. In the GGH cryptosystem, the public and the private key are representations of a lattice, while in McEliece they are representations of a linear code; the basic distinction between these two cryptographic schemes is that the domains in which the operations take place are different. The main idea and structure of the GGH cryptographic scheme are characterized by simplicity, and its security is based on the difficulty of reducing lattices.
7.1. Description
The GGH public key encryption scheme is formed by the key generation algorithm K, the encryption algorithm E, and the decryption algorithm D. It is based on lattices in Z^n, a key derivation function h : Z^n × Z^n → K_s, and a symmetric cryptosystem (K_s, P, C, E_s, D_s), where K_s is the key space, P the set of plaintexts, C the set of ciphertexts, E_s the encryption algorithm, and D_s the decryption algorithm.
The key generation algorithm K generates a lattice L by choosing a basis matrix V that is nearly orthogonal. An integer matrix U is chosen with determinant det(U) = ±1, and the algorithm computes W = UV. Then the algorithm outputs e_k = W and d_k = V.
The encryption algorithm E receives as input an encryption key e_k = W and a plain message m ∈ P. It chooses a random vector u ∈ Z^n and a random noise vector r. Then it computes x = uW and z = x + r and encrypts the message as w = E_s(h(x, r), m). It outputs the ciphertext c = (z, w).
The decryption algorithm D takes as input a decryption key d_k = V and a ciphertext c = (z, w). It computes x = ⌊zV^{−1}⌉V and r = z − x and decrypts as m = D_s(h(x, r), w). If the D_s algorithm outputs the symbol ⊥, the decryption fails and D outputs ⊥; otherwise, the algorithm outputs m.
We assume that there exist two users, Alice and Bob, who want to communicate secretly. The main (classical) process of the GGH cryptosystem is described below.
Alice chooses a set of linearly independent vectors v_1, v_2, …, v_n ∈ Z^n, which form the matrix V = [v_1, v_2, …, v_n], v_i ∈ Z^n, 1 ≤ i ≤ n. Alice checks her choice of vectors by calculating the Hadamard ratio of the matrix V and verifying that it is not too small. This is Alice's private key, and we let L be the lattice generated by these vectors.
Alice chooses an n × n unimodular matrix U with integer coefficients that satisfies det(U) = ±1.
Alice computes a bad basis w_1, w_2, …, w_n for the lattice L as the rows of W = UV; this is Alice's public key, and she publishes w_1, w_2, …, w_n.
Bob chooses a plaintext that he wants to encrypt, represented as a small vector m (e.g., a binary vector). Then he chooses a small random "noise" vector r, which acts as a random element; each entry of r is chosen randomly between −δ and δ, where δ is a fixed public parameter.
Bob computes the vector e = mW + r = ∑_{i=1}^n m_i w_i + r = m_1 w_1 + m_2 w_2 + ⋯ + m_n w_n + r using Alice's public key and sends the ciphertext e to Alice.
Alice, with the aid of Babai's algorithm, uses the basis v_1, v_2, …, v_n to find a vector in L that is close to e. This vector is a = mW, since the "noise" vector r is small and since she uses a good basis. Then she computes aW^{−1} = mWW^{−1} and recovers m.
Suppose there is an eavesdropper, Eve, who wants to obtain information about the communication between Alice and Bob. Eve has in her possession the message e that Bob sends to Alice, and therefore tries to find the closest vector to e, solving the CVP using the public basis W. As she uses vectors that are not reasonably orthogonal, Eve will recover a message ê which will probably not be near m.
7.2. Discrete Implementation
Alice chooses a private basis v_1 = (48, 1) and v_2 = (−1, 48), which is a good basis since v_1 and v_2 are orthogonal vectors, i.e., ⟨v_1, v_2⟩ = 0. The rows of the matrix V = [[48, 1], [−1, 48]] are Alice's private key. The lattice L spanned by v_1 and v_2 has determinant det(L) = 2305, and the Hadamard ratio of the basis is H = (det(L)/(|v_1| |v_2|))^{1/2} ≃ 1.
Alice chooses a unimodular matrix U with determinant equal to 1, namely U = [[5, 8], [3, 5]] with det(U) = +1.
Alice computes the matrix W = UV = [[232, 389], [139, 243]]. Its rows are Alice's bad basis w_1 = (232, 389) and w_2 = (139, 243): since cos(w_1, w_2) ≃ 0.99948, these vectors are nearly parallel, and so they are suitable for a public key.
It is very important for the noise vector to be selected carefully, so that the nearest lattice point is not shifted. For Alice's basis generating the lattice L, r is chosen with |r| < 20; that is, r = (r_x, r_y) with −10 ≤ r_x, r_y ≤ 10.
Bob wants to encrypt the message m = (35, 27). The message can be seen as a linear combination of the basis w_1, w_2, namely 35w_1 + 27w_2, to which the noise vector r is added.
The corresponding ciphertext is e = mW + r = (35, 27)[[232, 389], [139, 243]] + (−9, 1) = (19285, 17064) + (−9, 1) = (19276, 17065), and Bob sends it to Alice.
Alice, using the private basis, applies Babai's algorithm and finds the closest lattice point. She solves the equation a_1(48, 1) + a_2(−1, 48) = (19276, 17065) and finds a_1 ≃ 463.02 and a_2 ≃ 345.8. So the closest lattice point is 463(48, 1) + 346(−1, 48) = (21878, 17071), and this lattice vector is close to e.
Alice realizes that Bob must have computed (21878, 17071) as a linear combination of the public basis vectors, so solving the linear combination m_1(232, 389) + m_2(139, 243) = (21878, 17071), she finds m_1 = 35 and m_2 = 27 and recovers the message m = (m_1, m_2) = (35, 27).
Eve has in her possession the encrypted message (19276, 17065) that Bob sent to Alice, and she tries to solve the CVP using the public basis. Solving the equation m_1(232, 389) + m_2(139, 243) = (19276, 17065), she finds the incorrect values m_1 ≃ 1003.1 and m_2 ≃ −1535.5 and recovers the incorrect message m′ = (m_1, m_2) = (1003, −1535).
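Since worked examples like this are easy to get subtly wrong, the following sketch (our own code) recomputes the whole 2-D walk-through from the private basis, unimodular matrix, message, and noise vector above, so each printed intermediate value can be checked against it:

```python
import numpy as np

V = np.array([[48, 1], [-1, 48]])     # private (good) basis; rows are v1, v2
U = np.array([[5, 8], [3, 5]])        # unimodular, det(U) = 1
W = U @ V                             # public (bad) basis

m = np.array([35, 27])                # plaintext vector
r = np.array([-9, 1])                 # small noise vector
e = m @ W + r                         # ciphertext

# Decryption via Babai rounding: solve x V = e over the rationals, round to
# the closest lattice point in V-coordinates, then undo the basis change.
x = np.rint(e @ np.linalg.inv(V)).astype(int)
m_rec = (x @ V) @ np.linalg.inv(W)
print(np.rint(m_rec).astype(int))     # recovers [35 27]

# Eve's attempt with the bad public basis W generally rounds to a wrong vector:
print(np.rint(e @ np.linalg.inv(W)).astype(int))
```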
In 1999 and 2001, D. Micciancio proposed a simple technique to reduce both the key size and the ciphertext size of the GGH cryptosystem without decreasing its level of security [62,63].
7.3. Security
In the GGH cryptographic scheme, if a security parameter n is chosen, the time required for encryption and the size of the key are O(n^2 log n), which is more efficient than other cryptosystems such as the Ajtai–Dwork (AD) cryptosystem.
There are some natural ways to perform an attack on the GGH cryptographic scheme.
Leak information and obtain the private key V from the public key W.
For this type of attack, a lattice basis reduction algorithm (such as LLL) is performed on the public key, the matrix W. It is possible that the output is a basis W′ good enough to enable the effective solution of the necessary instances of the closest vector problem. This attack becomes extremely difficult to carry out if the dimension of the lattice is sufficiently large.
Assuming we have a small error vector r, try to extract information about the message from the ciphertext e.
For this type of attack, it is useful that in the ciphertext e = mW + r the error vector r has small entries. One idea is to compute eW^{−1} = mWW^{−1} + rW^{−1} and try to deduce possible values for some entries of rW^{−1}. For example, if the j-th column of W^{−1} has a particularly small norm, then one can deduce that the j-th entry of rW^{−1} is always small and hence obtain an accurate estimate for the j-th entry of m. To defeat this attack, one should only use some low-order bits of some entries of m to carry information, or use an appropriate randomized padding scheme.
Try to solve the Closest Vector Problem for e with respect to the lattice generated by W, for example by performing Babai's nearest plane algorithm or the embedding technique.
Moreover, certain types of attacks can be performed against GGH which are discussed below, such as Nguyen's attack and the Lee and Hahn attack.
Goldreich, Goldwasser, and Halevi claimed that increasing the key size compensates for the decrease in computation time [35]. When presenting their paper, the three authors published five numerical challenges corresponding to increasing values of the parameter n in higher dimensions, with the aim of supporting their algorithm. In each challenge, a public key and a ciphertext were given, and it was requested to recover the plaintext.
In 1999, P. Nguyen exploited a weakness specific to the way the parameters are chosen and developed an attack against the GGH cryptographic scheme [64]. The first four challenges, for n = 200, 250, 300, 350, were broken; since then, GGH has been considered partially broken in its original form. Nguyen argued that the choice of the error vector is its weakness and makes it vulnerable to attack. The error vectors used in the encryption of the GGH algorithm must be shorter than the vectors that generate the lattice, and this weakness makes the Closest Vector Problem instances arising from GGH easier than general CVP instances [35].
The other weakness of the GGH cryptosystem is the choice of the error vector e in the encryption procedure. The vector e lies in {±σ}^n: it is chosen to maximize the Euclidean norm under requirements on the infinity norm. Nguyen takes the ciphertext c = mB + e modulo σ, where m is the plaintext and B the public key, and e disappears from the equation, because e ∈ {±σ}^n and every entry is ≡ 0 (mod σ). This leaks information about the message m (mod σ); increasing the modulus to 2σ and adding an all-σ vector s to the equation yields an equation that, if solved for m, leaks information about m (mod 2σ). Nguyen also demonstrated that in most cases this equation could easily be solved for m.
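The mod-σ leak is simple enough to demonstrate directly. The following toy sketch (our own illustration, with a random stand-in for the public basis) shows that the error vector vanishes when the ciphertext is reduced modulo σ:

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma = 6, 3
B = rng.integers(-50, 50, size=(n, n))           # stand-in public basis
m = rng.integers(-10, 10, size=n)                # plaintext
e = sigma * rng.choice([-1, 1], size=n)          # error vector in {+sigma, -sigma}^n
c = m @ B + e

# Every entry of e is a multiple of sigma, so c = m B (mod sigma):
print(np.array_equal(c % sigma, (m @ B) % sigma))   # True
```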
In 2006, Nguyen and Regev performed an attack on the GGH signature scheme, transforming a geometrical problem into a multivariate optimization problem [65]. The final numerical challenge, for n = 400, was solved by M.S. Lee and S.G. Hahn in 2010 [66]. GGH therefore has weaknesses and trapdoors that make it vulnerable to certain types of attacks, such as an attack that allows a fraudulent user to recover the secret key using a small amount of information about the ciphertext. Specifically, if an attacker can obtain the two smallest vectors in the lattice, they can recover the secret key using Coppersmith's algorithm [67]. As a result, GGH has limited practical use and has been largely superseded by newer and more secure lattice-based cryptosystems. So, while GGH made an important early contribution to the field of lattice-based cryptography, it is not currently considered a practical choice for secure communication due to its limitations in security.
8. Evaluation, Comparison and Discussion
We have presented a few of the main cryptographic schemes that are based on the hardness of lattice problems, and especially on the Closest Vector Problem. GGH is a public key cryptosystem inspired by algebraic coding theory: a noise vector is added to the plaintext, and the result of this addition is the ciphertext. Both the private and the public keys are representations of a lattice, and the private key has a specific structure. Nguyen's attack [64] revealed the weakness and vulnerability of the GGH cryptosystem, and many researchers thereafter considered GGH to be unusable [64,68].
Later, in 2010, M.S. Lee and S.G. Hahn presented a method that solved the numerical challenge of the highest dimension, 400 [66]. Applying this specific method, Lee and Hahn came to the conclusion that the decryption of the ciphertext could be accomplished using partial information about the plaintext. Thus, this method requires some knowledge of the plaintext and cannot be performed under realistic cryptanalytic circumstances. On the other hand, M. Yoshino and N. Kunihiro in 2012, and C. Gu et al. in 2015, presented a few modifications and improvements to the GGH cryptosystem, claiming that they made it more resistant to these attacks [67,69].
In 2014, C.F. de Barros and L.M. Schechter, in their paper "GGH may not be dead after all", proposed certain improvements for GGH and, finally, a variation of the GGH cryptographic scheme [70]. De Barros and Schechter attack GGH directly by reducing the public key in order to find a basis usable with Babai's algorithm. They increase the length of the noise vector r by introducing a new parameter k that modifies the GGH cryptographic algorithm. Their modifications resulted in a variation of GGH more resistant to cryptanalysis, but with a slower decryption process. In 2015, Brakerski et al. described certain types of attacks against some variations of the GGH cryptosystem that rely on the linearity of the zero-testing procedure [71].
GGH was a milestone in the evolution of post-quantum cryptography; it was one of the earliest lattice-based cryptographic schemes, and its security rests on the difficulty of the Closest Vector Problem. Even though it is viewed as one of the most significant lattice-based cryptosystems and still has theoretical interest, it is not recommended for practical use due to its security weaknesses. GGH is also less efficient than other lattice-based cryptosystems [72]: the process of encrypting and decrypting a message requires a large amount of computation, and this fact makes the GGH cryptosystem slower and less practical than other lattice-based cryptosystems.
Moreover, the GGH protocol is vulnerable to certain attacks, such as Coppersmith's attack and attacks via Babai's nearest plane algorithm, and it is considered not to be strong enough. These attacks disputed the security of GGH and made it less preferable than newer, stronger, and more secure lattice-based cryptosystems. In terms of efficiency, GGH is relatively inefficient compared to other lattice-based cryptosystems such as NTRU and LWE, especially in key generation and for large key lengths. As the GGH cryptosystem is based on matrix multiplications, choosing large keys requires a computationally expensive basis reduction algorithm for the encryption and decryption procedures.
GGH is also considered a complex cryptographic scheme, requiring concepts and knowledge of lattices and linear algebra to study, analyze, and implement. One more drawback is the lack of standardization, which makes it hard to compare its functionality, security, and interoperability with other cryptographic schemes. GGH was one of the first cryptographic schemes developed from lattice theory and cryptography. In spite of the fact that GGH certainly has an interesting theoretical basis and properties, it is not used in practice due to its limitations in security, efficiency, and complexity.
NTRU is a public key cryptographic scheme that is based on the Shortest Vector Problem in a lattice and was first presented in the 1990s. It is one of the most well studied and analyzed lattice-based cryptosystems and there have been many cryptanalysis studies of NTRU algorithms, including NTRU signatures. NTRU has a high level of security and efficiency and it is a promising protocol for post-quantum cryptography. Moreover, the NTRU cryptographic algorithm uses polynomial multiplication as its basic operation and it is notable for its simplicity.
A main advantage of the NTRU cryptosystem is its speed and it has been used in certain commercial applications where speed is a priority. NTRU has a fast implementation compared with other lattice-based cryptosystems, such as GGH, LWE, and Ajtai-Dwork. For this reason, NTRU is preferable for applications that require fast encryptions and decryption, such as in IoT devices or in embedded systems. In addition to its speed, NTRU uses smaller key sizes than other public key cryptosystems, but the same level of security is maintained. This makes it ideal for applications or environments with limited memory and processing power.
NTRU is considered to be a secure cryptographic scheme against various types of attacks. It is designed to be resistant against attacks such as lattice basis reduction, meet-in-the-middle attacks, and chosen ciphertext attacks. NTRU is believed to be a strong cryptographic scheme for the quantum era, meaning that it is considered to be resistant against attacks by quantum computers.
NTRU became famous and widely usable only after 2017; before then it was under a patent, and it was difficult for researchers to use and modify it. Partly for this reason, NTRU is not widely used or standardized in the industry, making it difficult to assess its interoperability with other cryptosystems. Furthermore, NTRU is considered a public key cryptographic protocol of relative complexity, and its analysis and implementation require a good understanding of lattice-based cryptography and ring theory. Overall, NTRU is a promising lattice-based cryptosystem for post-quantum cryptography that offers fast implementation and strong security guarantees [73].
Learning with Errors (LWE) is a widely used and well-studied public key cryptographic scheme based on lattice theory [74]. LWE is considered secure against both quantum and classical attacks; indeed, it is considered among the most secure and efficient of these schemes, while NTRU has limitations in terms of its security [75]. LWE bases its hardness on the difficulty of finding a random error vector in a matrix product, which makes it resistant to the same types of attacks as NTRU. It is considered a strongly secure and post-quantum secure cryptosystem, meaning that it is resistant to attacks by a quantum computer [76].
LWE uses keys of small size compared with other cryptographic schemes designed for the quantum era, such as code-based and hash-based cryptosystems [77]. Just like NTRU, LWE is appropriate for implementation in resource-constrained environments, such as IoT devices or embedded systems. A basic advantage of the LWE cryptosystem is its flexibility: it is a versatile cryptographic scheme suitable for a variety of cryptographic methods such as digital signatures, key exchange, and encryption. LWE also serves as a foundation for more advanced cryptographic protocols, which have developed as variations of it.
LWE can be vulnerable to certain types of attacks, such as side-channel attacks (i.e., timing attacks or power analysis attacks), if the right countermeasures are not taken [78]. Just like NTRU, LWE is not considered standardized and widely adopted by the computing industry, and this makes it difficult to assess its interoperability with other cryptosystems and to compare it with them. Moreover, the LWE cryptographic protocol is characterized by complexity, and understanding and modifying it is challenging.
Undoubtedly, both NTRU and LWE are fast, efficient, and secure cryptographic schemes. NTRU uses smaller key sizes, which makes it suitable for applications where memory and computational power are limited. Both LWE and NTRU are considered strong and resistant to various types of attacks, and both are considered prominent candidates for post-quantum cryptography. Moreover, LWE is an adaptable cryptographic protocol that can be used in a wide range of cryptographic tasks and methods, while NTRU is primarily used for encryption and decryption.
In summary, LWE and NTRU are both promising lattice-based cryptosystems that offer strong security guarantees and are resistant to quantum attacks. NTRU is known for its fast implementation and smaller key sizes, while LWE offers more flexibility in cryptographic primitives and is currently undergoing standardization. Ultimately, the choice between LWE and NTRU will depend on specific use cases and implementation requirements.
Overall, each lattice-based cryptosystem has its own strengths and weaknesses depending on the specific use case. Choosing the right one requires careful consideration of factors such as security, efficiency, and ease of implementation.
9. Lattice-Based Cryptographic Implementations and Future Research
Quantum research over the past few years has been particularly transformative, with scientific breakthroughs that will allow exponential increases in computing speed and precision. In 2016, the National Institute of Standards and Technology (NIST) announced an invitation to researchers to submit proposals for public-key post-quantum cryptographic algorithms. By the end of 2017, the initial submission deadline, 23 signature schemes and 59 encryption/key-encapsulation mechanism (KEM) schemes had been submitted, for a total of 82 candidate proposals.
In July 2022, the NIST finished the third round of selection and chose a set of encryption tools designed to be secure against attacks by future quantum computers. The four selected cryptographic algorithms are regarded as an important milestone in securing sensitive data against the possibility of cyberattacks from a quantum computer in the future [79].
The algorithms are created for the two primary purposes for which encryption is commonly employed: general encryption, which is used to secure data transferred over a public network, and digital signatures, which are used to verify an individual’s identity. Experts from several institutions and nations collaborated to develop all four algorithms which are presented below.
CRYSTALS-Kyber
This cryptographic scheme was selected by NIST for general encryption and is based on the module Learning with Errors problem. CRYSTALS-Kyber is similar to the Ring-LWE cryptographic scheme but is considered more secure and flexible. The communicating parties can use small encryption keys and exchange them easily and at high speed.
CRYSTALS-Dilithium
This algorithm is recommended for digital signatures and bases its security on the difficulty of lattice problems over module lattices. Like other digital signature schemes, the Dilithium signature scheme allows a sender to sign a message with their private key, and a recipient uses the sender's public key to verify the signature. Dilithium has the smallest public key and signature size of any lattice-based signature scheme that uses only uniform sampling.
FALCON
FALCON is a cryptographic protocol proposed for digital signatures. The FALCON cryptosystem is based on the theoretical framework of Gentry et al. [80]. It is a promising post-quantum algorithm, as it provides quick signature generation and verification. The FALCON cryptographic algorithm has strong advantages such as security, compactness, speed, scalability, and RAM economy.
SPHINCS+
SPHINCS+ is the third digital signature algorithm selected by NIST. SPHINCS+ uses hash functions and is considered somewhat larger and slower than FALCON and Dilithium. It is regarded as an improvement of the SPHINCS signature scheme, presented in 2015, as it reduces the size of the signature. One of the key points of interest of SPHINCS+ over other signature schemes is that its resistance to quantum attacks depends only on the hardness of a one-way function.
10. Conclusions
In recent years, significant progress has been made, taking us beyond classical computing and into the new era of quantum computing. Research on post-quantum algorithms is active, and huge sums of money are being invested in it, because the existence of strong cryptosystems is a necessity.
It is considered almost certain that both symmetric key algorithms and hash functions will continue to be used as tools of post-quantum cryptography. A variety of cryptographic schemes have been proposed for the quantum era of computing, and this is a topic of ongoing research. The development and standardization of an efficient post-quantum algorithm is the challenge now facing the academic community. What was once considered a science fiction fantasy is now a technological reality. The quantum age is coming, and it will bring enormous changes; therefore, we have to be prepared.
Author Contributions
Investigation, G.C.M.; Writing—original draft, M.E.S.; Supervision, I.K.S., D.P. and G.G. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Conflicts of Interest
The authors declare no conflict of interest.
References
Sabani, M.; Savvas, I.K.; Poulakis, D.; Makris, G. Quantum Key Distribution: Basic Protocols and Threats. In Proceedings of the 26th Pan-Hellenic Conference on Informatics (PCI 2022), Athens, Greece, 25–27 November 2022; ACM: New York, NY, USA, 2022; pp. 383–388. [Google Scholar]
Nielsen, M.; Chuang, I. Quantum Computation and Quantum Information; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar]
Sabani, M.; Savvas, I.K.; Poulakis, D.; Makris, G.; Butakova, M. The BB84 Quantum Key Protocol and Potential Risks. In Proceedings of the 8th International Congress on Information and Communication Technology (ICICT 2023), London, UK, 20–23 February 2023. [Google Scholar]
Preskill, J. Quantum computing and the entanglement frontier. In Proceedings of the 25th Solvay Conference on Physics, Brussels, Belgium, 19–25 October 2011; Available online: https://arxiv.org/abs/1203.5813 (accessed on 26 March 2012).
Poulakis, D. Cryptography, the Science of Secure Communication, 1st ed.; Ziti Publications: Thessaloniki, Greece, 2004. [Google Scholar]
Shor, P.W. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. J. Comput. SIAM 1997, 26, 1484–1509. [Google Scholar] [CrossRef] [Green Version]
Alkim, E.; Dukas, L.; Pöppelmann, T.; Schwabe, P. Post-Quantum Key Exchange—A New Hope. In Proceedings of the USENIX Security Symposium 2016, Austin, TX, USA, 10–12 August 2016; Available online: https://eprint.iacr.org/2015/1092.pdf (accessed on 11 November 2015).
Bernstein, D.J.; Buchmann, J.; Brassard, G.; Vazirani, U. Post-Quantum Cryptography; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
Zheng, Z.; Tian, K.; Liu, F. Modern Cryptography Volume 2 a Classical Introduction to Informational and Mathematical Principle; Springer: Singapore, 2023. [Google Scholar]
Silverman, J.H.; Pipher, J.; Hoffstein, J. An Introduction to Mathematical Cryptography, 1st ed.; Springer: New York, NY, USA, 2008. [Google Scholar]
Galbraith, S. Mathematics of Public Key Cryptography; Cambridge University Press: Cambridge, UK, 2012. [Google Scholar]
Kannan, R. Algorithmic Geometry of Numbers. In Annual Reviews of Computer Science; Annual Review Inc.: Palo Alto, CA, USA, 1987; pp. 231–267. [Google Scholar]
Rivest, R.L.; Shamir, A.; Adleman, A. Method for Obtaining Digital Signatures and Public-Key Cryptosystems. J. ACM 1978, 21, 120–126. [Google Scholar] [CrossRef] [Green Version]
Sabani, M.; Galanis, I.P.; Savvas, I.K.; Garani, G. Implementation of Shor’s Algorithm and Some Reliability Issues of Quantum Computing Devices. In Proceedings of the 25th Pan-Hellenic Conference on Informatics (PCI 2021), Volos, Greece, 26–28 November 2021; ACM: New York, NY, USA, 2021; pp. 296–392. [Google Scholar]
Wiesner, S. Conjugate coding. Sigact News 1983, 15, 78–88. [Google Scholar] [CrossRef]
Van Assche, G. Quantum Cryptography and Secret-Key Distillation, 3rd ed.; Cambridge University Press: New York, NY, USA, 2006. [Google Scholar]
Scherer, W. Mathematics of Quantum Computing, An Introduction; Springer: Berlin/Heidelberg, Germany, 2019. [Google Scholar]
Bennett, C.H.; Brassard, G.; Ekert, A. Quantum cryptography. Sci. Am. 1992, 50–57. Available online: https://www.jstor.org/stable/e24939235 (accessed on 2 October 1992). [CrossRef]
Bennett, C.H.; Brassard, G.; Breidbart, S.; Wiesner, S. Quantum cryptography, or Unforgeable subway tokens. In Advances in Cryptology; Conference Paper; Springer Science + Business Media: New York, NY, USA, 1982; pp. 267–275. [Google Scholar] [CrossRef]
Bennett, C.H.; Brassard, G. Quantum Cryptography: Public Key Distribution and Coin Tossing. In Proceedings of the International Conference in Computer Systems and Signal Processing, Bangalore, India, 10–12 December 1984. [Google Scholar]
Teklu, B. Continuous-variable entanglement dynamics in Lorenzian environment. Phys. Lett. A 2022, 432, 128022. [Google Scholar] [CrossRef]
Vasile, R.; Olivares, S.; Paris, M.G.A.; Maniscalco, S. Continuous variable quantum key distribution in non-Markovian channels. Phys. Rev. A 2011, 83, 042321. [Google Scholar] [CrossRef] [Green Version]
Teklu, B.; Bina, M.; Paris, M.G.A. Noisy propagation of Gaussian states in optical media with finite bandwidth. Sci. Rep. 2022, 12, 11646. Available online: https://www.nature.com/articles/s41598-022-15865-5 (accessed on 8 June 2023). [CrossRef]
Adnane, H.; Teklu, B.; Paris, M.G. Quantum phase communication assisted by non-deterministic noiseless amplifiers. J. Opt. Soc. Am. B 2019, 36, 2938–2945. [Google Scholar] [CrossRef]
Teklu, B.; Trapani, J.; Olivares, S.; Paris, M.G.A. Noisy quantum phase communication channels. Phys. Scr. 2015, 90, 074027. [Google Scholar] [CrossRef]
Trapani, J.; Teklu, B.; Olivares, S.; Paris, M.G.A. Quantum phase communication channels in the presence of static and dynamical phase diffusion. Phys. Rev. A 2015, 92, 012317. [Google Scholar] [CrossRef] [Green Version]
Diffie, W.; Hellman, M. New Directions in Cryptography. IEEE Trans. Inf. Theory 1976, 22, 644–654. [Google Scholar] [CrossRef] [Green Version]
Trappe, W.; Washington, L.C. Introduction to Cryptography with Coding Theory; Pearson Education: New York, NY, USA, 2006. [Google Scholar]
McEliece, R. A public key cryptosystem based on algebraic coding theory. DSN Prog. Rep. 1978, 42–44, 114–116. [Google Scholar]
Niederreiter, H. Knapsack-type cryptosystems and algebraic coding theory. Probl. Control Inf. Theory Probl. Upr. I Teor. Inf. 1986, 15, 159–166. [Google Scholar]
Merkle, R. A certified digital signature. In Advances in Cryptology—CRYPTO’89, Proceedings of the CRYPTO ’89, 9th Annual International Cryptology Conference, Santa Barbara, California, USA, 20–24 August 1989; Springer: Berlin/Heidelberg, Germany, 1989; pp. 218–238. [Google Scholar]
Bai, S.; Gong, Z.; Hu, L. Revisiting the Security of Full Domain Hash. In Proceedings of the 6th International Conference on Security, Privacy and Anonymity in Computation, Communication and Storage, Nanjing, China, 18–20 December 2013. [Google Scholar]
Matsumoto, T.; Imai, H. Public quadratic polynomials-tuples for efficient signature verification and message encryption. Adv. Cryptol. Eur. Crypt’88 1988, 330, 419–453. [Google Scholar]
Patarin, J. Hidden field equations and isomorphism of polynomials. In Proceedings of the Eurocrypto’96, Zaragoza, Spain, 12–16 May 1996. [Google Scholar]
Nguyen, P.Q.; Stern, J. The two faces of Lattices in Cryptology. In Proceedings of the International Cryptography and Lattices Conference, Rhode, RI, USA, 29–30 March 2001; pp. 146–180. [Google Scholar]
Micciancio, D.; Regev, O. Lattice-based cryptography. In Post-Quantum Cryptography; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
Lyubashevsky, V. A Decade of Lattice Cryptography. In Advances in Cryptology—EUROCRYPT 2015; Springer: Berlin/Heidelberg, Germany, 2015. [Google Scholar]
Peikert, C. Lattice-Based Cryptography: A Primer. IACR Cryptol. ePrint Arch. 2016. Available online: https://eprint.iacr.org/2015/939.pdf (accessed on 17 February 2016).
Micciancio, D. On the Hardness of the Shortest Vector Problem. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 1998. [Google Scholar]
Micciancio, D. The shortest vector problem is NP-hard to approximate within some constant. In Proceedings of the 39th FOCS IEEE, Palo Alto, CA, USA, 8–11 November 1998. [Google Scholar]
Babai, L. On Lovasz’ lattice reduction and the nearest lattice point problem. Combinatorica 1986, 6, 1–13. [Google Scholar] [CrossRef]
Micciancio, D. The hardness of the closest vector problem with preprocessing. IEEE Trans. Inform. Theory 2001, 47, 1212–1215. [Google Scholar] [CrossRef] [Green Version]
Lenstra, A.K.; Lenstra, H.W., Jr.; Lovasz, L. Factoring polynomials with rational coefficients. Math. Ann. 1982, 261, 513–534. [Google Scholar] [CrossRef]
Hoffstein, J.; Pipher, J.; Silverman, J. NTRU: A ring-based public key cryptosystem. In Algorithmic Number Theory (Lecture Notes in Computer Science); Springer: New York, NY, USA, 1998; Volume 1423, pp. 267–288. [Google Scholar]
Faugère, J.C.; Otmani, A.; Perret, L.; Tillich, J.P. On the Security of NTRU Encryption. In Advances in Cryptology—EUROCRYPT 2010; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
Lyubashevsky, V.; Peikert, C.; Regev, O. On Ideal Lattices and Learning with Errors over Rings. ACM 2013, 60, 43:1–43:35. [Google Scholar] [CrossRef]
Albrecht, M.; Ducas, L. Lattice Attacks on NTRU and LWE: A History of Refinements; Cambridge University Press: Cambridge, UK, 2021. [Google Scholar]
Ashur, T.; Tromer, E. Key Recovery Attacks on NTRU and Schnorr Signatures with Partially Known Nonces. In Proceedings of the 38th Annual International Cryptology Conference, Santa Barbara, CA, USA, 19–23 August 2018. [Google Scholar]
Coppersmith, D.; Shamir, A. Lattice attacks on NTRU. In Advances in Cryptology—EUROCRYPT’97; Springer: Berlin/Heidelberg, Germany, 1997. [Google Scholar]
Buchmann, J.; Dahmen, E.; Vollmer, U. Cryptanalysis of the NTRU Signature Scheme. In Proceedings of the 6th IMA International Conference on Cryptography and Coding, Cirencester, UK, 17–19 December 1997. [Google Scholar]
Singh, S.; Padhye, S. Cryptanalysis of NTRU with n public keys. IEEE. 2017. Available online: https://ieeexplore.ieee.org/document/7976980 (accessed on 13 July 2017).
May, A.; Peikert, C. Lattice Reduction and NTRU. In Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science, Pittsburgh, PA, USA, 23–25 October 2005. [Google Scholar]
Buchmann, J.; Dahmen, E.; Hulsing, A. XMSS - A Practical Forward Secure Signature Scheme Based on Minimal Security Assumptions. In Post-Quantum Cryptography; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
Regev, O. On lattices, learning with errors, random linear codes, and cryptography. J. ACM 2009, 56, 1–40. [Google Scholar] [CrossRef]
Komano, Y.; Miyazaki, S. On the Hardness of Learning with Rounding over Small Modulus. In Proceedings of the 21st Annual International Conference on the Theory and Application of Cryptology and Information Security, Auckland, New Zealand, 29 November–3 December 2015. [Google Scholar]
Regev, O. Learning with Errors over Rings. In Algorithmic Number Theory: 9th International Symposium, ANTS-IX, Nancy, France, 19–23 July 2010. Proceedings 9, 2010. Available online: https://link.springer.com/chapter/10.1007/978-3-642-14518-6_3 (accessed on 10 July 2010).
Brakerski, Z.; Gentry, C.; Vaikuntanathan, V. New Constructions of Strongly Unforgeable Signatures Based on the Learning with Errors Problem. In Proceedings of the 48th Annual ACM Symposium on Theory of Computing, Cambridge, MA, USA, 19–21 June 2016. [Google Scholar]
Bi, L.; Lu, X.; Luo, J.; Wang, K.; Zhang, Z. Hybrid Dual Attack on LWE with Arbitrary Secrets. Cryptol. ePrint Arch. 2022. Available online: https://eprint.iacr.org/2021/152 (accessed on 25 February 2021).
Bos, W.; Costello, C.; Ducas, L.L.; Mironov, I.; Naehrig, M.; Nikolaenko, V.; Raghunathan, A.; Stebila, D. Frodo: Take off the ring! Practical, quantum-secure key exchange from LWE. In Proceedings of the CCS 2016; Vienna, Austria, 24–28 October 2016, Available online: https://eprint.iacr.org/2016/659.pdf (accessed on 28 June 2016).
Chunsheng, G. Integer Version of Ring-LWE and its Applications. Cryptol. ePrint Arch. 2017. Available online: https://eprint.iacr.org/2017/641.pdf (accessed on 24 October 2019).
Goldreich, O.; Goldwasser, S.; Halevi, S. Public-Key cryptosystems from lattice reduction problems. Crypto’97 1997, 10, 112–131. [Google Scholar]
Micciancio, D. Lattice based cryptography: A global improvement. Technical report. Theory Cryptogr. Libr. 1999, 99-05. Available online: https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=9591bda3813b0d09522eff2ba17c3665b530ebb9 (accessed on 4 March 1999).
Micciancio, D. Improving Lattice Based Cryptosystems Using the Hermite Normal Form. In Cryptography and Lattices Conference; Springer: Berlin/Heidelberg, Germany, 2001. [Google Scholar]
Nguyen, P.Q. Cryptanalysis of the Goldreich-Goldwasser-Halevi cryptosystem from crypto’97. In Annual International Cryptology Conference; Springer: Santa Barbara, CA, USA, 1999; pp. 288–304. [Google Scholar]
Nguyen, P.Q.; Regev, O. Learning a parallelepiped: Cryptanalysis of GGH and NTRU signatures. J. Cryptol. 2009, 22, 139–160. [Google Scholar] [CrossRef]
Lee, M.S.; Hahn, S.G. Cryptanalysis of the GGH Cryptosystem. Math. Comput. Sci. 2010, 3, 201–208. [Google Scholar] [CrossRef]
Gu, C.; Yu, Z.; Jing, Z.; Shi, P.; Qian, J. Improvement of GGH Multilinear Map. In Proceedings of the IEEE Conference on P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC), Krakow, Poland, 4–6 November 2015; pp. 407–411. [Google Scholar]
Minaud, B.; Fouque, P.A. Cryptanalysis of the New Multilinear Map over the Integers. IACR Cryptol. ePrint Arch. 2015, 941. Available online: https://eprint.iacr.org/2015/941 (accessed on 28 September 2015).
Yoshino, M.; Kunihiro, N. Improving GGH Cryptosystem for Large Error Vector. In Proceedings of the International Symposium on Information Theory and Its Applications, Honolulu, HI, USA, 28–31 October 2012; pp. 416–420. [Google Scholar]
Barros, C.; Schechter, L.M. GGH may not be dead after all. In Proceedings of the Congresso Nacional de Matemática Aplicada e Computacional, Sao Paolo, PR, Brazil, 8–12 September 2014. [Google Scholar]
Brakerski, Z.; Gentry, C.; Halevi, S.; Lepoint, T.; Sahai, A.; Tibouchi, M. Cryptanalysis of the Quadratic Zero-Testing of GGH. IACR Cryptol. ePrint. Available online: https://eprint.iacr.org/2015/845 (accessed on 21 September 2015).
Susilo, W.; Mu, Y. Information Security and Privacy; Springer: Berlin/Heidelberg, Germany, 2014; Volume 845. [Google Scholar]
Bonte, C.; Iliashenko, I.; Park, J.; Pereira, H.V.; Smart, N. FINAL: Faster FHE Instantiated with NTRU and LWE. Cryptol. ePrint Arch. 2022. Available online: https://eprint.iacr.org/2022/074 (accessed on 20 January 2022).
Bai, S.; Chen, Y.; Hu, L. Efficient Algorithms for LWE and LWR. In Proceedings of the 10th International Conference on Applied Cryptography and Network Security, Singapore, 26–29 June 2012. [Google Scholar]
Brakerski, Z.; Langlois, A.; Regev, O.; Stehl, D. Classical Hardness of Learning with Errors. In Proceedings of the 45th Annual ACM Symposium on Theory of Computing (STOC), Palo Alto, CA, USA, 2–4 June 2013; pp. 575–584. [Google Scholar]
Lyubashevsky, V.; Micciancio, D. Generalized Compact Knapsacks Are Collision Resistant. In Proceedings of the 33rd International Colloquium on Automata, Languages and Programming, Venice, Italy, 10–14 July 2006; pp. 144–155. [Google Scholar]
Takagi, T.; Kiyomoto, S. Improved Sieving Algorithms for Shortest Lattice Vector Problem and Its Applications to Security Analysis of LWE-based Cryptosystems. In Proceedings of the 23rd Annual International Conference on the Theory and Applications of Cryptographic Techniques, Lyon, France, 2–6 May 2004. [Google Scholar]
Balbas, D. The Hardness of LWE and Ring-LWE: A Survey. Cryptol. ePrint Arch. 2021. Available online: https://eprint.iacr.org/2021/1358.pdf (accessed on 8 October 2021).
Post-Quantum Cryptography. Available online: https://csrc.nist.gov/Projects/post-quantum-cryptography/selected-algorithms-2022 (accessed on 8 June 2023).
Gentry, C.; Peikert, C.; Vaikuntanathan, V. Trapdoors for Hard Lattices and New Cryptographic Constructions. Cryptol. ePrint Arch. 2007. Available online: https://eprint.iacr.org/2007/432 (accessed on 24 November 2007).
Figure 1. Bases of a lattice.
Sabani, M.E.; Savvas, I.K.; Poulakis, D.; Garani, G.; Makris, G.C. Evaluation and Comparison of Lattice-Based Cryptosystems for a Secure Quantum Computing Era. Electronics 2023, 12, 2643. https://doi.org/10.3390/electronics12122643
What Makes Quantum Computing So Hard to Explain?
By Scott Aaronson. Quanta Magazine, 8 June 2021. https://www.quantamagazine.org/why-is-quantum-computing-so-hard-to-explain-20210608/
To understand what quantum computers can do — and what they can’t — avoid falling for overly simple explanations.
In reflective moments, though, I get it. The reality is that even if you removed all the bad incentives and the greed, quantum computing would still be hard to explain briefly and honestly without math. As the quantum computing pioneer Richard Feynman once said about the quantum electrodynamics work that won him the Nobel Prize, if it were possible to describe it in a few sentences, it wouldn’t have been worth a Nobel Prize.
Not that that’s stopped people from trying. Ever since Peter Shor discovered in 1994 that a quantum computer could break most of the encryption that protects transactions on the internet, excitement about the technology has been driven by more than just intellectual curiosity. Indeed, developments in the field typically get covered as business or technology stories rather than as science ones.
That would be fine if a business or technology reporter could truthfully tell readers, “Look, there’s all this deep quantum stuff under the hood, but all you need to understand is the bottom line: Physicists are on the verge of building faster computers that will revolutionize everything.”
The trouble is that quantum computers will not revolutionize everything.
Yes, they might someday solve a few specific problems in minutes that (we think) would take longer than the age of the universe on classical computers. But there are many other important problems for which most experts think quantum computers will help only modestly, if at all. Also, while Google and others recently made credible claims that they had achieved contrived quantum speedups, this was only for specific, esoteric benchmarks (ones that I helped develop). A quantum computer that’s big and reliable enough to outperform classical computers at practical applications like breaking cryptographic codes and simulating chemistry is likely still a long way off.
But how could a programmable computer be faster for only some problems? Do we know which ones? And what does a “big and reliable” quantum computer even mean in this context? To answer these questions we have to get into the deep stuff.
Let’s start with quantum mechanics. (What could be deeper?) The concept of superposition is infamously hard to render in everyday words. So, not surprisingly, many writers opt for an easy way out: They say that superposition means “both at once,” so that a quantum bit, or qubit, is just a bit that can be “both 0 and 1 at the same time,” while a classical bit can be only one or the other. They go on to say that a quantum computer would achieve its speed by using qubits to try all possible solutions in superposition — that is, at the same time, or in parallel.
This is what I’ve come to think of as the fundamental misstep of quantum computing popularization, the one that leads to all the rest. From here it’s just a short hop to quantum computers quickly solving something like the traveling salesperson problem by trying all possible answers at once — something almost all experts believe they won’t be able to do.
The thing is, for a computer to be useful, at some point you need to look at it and read an output. But if you look at an equal superposition of all possible answers, the rules of quantum mechanics say you’ll just see and read a random answer. And if that’s all you wanted, you could’ve picked one yourself.
What superposition really means is “complex linear combination.” Here, we mean “complex” not in the sense of “complicated” but in the sense of a real plus an imaginary number, while “linear combination” means we add together different multiples of states. So a qubit is a bit that has a complex number called an amplitude attached to the possibility that it’s 0, and a different amplitude attached to the possibility that it’s 1. These amplitudes are closely related to probabilities, in that the further some outcome’s amplitude is from zero, the larger the chance of seeing that outcome; more precisely, the probability equals the distance squared.
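To make the amplitude-versus-probability distinction concrete, here is a minimal illustrative sketch (not taken from the article): a single qubit is stored as two complex amplitudes, and the probability of each measurement outcome is the squared magnitude of the corresponding amplitude.

```python
# Illustrative sketch: a qubit as two complex amplitudes, probabilities as squared magnitudes.
import math

alpha = complex(0.5, 0.5)                # amplitude attached to outcome 0
beta = complex(0.0, -1 / math.sqrt(2))   # amplitude attached to outcome 1

p0 = abs(alpha) ** 2                     # probability of reading 0
p1 = abs(beta) ** 2                      # probability of reading 1

print(p0, p1, p0 + p1)                   # roughly 0.5, 0.5, 1.0 -- probabilities sum to one
```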
But amplitudes are not probabilities. They follow different rules. For example, if some contributions to an amplitude are positive and others are negative, then the contributions can interfere destructively and cancel each other out, so that the amplitude is zero and the corresponding outcome is never observed; likewise, they can interfere constructively and increase the likelihood of a given outcome. The goal in devising an algorithm for a quantum computer is to choreograph a pattern of constructive and destructive interference so that for each wrong answer the contributions to its amplitude cancel each other out, whereas for the right answer the contributions reinforce each other. If, and only if, you can arrange that, you’ll see the right answer with a large probability when you look. The tricky part is to do this without knowing the answer in advance, and faster than you could do it with a classical computer.
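The cancellation described here can be seen in a few lines of linear algebra. The sketch below is purely illustrative and uses the standard Hadamard gate (the article does not single out any particular gate): applying it once to a qubit that starts as 0 makes both outcomes equally likely, while applying it a second time makes the two contributions to outcome 1 cancel and the contributions to outcome 0 reinforce.

```python
# Illustrative sketch: constructive and destructive interference with a Hadamard gate.
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)       # standard Hadamard gate
state = np.array([1.0, 0.0])               # start in state 0

after_one = H @ state                       # amplitudes (0.707, 0.707): both outcomes equally likely
after_two = H @ after_one                   # amplitudes (1, 0): the two paths to outcome 1 cancel

print(np.round(after_one, 3))               # [0.707 0.707]
print(np.round(after_two, 3))               # [1. 0.]
print(np.round(np.abs(after_two) ** 2, 3))  # measurement probabilities [1. 0.]
```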
Twenty-seven years ago, Shor showed how to do all this for the problem of factoring integers, which breaks the widely used cryptographic codes underlying much of online commerce. We now know how to do it for some other problems, too, but only by exploiting the special mathematical structures in those problems. It’s not just a matter of trying all possible answers at once.
Compounding the difficulty is that, if you want to talk honestly about quantum computing, then you also need the conceptual vocabulary of theoretical computer science. I’m often asked how many times faster a quantum computer will be than today’s computers. A million times? A billion?
This question misses the point of quantum computers, which is to achieve better “scaling behavior,” or running time as a function of n, the number of bits of input data. This could mean taking a problem where the best classical algorithm needs a number of steps that grows exponentially with n, and solving it using a number of steps that grows only as n2. In such cases, for small n, solving the problem with a quantum computer will actually be slower and more expensive than solving it classically. It’s only as n grows that the quantum speedup first appears and then eventually comes to dominate.
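The crossover is easy to tabulate with invented numbers. In the hypothetical comparison below, a classical method needs 2^n steps while a quantum method needs 1,000,000 times n^2 steps; the constants are made up purely for illustration, and the quantum approach only pulls ahead once n grows past a threshold.

```python
# Hypothetical step counts, purely to illustrate how asymptotic scaling beats constant factors.
def classical_steps(n):
    return 2 ** n              # exponential growth in the input size n

def quantum_steps(n):
    return 1_000_000 * n ** 2  # large constant overhead, but only quadratic growth

for n in [10, 20, 30, 40, 50]:
    c, q = classical_steps(n), quantum_steps(n)
    print(f"n={n:2d}  classical={c:.3e}  quantum={q:.3e}  quantum wins: {q < c}")
```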
But how can we know that there’s no classical shortcut — a conventional algorithm that would have similar scaling behavior to the quantum algorithm’s? Though typically ignored in popular accounts, this question is central to quantum algorithms research, where often the difficulty is not so much proving that a quantum computer can do something quickly, but convincingly arguing that a classical computer can’t. Alas, it turns out to be staggeringly hard to prove that problems are hard, as illustrated by the famous P versus NP problem (which asks, roughly, whether every problem with quickly checkable solutions can also be quickly solved). This is not just an academic issue, a matter of dotting i’s: Over the past few decades, conjectured quantum speedups have repeatedly gone away when classical algorithms were found with similar performance.
Note that, after explaining all this, I still haven’t said a word about the practical difficulty of building quantum computers. The problem, in a word, is decoherence, which means unwanted interaction between a quantum computer and its environment — nearby electric fields, warm objects, and other things that can record information about the qubits. This can result in premature “measurement” of the qubits, which collapses them down to classical bits that are either definitely 0 or definitely 1. The only known solution to this problem is quantum error correction: a scheme, proposed in the mid-1990s, that cleverly encodes each qubit of the quantum computation into the collective state of dozens or even thousands of physical qubits. But researchers are only now starting to make such error correction work in the real world, and actually putting it to use will take much longer. When you read about the latest experiment with 50 or 60 physical qubits, it’s important to understand that the qubits aren’t error-corrected. Until they are, we don’t expect to be able to scale beyond a few hundred qubits.
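The benefit of encoding one logical unit into many physical ones can be illustrated with a purely classical analogy (a toy sketch; actual quantum error correction is considerably more subtle than a repetition code): if each physical bit flips independently with probability p, a majority vote over three copies fails only when at least two copies flip, so the logical error rate drops to roughly 3p^2 for small p.

```python
# Classical analogy only: a 3-copy repetition code with majority-vote decoding.
import random

def logical_error_rate(p, trials=100_000):
    errors = 0
    for _ in range(trials):
        flips = sum(random.random() < p for _ in range(3))  # how many of the 3 copies flipped
        if flips >= 2:                                      # majority vote decodes incorrectly
            errors += 1
    return errors / trials

for p in [0.01, 0.05, 0.1]:
    print(f"physical error {p}: logical error ~ {logical_error_rate(p):.4f} "
          f"(theory: {3 * p**2 - 2 * p**3:.4f})")
```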
Once someone understands these concepts, I’d say they’re ready to start reading — or possibly even writing — an article on the latest claimed advance in quantum computing. They’ll know which questions to ask in the constant struggle to distinguish reality from hype. Understanding this stuff really is possible — after all, it isn’t rocket science; it’s just quantum computing! | ||||
2453 | dbpedia | 0 | 48 | https://www.jinfo.org/Computer_Info_Science.html | en | Jews in Computer & Information Science
This section concerns contributions to the development of information science and technology at its logical (as opposed to its hardware) level. Specifically, this section deals with areas such as computation theory, artificial intelligence, the statistical theories of information, communication, and systems control, cryptography, operations research, computer and network architectures, and algorithm and software design. The general level of this contribution is reflected in the current ~45% Jewish membership in the Computer and Information Sciences division of the US National Academy of Sciences and in the percentages of Jewish recipients shown below for several of the most prestigious awards in the field. Two of the four individuals generally recognized to have been the principal architects of the Information Age were the Jewish mathematicians Norbert Wiener and John von Neumann.1
Some of the more notable Jewish contributions are listed below. (The names of non-Jewish scientists and engineers mentioned in the accompanying discussion have been denoted with the superscript "+" in order to avoid confusion.)
The interpretation of thermodynamic entropy as an information metric by Leo Szilard. Szilard's 1929 analysis of the Maxwell's demon paradox "is now considered to be the earliest known paper in what became the field of 'information theory' in the 1950s and 1960s." 2 Other important information metrics were formulated by John von Neumann, Alfréd Rényi, Solomon Kullback, and Richard Leibler. The von Neumann entropy, e.g., is the quantum generalization of Szilard's classical information measure and is one of the fundamental concepts in quantum information theory.
The introduction of the diagonal argument proof method by Georg Cantor*. This method is central to the derivation of the incompleteness and noncomputability results of Gödel+, Turing+, Church+, and Post that lie at the foundation of theoretical computer science. In a 1936 paper, Emil Post described a mechanical definition of computation, known as the Post machine, which is equivalent to the Turing machine introduced by Alan Turing+ in a paper that appeared several months later. Post had understood the undecidability implications of such a definition as early as 1921, but had hesitated to publish and lost priority to Gödel+, who approached the problem from a very different perspective in his 1931 paper. Post was also one of the four principal founders of the theory of recursive functions, which is of immense importance in theoretical computer science.3
The logical design of Colossus, the first all-electronic, digital, programmable computer by Max Newman.* Although Colossus was not a general-purpose computer and had only limited programmability, it represented an important milestone. Newman, a Cambridge University professor of mathematics, headed the "Newmanry," a special code-breaking unit at Bletchley Park in England during World War II. In this capacity, he formulated the logical design of a machine to mechanize the cryptanalysis of the German Lorenz Cipher, which was used by the Nazi high command to encrypt its highest priority communications. The first instantiation of this machine, called Heath Robinson, was based on a combination of electromechanical, electromagnetic-relay, and vacuum tube switches. Tommy Flowers,+ who had worked on the engineering design of Heath Robinson, subsequently argued, and then brilliantly demonstrated, that a much faster and more reliable version of the machine could be realized all-electronically, using just vacuum tubes. The resulting Colossus machines played a critical role in securing Allied victory in Europe and were influential in the post-war development of computers in England.4 (Contrary to what is sometimes claimed, Alan Turing,+ who was Newman's protégé, had relatively little direct involvement with Colossus, although his ideas were extremely influential. Newman later declined an OBE appointment in protest against the treatment accorded Turing+ by the postwar British government.)
The design of the logical architecture employed in virtually all modern computers by John von Neumann. Von Neumann's 1946 paper "Preliminary Discussion of the Logical Design of an Electronic Computing Instrument" has been described as "the most influential paper in the history of computer science ... the ideas it contains, collectively known as the von Neumann machine, have provided the foundation for essentially all computer system developments since that date."5 The machine that von Neumann designed and had constructed at the Institute for Advanced Study (IAS) in the late 1940s was widely replicated in the development of many other early computer systems, including SEAC (US National Bureau of Standards), ILLIAC (University of Illinois), ORDVAC (Aberdeen Proving Ground), JOHNNIAC (RAND Corporation), MANIAC (Los Alamos National Laboratory), AVIDAC (Argonne National Laboratory), ORACLE (Oak Ridge National Laboratory), WEIZAC (Weizmann Institute of Science), and the IBM 701 (IBM's first mass-produced, commercial mainframe computer).6 The IAS computer project was intended by von Neumann primarily to demonstrate the utility of computers in the solution of scientific and engineering problems, an objective at which it indeed succeeded, creating the field of modern numerical weather prediction and solving some of the most difficult problems in radiation hydrodynamics associated with the design of thermonuclear weapons. Von Neumann invented the computerized random number generator and co-invented the Monte Carlo method. He also invented the theory of system fault tolerance and the cellular automata model of computation. The universal von Neumann constructor, a generalization of the universal Turing machine that emerged out of von Neumann's theory of self-reproducing automata, is one of the foundational concepts in the theoretical study of the biomolecular nanotechnology of living systems.
The invention of parallel supercomputing architectures by Stephen Unger, Daniel Slotnick, David Schaefer, and Włodzimierz Holsztyński. Unger, Slotnick, Schaefer, and Holsztyński are four of the "eight men [who] dominate the history of SIMD computer architectures." 7 SIMD (single instruction, multiple data) refers to the basic parallel processing technique employed in the earliest supercomputers.8 Unger was the first to propose and explore such architectures in the late 1950s. Slotnick designed SOLOMON in the early 1960s and built the first parallel processing prototypes. He was later the architect of Illiac IV, the first important parallel supercomputer, which had up to 256 processing elements. Built with 64 processing elements in the early 1970s with ARPA (now DARPA) funding and operated by NASA, Illiac IV remained the world's fastest computer until its shutdown in 1981. In the late 1970s and early 1980s, Schaefer initiated and managed the development of NASA's Massively Parallel Processor (MPP), the first truly massively parallel supercomputer, with 16,384 processing elements. Holsztyński designed the Geometric-Arithmetic Parallel Processor (GAPP) in 1981. GAPPs with hundreds of thousands of processing elements are used today in real-time video image processing applications such as image enhancement and noise reduction, video data compression, and format and frame rate conversion.
The co-discovery of NP-completeness by Leonid Levin. Levin and Stephen Cook+ independently discovered and proved what is now referred to as the Cook-Levin theorem, the central result concerning the P = NP? question, which is the major open problem in theoretical computer science. Richard Karp introduced the terms "P" and "NP" and defined NP-completeness (although not the term itself) in its present form. He also identified the decision problem formulations of many well-known, combinatorially intractable problems as being NP-complete. Levin, Karp, and Manuel Blum are considered to be three of the six founders of the field of computational complexity theory.
The invention of context-free languages by Noam Chomsky. This work was based on Emil Post's theory of production systems in mathematical logic. It is the basis of the BNF notation widely used to specify the syntax rules of programming languages. Chomsky's hierarchical classification of formal languages initiated the field of formal language theory in computer science.
The co-invention of BASIC by John Kemeny. Kemeny and Thomas Kurtz+ developed this popular programming language. At least one-third of the nine-person team that developed FORTRAN under John Backus+ at IBM were Jewish. Also at IBM, Adin Falkoff collaborated with Kenneth Iverson+ on the design and development of the array processing language APL (acronym for "A Programming Language"). Four of the six principal designers of COBOL, Howard Bromberg, Norman Discount, Jean Sammet, and William Selden, were Jewish. COBOL was used to run an estimated 80% of the world's business systems at the turn of the century. Although its usage has been in gradual decline since then, it is still widely employed in business, financial, and government systems. Ada, an advanced programming language adopted by the US Department of Defense as its standard high-level computer programming language in the 1980s and 1990s, was designed by Jean Ichbiah. LISP, the second-oldest high-level programming language still in use (primarily in artificial intelligence research), was invented by John McCarthy* in 1958. Barbara Liskov was awarded the 2008 ACM Turing Award for fundamental advances in programming language design. The ACM press release noted that her innovations "are now the basis of every important programming language since 1975, including Ada, C++, Java, and C#."
The invention of the MINIX operating system by Andrew Tanenbaum. MINIX was the precursor to, and inspiration for, the widely used Linux operating system.
The invention of the computer spreadsheet by Dan Bricklin and Robert Frankston. Bricklin and Frankston's VisiCalc spreadsheet was the first "killer app." The Lotus 1-2-3 spreadsheet program, the most successful software product of its time, was developed by Jonathan Sachs and Mitchell Kapor.
The invention of the computerized word processor by Evelyn Berezin. Berezin, who was inducted into the National Inventors Hall of Fame in 2022, invented the Data Secretary, the first computerized word processor, in 1971. She is also credited with developing the first office computer, the first computerized banking system, and the United Airlines computerized reservation system, the largest data processing system of its time. Other important word processors were designed by Charles "Nick" Corfield (Adobe FrameMaker) and Richard Brodie* (Microsoft Word).
The co-founding of the field of artificial intelligence (AI) by Marvin Minsky, Herbert Simon*, and John McCarthy*. (Allen Newell+ is also considered to have been one of AI's four principal founders.9) Six of the ten inductees into the IEEE Computer Society's Intelligent Systems Magazine AI Hall of Fame are, or were, Jewish or of Jewish descent (Noam Chomsky, Edward Feigenbaum, John McCarthy*, Marvin Minsky, Judea Pearl, and Lotfi Zadeh*). Major approaches to machine learning, the now dominant approach to artificial intelligence, were pioneered by Jews, including Bayesian Networks (Judea Pearl), Support Vector Machines (Vladimir Vapnik and Alexey Chervonenkis), Deep Learning (Frank Rosenblatt and Yoshua Bengio, together with Geoffrey Hinton+ and Yann LeCun+), Evolutionary Computing (Lawrence Fogel), and the Probably Approximately Correct (PAC) Model (Leslie Valiant).10 Minsky, Simon, McCarthy, Feigenbaum, Pearl, Bengio, and Valiant have received seven of the eleven Turing Awards given thus far for work in artificial intelligence.
The creation of ChatGPT by Ilya Sutskever. Sutskever, a co-founder and the chief scientist of OpenAI, is the principal designer of the large language model GPT-4, on which ChatGPT is based. The extraordinary ability of this deep learning AI language model to "understand" textual input and generate textual output that is both syntactically and semantically precise has shocked even many AI researchers.
The development of computer algebra (symbol manipulation) programs by Jean Sammet (FORMAC), Carl Engelman (MATHLAB), Joel Moses (MACSYMA), and Stephen Wolfram (Mathematica).
The invention of reversible computation theory by Rolf Landauer. Reversible computation circumvents the thermodynamic limits on irreversible computation established by John von Neumann, and is one of the foundations of quantum computing. The ballistic architecture, or Fredkin gate, model of reversible computation was introduced by Edward Fredkin.
The invention of quantum computing by Paul Benioff, Richard Feynman, Yuri Manin,* and David Deutsch.
The invention of DNA computing by Leonard Adleman.
The invention of fuzzy logic by Max Black and Lotfi Zadeh* (independently).
The invention of algorithmic complexity by Ray Solomonoff. Also termed Kolmogorov complexity or algorithmic information theory, Solomonoff's 1964 work was later arrived at independently by Andrei Kolmogorov+ (1965) and Gregory Chaitin (1969).
The invention of the Monte Carlo method by Stanislaw Ulam and John von Neumann. This statistical numerical method is one of the cornerstones of computer simulation science. Von Neumann invented the first computer-based random number generator for use in Monte Carlo simulations. The so-called Metropolis Monte Carlo algorithm, widely used in statistics and computational physics, was largely devised by Marshall Rosenbluth, based in part on ideas from Edward Teller and John von Neumann.11
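As a flavor of the method, the toy sketch below (an illustration, not tied to any historical computation) estimates pi by sampling random points in the unit square and counting how many fall inside the quarter circle.

```python
# Toy Monte Carlo estimate of pi: the fraction of random points in the unit square
# that land inside the quarter circle approaches pi/4 as the sample count grows.
import random

def estimate_pi(samples=1_000_000):
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4 * inside / samples

print(estimate_pi())  # roughly 3.14, with statistical error shrinking as 1/sqrt(samples)
```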
The invention of nondeterministic algorithms by Michael Rabin. Such algorithms employ Monte Carlo methods to provide efficiently computable solutions that are correct with high (but less than one hundred percent) probability to many problems whose exact solution is computationally intractable. Rabin's probabilistic primality testing, e.g., is essential to the practical implementation of RSA public-key cryptography.
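The approach can be illustrated with a textbook-style rendering of the Miller-Rabin test (a sketch, not Rabin's original formulation): each random base either certifies the candidate composite or leaves it "probably prime", and repeating the test drives the error probability down geometrically.

```python
# Miller-Rabin probabilistic primality test (textbook form).
import random

def is_probable_prime(n, rounds=20):
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2^r with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a witnesses that n is definitely composite
    return True                   # no witness found: n is prime with high probability

print(is_probable_prime(2**61 - 1))  # True: a known Mersenne prime
print(is_probable_prime(2**61 + 1))  # False: divisible by 3
```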
The invention of the SIMPLEX linear programming algorithm by George B. Dantzig. Linear programming (LP), invented independently by Dantzig and Leonid Kantorovich, is a powerful optimization technique that is widely used in economics and engineering. It has been estimated that, aside from database operations such as sorting and searching, LP consumes more computer time than any other mathematical procedure.12 The SIMPLEX algorithm remains LP's fundamental numerical solution technique.
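For a sense of what linear programming looks like in practice, here is a small example solved with SciPy's linprog routine; the objective and constraints are invented for illustration, and SciPy's default backend is the HiGHS solver family rather than Dantzig's original tableau simplex.

```python
# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
# linprog minimizes, so the objective is negated.
from scipy.optimize import linprog

result = linprog(
    c=[-3, -2],                      # coefficients of the (negated) objective
    A_ub=[[1, 1], [1, 3]],           # left-hand sides of the <= constraints
    b_ub=[4, 6],                     # right-hand sides
    bounds=[(0, None), (0, None)],   # x >= 0, y >= 0
)
print(result.x, -result.fun)         # optimal point [4, 0] and optimal value 12
```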
The invention of the ellipsoid method of convex optimization by Naum Shor and, independently, by Arkadi Nemirovski and David Yudin. This technique, which was successfully employed by Leonid Khachiyan+ to prove the polynomial-time complexity of linear programming, underlies most modern results concerning the computational complexity of convex optimization programs. The ellipsoid method provided the first effective solver for semidefinite programs (which are encountered in many engineering applications) and has led to significant advances in combinatorial optimization.
The invention or co-invention of five of CiSE's "Top Ten Algorithms of the Century" by Stanislaw Ulam, John von Neumann, Marshall Rosenbluth, Edward Teller, George Dantzig, Leonid Kantorovich, Cornelius Lanczos, I. J. Good, Leslie Greengard, and Vladimir Rokhlin, Jr.. The January/February 2000 issue of Computing in Science & Engineering, a joint publication of the American Institute of Physics and the IEEE Computer Society, assembled a list of "the ten algorithms with the greatest influence on the development and practice of science and engineering in the 20th century." In addition to the Monte Carlo method and the SIMPLEX algorithm discussed above, the top ten algorithms included the Krylov subspace iteration method for the solution of large systems of linear equations (Lanczos, together with Magnus Hestenes+ and Eduard Stiefel+), the Fast Fourier Transform (FFT) (Lanczos, together with G. C. Danielson+ in 1942, and independently by I. J. Good in 1958 and by James Cooley+ and John Tukey+ in 1965),13 and the fast multipole algorithm for the solution of many-body problems (Greengard and Rokhlin).
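To illustrate what the FFT buys, the toy comparison below computes a discrete Fourier transform both naively, in quadratic time, and with NumPy's FFT, and checks that the two agree.

```python
# Naive O(n^2) DFT versus the O(n log n) FFT; both should agree to rounding error.
import numpy as np

def naive_dft(x):
    n = len(x)
    k = np.arange(n)
    # DFT matrix entry (j, k) is exp(-2*pi*i*j*k/n)
    W = np.exp(-2j * np.pi * np.outer(k, k) / n)
    return W @ x

x = np.random.rand(256)
print(np.allclose(naive_dft(x), np.fft.fft(x)))  # True
```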
The invention of the Wiener filter by Norbert Wiener. The Wiener filter is an optimal filter for extracting signals from noise in stationary stochastic systems and is one of the central results in statistical communication theory, a field pioneered by Wiener. (A version of the Wiener filter was also formulated independently by Andrei Kolmogorov+.) The nonlinear, recursive Wiener filter, or Kalman filter, its extension to nonstationary systems for use in tracking and guidance was first formulated by Peter Swerling in 1959.14 Wiener and Alexander Khinchine independently derived the Wiener-Khinchine theorem, another central result in statistical communication theory.
The invention of statistical decision theory by Abraham Wald. Among other applications, statistical decision theory plays an important role in radar, control, and communication. Its minimax decision rules derive from John von Neumann's theory of optimal strategies (theory of games).
The invention of dynamic programming by Richard Bellman. This procedure solves sequential, or multi-stage, decision problems and is one of the foundations of modern control theory. It also constitutes the basis for many powerful algorithms, including the backpropagation algorithm used to train neural networks in machine learning and the Viterbi algorithm, invented by Andrew Viterbi, that is used to decode convolutional codes employed in error correction and in CDMA and GSM digital cellular telephony.
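The dynamic-programming idea behind the Viterbi algorithm fits in a few lines. In the toy hidden Markov model below (states, probabilities, and observations are all invented for illustration), the most probable state sequence is found by keeping, for each step and each state, only the best path that reaches it.

```python
# Viterbi decoding of a toy two-state hidden Markov model.
states = ["Rain", "Sun"]
start = {"Rain": 0.6, "Sun": 0.4}
trans = {"Rain": {"Rain": 0.7, "Sun": 0.3}, "Sun": {"Rain": 0.4, "Sun": 0.6}}
emit = {"Rain": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
        "Sun": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

def viterbi(obs):
    # best[t][s] = (probability of the best path ending in state s at time t, previous state)
    best = [{s: (start[s] * emit[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        best.append({})
        for s in states:
            p, prev = max((best[t - 1][q][0] * trans[q][s] * emit[s][obs[t]], q) for q in states)
            best[t][s] = (p, prev)
    # backtrack from the most probable final state
    path = [max(states, key=lambda s: best[-1][s][0])]
    for t in range(len(obs) - 1, 0, -1):
        path.append(best[t][path[-1]][1])
    return list(reversed(path))

print(viterbi(["walk", "shop", "clean"]))  # ['Sun', 'Rain', 'Rain']
```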
The co-invention of public-key cryptography by Martin Hellman. Hellman and Whitfield Diffie+ devised the Diffie-Hellman algorithm for secure key distribution over nonsecure channels.
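The exchange can be demonstrated with numbers far too small to be secure (a toy illustration; real deployments use moduli with hundreds of digits): each side publishes g raised to a secret exponent, and both arrive at the same shared secret without ever transmitting it.

```python
# Toy Diffie-Hellman exchange with textbook-sized numbers (not secure -- illustration only).
import random

p, g = 23, 5                       # public prime modulus and generator

a = random.randrange(2, p - 1)     # Alice's secret exponent
b = random.randrange(2, p - 1)     # Bob's secret exponent

A = pow(g, a, p)                   # Alice sends g^a mod p
B = pow(g, b, p)                   # Bob sends g^b mod p

shared_alice = pow(B, a, p)        # Alice computes (g^b)^a
shared_bob = pow(A, b, p)          # Bob computes (g^a)^b

print(shared_alice == shared_bob)  # True: both hold the same secret g^(ab) mod p
```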
The co-invention of RSA by Adi Shamir and Leonard Adleman. RSA (which is named for its three co-inventors, Shamir, Adleman, and Ronald Rivest+) is the most widely used public-key algorithm.
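A toy rendering of RSA (tiny textbook primes, chosen only to keep the arithmetic visible) shows the shape of the scheme: the public key encrypts by modular exponentiation, and the matching private exponent can be computed only with knowledge of the factors of n.

```python
# Toy RSA with tiny primes -- illustration only, utterly insecure at this size.
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)        # Euler's totient, computable only if you can factor n
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent: modular inverse of e (Python 3.8+)

message = 65
ciphertext = pow(message, e, n)
recovered = pow(ciphertext, d, n)
print(ciphertext, recovered)   # 2790 65
```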
The invention of elliptic curve cryptography (ECC) by Neal Koblitz and Victor S. Miller (independently). Based on concepts rooted in algebraic geometry, ECC is widely deployed as the leading successor to RSA. Relative to RSA, it provides greater cryptographic strength with much smaller cryptovariables. (It does, however, share the same potential vulnerability to quantum computational attack.)
The formulation of the learning with errors (LWE) problem and encryption system by Oded Regev. The application of LWE, a computationally "hard" problem, has led to revolutionary developments in cryptography. CRYSTALS-Kyber, the recent finalist in the US National Institute of Standards and Technology's six-year international competition to find a quantum-resistant, or "post-quantum," algorithm standard for general encryption, is based on LWE. CRYSTALS-Dilithium, one of the three algorithms selected to become quantum-resistant digital signature standards, is also based on LWE. (FALCON, one of the other two digital signature finalists, is based on the NTRU cryptographic system, which was invented by Joseph Silverman, Jeffrey Hoffstein, and Jill Pipher.+) LWE is also the basis for the recent breakthrough in the construction of efficient, fully homomorphic encryption (FHE) schemes by Zvika Brakerski, Craig Gentry,+ and Vinod Vaikuntanathan.+ FHE techniques permit the processing of encrypted data without the need to first decrypt it, thus enabling the secure processing of sensitive data on non-secure computing platforms.
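A stripped-down, insecure version of Regev-style single-bit LWE encryption (toy parameters, purely to show the mechanics) looks as follows: the public key is a batch of noisy linear equations in a secret vector, encryption adds up a random subset of them and hides the bit in the high-order part, and decryption works because the accumulated noise stays small.

```python
# Toy single-bit LWE encryption in the style of Regev's scheme (insecure parameters).
import random

n, m, q = 8, 32, 257                      # dimension, number of samples, modulus

def keygen():
    s = [random.randrange(q) for _ in range(n)]                       # secret vector
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]   # random matrix
    e = [random.choice([-1, 0, 1]) for _ in range(m)]                 # small errors
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]
    return s, (A, b)

def encrypt(pub, bit):
    A, b = pub
    r = [random.randrange(2) for _ in range(m)]                       # random subset selector
    u = [sum(r[i] * A[i][j] for i in range(m)) % q for j in range(n)]
    v = (sum(r[i] * b[i] for i in range(m)) + bit * (q // 2)) % q
    return u, v

def decrypt(s, ct):
    u, v = ct
    d = (v - sum(u[j] * s[j] for j in range(n))) % q                  # small noise plus bit * q/2
    return 0 if min(d, q - d) < q // 4 else 1

s, pub = keygen()
for bit in (0, 1):
    print(bit, decrypt(s, encrypt(pub, bit)))   # each line prints the same bit twice
```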
The invention of quantum cryptography by Stephen Wiesner. Although quantum key distribution was invented in the mid-1980s by others, it was specifically acknowledged to have been inspired by Wiesner's circa 1970 work that established the basic principles underlying the use of quantum mechanics to achieve information security.
The development of mathematical and statistical cryptanalysis by William Friedman. Friedman's innovations are ranked amongst the greatest in the history of cryptology; he supervised the breaking of the Japanese diplomatic code PURPLE in 1940 and directed US cryptanalysis during World War II. Other important World War II cryptologists included Solomon Kullback, Leo Rosen, and Abraham Sinkov in the US and Max Newman*, I.J. Good, and Leo Marks in England. Newman and Good were instrumental in the design of Colossus, which was used to break the Lorenz cipher employed by the German high command. Marks, the chief cryptologist of the Special Operations Executive (SOE) of MI6, revolutionized the one-time pad.
The invention of cryptocurrency by David Chaum. In his seminal 1982 paper, Chaum established the concept of secure digital cash, the first cryptocurrency. Chaum's invention anticipated by several decades the now widespread recognition of the issue of electronic privacy in financial and other online transactions.
The co-invention of blockchain by David Chaum and, independently, by Stuart Haber (together with W. Scott Stornetta+). Chaum gave the first full technical description of a blockchain in his 1982 doctoral dissertation, but never published the concept in a journal or conference proceeding. It, therefore, went largely unnoticed. In a 1991 paper, Haber and Stornetta+ introduced another embodiment of this concept for a shared, tamper-proof, decentralized transaction ledger. Their company, Surety Technologies, founded in 1994, created the first and longest running commercial blockchain. Blockchain technology is widely claimed to have the potential to revolutionize e-commerce (and more) by greatly increasing the efficiency, confidentiality, and trust with which online transactions are conducted and recorded.
The invention of convolutional codes by Peter Elias. Important decoding algorithms for these error correction codes were invented by Barney Reiffen, Robert Fano, and Andrew Viterbi.
The co-invention of the Reed-Solomon error correction code by Gustave Solomon. Reed-Solomon and Viterbi- or Fano-decoded convolutional codes, or hybrid concatenations of the two, are probably the most widely used error correction techniques at present.
The invention of the LZ data compression algorithm by Jacob Ziv and Abraham Lempel. Although LZ coding was not the first data compression technique (the first such technique having been invented, independently, by Robert Fano and Claude Shannon+), it is today the most widely used in commercial systems. It underpins PDF, GIF, TIFF, ZIP, and other widely used file formats.
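The family's practical reach is easy to see with Python's built-in zlib module, which implements DEFLATE, a combination of LZ77-style dictionary matching and Huffman coding; the sample string here is arbitrary.

```python
# LZ-style compression via the DEFLATE algorithm in the standard library.
import zlib

text = b"abracadabra abracadabra abracadabra " * 100
compressed = zlib.compress(text)

print(len(text), len(compressed))           # repetitive input shrinks dramatically
print(zlib.decompress(compressed) == text)  # True: compression is lossless
```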
The development of automated, electronically switched telephone networks by Amos Joel. Joel received both the 1989 Kyoto Prize ("Japan's Nobel Prize") and the 1993 US National Medal of Technology for work that revolutionized telephone switching systems worldwide. Joel's 1972 US Patent No. 3,663,762, "Mobile Communication System," is the basis of the switching technology that made cellular telephone networks possible.
The co-invention of spread spectrum communications by Hedy Lamarr. Lamarr (the Hollywood actress) and George Antheil+ (a Hollywood composer) received US Patent No. 2,292,387, "Secret Communication System," in 1942 for the invention of frequency-hopped spread spectrum. The digital form of spread spectrum that is widely used in cellular communications (CDMA) was developed by Qualcomm, a company founded by the information theorists Irwin Jacobs and Andrew Viterbi. Jacobs received the US National Medal of Technology in 1994 and Viterbi received the US National Medal of Science in 2007. Both were recognized for their pioneering innovations in digital wireless communications. Joel Engel also received the Medal of Technology in 1994 as one of the two "fathers of the cellular phone" for his work on the development of the basic network architecture used worldwide in cellular telephony. (The cell phone itself, as opposed to cellular switching fabrics, etc., was invented by Martin Cooper. Precursor technologies, such as the "walkie-talkie," the pager, the cordless telephone, and CB radio, were largely pioneered by Irving "Al" Gross.)
The co-invention of the Internet by Leonard Kleinrock, Paul Baran, Vinton Cerf,* and Robert Kahn. Together with Kleinrock, Baran, Cerf, and Kahn, Donald Davies+ and Lawrence Roberts+ are the six individuals most frequently cited as principal inventors of the Internet. Kleinrock, Cerf, Kahn, and Roberts+were awarded the US National Academy of Engineering's half-million dollar Draper Prize in 2001 "for the development of the Internet." Baran, Kleinrock, Davies+, and Roberts+ received the first IEEE Internet Award in 2000 for "their early, preeminent contributions in conceiving, analyzing and demonstrating packet-switching networks, the foundation technology of the Internet." Cerf, Kahn, and Baran received US National Medals of Technology, the former two in 1997 and the latter in 2007. Kleinrock was awarded the US National Medal of Science in 2007. Cerf and Kahn co-invented the TCP/IP protocol for integration of heterogeneous networks, which is the basis of the Internet's "inter-networking" architecture. They shared the 2004 ACM Turing Award for this work, and in 2005 each received the US Presidential Medal of Freedom.
The invention of Alohanet (precursor to Ethernet) by Norman Abramson. Alohanet was a packet-switched research network that solved the major problem of packet interference, or "packet collision." Alohanet was further developed by Robert Metcalfe,+ working at the Xerox Palo Alto Research Center, into Ethernet (which Metcalfe+ originally called the Alto Aloha network), the standard method used in local area computer networking. Radia Perlman's spanning tree protocol, which solved the problem of broadcast storms due to network switching loops, was the critical enabler that allowed Ethernet to realize high levels of robust network complexity.
The invention of Google by Sergey Brin and Larry Page*. The ranking algorithm employed by Google, the most powerful and widely used search engine on the Internet, is an adaptation of the citation frequency "impact factor" metric originally invented in the 1950s by Eugene Garfield to rank the relative influence of scientific researchers, articles, and journals. A search algorithm very similar to Google PageRank, called HITS (Hypertext Induced Topic Search), was devised almost simultaneously by Jon Kleinberg at IBM. In his papers, Kleinberg credited the 1976 mathematical work of Gabriel Pinski and Francis Narin, as does the PageRank patent. Pinski and Narin had shown how to formulate and compute Garfield's relative influence in terms of a graph theoretic matrix eigenvalue problem. Similar mathematical techniques for calculating the relative influence of individuals in social networks or of production sectors in national economies can be found, respectively, in the 1953 work of statistician Leo Katz and in the 1941 work on input-output analysis by the Nobel Prize winning economist Wassily Leontief.*
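The eigenvector idea behind this line of work can be sketched with a few lines of power iteration on a toy link graph; the graph and damping factor below are illustrative, and this is the textbook formulation of PageRank rather than Google's production system.

```python
# Power-iteration PageRank on a tiny directed link graph.
import numpy as np

links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}   # page -> pages it links to
n, d = 4, 0.85                                # number of pages, damping factor

# Column-stochastic transition matrix: M[j, i] = 1/outdegree(i) if i links to j.
M = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        M[j, i] = 1 / len(outs)

rank = np.full(n, 1 / n)
for _ in range(100):
    rank = (1 - d) / n + d * M @ rank          # random jump plus link-following

print(np.round(rank, 3))                       # pages 2 and 0 end up with the highest rank
```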
NOTES
1. According to the prominent historian of technology George Dyson, "there were four essential prophets whose mathematics brought us into the Information Age: Norbert Wiener, John von Neumann, Alan Turing and Claude Shannon." See "The Elegance of Ones and Zeroes," by George Dyson in The Wall Street Journal, 21 July 2017.
2. See Genius in the Shadows: A Biography of Leo Szilard, by William Lanouette (Scribner's, New York, 1992, p. 63).
3. See "Emil Post and His Anticipation of Gödel and Turing," by John Stillwell in Mathematics Magazine (Mathematical Association of America, Washington, DC, Vol. 77, No. 1, Feb. 2004, pp. 3-14). See also http://www-gap.dcs.st-and.ac.uk/~history/Mathematicians/Post.html.
4. See "Max Newman: Mathematician, Codebreaker and Computer Pioneer," by William Newman in Colossus: The First Electronic Computer, edited by Jack Copeland (Oxford, Oxford and New York, 2004).
5. Encyclopedia of Computer Science (Fourth Edition), edited by Anthony Ralston, Edwin D. Reilly, and David Hemmendinger (Wiley, Chichester, England, 2003, p. 1841).
6. See Turing's Cathedral: The Origins of the Digital Universe, by George Dyson (Pantheon/Random House, New York, 2012, p. 287).
7. Parallel Supercomputing in SIMD Architectures, by R. Michael Hord (CRC Press, Boca Raton, FL, 1990).
8. Although most supercomputers are now based on MIMD (multiple instruction, multiple data) architectures, their individual processing nodes generally embody small-scale SIMD capabilities. The still largely hypothetical quantum computer can be thought of as an SIMD machine with exponentially many virtual processors.
9. See AI: The Tumultuous History of the Search for Artificial Intelligence, by Daniel Crevier (Basic Books, New York, 1993, p. 26), or Encyclopedia of Computer Science (Fourth Edition), edited by Anthony Ralston, Edwin D. Reilly, and David Hemmendinger (Wiley, Chichester, England, 2003, p. 91).
10. The 2019 IEEE Xplore article Who Is the Father of Deep Learning? states: We conclude that Frank Rosenblatt developed and explored all the basic ingredients of the deep learning systems of today, and that he should be recognized as a Father of Deep Learning, perhaps together with Hinton, LeCun and Bengio who have just received the Turing Award as the fathers of the deep learning revolution. Lawrence Fogel initiated the field of evolutionary computing in 1960 and is considered to be the father of evolutionary programming, the first of four basic approaches that were eventually formulated and subsequently merged into the field of evolutionary computing.
11. See "Marshall Rosenbluth and the Metropolis algorithm," by J. E. Gubernatis, in Physics of Plasmas (12, 057303, 2005). According to this article, Nicholas Metropolis'+ only contribution to the algorithm's development was making available the use of MANIAC, the Los Alamos computer he had constructed as a replica of the computer that von Neumann designed and built at the Institute for Advanced Study. Of the other co-authors of the Metropolis algorithm paper, "arguably the most significant publication in the history of computational physics," Mici Teller initiated the assembly language programming work to code the algorithm, Arianna Rosenbluth+ took that over and produced from scratch the actual program used to test the algorithm, Edward Teller made "the crucial suggestion" to employ ensemble, rather than temporal averaging and how to do that, and Marshall Rosenbluth actually designed the algorithm, incorporating insights of his own. According to Gubernatis, the "key," as opposed to the "crucial," idea underpinning the algorithm's power was the principle of detailed balance, which was implicit in the original 1953 paper, but not made explicit until Rosenbluth formulated a general proof of the algorithm's validity in 1956. Rosenbluth went on to become one of the world's leading plasma theorists and a winner of both the Enrico Fermi Award (1985) and the US National Medal of Science (1997).
12. See http://www-gap.dcs.st-and.ac.uk/~history/Mathematicians/Dantzig_George.html.
13. Cooley and Tukey are generally credited with the invention of the "modern" FFT. Their 1965 paper only referenced the prior work of I. J. Good, whose FFT algorithm was both somewhat different and less efficient. In a January 1992 paper in IEEE SP Magazine, entitled "How the FFT Gained Acceptance," Cooley reviewed other prior work and concluded that "it appears that Lanczos had the FFT algorithm" in 1942. He holds out the possibility that Gauss may have had it as early as 1805, however. Gauss's work was not published until after his death, appearing only in 1866 in his collected works; it was in Latin and employed somewhat archaic notation, which made it difficult to decipher. An analysis by Michael Heideman, Don Johnson, and C. Sidney Burrus has concluded that Gauss did indeed have the basic elements of the modern FFT in 1805, but due to its obscurity, his formulation appears to have had no influence on subsequent work.
14. See the next-to-last paragraphs in https://archive.siam.org/news/news.php?id=526 and in the obituary published in the November 2000 issue of Physics Today (pp. 75-76). See also the discussion in the Appendix to Tracking and Kalman Filtering Made Easy, by Eli Brookner (Wiley, New York, 1998, pp. 383-387).
* Georg Cantor and Herbert Simon had Jewish fathers; Simon's mother was of partial Jewish descent, which was also the case, at a minimum, for the mother of Georg Cantor. Max Newman and Vinton Cerf had Jewish fathers and non-Jewish mothers, while Richard Brodie, Wassily Leontief, Yuri Manin, John McCarthy, Larry Page, and Lotfi Zadeh have, or had, Jewish mothers. For more information, see the footnotes to these and other listings in Jewish Computer and Information Scientists, or in the cases of Leontief and Manin, in Jewish Economists and Jewish Mathematicians, respectively.
+ Non-Jewish.
2453 | dbpedia | 1 | 53 | https://www.dotquantum.io/en/quantum-biographies/ | en | Scientists and Visionaries | dotQuantum.io | 2018-09-19 | Quantum Biographies. Scientists and Visionaries. The characters who founded the quantum mechanics from which Quantum Computation was born.
At school in Munich, Germany, Max Planck's first love was music. When he asked a musician where he should go to study it, he was told he'd better do something else if he had to ask that question. He turned to physics, but his professor complained that physics was a complete science: nothing more could be learned. Fortunately, Planck ignored him and went on to develop the concept of quanta. Planck endured the deaths of his wife and of two sons killed in the world wars. Remaining in Germany, he was able to rebuild physics research there in the aftermath. Today, Germany's prestigious Max Planck research institutes are named after him.
Abner Eliezer Shimony was an American physicist and philosopher. He specialized in quantum theory and philosophy of science. As a physicist he concentrated on the interaction between relativity theory and quantum mechanics. He has authored many works and research on complementarity in quantum entanglement as well as multiparticle quantum interferometry, both relating to quantum coherence. He has authored research articles and books on the foundations of quantum mechanics. He received the 1966 Lakatos Prize for his work in Philosophy of Science. On Stanford Encyclopedia of Philosophy you can read his Bell’s Theorem article.
Nick Herbert is an American physicist and author, best known for his book Quantum Reality. Herbert studied engineering physics at Ohio State University, graduating in 1959. He received a Ph.D. in physics from Stanford University in 1967 for work on nuclear scattering experiments. After a one-year teaching job at Monmouth College in Illinois, Herbert held a number of posts in industry. The most illustrious of these was senior physicist at Memorex Corporation in Santa Clara, California. While employed in industry, Herbert was part of the Fundamental Fysiks Group at Lawrence Berkeley National Laboratory, founded in May 1975 by Elizabeth Rauscher and George Weissmann. The group’s initial interest was in the interpretation of quantum mechanics, the EPR paradox, and Bell’s inequality, but members pursued diverse interests that lay outside of mainstream physics, exploring psychedelic drugs, psi phenomena, the nature of consciousness, and speculative connections of these areas with quantum physics. During the 1970s and 1980s, Herbert and Saul-Paul Sirag organized a yearly Esalen Seminar on the Nature of Reality, bringing together participants to discuss the interpretation of quantum mechanics.
John Francis Clauser is an American theoretical and experimental physicist known for contributions to the foundations of quantum mechanics, in particular the Clauser-Horne-Shimony-Holt inequality. John Clauser was born in Pasadena, California. He received his B.S. in physics from the California Institute of Technology in 1964. He received his M.A. in physics in 1966 and his Ph.D. in physics in 1969 from Columbia University.
From 1969 to 1996 he worked mainly at Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, and the University of California, Berkeley. Clauser was awarded the Wolf Prize in Physics in 2010 together with Alain Aspect and Anton Zeilinger. In 1972, working with Stuart Freedman, he carried out the first experimental test of the CHSH-Bell’s theorem predictions. This was the world’s first observation of quantum entanglement, and was the first experimental observation of a violation of a Bell inequality. In 1974, working with Michael Horne, he first showed that a generalization of Bell’s Theorem provides severe constraints for all local realistic theories of nature (a.k.a. objective local theories). That work introduced the Clauser–Horne (CH) inequality as the first fully general experimental requirement set by local realism. It also introduced the “CH no-enhancement assumption”, whereupon the CH inequality reduces to the CHSH inequality, and whereupon associated experimental tests also constrain local realism. Also in 1974 he made the first observation of sub-Poissonian statistics for light (via a violation of the Cauchy–Schwarz inequality for classical electromagnetic fields), and thereby, for the first time, demonstrated an unambiguous particle-like character for photons. In 1976 he carried out the world’s second experimental test of the CHSH-Bell’s Theorem predictions.
David Elieser Deutsch is an Israeli-born British physicist at the University of Oxford. He is a Visiting Professor in the Department of Atomic and Laser Physics at the Centre for Quantum Computation (CQC) in the Clarendon Laboratory of the University of Oxford. He pioneered the field of quantum computation by formulating a description for a quantum Turing machine, as well as specifying an algorithm designed to run on a quantum computer. He is a proponent of the many-worlds interpretation of quantum mechanics.
Lov Kumar Grover is an Indian-American computer scientist. He is the originator of the Grover database search algorithm used in quantum computing. Grover's 1996 algorithm won renown as the second major algorithm proposed for quantum computing (after Shor's 1994 algorithm), and in 2017 was finally implemented in a scalable physical quantum system. Grover's algorithm has been the subject of numerous popular science articles. Grover has been ranked as the 9th most prominent computer scientist from India. Grover received his bachelor's degree from the Indian Institute of Technology, Delhi in 1981 and his PhD in electrical engineering from Stanford University in 1985. He then went to Bell Laboratories, and worked as an assistant professor at Cornell University from 1987 to 1995.
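As a flavor of what the algorithm does, the small state-vector simulation below (an illustration, not production quantum code) searches four items: a single Grover iteration, an oracle sign flip followed by the inversion-about-the-mean diffusion step, concentrates all amplitude on the marked item.

```python
# State-vector simulation of one Grover iteration over N = 4 items.
import numpy as np

N, marked = 4, 2
state = np.full(N, 1 / np.sqrt(N))          # uniform superposition over all items

# Oracle: flip the sign of the amplitude of the marked item.
state[marked] *= -1

# Diffusion: reflect every amplitude about the mean amplitude.
state = 2 * state.mean() - state

print(np.round(np.abs(state) ** 2, 3))      # [0. 0. 1. 0.] -- the marked item is found
```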
2453 | dbpedia | 1 | 93 | https://philarchive.org/citations/AARWPS/order%3Dupdated%3Foffset%3D0%26page_size%3D50%26freeOnly%3D%26eId%3DAARWPS%26newWindow%3Doff%26categorizerOn%3Doff%26langFilter%3Doff%26proOnly%3Doff%26showCategories%3Doff%26publishedOnly%3Doff%26url%3D%26hideAbstracts%3Doff%26total%3D31%26filterByAreas%3Doff%26sqc%3Doff%26onlineOnly%3D%26direction%3Dcitations | en | Citations of: Why philosophers should care about computational complexity (Scott Aaronson) | PhilPapers
The philosophy of computer science. Raymond Turner - 2013 - Stanford Encyclopedia of Philosophy.
On the computational complexity of ethics: moral tractability for minds and machines. Jakob Stenseke - 2024 - Artificial Intelligence Review 57 (105):90.
Why should moral philosophers, moral psychologists, and machine ethicists care about computational complexity? Debates on whether artificial intelligence (AI) can or should be used to solve problems in ethical domains have mainly been driven by what AI can or cannot do in terms of human capacities. In this paper, we tackle the problem from the other end by exploring what kind of moral machines are possible based on what computational systems can or cannot do. To do so, we analyze normative ethics through the lens of computational complexity. First, we introduce computational complexity for the uninitiated reader and discuss how the complexity of ethical problems can be framed within Marr's three levels of analysis. We then study a range of ethical problems based on consequentialism, deontology, and virtue ethics, with the aim of elucidating the complexity associated with the problems themselves (e.g., due to combinatorics, uncertainty, strategic dynamics), the computational methods employed (e.g., probability, logic, learning), and the available resources (e.g., time, knowledge, learning). The results indicate that most problems the normative frameworks pose lead to tractability issues in every category analyzed. Our investigation also provides several insights about the computational nature of normative ethics, including the differences between rule- and outcome-based moral strategies, and the implementation-variance with regard to moral resources. We then discuss the consequences complexity results have for the prospect of moral machines in virtue of the trade-off between optimality and efficiency. Finally, we elucidate how computational complexity can be used to inform both philosophical and cognitive-psychological research on human morality by advancing the moral tractability thesis.
A metalinguistic and computational approach to the problem of mathematical omniscience. Zeynep Soysal - 2022 - Philosophy and Phenomenological Research 106 (2):455-474.
In this paper, I defend the metalinguistic solution to the problem of mathematical omniscience for the possible-worlds account of propositions by combining it with a computational model of knowledge and belief. The metalinguistic solution states that the objects of belief and ignorance in mathematics are relations between mathematical sentences and what they express. The most pressing problem for the metalinguistic strategy is that it still ascribes too much mathematical knowledge under the standard possible-worlds model of knowledge and belief on which these are closed under entailment. I first argue that Stalnaker's fragmentation strategy is insufficient to solve this problem. I then develop an alternative, computational strategy: I propose a model of mathematical knowledge and belief adapted from the algorithmic model of Halpern et al. which, when combined with the metalinguistic strategy, entails that mathematical knowledge and belief require computational abilities to access metalinguistic information, and thus aren't closed under entailment. As I explain, the computational model generalizes beyond mathematics to a version of the functionalist theory of knowledge and belief that motivates the possible-worlds account in the first place. I conclude that the metalinguistic and computational strategies yield an attractive functionalist, possible-worlds account of mathematical content, knowledge, and inquiry.
Review: Noson S. Yanofsky: The Outer Limits of Reason. What Science, Mathematics, and Logic Cannot Tell Us. Tim Räz - 2015 - Dialectica 69 (2):248-254.
Euler's Königsberg: the explanatory power of mathematics. Tim Räz - 2017 - European Journal for Philosophy of Science 8:331–46.
The present paper provides an analysis of Euler's solutions to the Königsberg bridges problem. Euler proposes three different solutions to the problem, addressing their strengths and weaknesses along the way. I put the analysis of Euler's paper to work in the philosophical discussion on mathematical explanations. I propose that the key ingredient to a good explanation is the degree to which it provides relevant information. Providing relevant information is based on knowledge of the structure in question, graphs in the present case. I also propose computational complexity and logical strength as measures of relevant information.
La deriva genética como fuerza evolutiva [Genetic drift as an evolutionary force]. Ariel Jonathan Roffé - 2015 - In J. Ahumada, N. Venturelli & S. Seno Chibeni (eds.), Selección de Trabajos del IX Encuentro AFHIC y las XXV Jornadas de Epistemología e Historia de la ciencia. pp. 615-626.
Computing in the nick of time. J. Brendan Ritchie & Colin Klein - 2023 - Ratio 36 (3):169-179.
The medium‐independence of computational descriptions has shaped common conceptions of computational explanation. So long as our goal is to explain how a system successfully carries out its computations, then we only need to describe the abstract series of operations that achieve the desired input–output mapping, however they may be implemented. It is argued that this abstract conception of computational explanation cannot be applied to so‐called real‐time computing systems, in which meeting temporal deadlines imposed by the systems with which a device interfaces is constitutive of the computing tasks that a device performs. Instead, real‐time computing reveals the need for alternative conceptions of computational explanation, as well as computational implementation, that eschew medium‐independence.
Download Export citation Bookmark
Tractability and the computational mind. Rineke Verbrugge & Jakub Szymanik - 2018 - In Mark Sprevak & Matteo Colombo (eds.), The Routledge Handbook of the Computational Mind. Routledge. pp. 339-353.
We overview logical and computational explanations of the notion of tractability as applied in cognitive science. We start by introducing the basics of mathematical theories of complexity: computability theory, computational complexity theory, and descriptive complexity theory. Computational philosophy of mind often identifies mental algorithms with computable functions. However, with the development of programming practice it has become apparent that for some computable problems finding effective algorithms is hardly possible. Some problems need too much computational resource, e.g., time or memory, to be practically computable. Computational complexity theory is concerned with the amount of resources required for the execution of algorithms and, hence, the inherent difficulty of computational problems. An important goal of computational complexity theory is to categorize computational problems via complexity classes, and in particular, to identify efficiently solvable problems and draw a line between tractability and intractability. We survey how complexity can be used to study computational plausibility of cognitive theories. We especially emphasize methodological and mathematical assumptions behind applying complexity theory in cognitive science. We pay special attention to the examples of applying logical and computational complexity toolbox in different domains of cognitive science. We focus mostly on theoretical and experimental research in psycholinguistics and social cognition.
Computational complexity in the philosophy of mind: unconventional methods to solve the problem of logical omniscience. Safal Aryal - manuscript.
The philosophy of mind is traditionally concerned with the study of mental processes, language, the representation of knowledge, and the relation the mind shares with the body; computational complexity theory is concerned with the classification of computationally solvable problems (be it via execution time, storage requirements, etc.). While there are well-established links between computer science in general and the philosophy of mind, many possible solutions to traditional problems in the philosophy of mind have not yet been analyzed from the more specific lens of computational complexity theory. In his paper "Why Philosophers Should Care about Computational Complexity", Scott Aaronson argues that many conventional theories of epistemology and mind implicitly make the presupposition of omniscience (by supposing that knowing base facts means a knower necessarily understands derivative facts); he proposes that computational complexity theory could explain why this is not the case. In this paper, I argue for a theory of mental representation and epistemology compatible with Aaronson's observations on complexity theory, overcoming that presupposition of omniscience.
Computers Are Syntax All the Way Down: Reply to Bozşahin. William J. Rapaport - 2019 - Minds and Machines 29 (2):227-237.
A response to a recent critique by Cem Bozşahin of the theory of syntactic semantics as it applies to Helen Keller, and some applications of the theory to the philosophy of computer science.
What is the upper limit of value? David Manheim & Anders Sandberg - manuscript.
How much value can our decisions create? We argue that unless our current understanding of physics is wrong in fairly fundamental ways, there exists an upper limit of value relevant to our decisions. First, due to the speed of light and the definition and conception of economic growth, the limit to economic growth is a restrictive one. Additionally, a related far larger but still finite limit exists for value in a much broader sense due to the physics of information and the ability of physical beings to place value on outcomes. We discuss how this argument can handle lexicographic preferences, probabilities, and the implications for infinite ethics and ethical uncertainty.
A fresh look at research strategies in computational cognitive science: The case of enculturated mathematical problem solving. Regina E. Fabry & Markus Pantsar - 2019 - Synthese 198 (4):3221-3263.
Marr’s seminal distinction between computational, algorithmic, and implementational levels of analysis has inspired research in cognitive science for more than 30 years. According to a widely-used paradigm, the modelling of cognitive processes should mainly operate on the computational level and be targeted at the idealised competence, rather than the actual performance of cognisers in a specific domain. In this paper, we explore how this paradigm can be adopted and revised to understand mathematical problem solving. The computational-level approach applies methods from computational complexity theory and focuses on optimal strategies for completing cognitive tasks. However, human cognitive capacities in mathematical problem solving are essentially characterised by processes that are computationally sub-optimal, because they initially add to the computational complexity of the solutions. Yet, these solutions can be optimal for human cognisers given the acquisition and enactment of mathematical practices. Here we present diagrams and the spatial manipulation of symbols as two examples of problem solving strategies that can be computationally sub-optimal but humanly optimal. These aspects need to be taken into account when analysing competence in mathematical problem solving. Empirically informed considerations on enculturation can help identify, explore, and model the cognitive processes involved in problem solving tasks. The enculturation account of mathematical problem solving strongly suggests that computational-level analyses need to be complemented by considerations on the algorithmic and implementational levels. The emerging research strategy can help develop algorithms that model what we call enculturated cognitive optimality in an empirically plausible and ecologically valid way.
Computers Aren’t Syntax All the Way Down or Content All the Way Up. Cem Bozşahin - 2018 - Minds and Machines 28 (3):543-567.
This paper argues that the idea of a computer is unique. Calculators and analog computers are not different ideas about computers, and nature does not compute by itself. Computers, once clearly defined in all their terms and mechanisms, rather than enumerated by behavioral examples, can be more than instrumental tools in science, and more than source of analogies and taxonomies in philosophy. They can help us understand semantic content and its relation to form. This can be achieved because they have the potential to do more than calculators, which are computers that are designed not to learn. Today’s computers are not designed to learn; rather, they are designed to support learning; therefore, any theory of content tested by computers that currently exist must be of an empirical, rather than a formal nature. If they are designed someday to learn, we will see a change in roles, requiring an empirical theory about the Turing architecture’s content, using the primitives of learning machines. This way of thinking, which I call the intensional view of computers, avoids the problems of analogies between minds and computers. It focuses on the constitutive properties of computers, such as showing clearly how they can help us avoid the infinite regress in interpretation, and how we can clarify the terms of the suggested mechanisms to facilitate a useful debate. Within the intensional view, syntax and content in the context of computers become two ends of physically realizing correspondence problems in various domains.
On Two Different Kinds of Computational Indeterminacy. Philippos Papayannopoulos, Nir Fresco & Oron Shagrir - 2022 - The Monist 105 (2):229-246.
It is often indeterminate what function a given computational system computes. This phenomenon has been referred to as “computational indeterminacy” or “multiplicity of computations.” In this paper, we argue that what has typically been considered and referred to as the challenge of computational indeterminacy in fact subsumes two distinct phenomena, which are typically bundled together and should be teased apart. One kind of indeterminacy concerns a functional characterization of the system’s relevant behavior. Another kind concerns the manner in which the abstract states are interpreted. We discuss the similarities and differences between the two kinds of computational indeterminacy, their implications for certain accounts of “computational individuation” in the literature, and their relevance to different levels of description within the computational system. We also examine the inter-relationships between our proposed accounts of the two kinds of indeterminacy and the main accounts of “computational implementation.”
Almost Ideal: Computational Epistemology and the Limits of Rationality for Finite Reasoners. Danilo Fraga Dantas - 2016 - Dissertation, University of California, Davis.
The notion of an ideal reasoner has several uses in epistemology. Often, ideal reasoners are used as a parameter of (maximum) rationality for finite reasoners (e.g. humans). However, the notion of an ideal reasoner is normally construed in such a high degree of idealization (e.g. infinite/unbounded memory) that this use is unadvised. In this dissertation, I investigate the conditions under which an ideal reasoner may be used as a parameter of rationality for finite reasoners. In addition, I present and justify the research program of computational epistemology, which investigates the parameter of maximum rationality for finite reasoners using computer simulations.
Universality, Invariance, and the Foundations of Computational Complexity in the light of the Quantum Computer. Michael Cuffaro - 2018 - In Hansson Sven Ove (ed.), Technology and Mathematics: Philosophical and Historical Investigations. Cham, Switzerland: Springer Verlag. pp. 253-282.
Computational complexity theory is a branch of computer science dedicated to classifying computational problems in terms of their difficulty. While computability theory tells us what we can compute in principle, complexity theory informs us regarding our practical limits. In this chapter I argue that the science of quantum computing illuminates complexity theory by emphasising that its fundamental concepts are not model-independent, but that this does not, as some suggest, force us to radically revise the foundations of the theory. For model-independence never has been essential to those foundations. The fundamental aim of complexity theory is to describe what is achievable in practice under various models of computation for our various practical purposes. Reflecting on quantum computing illuminates complexity theory by reminding us of this, too often under-emphasised, fact.
Against the possibility of a formal account of rationality. Shivaram Lingamneni - manuscript.
I analyze a recent exchange between Adam Elga and Julian Jonker concerning unsharp (or imprecise) credences and decision-making over them. Elga holds that unsharp credences are necessarily irrational; I agree with Jonker's reply that they can be rational as long as the agent switches to a nonlinear valuation. Through the lens of computational complexity theory, I then argue that even though nonlinear valuations can be rational, they come in general at the price of computational intractability, and that this problematizes their use in defining rationality. I conclude that the meaning of "rationality" may be philosophically vague.
Information. Pieter Adriaans - 2012 - Stanford Encyclopedia of Philosophy.
Virtual Machines and Real Implementations. Tyler Millhouse - 2018 - Minds and Machines 28 (3):465-489.
What does it take to implement a computer? Answers to this question have often focused on what it takes for a physical system to implement an abstract machine. As Joslin observes, this approach neglects cases of software implementation—cases where one machine implements another by running a program. These cases, Joslin argues, highlight serious problems for mapping accounts of computer implementation—accounts that require a mapping between elements of a physical system and elements of an abstract machine. The source of these problems is the complexity introduced by common design features of ordinary computers, features that would be relevant to any real-world software implementation. While Joslin is focused on contemporary views, his discussion also suggests a counterexample to recent mapping accounts which hold that genuine implementation requires simple mappings. In this paper, I begin by clarifying the nature of software implementation and disentangling it from closely related phenomena like emulation and simulation. Next, I argue that Joslin overstates the degree of complexity involved in his target cases and that these cases may actually give us reasons to favor simplicity-based criteria over relevant alternatives. Finally, I propose a novel problem for simplicity-based criteria and suggest a tentative solution.
A Simplicity Criterion for Physical Computation. Tyler Millhouse - 2019 - British Journal for the Philosophy of Science 70 (1):153-178.
The aim of this paper is to offer a formal criterion for physical computation that allows us to objectively distinguish between competing computational interpretations of a physical system. The criterion construes a computational interpretation as an ordered pair of functions mapping (1) states of a physical system to states of an abstract machine, and (2) inputs to this machine to interventions in this physical system. This interpretation must ensure that counterfactuals true of the abstract machine have appropriate counterparts which are true of the physical system. The criterion proposes that rival interpretations be assessed on the basis of simplicity. Simplicity is construed as the Kolmogorov complexity of the interpretation. This approach is closely related to the notion of algorithmic information distance and draws on earlier work on real patterns.
Strict Finitism's Unrequited Love for Computational Complexity. Noel Arteche - manuscript.
As a philosophy of mathematics, strict finitism has been traditionally concerned with the notion of feasibility, defended mostly by appealing to the physicality of mathematical practice. This has led the strict finitists to influence and be influenced by the field of computational complexity theory, under the widely held belief that this branch of mathematics is concerned with the study of what is “feasible in practice”. In this paper, I survey these ideas and contend that, contrary to popular belief, complexity theory is not what the ultrafinitists think it is, and that it does not provide a theoretical framework in which to formalize their ideas — at least not while defending the material grounds for feasibility. I conclude that the subject matter of complexity theory is not proving physical resource bounds in computation, but rather proving the absence of exploitable properties in a search space.
Intuition, intelligence, data compression. Jens Kipper - 2019 - Synthese 198 (Suppl 27):6469-6489.
The main goal of my paper is to argue that data compression is a necessary condition for intelligence. One key motivation for this proposal stems from a paradox about intuition and intelligence. For the purposes of this paper, it will be useful to consider playing board games—such as chess and Go—as a paradigm of problem solving and cognition, and computer programs as a model of human cognition. I first describe the basic components of computer programs that play board games, namely value functions and search functions. I then argue that value functions both play the same role as intuition in humans and work in essentially the same way. However, as will become apparent, using an ordinary value function is just a simpler and less accurate form of relying on a database or lookup table. This raises our paradox, since reliance on intuition is usually considered to manifest intelligence, whereas usage of a lookup table is not. I therefore introduce another condition for intelligence that is related to data compression. This proposal allows that even reliance on a perfectly accurate lookup table can be nonintelligent, while retaining the claim that reliance on intuition can be highly intelligent. My account is not just theoretically plausible, but it also captures a crucial empirical constraint. This is because all systems with limited resources that solve complex problems—and hence, all cognitive systems—need to compress data.
The Role of Observers in Computations. Peter Leupold - 2018 - Minds and Machines 28 (3):427-444.
John Searle raised the question whether all computation is observer-relative. Indeed, all of the common views of computation, be they semantical, functional or causal, rely on mapping something onto the states of a physical or abstract process. In order to effectively execute such a mapping, this process would have to be observed in some way. Thus a probably syntactical analysis by an observer seems to be essential for judging whether a given process implements some computation or not. In order to be able to explore the nature of these observers in a more formal way, we look at the Computing by Observing paradigm, a theoretical model of computation that includes an observer. We argue that the observers used there, monadic transducers, are good candidates for formalizing the way in which the syntax of a process must be analysed in order to judge whether it is computational.
On quantum computing for artificial superintelligence. Anna Grabowska & Artur Gunia - 2024 - European Journal for Philosophy of Science 14 (2):1-30.
Artificial intelligence algorithms, fueled by continuous technological development and increased computing power, have proven effective across a variety of tasks. Concurrently, quantum computers have shown promise in solving problems beyond the reach of classical computers. These advancements have contributed to a misconception that quantum computers enable hypercomputation, sparking speculation about quantum supremacy leading to an intelligence explosion and the creation of superintelligent agents. We challenge this notion, arguing that current evidence does not support the idea that quantum technologies enable hypercomputation. Fundamental limitations on information storage within finite spaces and the accessibility of information from quantum states constrain quantum computers from surpassing the Turing computing barrier. While quantum technologies may offer exponential speed-ups in specific computing cases, there is insufficient evidence to suggest that focusing solely on quantum-related problems will lead to technological singularity and the emergence of superintelligence. Subsequently, there is no premise suggesting that general intelligence depends on quantum effects or that accelerating existing algorithms through quantum means will replicate true intelligence. We propose that if superintelligence is to be achieved, it will not be solely through quantum technologies. Instead, the attainment of superintelligence remains a conceptual challenge that humanity has yet to overcome, with quantum technologies showing no clear path toward its resolution.
Olympia and Other O-Machines. Colin Klein - 2015 - Philosophia 43 (4):925-931.
Against Maudlin, I argue that machines which merely reproduce a pre-programmed series of changes ought to be classed with Turing’s O-Machines even if they would counterfactually show Turing Machine-like activity. This can be seen on an interventionist picture of computational architectures, on which basic operations are the primitive loci for interventions. While constructions like Maudlin’s Olympia still compute, then, claims about them do not threaten philosophical arguments that depend on Turing Machine architectures and their computational equivalents.
PAC Learning and Occam’s Razor: Probably Approximately Incorrect. Daniel A. Herrmann - 2020 - Philosophy of Science 87 (4):685-703.
Computer scientists have provided a distinct justification of Occam’s Razor. Using the probably approximately correct framework, they provide a theorem that they claim demonstrates that we should favor simpler hypotheses. The argument relies on a philosophical interpretation of the theorem. I argue that the standard interpretation of the result in the literature is misguided and that a better reading does not, in fact, support Occam’s Razor at all. To this end, I state and prove a very similar theorem that, if interpreted the same way, would justify the contradictory Anti-Occam’s Razor—the principle that we should favor more complex hypotheses.
Quantum computing. Amit Hagar & Michael Cuffaro - 2019 - Stanford Encyclopedia of Philosophy.
Combining physics, mathematics and computer science, quantum computing and its sister discipline of quantum information have developed in the past few decades from visionary ideas to two of the most fascinating areas of quantum theory. General interest and excitement in quantum computing was initially triggered by Peter Shor (1994) who showed how a quantum algorithm could exponentially “speed-up” classical computation and factor large numbers into primes far more efficiently than any (known) classical algorithm. Shor’s algorithm was soon followed by several other algorithms that aimed to solve combinatorial and algebraic problems, and in the years since theoretical study of quantum systems serving as computational devices has achieved tremendous progress. Common belief has it that the implementation of Shor’s algorithm on a large scale quantum computer would have devastating consequences for current cryptography protocols which rely on the premise that all known classical worst-case algorithms for factoring take time exponential in the length of their input (see, e.g., Preskill 2005). Consequently, experimentalists around the world are engaged in attempts to tackle the technological difficulties that prevent the realisation of a large scale quantum computer. But regardless of whether these technological problems can be overcome (Unruh 1995; Ekert and Jozsa 1996; Haroche and Raimond 1996), it is noteworthy that no proof exists yet for the general superiority of quantum computers over their classical counterparts. The philosophical interest in quantum computing is manifold. From a social-historical perspective, quantum computing is a domain where experimentalists find themselves ahead of their fellow theorists. Indeed, quantum mysteries such as entanglement and nonlocality were historically considered a philosophical quibble, until physicists discovered that these mysteries might be harnessed to devise new efficient algorithms. But while the technology for harnessing the power of 50–100 qubits (the basic unit of information in the quantum computer) is now within reach (Preskill 2018), only a handful of quantum algorithms exist, and the question of whether these can truly outperform any conceivable classical alternative is still open. From a more philosophical perspective, advances in quantum computing may yield foundational benefits. For example, it may turn out that the technological capabilities that allow us to isolate quantum systems by shielding them from the effects of decoherence for a period of time long enough to manipulate them will also allow us to make progress in some fundamental problems in the foundations of quantum theory itself. Indeed, the development and the implementation of efficient quantum algorithms may help us understand better the border between classical and quantum physics (Cuffaro 2017, 2018a; cf. Pitowsky 1994, 100), and perhaps even illuminate fundamental concepts such as measurement and causality. Finally, the idea that abstract mathematical concepts such as computability and complexity may not only be translated into physics, but also re-written by physics bears directly on the autonomous character of computer science and the status of its theoretical entities—the so-called “computational kinds”. As such it is also relevant to the long-standing philosophical debate on the relationship between mathematics and the physical world.
The Accidental Philosopher and One of the Hardest Problems in the World. Sonje Finnestad & Eric Neufeld - 2022 - Philosophies 7 (4):76.
Given the difficulties of defining “machine” and “think”, Turing proposed to replace the question “Can machines think?” with a proxy: how well can an agent engage in sustained conversation with a human? Though Turing neither described himself as a philosopher nor published much on philosophical matters, his Imitation Game has stood the test of time. Most understood at that time that success would not come easy, but few would have guessed just how difficult engaging in ordinary conversation would turn out to be. Despite the proliferation of language processing tools, we have seen little progress towards doing well at the Imitation Game. Had Turing instead suggested ability at games or even translation as a proxy for intelligence, his paper might have been forgotten. We argue that these and related problems are amenable to mechanical, though sophisticated, formal techniques. Turing appears to have taken care to select sustained, productive conversation and that alone as his proxy. Even simple conversation challenges a machine to engage in the rich practice of human discourse in all its generality and variety.
Cellular automata. Francesco Berto & Jacopo Tagliabue - 2012 - Stanford Encyclopedia of Philosophy.
Cellular automata (henceforth: CA) are discrete, abstract computational systems that have proved useful both as general models of complexity and as more specific representations of non-linear dynamics in a variety of scientific fields. Firstly, CA are (typically) spatially and temporally discrete: they are composed of a finite or denumerable set of homogeneous, simple units, the atoms or cells. At each time unit, the cells instantiate one of a finite set of states. They evolve in parallel at discrete time steps, following state update functions or dynamical transition rules: the update of a cell state obtains by taking into account the states of cells in its local neighborhood (there are, therefore, no actions at a distance). Secondly, CA are abstract, as they can be specified in purely mathematical terms and implemented in physical structures. Thirdly, CA are computational systems: they can compute functions and solve algorithmic problems. Despite functioning in a different way from traditional, Turing machine-like devices, CA with suitable rules can emulate a universal Turing machine, and therefore compute, given Turing's Thesis, anything computable....
Computational Complexity Theory and the Philosophy of Mathematics. Walter Dean - 2019 - Philosophia Mathematica 27 (3):381-439.
Computational complexity theory is a subfield of computer science originating in computability theory and the study of algorithms for solving practical mathematical problems. Amongst its aims is classifying problems by their degree of difficulty — i.e., how hard they are to solve computationally. This paper highlights the significance of complexity theory relative to questions traditionally asked by philosophers of mathematics while also attempting to isolate some new ones — e.g., about the notion of feasibility in mathematics, the $\mathbf{P} \neq \mathbf{NP}$ problem and why it has proven hard to resolve, and the role of non-classical modes of computation and proof.
Natural Recursion Doesn’t Work That Way: Automata in Planning and Syntax. Cem Bozsahin - 2016 - In Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence. Cham: Springer. pp. 95-112.
Natural recursion in syntax is recursion by linguistic value, which is not syntactic in nature but semantic. Syntax-specific recursion is not recursion by name as the term is understood in theoretical computer science. Recursion by name is probably not natural because of its infinite typeability. Natural recursion, or recursion by value, is not species-specific. Human recursion is not syntax-specific. The values on which it operates are most likely domain-specific, including those for syntax. Syntax seems to require no more (and no less) than the resource management mechanisms of an embedded push-down automaton (EPDA). We can conceive EPDA as a common automata-theoretic substrate for syntax, collaborative planning, i-intentions, and we-intentions. They manifest the same kind of dependencies. Therefore, syntactic uniqueness arguments for human behavior can be better explained if we conceive automata-constrained recursion as the most unique human capacity for cognitive processes.
Fifty Years of P vs. NP and the Possibility of the Impossible. Lance Fortnow - 2022 - Communications of the ACM. https://cacm.acm.org/research/fifty-years-of-p-vs-np-and-the-possibility-of-the-impossible/
Key Insights
The P vs. NP problem turned 50 in 2021 and its resolution remains far out of reach. Dramatic advances in algorithms and hardware have allowed us to tackle many NP-complete problems while making little progress breaking cryptographic systems.
As we continue to make dramatic progress in machine learning and data-centric computing, P vs. NP can give us a valuable lens through which to view what is and is not possible for the future of learning.
While P vs. NP began as a way to classify problems that are difficult to solve computationally, we now view it as a way to chart the future possibilities for our field.
The P vs. NP problem, and the theory behind it, has not changed dramatically since that 2009 article, but the world of computing most certainly has. The growth of cloud computing has helped to empower social networks, smartphones, the gig economy, fintech, spatial computing, online education, and, perhaps most importantly, the rise of data science and machine learning. In 2009, the top 10 companies by market cap included a single Big Tech company: Microsoft [38]. As of September 2020, the first seven were Apple, Microsoft, Amazon, Alphabet (Google), Alibaba, Facebook, and Tencent. The number of computer science (CS) graduates in the U.S. more than tripled [8] and does not come close to meeting demand.
Rather than simply revise or update the 2009 survey, I have chosen to view advances in computing, optimization, and machine learning through a P vs. NP lens. I look at how these advances bring us closer to a world in which P = NP, the limitations still presented by P vs. NP, and the new opportunities of study which have been created. In particular, I look at how we are heading toward a world I call “Optiland,” where we can almost miraculously gain many of the advantages of P = NP while avoiding some of the disadvantages, such as breaking cryptography.
As an open mathematical problem, P vs. NP remains one of the most important; it is listed among the Clay Mathematics Institute's Millennium Problems [21] (the organization offers a million-dollar bounty for the solution). I close the article by describing some new theoretical computer science results that, while not getting us closer to solving the P vs. NP question, show us that thinking about P vs. NP still drives much of the important research in the area.
The P vs. NP Problem
Are there 300 Facebook users who are all friends with each other? How would you go about answering that question? Let’s assume you work at Facebook. You have access to the entire Facebook graph and can see which users are friends. You now need to write an algorithm to find that large clique of friends. You could try all groups of 300, but there are far too many to search them all. You could try something smarter, perhaps starting with small groups and merging them into bigger groups, but nothing you do seems to work. In fact, nobody knows of a significantly faster solution than to try all the groups, but neither do we know that no such solution exists.
This is basically the P vs. NP question. NP represents problems that have solutions you can check efficiently. If I tell you which 300 people might form a clique, you can check relatively quickly that the 44,850 pairs of users are all friends. Clique is an NP problem. P represents problems where you can find those solutions efficiently. We don't know whether the clique problem is in P. Perhaps surprisingly, Clique is what we call NP-complete—that is, we can solve the Clique problem efficiently if and only if P = NP. Many other problems have this property, including 3-Coloring (can a map be colored using only three colors so that no two neighboring countries have the same color?), Traveling Salesman (find the shortest route through a list of cities, visiting every city and returning to the starting place), and tens to hundreds of thousands of others.
Formally, P stands for “polynomial time,” the class of problems that one can solve in time bounded by a fixed polynomial in the length of the input. NP stands for “nondeterministic polynomial time,” where one can use a nondeterministic machine that can magically choose the best answer. For the purposes of this survey, it is best to think of P and NP simply as efficiently computable and efficiently checkable.
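To make the check-versus-search asymmetry concrete, here is a minimal Python sketch (mine, not the article's, using a made-up toy friendship graph): verifying a claimed clique touches only the pairs inside the group, while the naive search must consider a number of candidate groups that grows exponentially.

```python
from itertools import combinations

def is_clique(friends, group):
    """Check a claimed solution: every pair in `group` must be friends.
    A group of size k needs only k*(k-1)/2 pair checks -- polynomial."""
    return all(frozenset(pair) in friends for pair in combinations(group, 2))

def find_clique(users, friends, k):
    """Naive search: try every size-k group -- exponentially many of them."""
    for group in combinations(users, k):
        if is_clique(friends, group):
            return group
    return None

# Hypothetical toy data.
users = ["ann", "bob", "cal", "dee"]
friends = {frozenset(p) for p in [("ann", "bob"), ("ann", "cal"), ("bob", "cal")]}
print(find_clique(users, friends, 3))             # ('ann', 'bob', 'cal')
print(is_clique(friends, ("ann", "bob", "dee")))  # False -- easy to refute
```

On the full Facebook graph, checking a proposed group of 300 means testing just 44,850 pairs, while the search side faces an astronomical number of candidate groups; that gap between checking and finding is exactly what P vs. NP asks about.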
For those who want a longer informal discussion on the importance of the P vs. NP problem, see the 2009 survey [13] or the popular science book based on that survey [14]. For a more technical introduction, the 1979 book by Michael Garey and David Johnson [16] has held up surprisingly well and remains an invaluable reference for those who need to understand which problems are NP-complete.
Why Talk About It Now?
On that Tuesday afternoon in 1971, when Cook presented his paper to ACM Symposium on the Theory of Computing attendees at the Stouffer's Somerset Inn in Shaker Heights, OH, he proved that Satisfiability is NP-complete and Tautology is NP-hard [10]. As he put it: "The theorems suggest that Tautology is a good candidate for an interesting set not in [P], and I feel it is worth spending considerable effort trying to prove this conjecture. Such a proof would represent a major breakthrough in complexity theory."
Dating a mathematical concept is almost always a challenge, and there are many other possible times where we can start the P vs. NP clock. The basic notions of algorithms and proofs date back to at least the ancient Greeks, but as far as we know they never considered a general problem such as P vs. NP. The basics of efficient computation and nondeterminism were developed in the 1960s. The P vs. NP question was formulated earlier than that, we just didn’t know it.
Kurt Gödel wrote a letter [17] in 1956 to John von Neumann that essentially described the P vs. NP problem. It is not clear if von Neumann, then suffering from cancer, ever read the letter, which was not discovered and widely distributed until 1988. The P vs. NP question didn't really become a phenomenon until Richard Karp published his 1972 paper [23] showing that a large number of well-known combinatorial problems were NP-complete, including Clique, 3-Coloring, and Traveling Salesman. In 1973, Leonid Levin, then in Russia, published a paper based on his independent 1971 research that defined the P vs. NP problem [27]. By the time Levin's paper reached the west, P vs. NP had already established itself as computing's most important question.
Optiland
Russell Impagliazzo, in a classic 1995 paper [20], described five worlds with varying degrees of possibilities for the P vs. NP problem:
Algorithmica: P = NP or something “morally equivalent,” such as fast probabilistic algorithms for NP.
Heuristica: NP problems are hard in the worst case but easy on average.
Pessiland: We can easily create hard NP problems, but not hard NP problems where we know the solution. This is the worst of all possible worlds, since we can neither solve hard problems on average nor do we get any apparent cryptographic advantage from the difficulty of these problems.
Minicrypt: Cryptographic one-way functions exist, but we do not have public-key cryptography.
Cryptomania: Public-key cryptography is possible—that is, two parties can exchange secret messages over open channels.
These worlds are purposely not formally defined but rather suggest the unknown possibilities given our knowledge of the P vs. NP problem. The general belief, though not universal, is that we live in Cryptomania.
Impagliazzo draws a "you can't have it all" lesson from P vs. NP theory. You can either solve hard NP problems or have cryptography, but you can't have both (though you could have neither). Perhaps, though, we are heading to a de facto Optiland. Advances in machine learning and optimization in both software and hardware are allowing us to make progress on problems long thought difficult or impossible—from voice recognition to protein folding—and yet, for the most part, our cryptographic protocols remain secure.
In a section called "What if P = NP?" from the 2009 survey [13], I wrote, "Learning becomes easy by using the principle of Occam's razor—we simply find the smallest program consistent with the data. Near-perfect vision recognition, language comprehension and translation, and all other learning tasks become trivial. We will also have much better predictions of weather and earthquakes and other natural phenomenon."
Today, you can use face-scanning to unlock your smartphone, talk to the device to ask it a question and often get a reasonable answer, or have your question translated into a different language. Your phone receives alerts about weather and other climatic events, with far better predictions than we would have thought possible just a dozen years ago. Meanwhile, cryptography has gone mostly unscathed beyond brute-force-like attacks on small key lengths. Now let’s look at how recent advances in computing, optimization, and learning are leading us to Optiland.
Solving Hard Problems
In 2016, Bill Cook (no relation to Steve) and his colleagues decided to tackle the following challenge [9]: How do you visit every pub in the U.K. in the shortest distance possible? They made a list of 24,727 pubs and created the ultimate pub crawl, a walking trip that spanned 45,495,239 meters—approximately 28,269 miles—a bit longer than walking around the earth.
Cook had cheated a bit, eliminating some pubs to keep the size reasonable. After some press coverage in the U.K. [7], many complained about missing their favorite watering holes. Cook and company went back to work, building up the list to 49,687 pubs. The new tour length would be 63,739,687 meters, or about 39,606 miles (see Figure). One needs just a 40% longer walk to reach more than twice as many pubs. The pub crawl is just a traveling salesman problem, one of the most famous of the NP-complete problems. The number of possible tours through all the 49,687 pubs is roughly three followed by 211,761 zeros. Of course, Cook's computers don't search the whole set of tours but use a variety of optimization techniques. Even more impressive, the tour comes with a proof of optimality based on linear program duality.
Figure. Shortest route through 49,687 U.K. pubs. Used by permission. (http://www.math.uwaterloo.ca/tsp/uk).
Taking on a larger task, Cook and company aimed to find the shortest tour through more than two million stars where distances could be computed. Their tour of 28,884,456 parsecs is within a mere 683 parsecs of optimal.
Beyond Traveling Salesman, we have seen major advances in solving satisfiability and mixed-integer programming—a variation of linear programming where some, but not necessarily all, of the variables are required to be integers. Using highly refined heuristics, fast processors, specialized hardware, and distributed cloud computing, one can often solve problems that arise in practice with tens of thousands of variables and hundreds of thousands or even millions of constraints.
Faced with an NP problem to solve, one can often formulate the problem as a satisfiability or mixed-integer programming question and throw it at one of the top solvers. These tools have been used successfully in verification and automated testing of circuits and code, computational biology, system security, product and packaging design, financial trading, and even to solve some difficult mathematical problems.
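As a toy illustration of that reduce-and-solve workflow, here is a sketch of mine (on a hypothetical five-edge graph) in which the constraints of 3-Coloring are checked by exhaustive search; production solvers replace this enumeration with conflict-driven and branch-and-cut search that handles encodings with millions of constraints.

```python
from itertools import product

# Hypothetical toy graph: vertices 0..3, edges as pairs.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]

def three_color(n_vertices, edges):
    """Brute-force the 3-coloring constraints: adjacent vertices must differ.
    Real SAT/MIP solvers attack the same constraints with smart pruning
    instead of this 3**n enumeration."""
    for coloring in product(range(3), repeat=n_vertices):
        if all(coloring[u] != coloring[v] for u, v in edges):
            return coloring
    return None  # no valid coloring exists

print(three_color(4, edges))  # (0, 1, 2, 0)
```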
Data Science and Machine Learning
Any reader of Communications and most everyone else cannot dismiss the transformative effects of machine learning, particularly learning by neural nets. The notion of modeling computation by artificial neurons—basically objects that compute weighted threshold functions—goes back to the work of Warren McCulloch and Walter Pitts in the 1940s [28]. In the 1990s, Yoshua Bengio, Geoffrey Hinton, and Yann LeCun [26] developed the basic algorithms that would power the learning of neural nets, a circuit of these neurons several layers deep. Faster and more distributed computing, specialized hardware, and enormous amounts of data helped propel machine learning to the point where it can accomplish many human-oriented tasks surprisingly well. ACM recognized the incredible impact the work of Bengio, Hinton, and LeCun has had on our society with the 2018 A.M. Turing Award.
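A weighted threshold unit is easy to state in code. The following sketch (mine, with hand-picked weights) shows a single McCulloch-Pitts-style neuron; modern deep nets wire millions of such units into layered circuits and learn only the weights on the wires.

```python
def neuron(weights, bias, inputs):
    """McCulloch-Pitts-style unit: fire iff the weighted sum clears a threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# With these weights the unit computes the logical AND of two binary inputs.
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, neuron([1.0, 1.0], -1.5, x))  # fires only on (1, 1)
```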
How does machine learning mesh with P vs. NP? In this section, when we talk about P = NP, it will be in the very strong sense of all problems in NP having efficient algorithms in practice. Occam’s razor states that “entities should not be multiplied without necessity” or, informally, that the simplest explanation is likely to be the right one. If P = NP, we can use this idea to create a strong learning algorithm: Find the smallest circuit consistent with the data. Even though we likely don’t have P = NP, machine learning can approximate this approach, which led to its surprising power. Nevertheless, the neural net is unlikely to be the “smallest” possible circuit. A neural net trained by today’s deep-learning techniques is typically fixed in structure with parameters that are only on the weights on the wires. To allow sufficient expressibility, there are often millions or more such weights. This limits the power of neural nets. They can do very well with face recognition, but they can’t learn to multiply based on examples.
Universal distribution and GPT-3. Consider distributions on the infinite set of binary strings. You can't have a uniform distribution, but you could create distributions where every string of the same length has the same probability. However, some strings are simply more important than others. For example, the first million digits of π have more meaning than just a million digits generated at random. You might want to put a higher probability on the more meaningful strings. There are many ways to do this, but in fact there is a universal distribution that gets close to any other computable distribution (see Kirchherr et al. [25]). This distribution has great connections to learning—for example, any algorithm that learns with small error to this distribution will learn for all computable distributions. The catch is that this distribution is horribly non-computable even if P = NP. If P = NP, we still get something useful by creating an efficiently computable distribution universal to other efficiently computable distributions.
What do we get out of machine learning? Consider the Generative Pre-trained Transformer (GPT), particularly GPT-3 released in 2020 [5]. GPT-3 has 175 billion parameters trained on 410 billion tokens taken from as much of the written corpus as could be made available. It can answer questions, write essays given a prompt, and even do some coding. Though it has a long way to go, GPT-3 has drawn rave reviews for its ability to generate material that looks human-produced. One can view GPT-3 in some sense like a distribution, where we can look at the probability of outputs generated by the algorithm, a weak version of a universal distribution. If we restrict a universal distribution to have a given prefix, that provides a random sample prompted by that prefix. GPT-3 can also build on such prompts, handling a surprisingly wide range of domain knowledge without further training. As this line of research progresses, we will get closer to a universal metric from which one can perform built-in learning: Generate a random example from a given context.
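As a toy stand-in for "generate a random example from a given context", here is a sketch of mine: a bigram model built from a few words of hypothetical text, sampling continuations of a prompt. GPT-3 does something conceptually similar, with 175 billion learned parameters in place of a lookup table of counts.

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat ran"  # hypothetical tiny corpus

# Count which word follows which: a crude stand-in for a learned distribution.
follows = defaultdict(list)
words = corpus.split()
for a, b in zip(words, words[1:]):
    follows[a].append(b)

def sample(prompt, length=5):
    """Extend a prompt by repeatedly sampling a plausible next word."""
    out = prompt.split()
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(sample("the"))  # e.g., "the cat sat on the mat"
```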
Science and medicine. In science, we have made advances by doing large-scale simulations to understand, for example, nuclear fusion reactions. Researchers can then apply a form of the scientific method: Create a hypothesis for a physical system; use that model to make a prediction; and then, instead of attempting to create an actual reaction, use an experimental simulation to test that prediction. If the answer is not as predicted, then change or throw away the model and start again.
After we have a strong model, we can then make that expensive test in a physical reactor. If P = NP, we could, as mentioned above, use an Occam’s Razor approach to create hypotheses—find the smallest circuits that are consistent with the data. Machine-learning techniques can work along these lines, automating the hypothesis creation. Given data—whether generated by simulations, experiments, or sensors—machine learning can create models that match the data. We can use these models to make predictions and then test those predictions as before.
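Here is a minimal sketch of that Occam-style search (my illustration, with hypothetical observations): enumerate boolean formulas from smallest to largest and return the first one consistent with the data. If P = NP, searches of this shape would scale far beyond toy sizes.

```python
from itertools import product

# Hypothetical observations of an unknown boolean law: (x, y) -> output.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

def smallest_formula(data, max_rounds=3):
    """Grow the space of formulas round by round, preferring shorter ones,
    and return the first formula consistent with every observation.
    eval() is fine here because we only evaluate our own generated strings."""
    exprs = ["x", "y"]
    for _ in range(max_rounds):
        exprs += [f"(not {e})" for e in exprs] + \
                 [f"({a} and {b})" for a in exprs for b in exprs] + \
                 [f"({a} or {b})" for a in exprs for b in exprs]
        exprs = sorted(set(exprs), key=len)  # smallest hypotheses first
        for e in exprs:
            if all(bool(eval(e, {"x": x, "y": y})) == bool(out)
                   for (x, y), out in data):
                return e

print(smallest_formula(data))  # e.g., "(x or y)"
```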
While these techniques allow us to find hypotheses and models that might have been missed, they can also lead to false positives. We generally accept a hypothesis with a 95% confidence level, meaning that one out of 20 bad hypotheses might pass. Machine-learning and data science tools can generate so many hypotheses that we run the risk of publishing results not grounded in truth. Medical researchers, particularly those trying to tackle diseases such as cancer, often hit upon hard algorithmic barriers. Biological systems are incredibly complex structures. We know that our DNA forms a code that describes how our bodies are formed and the functions they perform, but we have only a very limited understanding of how these processes work.
On November 30, 2020, Google's DeepMind announced AlphaFold, a new algorithm that predicts the shape of a protein based on its amino acid sequence [22]. AlphaFold's predictions nearly reach the accuracy of experimentally building the amino acid sequence and measuring the shape of the protein that forms. There is some controversy as to whether DeepMind has actually "solved" protein folding and it is far too early to gauge its impact, but in the long run this could give us a new digital tool to study proteins, understand how they interact, and learn how to design them to fight disease.
Beyond P vs. NP: chess and go. NP is like solving a puzzle. Sudoku, on an arbitrarily sized board, is NP-complete to solve from a given initial setting of numbers in some of the squares. But what about games with two players who take alternate turns, such as chess and go, when we ask who wins from a given initial setting of the pieces? Even if we have P = NP, it wouldn't necessarily give us a perfect chess program. You would have to ask if there is a move for white such that for every move of black, there is a move for white such that for every move of black … white wins. You just can't do all those alternations of white and black with P = NP alone. Games like these tend to be what is called PSPACE-hard, hard for computation that uses a reasonable amount of memory without any limit on time. Chess and go could even be harder depending on the precise formulation of the rules (see Demaine and Hearn [11]).
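The flavor of those alternating quantifiers fits in a few lines. In this sketch of mine, a toy take-1-or-2 stones game, a position is winning exactly when there exists a move for which every reply leaves the opponent losing; the any/not nesting is the exists/forall alternation in miniature.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def first_player_wins(stones):
    """Toy game: players alternately take 1 or 2 stones; taking the last wins.
    'There exists a move after which the opponent loses' becomes any(not ...)."""
    return any(not first_player_wins(stones - k)
               for k in (1, 2) if k <= stones)

print([first_player_wins(n) for n in range(1, 8)])
# [True, True, False, True, True, False, True] -- multiples of 3 are losses
```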
This doesn't mean you can't get a good chess program if P = NP. You could find an efficient computer program of one size that beats all efficient programs of slightly smaller sizes, if that's possible. Meanwhile, even without P = NP, computers have gotten very strong at chess and go. In 1997, IBM's Deep Blue defeated Gary Kasparov, chess world champion at the time, but go programs struggled against even strong amateurs. Machine learning has made dramatic improvements to computer game playing. While there is a lengthy history, let me jump to AlphaZero, developed in 2017 by Google's DeepMind [35].
AlphaZero uses a technique known as Monte Carlo tree search (MCTS) that randomly makes moves for both players to determine the best course of action. AlphaZero uses deep learning to predict the best distributions for the game positions to optimize the chances to win using MCTS. While AlphaZero is not the first program to use MCTS, it does not have any built-in strategy or access to a previous game database. AlphaZero assumes nothing more than the rules of the game. This allows AlphaZero to excel at both chess and go, two very different games that share little other than alternating moves and a fixed-size board. DeepMind recently went even further with MuZero [33], which doesn't even get the full rules, just some representation of board position, a list of legal moves, and whether the position is a win, lose, or draw. Now we've come to the point that pure machine learning easily beats any human or other algorithm in chess or go. Human intervention only gets in the way. For games such as chess and go, machine learning can achieve success where P = NP wouldn't be enough.
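To give a feel for the rollout idea, here is a sketch (mine, reusing the toy stones game above, and "flat" Monte Carlo rather than full MCTS): score each legal move by many random playouts and pick the best. Real MCTS additionally grows a search tree and balances exploration against exploitation, and AlphaZero replaces random play with a learned network.

```python
import random

def random_playout(stones, my_turn):
    """Play random legal moves to the end; True if 'I' take the last stone."""
    while stones > 0:
        stones -= random.choice([k for k in (1, 2) if k <= stones])
        if stones == 0:
            return my_turn  # whoever just moved took the last stone
        my_turn = not my_turn
    return not my_turn  # reached only when called with stones == 0

def monte_carlo_move(stones, playouts=2000):
    """Flat Monte Carlo: estimate each move's win rate by random rollouts."""
    legal = [k for k in (1, 2) if k <= stones]
    def win_rate(k):
        return sum(random_playout(stones - k, my_turn=False)
                   for _ in range(playouts)) / playouts
    return max(legal, key=win_rate)

print(monte_carlo_move(4))  # usually 1, leaving the opponent a losing pile of 3
```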
Explainable AI. Many machine-learning algorithms seem to work very well but we don’t know why. If you look at a neural net trained for voice recognition, it’s often very hard to understand why it makes the choices it makes. Why should we care? Here are a few of several reasons.
Trust: How do we know that the neural net is acting correctly? Beyond checking input/output pairs we can’t do any other analysis. Different applications have different levels of trust. It’s okay if Netflix makes a bad movie recommendation, but less so if a self-driving car recommends a wrong turn.
Fairness: Examples abound in which algorithms trained on data learn the intended and unintended biases in that data (see O'Neil [30]). If you don't understand the program, how do you figure out the biases?
Security: If you use machine learning to monitor security systems, you won’t know what exploits still exist, especially if your adversary is being adaptive. If you can understand the code, you could spot and fix security leaks. Of course, if adversaries have the code, they might find exploits.
Cause and effect: Right now, you can, at best, check that a machine-learning algorithm only correlates with the kind of output you desire. Understanding the code might help us understand the causality in the data, leading to better science and medicine.
Would we get a better scenario if P = NP? If you had a quick algorithm for NP-complete problems, you could use it to find the smallest possible circuit for matching or Traveling Salesman, but you would not know why that circuit works. On the other hand, the reason you might want an explainable algorithm is so you can understand its properties, but we could use P = NP to derive those properties directly. Whole conferences have cropped up studying explainable AI, such as the ACM Conference on Fairness, Accountability, and Transparency.
Limits of machine learning. While machine learning has shown many surprising results in the last decade, these systems are far from perfect and, in most applications, can still be bested by humans. We will continue to improve machine-learning capability through new and optimized algorithms, data collection, and specialized hardware. Machine learning does seem to have its limits. As we’ve seen above, machine learning will give us a taste of P = NP, but it will never substitute for it. Machine learning makes little progress on breaking cryptography, which we will see later in the article.
Machine learning seems to fail at learning simple arithmetic—for example, summing up a large collection of numbers or multiplying large numbers. One could imagine combining machine learning with symbolic mathematical tools. While we've seen some impressive advances in theorem provers [19], we sit a long way from my dream task of taking one of my research papers, with its informal proofs, and having an AI system fill in the details and verify the proof.
Again, P = NP would make these tasks easy or at least tractable. Machine learning may not do well when faced with tasks that are not from the distribution in which it was trained. That could be low-probability edge cases, such as face recognition from a race not well represented in the training data, or even an adversarial attempt to force a different output by making a small change in the input—for example, changing a few pixels of a stop sign to force an algorithm to interpret it as a speed limit sign [12]. Deep neural-net algorithms can have millions of parameters, so they may not generalize well off distribution. If P = NP, one can produce minimum-sized models that would hopefully do a better job of generalizing, but without the experiment we can't perform, we will never know.
As impressive as machine learning is, we have not achieved anything close to artificial general intelligence, a term that refers to something like true comprehension of a topic or to an artificial system that achieves true consciousness or self-awareness. Defining these terms can be tricky, controversial, or even impossible. Personally, I've never seen a formal definition of consciousness that captures my intuitive notion of the concept. I suspect we will never achieve artificial general intelligence in the strong sense, even if P = NP.
Cryptography
While we have seen much progress in attacking NP problems, cryptography in its many forms, including one-way functions, secure hashes, and public-key cryptography, seems to have survived intact. An efficient algorithm for NP, were it to exist, would break all cryptosystems save those that are information-theoretically safe, such as one-time pads and some based on quantum physics. We have seen many successful cybersecurity attacks, but usually they stem from bad implementations, weak random number generators, or human error, but rarely if ever from breaking the cryptography.
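The one-way flavor is easy to demonstrate with a standard hash (a sketch of mine using Python's hashlib; the 4-letter secret is hypothetical): computing the hash is instant, while inverting it, as far as anyone knows, requires something close to brute-force guessing.

```python
import hashlib
from itertools import product

# Forward direction is instant: a one-way function is easy to compute.
secret = b"clue"  # hypothetical 4-letter password
target = hashlib.sha256(secret).hexdigest()

def invert_by_search(target, length=4):
    """Inverting is another story: absent a structural break, little better
    than brute force is known -- here up to 26**4 guesses for 4 letters."""
    for chars in product(b"abcdefghijklmnopqrstuvwxyz", repeat=length):
        guess = bytes(chars)
        if hashlib.sha256(guess).hexdigest() == target:
            return guess

print(invert_by_search(target))  # b'clue', after up to ~457,000 hashes
```

Scale the secret up to realistic key lengths and the same search becomes astronomically expensive, which is exactly the asymmetry cryptography relies on.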
Most CPU chips now have AES built in, so once we've used public-key cryptography to set up a private key, we can send encrypted data as easily as plain text. Encryption powers blockchain and cryptocurrencies, meaning people trust cryptography enough to exchange money for bits. Michael Kearns and Leslie Valiant [24] showed in 1994 that learning the smallest circuit, or even learning the smallest bounded-layer neural net, could be used to factor numbers and break public-key cryptosystems. So far, machine-learning algorithms have not been successfully used to break cryptographic protocols, nor are they ever expected to be.
Why does encryption do so well when we've made progress on many other NP problems? In cryptography, we get to choose the problem, one specifically designed to be hard to compute and well tested by the community. Other NP problems generally come to us from applications or nature. They tend not to be the hardest cases and are more amenable to current technologies.
Quantum computing seems to threaten current public-key protocols that secure our Internet transactions. Shor's algorithm [34] can factor numbers and perform other related number-theory computations. This concern can be tempered in a few ways. Despite some impressive advances in quantum computing, we are still decades if not centuries away from developing quantum machines that can handle enough entangled bits to implement Shor's algorithm on a scale that can break today's codes. Also, researchers have made good progress toward developing public-key cryptosystems that appear resistant to quantum attacks [31]. We will dwell more on quantum computing later in this article.
Factoring is not known to be NP-complete, and it is certainly possible that a mathematical breakthrough could lead to efficient algorithms even if we don’t have large-scale quantum computers. Having multiple approaches to public-key systems may come in handy no matter your view of quantum’s future.
Complexity as Friction
What advantages can we get from computational hardness? Cryptography comes to mind. But perhaps the universe made computation difficult for a reason, not unlike friction. In the physical world, overcoming friction usually comes at the cost of energy, but we can’t walk without it. In the computational world, complexity can often slow progress, but if it didn’t exist, we could have many other problems. P = NP would allow us to, in many cases, eliminate this friction.
Recent advances in computing show us that eliminating friction can sometimes have negative consequences. For instance, no one can read our minds, only see the actions that we take. Economists have a term, “preference revelation,” which attempts to determine our desires based on our actions. For most of history, the lack of data and computing power made this at best a highly imprecise art.
Today, we’ve collected a considerable amount of information about people from their web searches, their photos and videos, the purchases they make, the places they visit (virtual and real), their social media activity, and much more. Moreover, machine learning can process this information and make eerily accurate predictions about people’s behavior. Computers often know more about us than we know about ourselves.
We have the technological capability to build glasses that would let the wearer learn the name, the interests and hobbies, and even the political persuasion of the person they are looking at. Complexity no longer affords us privacy. We need to preserve privacy with laws and corporate responsibility.
Computational friction can go beyond privacy. The U.S. government deregulated airline pricing in 1978, but finding the best price for a route required making phone calls to several airlines or working through a travel agent, who wasn't always incentivized to find the lowest price. Airlines worked on reputation, some for great service and others for lower prices. Today, we can easily find the cheapest airline flights, so airlines have put considerable effort into competing on this single dimension of price and have used computation to optimize pricing and fill their planes, at the expense of the whole flying experience.
Friction helped clamp down on cheating by students. Calculus questions I had to answer as a college student in the 1980s can now be tackled easily by Mathematica. For my introductory theory courses, I have trouble creating homework and exam questions whose answers and solutions cannot be found online. With GPT-3 and its successors, even answers to essay and coding questions can be automatically generated. How do we motivate students when GPT and the like can answer even their most complex questions?
Stock trading used to happen in big pits, where traders used hand signals to match prices. Now, trading algorithms automatically adjust to new pricing, occasionally leading to "flash crashes." Machine-learning techniques have led to decision-making systems for face recognition, for matching social media content to users, and for judicial sentencing, often at scale. These decision systems have done some good but have also led to significant challenges, such as amplifying biases and political polarization [30]. There are no easy answers here.
These are just a few of many such possible scenarios. Our goal, as computer scientists, is to make computation as efficient and simple as possible, but we must keep the costs of reducing friction on our minds.
The Power of Quantum Computers
As the limits of Moore’s law have become more apparent, computer researchers have looked toward non-traditional computation models to make the next breakthroughs, leading to large growth in the research and application of quantum computing. Major tech companies, such as Google, Microsoft, and IBM—not to mention a raft of startups—have thrown considerable resources at developing quantum computers. The U.S. has launched a National Quantum Initiative and other countries, notably China, have followed suit.
In 2019, Google announced [1] it used a quantum computer with 53 qubits to achieve "quantum supremacy," solving a computational task that current traditional computation cannot. While some have questioned this claim, we certainly sit at the precipice of a new era in quantum computing. Nevertheless, we remain far away from having the tens of thousands of quantum bits required to run Peter Shor's algorithm [34] to find prime factors of numbers that today's machines cannot factor. Often, quantum computing gets described by the number of states represented by the bits—for example, the 2^53 states of a 53-qubit machine. This might suggest that we could use quantum computing to solve NP-complete problems by creating enough states to, for instance, check all the potential cliques in a graph. Unfortunately, there are limits to how a quantum algorithm can manipulate these states, and all evidence suggests that quantum computers cannot solve NP-complete problems [3], beyond a quadratic improvement given by Grover's algorithm [18].
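To see what that quadratic improvement looks like, here is a toy state-vector simulation of Grover search in Python with NumPy. It is a classical simulation (so it needs memory exponential in the number of qubits), and the marked index is an arbitrary choice for the demo; the point is that about pi/4 * sqrt(N), roughly 25, iterations suffice to find one item among N = 1,024, where unstructured classical search would expect around 512 probes.

```python
import numpy as np

n = 10                      # qubits; the state vector holds N = 2**n amplitudes
N = 2**n
marked = 731                # arbitrary index of the single "solution"

state = np.full(N, 1 / np.sqrt(N))                   # uniform superposition
iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))   # ~25 for N = 1024

for _ in range(iterations):
    state[marked] *= -1                 # oracle: flip the marked amplitude's sign
    state = 2 * state.mean() - state    # diffusion: reflect amplitudes about the mean

print(iterations, state[marked] ** 2)   # 25 iterations, success probability ~0.999
```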
Complexity Updates
Since the 2009 survey, we have seen several major advances in our understanding of the power of efficient computation. While these results do not make significant progress toward resolving P vs. NP, they still show how it continues to inspire great research.
Graph isomorphism. Some NP problems resist characterization as either in P (efficiently solvable) or NP-complete (as hard as the Clique problem). The most famous, integer factoring, which we discussed previously, still has no known efficient algorithm. For another such problem, graph isomorphism, we have recently seen dramatic progress. Graph isomorphism asks whether two graphs are identical up to relabeling. Thinking in terms of Facebook, given two groups of 1,000 people, can we map names from one group onto the other in a way that preserves friendships?
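To pin the problem down, here is a hypothetical brute-force checker in Python. It tries every one of the n! relabelings, which is why nobody would run it on graphs of 1,000 vertices; the heuristics and the Babai result discussed next are about doing far better than this.

```python
from itertools import permutations

def isomorphic(g1: dict, g2: dict) -> bool:
    """Decide isomorphism by trying all relabelings -- n! time, toy sizes only."""
    if sorted(map(len, g1.values())) != sorted(map(len, g2.values())):
        return False                       # degree sequences must already match
    nodes = sorted(g1)
    for perm in permutations(sorted(g2)):
        relabel = dict(zip(nodes, perm))   # candidate renaming of g1's vertices
        if all(relabel[v] in g2[relabel[u]] for u in g1 for v in g1[u]):
            return True                    # every friendship is preserved
    return False

# Two differently labeled 4-cycles: isomorphic.
g1 = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
g2 = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
print(isomorphic(g1, g2))  # True
```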
Results related to interactive proofs in the 1980s offered strong evidence that graph isomorphism is not NP-complete [4], and even simple heuristics can generally solve such problems quickly in practice. Nevertheless, we still lack a polynomial-time algorithm for graph isomorphism that works in all instances. László Babai achieved a breakthrough result in 2016, presenting a quasipolynomial-time algorithm for graph isomorphism [2]. The problems in P run in polynomial time—that is, n^k for some constant k, where n is the size of the input, such as the number of people in each group. A quasipolynomial-time algorithm runs in time n^((log n)^k), a bit worse than polynomial time but considerably better than the exponential time (2^(n^ε)) that we expect NP-complete problems will need.
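A back-of-the-envelope comparison makes the gap vivid. With illustrative exponents chosen purely for the demo (k = 3 for polynomial, k = 1 in the quasipolynomial bound, ε = 1/2 for exponential), an input of a million items gives step counts of roughly 10^18, 10^120, and 10^301:

```python
from math import log2, log10

n = 10**6                          # say, a million people per group
log_poly  = 3 * log10(n)           # n^3         -> ~10^18 steps
log_quasi = log2(n) * log10(n)     # n^(log2 n)  -> ~10^120 steps
log_expo  = (n ** 0.5) * log10(2)  # 2^sqrt(n)   -> ~10^301 steps
print(f"poly: 10^{log_poly:.0f}, quasi: 10^{log_quasi:.0f}, expo: 10^{log_expo:.0f}")
```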
Babai’s proof is a tour-de-force masterpiece combining combinatorics and group theory. Although getting the algorithm to run in polynomial-time would require several new breakthroughs, Babai provides a major theoretical result, making dramatic progress on one of the most important problems between P and NP-complete.
Circuits. If NP does not have small circuits over a complete basis (AND, OR, NOT), then P ≠ NP. While there were significant circuit complexity results in the 1980s, none get close to showing P ≠ NP. The 2009 survey remarked that there had been no major results in circuit complexity in the 20 years prior. That lasted about one more year. In 1987, Razborov [32] and Smolensky [36] showed the impossibility of computing the majority function with constant-depth circuits of AND, OR, NOT, and Mod_p gates for a fixed prime p. We could prove little, though, for circuits with Mod_6 gates. Even showing that NEXP, an exponential-time version of NP, could not be computed by small, constant-depth circuits of AND, OR, NOT, and Mod_6 gates remained open for decades. Constant-depth circuits are believed to be computationally weak; the lack of results reflects the paltry progress we have had in showing the limits of computation models.
In 2010, Ryan Williams showed [39] that NEXP indeed doesn't have such small constant-depth circuits with Mod_6 or any other Mod gate. He created a new technique, applying satisfiability algorithms that do just slightly better than trying all assignments, and drawing on several complexity tools to achieve the lower bounds. Later, Williams and his student Cody Murray strengthened [29] the result to show that nondeterministic quasipolynomial time doesn't have small constant-depth circuits with Mod_m gates for any fixed m. Nevertheless, showing that NP does not have small circuits of arbitrary depth—which is what you would need to show P ≠ NP—remains far out of reach.
Complexity strikes back? In a section of the 2009 survey titled "A New Hope?" [13] we discussed a new geometric-complexity-theory (GCT) approach to attacking P vs. NP based on algebraic geometry and representation theory, developed by Ketan Mulmuley and Milind Sohoni. In short, Mulmuley and Sohoni sought to create high-dimension polygons capturing the power of a problem in an algebraic version of NP and show that they had different properties than any such polygon corresponding to an algebraic version of P. One of their conjectures considered the property that the polygons contained a certain representation-theoretic object. In 2016, Peter Bürgisser, Christian Ikenmeyer, and Greta Panova [6] showed that this approach cannot succeed.
While the Bürgisser-Ikenmeyer-Panova result deals a blow to the GCT approach to separating P vs. NP, it does not count it out. One could still potentially create polygons that differ based on the number of these representation-theoretic objects. Nevertheless, we shouldn’t expect the GCT approach to settle the P vs. NP problem anytime in the near future.
The Possibility of the Impossible
As we reflect on P vs. NP, we see the question having many different meanings. There is P vs. NP the mathematical question—formally defined, stubbornly open, and still with a million-dollar bounty on its head. There were times when we could see a way forward toward settling P vs. NP through tools of computability theory, circuits, proofs, and algebraic geometry. At the moment, we don’t have a strong way forward to solving the P vs. NP problem. In some sense, we are further from solving it than we ever were.
There are also the NP problems we just want or need to solve. In the classic 1979 text, Computers and Intractability: A Guide to the Theory of NP-Completeness [16], Garey and Johnson give an example of a hapless employee asked to solve an NP-complete optimization problem. Ultimately, the employee goes to the boss and says, "I can't find an efficient algorithm, but neither can all these famous people," indicating that the boss shouldn't fire the employee since no other hire could solve the problem.
In those early days of P vs. NP, we saw NP-completeness as a barrier—these were problems that we just couldn’t solve. As computers and algorithms evolved, we found we could make progress on many NP problems through a combination of heuristics, approximation, and brute-force computing. In the Garey and Johnson story, if I were the boss, I might not fire the employee but advise trying mixed-integer programming, machine learning, or a brute-force search. We are well past the time that NP-complete means impossible. It just means there is likely no algorithm that will always work and scale.
In my 2013 book on P vs. NP [14], I have a chapter titled "A Beautiful World," where I imagine an idealized world in which a Czech mathematician proves P = NP, leading to a very efficient algorithm for all NP problems. While we do not and likely will not ever live in this ideal world—with medical advances, virtual worlds indistinguishable from reality, and learning algorithms that generate new works of art—the wonderful (and not so wonderful) consequences of P = NP no longer seem out of reach, but rather an eventual consequence of our further advances in computing.
We are truly on our way to nearly completely reversing the meaning of the P vs. NP problem. Instead of representing a barrier, think of P vs. NP as opening doors, presenting us with new directions, and showing us the possibility of the impossible.
The Most Interesting Machine in the World
By Timothy Ferris, Alta Online, May 18, 2020 (https://www.altaonline.com/dispatches/a7178/quantum-computing-timothy-ferris/)

Quantum computing is wild and weird and full of surprises. Tech companies are betting they can harness its potential to replicate nature itself.

[Photo: One of the machines used by Google to attain quantum supremacy—the ability to solve problems no conventional computer can handle.]
Quantum computing—magnificent in conception though embryonic in performance—is being touted as the next great information-technology revolution.
Enthusiasts are predicting that quantum machines will solve problems beyond the reach of conventional computers, transforming everything from medical research to the concepts of space and time.
Meanwhile, the actual quantum computers being tested in university and corporate laboratories are mostly exotic divas that run at temperatures colder than intergalactic space and crash in milliseconds if intruded on by the outside world.
“I worry a lot about the hype,” John Preskill told me recently as we chatted in his office in the gleaming, glass-shrouded building at the California Institute of Technology in Pasadena where his Institute for Quantum Information does its weird work. A long-sighted physicist in the tradition of Caltech’s Richard Feynman, Preskill is a leading advocate of quantum computing. But even he pooh-poohs the idea that quantum computers will soon replace our laptops.
“Everybody believes it, but nobody can prove it,” he said. “Changing everything in 10 years is not realistic.”
Such reservations haven’t kept governments and the private sector from betting that visionaries like Preskill—and like China’s Pan Jian-Wei, known in his homeland as the “father of quantum”—will succeed. The Chinese government is reportedly investing $11 billion in developing quantum computers and quantum-ready networks. The U.S. government and the European Union are in for more than a billion dollars each. IBM has put a rudimentary quantum computer online, complete with tutorials on how to frame questions it can understand. (Sample instruction: “Apply a Hadamard gate to q[0] by dragging and dropping the H gate onto the q[0] line.”) Amazon’s cloud-computing services now include access to quantum computers operated by IonQ, D-Wave Systems, and the Berkeley chipmaker Rigetti Computing. Google claims to have attained “quantum supremacy,” a term Preskill coined for the ability to solve problems no conventional computer can handle.
Google’s facility sits inconspicuously in an aging industrial park near the Santa Barbara airport. It’s identified only by a bumper sticker on the glass front door. Inside, I was shown five quantum computers, all humming away. Each was housed in a giant thermos hung on chains to minimize vibrations from the ground. Dozens of reedy silver coaxial cables fed into each computer, conveying microwave pulses through its quantum chip and back to the dozens of scientists hunched over display terminals in the next room. Google research scientist Erik Lucero told me that the team’s goal is to make quantum computers practical, then “give them to the world.”
[Photo: Google research scientist Erik Lucero with a quantum computer's vacuum feedthrough, which holds coaxial cables used to carry microwave pulses to and from its processors.]
The world could use them.
Ordinary computer chips are approaching their theoretical limits. They’ve been getting smaller and faster for decades, but their millions of tiny transistors cannot be shrunk much more without running into interference from—ironically enough—the quantum fluctuations that pervade the universe on submicroscopic scales.
Conventional computers also raise environmental concerns. The global information-technology sector, growing by 3 percent a year, already spews as much greenhouse gas emissions as the airlines do. Supercomputers that gobble up more electricity than 10,000 homes are starting to look as antiquated as steam locomotives, considering that a quantum chip about the size of a postage stamp could, in theory, do more in seconds than a supercomputer could accomplish in a thousand years. Quantum’s greener.
Long-term prospects aside, there’s a hardball motive for investing in quantum computing right now: if you don’t, somebody else may get there first.
Consider encryption. Today’s commercial and military encryption systems were designed to foil conventional—not quantum—computers. In the popular public-key encryption system, each financial transaction is identified by a “public” number, generated by multiplying two primes. Cracking the code requires determining which two prime numbers were multiplied, a task that would take a conventional computer billions of years to accomplish.
Such cryptography systems, immune to brute-force decoding because doing so would take too long, seemed pretty secure until Peter Shor came along.
Shor, a graduate of Marin County’s Tamalpais High School and Caltech who went on to win the Gödel Prize in theoretical computer science, demonstrated in 1994 that a proper quantum computer could break public-key encryptions in a matter of seconds. As a recent National Academy of Sciences report rather dryly put it, Shor’s algorithm sparked “strong commercial interest in deploying post-quantum cryptography well before such a quantum computer has been built.”
The challenge is starkly clear. Build a fully functional quantum computer first, and you might crack the other side’s codes before they can crack yours. Miss out, and you’re toast.
The code-busting potential of quantum computing has not been lost on the Chinese government, which last November passed a law threatening to “punish” any private corporations employing ciphers the authorities can’t break. Much of the money China has earmarked for quantum computer research is said to be going toward deploying computer networks designed to resist quantum intrusion. Chinese researchers are experimenting with quantum-encoding techniques to create messages that cannot be eavesdropped on without the recipient seeing evidence of it. In one such test, on September 29, 2017, Pan Jian-Wei and his colleagues dispatched a quantum-encrypted key from an orbiting satellite to Vienna and Beijing.
QUEST FOR QUBITS
Conventional digital computers manipulate binary digits, or bits—the zeros and ones that, as Alan Turing proved in 1936, can in principle replicate anything in the universe. Visions of a “universal” computer, which a century earlier had so enchanted the mathematician Charles Babbage that he came to be regarded as a raving crank, grew into today’s digital world with its five billion people using mobile phones.
Quantum computers, too, use bits to communicate with the outside world. But inside their quantum world, they employ what are called quantum bits, or qubits.
A single qubit, such as an isolated atom or electron, can generate only a single, on-or-off, zero-or-one state—just like each transistor on a conventional chip. The magic of qubits resides in their ability to be combined—"entangled," in the jargon—with one another, so that many qubits start working together. Entangled qubits scale up exponentially: a 4-qubit quantum computer has not 4 but 16 times the power of a 1-qubit machine. A reliable 300-qubit quantum-computing chip could outperform a conventional computer the size of the observable universe.
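A minimal NumPy sketch of both points: the state of n qubits is a vector of 2^n amplitudes (so each added qubit doubles what a classical machine must store), and a Hadamard gate followed by a CNOT entangles two qubits into a Bell pair whose two halves no longer have separate descriptions.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate on one qubit
CNOT = np.array([[1, 0, 0, 0],                 # flips the second qubit
                 [0, 1, 0, 0],                 # when the first qubit is 1
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=float)    # two qubits, both |0>
state = np.kron(H, np.eye(2)) @ state          # superpose the first qubit
state = CNOT @ state                           # entangle them: a Bell pair
print(state)                                   # [0.707 0 0 0.707]: |00> + |11>

for n in (1, 4, 53, 300):                      # exponential growth of the state
    print(n, "qubits ->", 2**n, "amplitudes")
```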
The current state of the art is somewhere between IBM’s 50-qubit Q System One—a black-lacquer showpiece encased in a nine-foot borosilicate glass cube—and a 72-qubit machine being tested by Google. The Google machine said to have attained quantum supremacy employs a 53-qubit chip. (It was built to run 54 qubits but one never worked, so the researchers went with what they had.)
Most such machines are what Preskill calls noisy intermediate-scale quantum systems, or NISQs. They’re “noisy,” he notes, in that researchers “have imperfect control” over their qubits. They’re “intermediate” because properly controlling their 50 or so qubits would produce more power than any existing supercomputer but still fall short of quantum computing’s potential. Until the noise can be significantly reduced, Preskill predicts, “quantum computers with 50 to 100 qubits may be able to perform tasks which surpass the capabilities of today’s classical digital computers but…will not change the world right away.”
NISQ qubits are typically made in superconducting circuits, each a tiny oval racecourse interrupted by a single barrier called a Josephson junction. To build such a NISQ, pack a bunch of Josephson junctions close together, to encourage them to entangle, and chill them to nearly absolute zero in your laboratory thermos. Electrons will circle each racecourse ceaselessly, going in both directions at once, quantum-leaping through the barriers to create a single entity with an enormous calculating potential.
Once that’s happening, hit your supercooled chip with a shaped microwave pulse. The pulse excites the quantum system, which responds by exploring its vast internal space of possible futures, canceling out those that exclude one another and delivering the result as an output pulse. Repeat the process, sifting out noise, until the computation is complete.
In a typical quantum computer, such Q-and-A events can take place a trillion times a second.
Entanglement is fragile. Anything from heat to cosmic rays to an overzealous input pulse can wreck it. But it’s so promising that it’s been called “a physical resource, like energy,” and its exploitation an industry.
Preskill characterizes his research as exploring “the entanglement frontier.”
WEIRD QUEST
The fact that much more goes on inside a quantum system than can ever be detected was established in the mid-1920s by the physicist Werner Heisenberg. Dubbed “uncertainty,” it was long regarded as a limitation on human knowledge. The uncertainty principle means, for instance, that the more one learns about a quantum particle’s velocity, the less can be known about its location. This is the basis of the joke in which Heisenberg, pulled over for speeding by a cop who tells him, “You were going 90 miles an hour,” replies, “Thanks a lot. Now I have no idea where I am.”
But by the 1980s, as personal computers were becoming commonplace, scientists started to think about the other side of the Heisenberg coin. They speculated that the vast internal states of quantum systems might be put to work for computing. It wouldn’t matter that a quantum chip’s internal deliberations cannot be observed; what mattered was that they might deliver accurate results to the outer world. Since quanta are how nature works, the answers would be coming, so to speak, from the horse’s mouth. Suddenly, Heisenberg’s quantum uncertainty began to look less like a limitation than a resource.
Richard Feynman had started exploring the prospect of quantum computing decades earlier. “When our computers get faster and faster and more and more elaborate,” he predicted in 1959, “we will have to make them smaller and smaller. But there is plenty of room to make them smaller.”
Preskill, Caltech’s Richard P. Feynman Professor of Theoretical Physics, has something of Feynman’s sense of humor—responding to a Twitter poll, he said he became a scientist because “I don’t mind being confused most of the time”—and something of his showmanship. Preskill kicked off a black-tie celebration of Feynman’s legacy a few years back by singing an ode to quantum computing that he’d written to the tune of South Pacific’s “Some Enchanted Evening”:
Quantum’s inviting
Just as Feynman knew.
The future’s exciting
If we see it through!
Once we have dreamt it
We can make it so.
Once we have dreamt it
We can make it so!
Preskill readily rattles off potential practical benefits of quantum computing—from more efficient solar cells to quantum-entangled space telescopes orbiting the sun—but a scientist of his stature doesn’t devote decades to a subject just to stimulate spin-offs. Preskill wants to use quantum computers to simulate nature itself, investigating realms of reality beyond the reach of observation and experiment.
[Photo: John Preskill, a quantum-computing expert at the California Institute of Technology, cautions against the hype surrounding this area of research.]
PROBING SPACE AND TIME
Quantum physics, discovered by Max Planck in 1900 and largely defined by 1930, revealed that the fundamental building blocks of nature are not particles or waves but quanta. (Quanta are the irreducible packets of information that can be extracted from any process.)
Generations of theoretical work and laboratory experiments have confirmed the validity of the quantum approach. As the Caltech physicist Sean Carroll writes, “nature is quantum from the start. Quantum mechanics isn’t just an approximation of the truth: It is the truth.”
There is, however, a conspicuous gap in quantum theory: gravity, the force that gathered together the incoherent masses emerging from the big bang to make galaxies, stars, and the planet we live on. Exquisitely accurate quantum theories account for the behavior of the other three fundamental forces—electromagnetism and the strong and weak nuclear forces—but gravity is much too weak to play a significant role in most laboratory experiments. Using existing technology to probe quantum gravity would require constructing a particle collider the size of the solar system. Experiments conducted at the edge of a black hole might be fruitful, but the nearest black hole is 3,000 light-years from Earth.
“Quantum gravity is hard,” Preskill notes, “because you can’t do experiments.”
It might be possible, though, to use quantum computers to simulate how gravity works. Einstein having shown that gravity curves space, and that space and time are two aspects of the same phenomenon, quantum simulations could lay bare the nature of space and time.
Simulations using conventional computers are already widely successful. Formula One race drivers put in long hours on simulators before getting to the track, and commercial pilots use simulators to acquaint themselves with new models of aircraft. But a conventional computer can’t even simulate the behavior of a hundred atoms for a millionth of a second, much less that of quanta roiling at the edge of a black hole. The only known way to get quantum is to go quantum.
Quantum computers have the advantage of working in the same way as the systems they'd be simulating. As Feynman argued in 1982, "Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical."
“By golly it’s a wonderful problem,” he added, “because it doesn’t look so easy.”
The scientific potential of using quantum computers to simulate quantum gravity can be summarized by a single, rather astounding fact: any quantum system can simulate any other quantum system, provided it has at least as many qubits as the system being simulated.
As Preskill puts it, a quantum computer using enough entangled qubits could “simulate efficiently any physical process that occurs in nature.” He expects such simulations to reveal the deeper quantum process that generates space and time. “Space-time comes from the emergent properties of this underlying system,” he asserts.
What is understood can be controlled—although this isn’t always obvious at first. Einstein discovered that enormous amounts of energy are locked inside atoms, but he thought it unlikely that the energy could ever be extracted to generate power. Today, nuclear power generates roughly 14 percent of the world’s electricity. Electrons were once regarded as so utterly exotic that physicists at a 1911 annual dinner toasted, “To the electron! May it never be of any use to anybody!” Yet so many uses were found that the global electronics industry is currently valued at over a trillion dollars.
What, then, might an understanding of quantum space-time enable humans to do?
Preskill expects that it might become possible to “create new worlds.”
“I really believe this is going to happen,” he said in his Caltech office, leaning back and smiling pleasantly, as one might expect of a would-be creator of universes.
Timothy Ferris is an emeritus professor at UC Berkeley and the author of a dozen books, among them Seeing in the Dark and Coming of Age in the Milky Way. He produced the Golden Record, an artifact of human music and other sounds of Earth launched aboard the twin Voyager interstellar spacecraft now exiting our solar system.
QUANTUM SUPREMACY
STANDING ON THE BRINK OF CHANGE?
Computer science is no more about computers than astronomy is about telescopes.
—Edsger Dijkstra
In 1488, the artist Leonardo da Vinci sketched a flying machine. His 1505 Codex on the Flight of Birds advanced the idea that human flight should be possible. After all, birds can fly. Should it not be possible for some kind of device—perhaps with larger wings and some kind of superhuman energy to power it—to allow humans to fly, too?
But despite Leonardo’s profound insight and engineering genius, nobody could get the idea to work for some 400 years—not until 1903, when two bicycle mechanics constructed a flying machine.
Some 30 years later, Charles Lindbergh flew across the Atlantic. And some 30 years after that, thousands of people were crossing oceans and continents, sipping cocktails, and complaining about the movie.
Once the tipping point was reached, progress accelerated dramatically. Yet in the 1890s—a few years before the Wright brothers—several inventors had come close to creating flying machines. They could glide. Balloons could float. But actual human flight was just out of reach.
Something similar is happening today in the field of quantum computing. It seems like it might be possible to radically improve on what can be computed. After all, quantum systems exist in nature. Should it not be possible for us to build some kind of machine to harness the quantum facts of nature—and thereby vastly transform what can be constructed and what can be calculated?
The question becomes this: Are we in the Leonardo era of quantum computing, fantasizing about a mere possibility, or are we—as in the 1890s—in the antechamber of the future, where, within a few more years, and with some clever engineering, a revolution will occur in our daily lives? | ||||
Computational Complexity: blog archive, January 2006 (https://blog.computationalcomplexity.org/2006/01/)
Computational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarch

GUEST BLOGGER: Bill Gasarch

This is my last day guest blogging, so I'll end where I began, with THREE points on LUDDITES.

I) Janos Simon corrected my history of Luddites, for which I thank him. If you are interested, go to HIS comment on MY post from Monday Jan 23 for a link to a very nice article.

II) My father and father-in-law offer an interesting contrast:

FATHER-IN-LAW (Engineering Major, career mostly in Business, now retired):
LUDDITE: Does not program his VCR. Not sure if he doesn't know how to or just doesn't want to. So he HAS to be home on Sunday to watch Desperate Housewives (a show I found distasteful- my father-in-law is hipper than I am).
NON-LUDDITE: Took a course on C at a local community college when he was 70. Pays all his bills online.

FATHER (English Major, High School English Teacher and Vice Principal, now retired):
LUDDITE: Got a computer recently and still can't get email or pay his bills online.
NON-LUDDITE: Uses his VCR to tape ALOT of shows. He needs it since he watches ALOT: West Wing, My Name is Earl, The Sopranos, Sex and the City when it was on (a show I find distasteful- my dad is hipper than I am), Six Feet Under, Deadwood, all four Law & Orders, all three CSIs, Without a Trace, and other stuff I can't recall. This from the man who, wisely, restricted me to no more than an hour of TV a night when I was a kid.

III) Stuart Kurtz emailed me some more questions for my Luddite quiz. I asked him if I could post them and he suggested asking for other inputs. No one replied, so here are his:

STUART BEGIN:

9) Do you write emails (or blog posts) in a) variable width fonts with formatting, b) variable width fonts without formatting, c) fixed width fonts, d) What's a blog?, e) What's email?, or f) What's writing?

10) Do you indicate emphasis by a) using an italic or slanted font, b) using a bold-faced font, c) metadiscourse, i.e., "I want to emphasize that...", d) ALL CAPS, or e) shouting and waving your arms?

11) Does your mouse have a) four buttons, b) three buttons, c) two buttons, d) one button, e) control characters are good enough for RMS, and they're good enough for me, or f) four feet and a tail?

12) What's your favorite programming language? a) Ruby or Python, b) Java, c) Lisp, d) C++, e) Awk, f) IBM-360 assembly language, g) C, h) Lisp, or i) graduate student. [I know Lisp occurs twice, but c and h are still different answers. Note that there's no point asking for Perl, as Perl programmers can only write, not read.]

STUART END.

bill g.

P.S. I am supposed to say "Now that I've guest blogged for a week I'm even more impressed with Lance getting a topic out every day." But this is NOT TRUE. I was SO IMPRESSED with Lance in the first place that I can't be "more impressed."
GUEST BLOGGER: Bill Gasarch

TOPIC: How much is what we do influenced by non-scientific criteria?

(BEFORE I START TODAY'S BLOG- A REQUEST: EMAIL ME OTHER LUDDITE QUESTIONS- I WILL POST THE BEST ONES ON FRIDAY.)

I) AN INCOMPLETE SUMMARY OF Thomas Kuhn's book The Structure of Scientific Revolutions: For long periods of time a field of science will agree on the basic terms and problems of the field and will all work within that worldview (also called a paradigm). This is called Normal Science. This is GOOD, since if people were working with different paradigms progress would be hard. BUT there comes a time when some problems just cannot be solved using the usual techniques. There will be an effort to jam such a problem and some approaches to it into the current paradigm, but eventually the old paradigm will fall and a new one will take its place. The new one will help to answer some old questions, and pose new ones that could not even have been asked in the old one. Newtonian physics vs. Einstein is the usual example, though there are others on a much less cosmic scale.

II) People after him have misconstrued his work as saying that science has NO objective truth, that it ALL depends on the paradigm. This is, of course, hogwash. More so when they claim that it's a tool by the elite to dominate the masses, or some such (look up SOKAL HOAX on Google for one view of this view).

III) But a fair question CAN be raised along these lines: How MUCH of what scientists do depends on political or personality or other factors, VERSUS how much is driven by objective scientific principles? A few examples:

a) What if, in response to Russell's paradox, the math world had essentially axiomatized what set theorists now call V=L (every object is constructible)? Then we would know LOTS more about L, we would KNOW that the Axiom of Choice is true, and we would know that the Continuum Hypothesis is true. We might know that there were these weird other models that are unnatural where CH is false, but we wouldn't care. (Some set theorists tell me this could never happen- that people would be interested in other models. They are wrong.)

b) What if, in response to the Banach-Tarski paradox, mathematicians had rejected some version of the Axiom of Choice? This would have been quite possible before AC began being used in so many places.

c) The people who believe in constructive methods only (e.g., Brouwer) are portrayed as cranky old men holding onto an old paradigm that no longer worked. But if they had won, then people like Hilbert would be viewed as crazy rebels who fortunately were never taken seriously. (This one I am less sure of- nonconstructive techniques are SO powerful that I think they may be inevitable.)

d) If computing devices had been invented either earlier or later than they were, it would have had a drastic effect on theory. While we think that P vs NP is a natural problem, it only came up once the technology was in place. Was it inevitable that it arise? Probably. Was it inevitable that it be considered important? Hard to say.

e) There is ALOT of work in Quantum Computing because (i) Peter Shor proved FACTORING is in Quantum P, hence giving the problem new interest, or (ii) there is (or actually was) lots of grant money in it. (Of course these two are linked.)

f) Do schools like MIT have too big an influence on what gets studied? (They have less influence now than they used to.)

MORE GENERALLY, if I had the time and the energy I would do research on the history/philosophy of math asking the question: HOW MUCH DO EXTERNAL FORCES AFFECT WHAT IS STUDIED? And I would do it WITHOUT an ax to grind.
GUEST BLOGGER: Bill Gasarch

(I will be guest blogging this week while Lance is on vacation.)

Are you a Luddite? The original Luddites were workers who, scared of lower wages via technology, destroyed factory machines. This was around 1811. Their leader was General Ned Ludd. (Not sure if General was an honorary title.) TODAY the term has come to mean someone who does not adapt to technology or does not like technology. If you are NOT one, you can use Google to find out more about them.

Are you a Luddite? I offer the following questions and let you score yourselves.

1) At a conference do you use a) PowerPoint with fancy animation and pictures off the web, b) PowerPoint with nice backgrounds but nothing much else, c) pdf files, d) physical slides made using LaTeX, e) physical slides made using magic markers and overlays, f) physical slides without overlays, g) chalk, or h) drawing diagrams in the sand with a twig?

2) Same as question 1 but for large classroom lectures (over 50), small classroom lectures (under 10), and seminars (8 people who actually know something).

3) For writing papers do you use a) LaTeX (or some other package), b) a typewriter (YOU HAVE A TYPEWRITER? MIGHT BE WORTH SOMETHING ON EBAY!), c) handwriting given to your secretary to type (YOU HAVE A SECRETARY? MIGHT BE WORTH SOMETHING ON EBAY!), or d) quill pen and inkwell on parchment?

4) When listening to talks do you a) take notes with an e-pen that automatically puts them online, b) take notes in an e-notebook, c) take notes in a p-notebook (that's paper), d) not take notes at all, or e) fall asleep?

5) When you applied to grad school did you a) check out the websites of the schools, b) ask your roommate who also majored in CS and Math, c) ask your roommate who majored in Political Science, d) apply to schools you heard were good, or e) apply to schools randomly (time-bounded Kolmogorov random)?

6) If you need a result that is already known do you a) go to Google, b) go to the library, c) go to your own file cabinet, or d) rederive the result by yourself?

7) Which of these might you most likely say? a) When is the next version coming out so I can update? b) I'll update in 2 years (and you do). c) I'll update in 2 years (but you don't). d) You can have my chalk when you pry it from my cold dead hands.

8) Do you play music on a) MP3s, b) CDs, c) LPs, d) 78s, or e) wax cylinders (WAX CYLINDERS! MIGHT BE WORTH SOMETHING ON EBAY!)?

bill g.

Postscript: Thanks to my colleague Jack Lutz for catching that I spelled Luddite wrong originally. I used him instead of a spell checker, and note that the error he found would not have been discovered with a spell checker.
I am on vacation next week and I've lined up Bill Gasarch as a guest blogger in my absence. But today we have a guest post from Kamal Jain. This is a long post but well worth reading through.
This post is prompted by recent developments and discussions on electronic publishing, which are themselves prompted by the book-scanning initiatives of Google and the Open Content Alliance. Although I am not talking about paper books being converted into electronic format, I like the idea of having books available in a searchable electronic format. And certainly this is a must-have feature for any newly written book.
Recently, I got two invitations to write for books. The first was to write a book on Network Coding. I felt that I was not the best person so I did not accept. If I had, then I would have insisted on a free electronic copy. Second, I got an invitation to co-write a chapter on Cost Sharing with Mohammad Mahdian for a book, Algorithmic Game Theory, edited by Noam Nisan, Tim Roughgarden, Eva Tardos, and Vijay Vazirani. I agreed to this because I felt that such a book is a great idea and I could make a positive contribution. My selfish motive was to spread knowledge of the subject to which I have contributed. And, I guess, that was also the expected motive of the other contributors. This I could say because the explicit incentive offered in the invitation to the contributors was that the editors (originally Eva and Vijay only) had made an excellent deal with a publisher, Springer Verlag. The deal they have is $40 for up to six hundred pages. I am not sure whether it is a paperback or hardcover. But that was not my focus anyway. My focus is the absence of any electronic publishing component in the deal. Because of that, I felt this is not such a good deal in today's electronic age. On one side we are talking about scanning paper books, starting electronic journals, and writing wikis and blogs; on the other, we do not even make a deal on electronic publishing of newly written books. I wrote an email back to the editors saying that I do not think the Springer deal is a good one. I was hoping to get back a response and start a discussion with them on this, which, IMO, was obligatory for them because I point blank disagreed with the incentive they explicitly offered. At this point I am assuming that there is no electronic publishing agreement with the publisher. This was the background.
Now, I realize that this is not something to discuss with the editors in private. This is an important issue which is likely to recur in other situations. So I requested this space from Lance so that I could discuss it with the whole community. Following are some of my random thoughts, and I would like to hear everybody's thoughts too, random or not :-) Please press the comment button and put your thoughts in writing so that Springer and other publishers will know what we want from them.
There are at least two kinds of books. The first kind is written by individual authors. The second kind is written collaboratively by the community, like the proposed Algorithmic Game Theory. Individual authors write books for various reasons and it is up to them what kind of deal they lock in with the publishers. A book written by a community has a predetermined goal, and that is to spread the knowledge of the subject. It is not up to one or two persons to lock in whatever deal they think is great. So the community must form unspoken guidelines to facilitate the negotiation between editors and publishers. These unspoken guidelines must include the minimum desires of the community. Such a set of guidelines would have resolved the prisoner's dilemma for me. I did not like the absence of an electronic publishing agreement. If I decline the invitation then the book still goes ahead without my contribution, and if I accept the invitation, which I did, then I know that my efforts are not optimally used. But if it were a common expectation that the editors negotiate an electronic publishing agreement, then I would know that I could reject the invitation, because other invitees would do the same, thereby insisting that the editors go back to the publisher and make an electronic publishing agreement. One would ask why a publisher would ever agree to electronic publishing. For information, Reinhard Diestel's book, Graph Theory, has a free searchable and hyperlinked electronic edition, and this book is published by Springer Verlag. Let us first discuss what Springer provides to us and what we provide to Springer. Then we should discuss whether we are getting the optimal deal.
Springer does the marketing which sells the book.
Springer provides the brand name which sells the book.
Springer provides the brand name which makes the line in our resume about the book a bit bolder.
Springer prints and binds the book, for which the buyer pays.
Springer gave peanut financial support ($2000) to pay students to draw pictures. This fund is for those contributors who do not have their own funds.
We give to Springer
Free content, and we transfer copyright so that they can legally publish the content. I am assuming there are no royalties involved in a community-written book.
Word of mouth marketing.
Use our own funds for other expenses.
Our university's or company's resources.
What are the possible deals we could have:
Status quo. Springer publishes the book and sells copies. It takes the copyright and does not provide a free electronic copy. In the future, if Springer wants, it makes more money from an electronic copy too.
Reinhard Diestel model. Provides free searchable and hyperlinked electronic edition. A user can't conveniently print the pages.
Springer publishes the book and sells copies. It takes an exclusive time-bound license, say one year. After one year, Springer still keeps the exclusive license on the paper publishing, but we could put free electronic copies on our webpages.
Springer publishes the book and sells copies. It takes the exclusive right to publish the book in paper format — that's all it needs to legally publish the book. We keep all other rights. We put the book in electronic format on our webpages or on some cheap servers.
Note that in all of the above 4 options Springer is still getting something for free — the content. So each is still a good deal for Springer. Option 1 is the best deal for Springer. The only reason Springer could insist on option 1 is that we do not insist with unity (Reinhard probably insisted very hard). If we insist, then we could possibly get them to agree to option 4. It is ironic that this book is about game theory, and yet game-theoretic principles are not used to get a better deal. Mohammad suggested that even if Springer wins on getting the first deal, we could still put our chapters on our webpages. This does not make sense, for three reasons. First, there are going to be cross-references. Second, the chapters together provide a synergy, and that's the reason we all agreed to put our chapters together. Third, if we could all put chapters on our webpages, then why can't we compile them together and put them on a single webpage? A book is more than the sum of its chapters. A question which is typically raised about free electronic versions is the following: if people could download the book for free, then why would they buy from Springer? I think people would still buy: libraries would buy, professors would buy, and anybody who needs to read a significant part of the book would buy. Still, for a moment let us assume that people won't buy the paper book in the presence of a free electronic version. In this case, it simply means people want only the free electronic version and not the paid paper version. That is, having only the electronic version is what everybody desires. Then, under this assumption, why even deal with Springer?
Because, as mentioned above, Springer provides some value. We could still avoid Springer and create this value ourselves. We will anyway be spending a couple of thousand hours on this book (my experience working with Vijay is that it takes at least a few hours per page). There are at least two ways to avoid Springer.
We go to a small publisher and get the book published. Transfer the exclusive right to publish the book in paper format. We keep all other rights.
We publish only the electronic version.
What role would Springer play?
Springer does the marketing. We will discuss this later to see how we could do the marketing ourselves.
Springer provides the brand name to sell the book. I think the brand name of the editors and the authors carries much more weight in this case. This is also the case with any good book written by a community.
Springer provides the brand name to make the line related to this book in our resume a bit bolder. First, most authors contributing to the book already have enough lines in their resumes that they can do with one fewer line. Second, this line is minor for a community-written book. Each person contributes a chapter, which may be equivalent to writing one or two journal papers.
Springer prints and binds the book. I do not know how much it costs to print and bind the book. "The Search" by John Battelle is a three-hundred-page hard-bound book available for 16 bucks at Amazon. Granted, The Search will probably sell more than this technical book. But it shows that $40 for Algorithmic Game Theory could very well be an optimal profit-making point for Springer rather than the favor they want to portray it as to us. A small publisher would be able to beat that even in the presence of a competing free electronic version.
The last is the peanut financial support. I am sure we could arrange $2,000 without Springer. Even if we fail, grad students would be happy to contribute this for credit. If I do not personally have time to draw pictures, then I do not mind having a co-author who does that for me. A picture is worth a thousand words. If I am claiming authorship for writing a thousand words, then anybody who draws pictures deserves equal credit.
So the only value Springer provides is marketing. There are various ways we could do that too.
We create a pamphlet and a poster which we distribute to the program chairs of various conferences.
Put the electronic version in one place. Let each of the contributors link to it. If there are fifty links from places like Cornell, Georgia Tech, and Stanford, then on searches related to the keywords in the book, the book should show up at the top.
Let Citeseer crawl the book, let Google crawl the book, let us upload it to Wikipedia.
Even if that is not sufficient, we could market for money via paid search-engine listings. We could raise the money by having only one or two ads in the book, say on the contents and index pages. If we have an electronic version we could even have Google AdSense ads on the book download page. Certainly Google AdSense would serve ads aimed at academic people. In that case, if we are buying something anyway, we could buy it through those ads.
One question which one could raise is that many people in the world still live on the other side of the digital divide. But such people do not have $40 either. The solution for them is to have a publisher in India or China publish this book and sell it to these people.
The pre-bottom line is that we give more to Springer than it gives back in return. Game-theoretically it is not a fair solution and we could do better. I am not sure whether there is any electronic publishing deal that the editors of this book have with the publisher; if they had one then they probably would have told me. In any case this posting is about the many other future books which will be written cooperatively. The bottom line is: any book which is not written for money must be available free of charge in an electronic format.
The Internet has led to a complete shift in how we deal with storing and sharing information, but when it comes to academic papers the changes we see are ad hoc and added on a piecemeal basis.
Suppose we could start from scratch and create a proper system for research papers. Here is how I would envision such a system.
XML has become the standard for storing information on the internet; it gives a simple machine-readable method for creating tree structures. Academic papers have such a tree structure (sections, subsections, theorems, proofs, etc.) that would lend itself well to XML. Mathematical equations should also be written using XML; we already have a MathML specification for doing this.
An academic paper XML file would only have content information, not any formatting information. For that we would use XSL files, themselves XML files that describe how to format the document. You would use different XSL files depending on whether the paper is viewed on the screen or printed, and different publishers can develop their own XSL files to have consistent-looking papers. LaTeX, the system used by most theoretical computer scientists, has similar capabilities, but because LaTeX does not enforce any standards, changing style files often requires considerable editing.
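As a minimal sketch of what such a content-only file could look like, here is a fragment built with Python's standard ElementTree; the element names and DOIs are invented for illustration and are not an actual schema:

```python
import xml.etree.ElementTree as ET

# Pure content: no fonts, margins, or spacing. A publisher's (or reader's)
# XSL stylesheet would decide how this tree is actually rendered.
paper = ET.Element("paper", doi="10.9999/example.0001")    # hypothetical DOI
ET.SubElement(paper, "title").text = "A Sample Result"
intro = ET.SubElement(paper, "section", title="Introduction")
ET.SubElement(intro, "theorem").text = "P is contained in NP."
ET.SubElement(intro, "cite", doi="10.9999/example.0002")   # cite by identifier,
                                                           # not pasted text
print(ET.tostring(paper, encoding="unicode"))
```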
Researchers will not have to create these XML files directly (unless they want to) but can use word processors that will save the documents according to those standards.
For citations we should just point to a unique identifier for a paper; no longer should we need to cut and paste bibliographic information. The formatting program can go online, based on the identifier, to get the information to create a human-readable bibliography with web links if appropriate. Most publishers already use Digital Object Identifiers (DOIs); we just need DOIs to point to an XML file giving bibliographic information, DOIs for unpublished papers, and a method for DOIs to point to a later version of a paper.
The author information on academic papers is often useless (like my postal address) or out of date as academics change locations. Each academic researcher should get their own DOI-like number that points to an XML file giving personal and contact information, and then we need only add these DOIs to the academic papers.
Most importantly we need to have enforced standards for each of these XML documents (via XML schemas). If we can truly separate the content from the formatting of documents, and make that content available in an easy machine-readable forms, not only can researchers focus more on the writing and less on the style but will also open the door to applications that we cannot even imagine today. | |||
[Crawled page: Computational Complexity, the blog of Lance Fortnow and Bill Gasarch, January 2006 archive, https://blog.computationalcomplexity.org/2006/01/]

GUEST BLOGGER: Bill Gasarch

This is my last day guest blogging, so I'll end where I began: THREE points on LUDDITES.

I) Janos Simon corrected my history of Luddites, for which I thank him. If you are interested, go to HIS comment on MY post from Monday, Jan 23, for a link to a very nice article.

II) My father and father-in-law offer an interesting contrast.

FATHER-IN-LAW (engineering major, career mostly in business, now retired):
LUDDITE: Does not program his VCR. Not sure if he doesn't know how to or just doesn't want to. So he HAS to be home on Sunday to watch Desperate Housewives (a show I found distasteful; my father-in-law is hipper than I am).
NON-LUDDITE: Took a course on C at a local community college when he was 70. Pays all his bills online.

FATHER (English major, high school English teacher and vice principal, now retired):
LUDDITE: Got a computer recently and still can't get email or pay his bills online.
NON-LUDDITE: Uses his VCR to tape A LOT of shows. He needs it, since he watches A LOT: The West Wing, My Name is Earl, The Sopranos, Sex and the City when it was on (a show I find distasteful; my dad is hipper than I am), Six Feet Under, Deadwood, all four Law & Orders, all three CSIs, Without a Trace, and other stuff I can't recall. This from the man who restricted me, wisely, to no more than an hour of TV a night when I was a kid.

III) Stuart Kurtz emailed me some more questions for my Luddite quiz. I asked him if I could post them, and he suggested asking for other inputs. No one replied, so here are his:

STUART BEGIN:

9) Do you write emails (or blog posts) in a) variable-width fonts with formatting, b) variable-width fonts without formatting, c) fixed-width fonts, d) What's a blog?, e) What's email?, or f) What's writing?

10) Do you indicate emphasis by a) using an italic or slanted font, b) using a boldfaced font, c) metadiscourse, i.e., "I want to emphasize that...", d) ALL CAPS, or e) shouting and waving your arms?

11) Does your mouse have a) four buttons, b) three buttons, c) two buttons, d) one button, e) control characters were good enough for RMS and they're good enough for me, or f) four feet and a tail?

12) What's your favorite programming language? a) Ruby or Python, b) Java, c) Lisp, d) C++, e) Awk, f) IBM-360 assembly language, g) C, h) Lisp, or i) graduate student. [I know Lisp occurs twice, but c and h are still different answers. Note that there's no point asking for Perl, as Perl programmers can only write, not read.]

STUART END.

bill g.

P.S. I am supposed to say "Now that I've guest blogged for a week I'm even more impressed with Lance getting a topic out every day." But this is NOT TRUE. I was SO IMPRESSED with Lance in the first place that I can't be "more impressed."
GUEST BLOGGER: Bill Gasarch

TOPIC: How much of what we do is influenced by non-scientific criteria?

(BEFORE I START TODAY'S BLOG, A REQUEST: EMAIL ME OTHER LUDDITE QUESTIONS. I WILL POST THE BEST ONES ON FRIDAY.)

I) AN INCOMPLETE SUMMARY of Thomas Kuhn's book The Structure of Scientific Revolutions: For long periods of time a field of science will agree on its basic terms and problems, and everyone will work within that worldview (also called a paradigm). This is called Normal Science. This is GOOD, since if people were working with different paradigms, progress would be hard. BUT there comes a time when some problems just cannot be solved using the usual techniques. There will be an effort to jam such a problem and some approaches to it into the current paradigm, but eventually the old paradigm will fall and a new one will take its place. The new one will help answer some old questions, and it will pose new ones that could not even have been asked in the old one. Newtonian physics vs. Einstein is the usual example, though there are others on a much less cosmic scale.

II) People after Kuhn have misconstrued his work as saying that science has NO objective truth, that it ALL depends on the paradigm. This is, of course, hogwash. More so when they claim that science is a tool of the elite to dominate the masses, or some such (look up SOKAL HOAX on Google for one view of this view).

III) But a fair question CAN be raised along these lines: how MUCH of what scientists do depends on political, personal, or other factors, versus how much is driven by objective scientific principles? A few examples:

a) What if, in response to Russell's paradox, the math world had essentially axiomatized what set theorists now call V=L (every object is constructible)? Then we would know LOTS more about L, we would KNOW that the Axiom of Choice is true, and we would know that the Continuum Hypothesis is true. We might know that there are weird other models, unnatural ones where CH is false, but we wouldn't care. (Some set theorists tell me this could never happen, that people would be interested in the other models. They are wrong.)

b) What if, in response to the Banach-Tarski paradox, mathematicians had rejected some version of the Axiom of Choice? This would have been quite possible before AC began being used in so many places.

c) The people who believed in constructive methods only (e.g., Brouwer) are portrayed as cranky old men holding onto an old paradigm that no longer worked. But if they had won, then people like Hilbert would be viewed as crazy rebels who fortunately were never taken seriously. (This one I am less sure of: nonconstructive techniques are SO powerful that I think they may be inevitable.)

d) If computing devices had been invented either earlier or later than they were, it would have had a drastic effect on theory. While we think that P vs. NP is a natural problem, it could only be posed once the technology was in place. Was it inevitable that it arise? Probably. Was it inevitable that it be considered important? Hard to say.

e) There is A LOT of work in quantum computing because (i) Peter Shor proved that FACTORING is in quantum polynomial time, giving the problem new interest, and (ii) there is (or actually was) lots of grant money in it. (Of course these two are linked.)

f) Do schools like MIT have too big an influence on what gets studied? (They have less influence now than they used to.)

MORE GENERALLY, if I had the time and the energy I would do research on the history and philosophy of math, asking the question: HOW MUCH DO EXTERNAL FORCES AFFECT WHAT IS STUDIED? And I would do it WITHOUT an ax to grind.
GUEST BLOGGER: Bill Gasarch (I will be guest blogging this week while Lance is on vacation.)

Are you a Luddite? The original Luddites were workers who, fearing lower wages via technology, destroyed factory machines. This was around 1811. Their leader was General Ned Ludd. (Not sure if "General" was an honorary title.) TODAY the term has come to mean someone who does not adapt to, or does not like, technology. If you are NOT one, you can use Google to find out more about them. Are you a Luddite? I offer the following questions and let you score yourselves.

1) At a conference do you use a) PowerPoint with fancy animation and pictures off the web, b) PowerPoint with nice backgrounds but nothing much else, c) pdf files, d) physical slides made using LaTeX, e) physical slides made using magic markers and overlays, f) physical slides without overlays, g) chalk, or h) diagrams drawn in the sand with a twig?

2) Same as question 1, but for large classroom lectures (over 50), small classroom lectures (under 10), and seminars (8 people who actually know something).

3) For writing papers do you use a) LaTeX (or some other package), b) a typewriter (YOU HAVE A TYPEWRITER? MIGHT BE WORTH SOMETHING ON EBAY!), c) handwriting that you give to your secretary to type (YOU HAVE A SECRETARY? MIGHT BE WORTH SOMETHING ON EBAY!), or d) a quill pen and inkwell on parchment?

4) When listening to talks do you a) take notes with an e-pen that automatically puts them online, b) take notes in an e-notebook, c) take notes in a p-notebook (that's paper), d) not take notes at all, or e) fall asleep?

5) When you applied to grad school did you a) check out the schools' websites, b) ask your roommate who also majored in CS and math, c) ask your roommate who majored in political science, d) apply to schools you heard were good, or e) apply to schools randomly (time-bounded Kolmogorov random)?

6) If you need a result that is already known do you a) go to Google, b) go to the library, c) go to your own file cabinet, or d) rederive the result yourself?

7) Which of these might you most likely say? a) "When is the next version coming out so I can update?" b) "I'll update in 2 years" (and you do). c) "I'll update in 2 years" (but you don't). d) "You can have my chalk when you pry it from my cold dead hands."

8) Do you play music on a) MP3s, b) CDs, c) LPs, d) 78s, or e) wax cylinders (WAX CYLINDERS! MIGHT BE WORTH SOMETHING ON EBAY!)?

bill g.

Postscript: Thanks to my colleague Jack Lutz for catching that I spelled "Luddite" wrong originally. I used him instead of a spell checker, and note that the error he found would not have been discovered with a spell checker.
I am on vacation next week, and I've lined up Bill Gasarch as a guest blogger in my absence. But today we have a guest post from Kamal Jain. This is a long post, but well worth reading through.

This post is prompted by recent developments and discussions on electronic publishing, which themselves are prompted by the book-scanning initiatives of Google and the Open Content Alliance. Although I am not talking about paper books being converted into electronic format, I like the idea of having books available in a searchable electronic format. And certainly this is a must-have feature for any newly written book.

Recently, I got two invitations to write for books. The first was to write a book on network coding. I felt that I was not the best person, so I did not accept. (If I had, I would have insisted on a free electronic copy.) Second, I got an invitation to co-write a chapter on cost sharing with Mohammad Mahdian for a book, Algorithmic Game Theory, edited by Noam Nisan, Tim Roughgarden, Eva Tardos, and Vijay Vazirani. I agreed to this because I felt that such a book is a great idea and I could make a positive contribution. My selfish motive was to spread knowledge of a subject to which I have contributed, and I guess that was also the expected motive of the other contributors. I say this because the explicit incentive offered in the invitation was that the editors (originally Eva and Vijay only) had made an excellent deal with a publisher, Springer Verlag: $40 for up to six hundred pages. I am not sure whether that is for a paperback or a hardcover, but that was not my focus anyway. My focus is the absence of any electronic-publishing component in the deal. Because of that, I felt this is not such a good deal in today's electronic age. On one side we are talking about scanning paper books, starting electronic journals, and writing wikis and blogs; on the other, we do not even make a deal on the electronic publishing of newly written books. I wrote an email back to the editors saying that I do not think the Springer deal is a good one. I was hoping to get a response and start a discussion with them, which, in my opinion, was obligatory for them, because I point-blank disagreed with the incentive they explicitly offered. At this point I am assuming that there is no electronic-publishing agreement with the publisher. This was the background.

Now, I realize that this is not something to discuss with the editors in private. This is an important issue which is likely to recur in other situations. So I requested this space from Lance so that I could discuss it with the whole community. Following are some of my random thoughts, and I would like to hear everybody's thoughts too, random or not :-) Please press the comment button and put your thoughts in writing, so that Springer and other publishers will know what we want from them.

There are at least two kinds of books. The first kind is written by individual authors. The second kind is written collaboratively by the community, like the proposed Algorithmic Game Theory. Individual authors write books for various reasons, and it is up to them what kind of deal they lock in with publishers. A book written by a community has a predetermined goal, and that is to spread knowledge of the subject. It is not up to one or two persons to lock in whatever deal they think is great. So the community must form unspoken guidelines to facilitate the negotiation between editors and publishers, and these guidelines must include the community's minimum desires. Such a set of guidelines would have resolved the prisoner's dilemma for me. I did not like the absence of an electronic-publishing agreement. If I declined the invitation, the book would still go ahead without my contribution; if I accepted, which I did, then I would know that my efforts were not being optimally used. But if it were a common expectation that editors negotiate an electronic-publishing agreement, then I could have rejected the invitation knowing that the other invitees would do the same, thereby insisting that the editors go back to the publisher and make such an agreement. One might ask whether any publisher would ever agree to this. For the record, Reinhard Diestel's book Graph Theory has a free, searchable, hyperlinked electronic edition, and that book is published by Springer Verlag. Let us first discuss what Springer provides to us and what we provide to Springer; then we can discuss whether we are getting the optimal deal. What Springer gives us:
1. Springer does the marketing, which sells the book.
2. Springer provides the brand name, which sells the book.
3. Springer provides the brand name, which makes the line in our resumes about the book a bit bolder.
4. Springer prints and binds the book, for which the buyer pays.
5. Springer gave peanuts ($2000) in financial support, to pay students to draw pictures. This fund is for those contributors who do not have their own funds.
What we give to Springer:

1. Free content, and a copyright transfer so that they can legally publish that content. (I am assuming there are no royalties involved in a community-written book.)
2. Word-of-mouth marketing.
3. Our own funds for other expenses.
4. Our universities' or companies' resources.

What are the possible deals we could have?
1. Status quo. Springer publishes the book and sells it. It takes the copyright and does not provide a free electronic copy. In the future, if Springer wants, it makes more money from electronic copies too.
2. The Reinhard Diestel model. Springer provides a free, searchable, hyperlinked electronic edition, but a user can't conveniently print the pages.
3. Springer publishes the book and sells it, taking an exclusive time-bounded license, say one year. After one year Springer still keeps the exclusive license for paper publishing, but we may put free electronic copies on our webpages.
4. Springer publishes the book and sells it, taking only the exclusive right to publish the book in paper format (that's all it needs to legally publish the book). We keep all other rights, and we put the book in electronic format on our webpages or on some cheap servers.
Note that in all four options above, Springer still gets something for free: the content. So each is still a good deal for Springer; option 1 is the best deal for Springer. The only reason Springer can insist on option 1 is that we do not insist otherwise, in unison (Reinhard probably insisted very hard). If we insist, we could possibly get them to agree to option 4. It is ironic that this book is about game theory, yet game-theoretic principles are not being used to get a better deal. Mohammad suggested that even if Springer wins and gets the first deal, we could still put our chapters on our webpages. This does not make sense, for three reasons. First, there are going to be cross-references. Second, the chapters together provide a synergy, and that's the reason we all agreed to put our chapters together. Third, if we could all put chapters on our webpages, then why couldn't we compile them together and put them on a single webpage? A book is more than the sum of its chapters. A question typically raised about a free electronic version is the following: if people can download the book for free, why would they buy it from Springer? I think people would still buy it: libraries would buy it, professors would buy it, and anybody who needs to read a significant part of the book would buy it. Still, for a moment let us assume that people won't buy the paper book in the presence of a free electronic version. That would simply mean that people want only the free electronic version and not the paid paper version; that is, having only the electronic version is what everybody desires. Then, under this assumption, why even deal with Springer?

Because, as mentioned above, Springer provides some value. We could still avoid Springer and create that value ourselves; we will be spending a couple of thousand hours on this book anyway (my experience working with Vijay is that it takes at least a few hours per page). There are at least two ways to avoid Springer:
1. We go to a small publisher and get the book published, transferring only the exclusive right to publish the book in paper format. We keep all other rights.
2. We publish only the electronic version.

What role would Springer play then? Let us revisit each item:
1. Springer does the marketing. We will discuss below how we could do the marketing ourselves.
2. Springer provides the brand name to sell the book. I think the brand names of the editors and the authors carry more weight in this case, as they do for any good book written by a community.
3. Springer provides the brand name to make the line related to this book in our resumes a bit bolder. First, most authors contributing to the book already have enough lines in their resumes that they can do with one fewer. Second, this line is a minor one for a community-written book: each person contributes a chapter, maybe the equivalent of writing one or two journal papers.
4. Springer prints and binds the book. I do not know how much it costs to print and bind a book, but The Search by John Battelle is a three-hundred-page hardbound book available for 16 bucks on Amazon. The Search will probably sell more copies than this technical book, but it suggests that $40 for Algorithmic Game Theory could very well be an optimal profit-making price for Springer, rather than the favor they want to portray it as. A small publisher would be able to beat that price even in the presence of a competing free electronic version.
5. The last item is the peanuts of financial support. I am sure we could arrange $2000 without Springer. Even if we failed, a grad student would be happy to contribute the work for credit. If I do not personally have time to draw pictures, I do not mind having a co-author who does that for me. A picture is worth a thousand words; if I am claiming authorship for writing a thousand words, then anybody who draws the pictures deserves equal credit.
So the only value Springer really provides is marketing. There are various ways we could do that ourselves:

1. We create a pamphlet and a poster, which we distribute to the program chairs of various conferences.
2. We put the electronic version in one place and let each contributor link to it. If there are fifty links from places like Cornell, Georgia Tech, and Stanford, then on searches related to the book's keywords, the book should show up at the top.
3. We let CiteSeer crawl the book, let Google crawl the book, and upload it to Wikipedia.
4. Even if all that is not sufficient, we could pay for marketing via search-engine paid listings. We could raise the money by having only one or two ads in the book, say on the contents and index pages. If we have an electronic version, we could even have Google AdSense ads on the book's download page. Certainly Google AdSense would serve ads relevant to academics; in that case, if we are buying something anyway, we could buy it through those ads.

One question which could be raised is that many people in the world still live on the other side of the digital divide. But such people do not have $40 either. The solution for them is to have a publisher in India or China publish this book and sell it to them.
The pre-bottom line is that we give more to Springer than it gives back in return. Game-theoretically, it is not a fair solution, and we could do better. I am not sure whether the editors of this book have any electronic-publishing deal with the publisher; if they had one, they probably would have told me. In any case, this posting is about the many other future books that will be written cooperatively. The bottom line is: any book which is not written for money must be available free of charge in an electronic format.
The Internet has led to a complete shift in how we deal with storing and sharing information, but when it comes to academic papers the changes we see are ad hoc and added on a piecemeal basis.
Suppose we could start from scratch and create a proper system for research papers. Here is how I would envision such a system.
XML has become the standard for storing information on the internet; it gives a simple machine-readable method for creating tree structures. Academic papers have just such a tree structure (sections, subsections, theorems, proofs, etc.) that would lend itself well to XML. Mathematical equations should also be written using XML; we already have the MathML specification for doing this.

An academic paper's XML file would contain only content information, not formatting information. For formatting we would use XSL files, themselves XML files that describe how to format the document. You would use different XSL files depending on whether the paper is viewed on screen or printed, and different publishers could develop their own XSL files to produce consistent-looking papers. LaTeX, the system used by most theoretical computer scientists, has similar capabilities, but because LaTeX does not enforce any standards, changing style files often requires considerable editing.
Researchers will not have to create these XML files directly (unless they want to) but can use word processors that will save the documents according to those standards.
For citations we should just point to a unique identifier for a paper; we should no longer need to cut and paste bibliographic information. The formatting program can go online, based on the identifier, to fetch the information needed to create a human-readable bibliography, with web links if appropriate. Most publishers already use Digital Object Identifiers (DOIs); we just need DOIs to point to an XML file giving bibliographic information, DOIs for unpublished papers, and a method for a DOI to point to a later version of a paper.

The author information on academic papers is often useless (like my postal address) or out of date as academics change locations. Each academic researcher should get their own DOI-like number that points to an XML file giving personal and contact information, and then we only need to add these DOIs to the academic papers.
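To make this concrete, here is a minimal sketch, using Python's standard-library ElementTree, of what such a content-only paper file might look like. Every element and attribute name below (paper, theorem, the author ref, the DOIs) is a hypothetical illustration of the kind of schema being proposed, not any existing standard:

    import xml.etree.ElementTree as ET

    # Hypothetical schema: all element and attribute names are illustrative only.
    paper = ET.Element("paper", doi="10.0000/hypothetical.0001")
    ET.SubElement(paper, "title").text = "A Sample Result"
    # Author info is just a pointer to the researcher's own identifier record:
    ET.SubElement(paper, "author", ref="doi:researcher/hypothetical.42")
    sec = ET.SubElement(paper, "section", title="Introduction")
    ET.SubElement(sec, "theorem", id="thm:main").text = "Statement of the theorem."
    # A citation is only an identifier; the bibliography is generated from it:
    bib = ET.SubElement(paper, "bibliography")
    ET.SubElement(bib, "cite", doi="10.0000/hypothetical.0002")

    print(ET.tostring(paper, encoding="unicode"))

The style layer (the XSL files above) would live in a separate file entirely, so a publisher restyling the paper never touches this content.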
Most importantly, we need enforced standards for each of these XML documents (via XML schemas). If we can truly separate the content from the formatting of documents, and make that content available in an easily machine-readable form, then not only can researchers focus more on the writing and less on the style, but we will also open the door to applications that we cannot even imagine today.
[Crawled page: Shtetl-Optimized, The Blog of Scott Aaronson, https://scottaaronson.blog/]

This morning I was pondering one of the anti-Israel protesters' favorite phrases—I promise, out of broad philosophical curiosity rather than just parochial concern for my extended family's survival.
“We’re on the right side of history. Don’t put yourself on the wrong side by opposing us.”
Why do the protesters believe they shouldn’t face legal or academic sanction for having blockaded university campuses, barricaded themselves in buildings, shut down traffic, or vandalized Jewish institutions? Because, just like the abolitionists and Civil Rights marchers and South African anti-apartheid heroes, they’re on the right side of history. Surely the rules and regulations of the present are of little concern next to the vindication of future generations?
The main purpose of this post is not to adjudicate whether their claim is true or false, but to grapple with something much more basic: what kind of claim are they even making, and who is its intended audience?
One reading of “we’re on the right of history” is that it’s just a fancy way to say “we’re right and you’re wrong.” In which case, fair enough! Few people passionately believe themselves to be wrong.
But there’s a difficulty: if you truly believe your side to be right, then you should believe it’s right win or lose. For example, an anti-Zionist should say that, even if Israel continues existing, and even if everyone else on the planet comes to support it, still eliminating Israel would’ve been the right choice. Conversely, a Zionist should say that if Israel is destroyed and the whole rest of the world celebrates its destruction forevermore—well then, the whole world is wrong. (That, famously, is more-or-less what the Jews did say, each time Israel and Judah were crushed in antiquity.)
OK, but if the added clause “of history” is doing anything in the phrase “the right side of history,” that extra thing would appear to be an empirical prediction. The protesters are saying: “just like the entire world looks back with disgust at John Calhoun, Bull Connor, and other defenders of slavery and then segregation, so too will the world look back with disgust at anyone who defends Israel now.”
Maybe this is paired with a theory about the arc of the moral universe bending toward justice: “we’ll win the future and then look back with disgust on you, and we’ll be correct to do so, because morality inherently progresses over time.” Or maybe it has merely the character of a social threat: “we’ll win the future and then look back with disgust on you, so regardless of whether we’ll be right or wrong, you’d better switch to our side if you know what’s good for you.”
Either way, the claim of winning the future is now the kind of thing that could be wagered about in a prediction market. And, in essence, the Right-Side-of-History people are claiming to be able to improve on today’s consensus estimate: to have a hot morality tip that beats the odds. But this means that they face the same problem as anyone who claims it’s knowable that, let’s say, a certain stock will increase a thousandfold. Namely: if it’s so certain, then why hasn’t the price shot up already?
The protesters and their supporters have several possible answers. Many boil down to saying that most people—because they need to hold down a job, earn a living, etc.—make all sorts of craven compromises, preventing them from saying what they know in their hearts to be true. But idealistic college students, who are free from such burdens, are virtually always right.
Does that sound like a strawman? Then recall the comedian Sarah Silverman’s famous question from eight years ago:
PLEASE tell me which times throughout history protests from college campuses got it wrong. List them for me
Crucially, lots of people happily took Silverman up on her challenge. They pointed out that, in the Sixties and Seventies, thousands of college students, with the enthusiastic support of many of their professors, marched for Ho Chi Minh, Mao, Castro, Che Guevara, Pol Pot, and every other murderous left-wing tyrant to sport a green uniform and rifle. Few today would claim that these students correctly identified the Right Side of History, despite the students’ certainty that they’d done so.
(There were also, of course, moderate protesters, who merely opposed America's war conduct—just like there are moderate protesters now who merely want Israel to end its Gaza campaign rather than its existence. But then as now, the revolutionaries sucked up much of the oxygen, and the moderates rarely disowned them.)
What’s really going on, we might say, is reference class tennis. Implicitly or explicitly, the anti-Israel protesters are aligning themselves with Gandhi and MLK and Nelson Mandela and every other celebrated resister of colonialism and apartheid throughout history. They ask: what are the chances that all those heroes were right, and we’re the first ones to be wrong?
The trouble is that someone else could just as well ask: what are the chances that Hamas is the first group in history to be morally justified in burning Jews alive in their homes … even though the Assyrians, Babylonians, Romans, Crusaders, Inquisitors, Cossacks, Nazis, and every other group that did similar things to the Jews over 3000 years is now acknowledged by nearly every educated person to have perpetrated an unimaginable evil? What are the chances that, with Israel’s establishment in 1948, this millennia-old moral arc of Western civilization suddenly reversed its polarity?
We should admit from the outset that such a reversal is possible. No one, no matter how much cruelty they’ve endured, deserves a free pass, and there are certainly many cases where victims turned into victimizers. Still, one could ask: shouldn’t the burden be on those who claim that today‘s campaign against Jewish self-determination is history’s first justified one?
It’s like, if I were a different person, born to different parents in a different part of the world, maybe I’d chant for Israel’s destruction with the best of them. Even then, though, I feel like the above considerations would keep me awake at night, would terrify me that maybe I’d picked the wrong side, or at least that the truth was more complicated. The certainty implied by the “right side of history” claim is the one part I don’t understand, as far as I try to stretch my sympathetic imagination.
For all that, I, too, have been moved by rhetorical appeals to “stand on the right side of history”—say, for the cause of Ukraine, or slowing down climate change, or saving endangered species, or defeating Trump. Thinking it over, this has happened when I felt sure of which side was right (and would ultimately be seen to be right), but inertia or laziness or inattention or whatever else prevented me from taking action.
When does this happen for me? As far as I can tell, the principles of the Enlightenment, of reason and liberty and progress and the flourishing of sentient life, have been on the right side of every conflict in human history. My abstract commitment to those principles doesn’t always tell me which side of the controversy du jour is correct, but whenever it does, that’s all I ever need cognitively; the rest is “just” motivation and emotion.
(Amusingly, I expect some people to say that my “reason and Enlightenment” heuristic is vacuous, that it works only because I define those ideals to be the ones that pick the right side. Meanwhile, I expect others to say that the heuristic is wrong and to offer counterexamples.)
Anyway, maybe this generalizes. Sure, a call to “stand on the right side of history” could do nontrivial work, but only in the same way that a call to buy Bitcoin in 2011 could—namely, for those who’ve already concluded that buying Bitcoin is a golden opportunity, but haven’t yet gotten around to buying it. Such a call does nothing for anyone who’s already considered the question and come down on the opposite side of it. The abuse of “arc of the moral universe” rhetoric—i.e., the calling down of history’s judgment in favor of X, even though you know full well that your listeners see themselves as having consulted history’s judgment just as earnestly as you did, and gotten back not(X) instead—yeah, that’s risen to be one of my biggest pet peeves. If I ever slip up and indulge in it, please tell me and I’ll stop.
Q1: Who will you be voting for in November?
A: Kamala Harris (and mainstream Democrats all down the ballot), of course.
Q2: Of course?
A: If the alternative is Trump, I would’ve voted for Biden’s rotting corpse. Or for Hunter Biden. Or for…
Q3: Why can’t you see this is just your Trump Derangement Syndrome talking?
A: Look, my basic moral commitments remain pretty much as they’ve been since childhood. Namely, that I’m on the side of reason, Enlightenment, scientific and technological progress, secular government, pragmatism, democracy, individual liberty, justice, intellectual honesty, an American-led peaceful world order, preservation of the natural world, mitigation of existential risks, and human flourishing. (Crazy and radical, I know.)
Only when choosing between candidates who all espouse such values, do I even get the luxury of judging them on any lower-order bits. Sadly, I don’t have that luxury today. Trump’s values, such as they are, would seem to be “America First,” protectionism, vengeance, humiliation of enemies, winning at all costs, authoritarianism, the veneration of foreign autocrats, and the veneration of himself. No amount of squinting can ever reconcile those with the values I listed before.
Q4: Is that all that’s wrong with him?
A: No, there are also the lies, and worst of all the “Big Lie.” Trump is the first president in US history to incite a mob to try to overturn the results of an election. He was serious! He very nearly succeeded, and probably would have, had Mike Pence been someone else. It’s now inarguable that Trump rejects the basic rules of our system, or “accepts” them only when he wins. We’re numb from having heard it so many times, but it’s a big deal, as big a deal as the Civil War was.
Q5: Oh, so this is about your precious “democracy.” Why do you care? Haven’t you of all people learned that the masses are mostly idiots and bullies, who don’t deserve power? As Curtis Yarvin keeps trying to explain to you, instead of “democracy,” you should want a benevolent king or dictator-CEO, who could offer a privileged position to the competent scientists like yourself.
A: Yeah, so how many examples does history furnish where that worked out well? I suppose you might make a partial case for Napoleon, or Ataturk? More to the point: even if benevolent, science-and-reason-loving authoritarian strongmen are possible in theory, do you really expect me to believe that Trump could be one of them? I still love how Scott Alexander put it in 2016:
Can anyone honestly say that Trump or his movement promote epistemic virtue? That in the long-term, we’ll be glad that we encouraged this sort of thing, that we gave it power and attention and all the nutrients it needed to grow? That the road to whatever vision of a just and rational society we imagine, something quiet and austere with a lot of old-growth trees and Greek-looking columns, runs through LOCK HER UP?
I don’t like having to vote for the lesser of two evils. But at least I feel like I know who it is.
Q6: But what about J. D. Vance? He got his start in Silicon Valley, was championed by Peter Thiel, and is obviously highly intelligent. Doesn’t he seem like someone who might listen to and empower tech nerds like yourself?
A: Who can say what J. D. Vance believes? Here are a few choice quotes of his from eight years ago:
I’m obviously outraged at Trump’s rhetoric, and I worry most of all about how welcome Muslim citizens feel in their own country. But I also think that people have always believed crazy shit (I remember a poll from a few years back suggesting that a near majority of democratic voters blame ‘the Jews’ for the financial crisis). And there have always been demagogues willing to exploit the people who believe crazy shit.
The more white people feel like voting for trump, the more black people will suffer. I really believe that.
[Trump is] just a bad man. A morally reprehensible human being.
To get from that to being Trump’s running mate is a Simone-Biles-like feat of moral acrobatics. Vance reminds me of the famous saying by L. Ron Hubbard from his pre-Dianetics days: “If a man really wants to make a million dollars, the best way would be to start his own religion.” (And I feel like Harris’s whole campaign strategy should just be to replay Vance’s earlier musings in wall-to-wall ads while emphasizing her agreement with them.) No, Vance is not someone I trust to share my values, if he has values at all.
Q7: What about the other side’s values, or lack thereof? I mean, don’t you care that the whole Democratic establishment—including Harris—colluded to cover up that Biden was senile and cognitively unfit to be president now, let alone for another term?
A: Look, we’ve all seen what happens as a relative gets old. It’s gradual. It’s hard for anyone to say at which specific moment they can no longer drive a car, or be President of the United States, or whatever. This means that I don’t necessarily read evil intent into the attempts to cover up Biden’s decline—merely an epic, catastrophic failure of foresight. That failure of foresight itself would’ve been a huge deal in normal circumstances, but these are not normal circumstances—not if you believe, as I do, that the alternative is the beginning of the end of a 250-year-old democratic experiment.
Q8: Oh stop being so melodramatic. What terrible thing happened to you because of Trump’s first term? Did you lose your job? Did fascist goons rough you up in the street?
A: Well, my Iranian PhD student came close to having his visa revoked, and it became all but impossible to recruit PhD students from China. That sucked, since I care about my students’ welfare like I care about my own. Also, the downfall of Roe v. Wade, which enabled Texas’ draconian new abortion laws, made it much harder for us to recruit faculty at UT Austin. But I doubt any of that will impress you. “Go recruit American students,” you’ll say. “Go recruit conservative faculty who are fine with abortion being banned.”
The real issue is that Trump was severely restrained in his first term, by being surrounded by people who (even if, in many cases, they started out loyal to him) were also somewhat sane and valued the survival of the Republic. Alas, he learned from that, and he won’t repeat that mistake the next time.
Q9: Why do you care so much about Trump’s lies? Don’t you realize that all politicians lie?
A: Yes, but there are importantly different kinds of lies. There are white lies. There are scheming, 20-dimensional Machiavellian lies, like a secret agent’s cover story (or is that only in fiction?). There are the farcical, desperate, ever-shifting lies of the murderer to the police detective or the cheating undergrad to the professor. And then there are the lies of bullies and mob bosses and populist autocrats, which are special and worse.
These last, call them power-lies, are distinguished by the fact that they aren’t even helped by plausibility. Often, as with conspiracy theories (which strongly overlap with power-lies), the more absurd the better. Obama was born in Kenya. Trump’s crowd was the biggest in history. The 2020 election was stolen by a shadowy conspiracy involving George Soros and Dominion and Venezuela.
The central goal of a power-lie is just to demonstrate your power to coerce others into repeating it, much like with the Party making Winston Smith affirm 2+2=5, or Petruchio making Katharina call the sun the moon in The Taming of the Shrew. A closely-related goal is as a loyalty test for your own retinue.
It’s Trump’s embrace of the power-lie that puts him beyond the pale for me.
Q10: But Scott, we haven’t even played our “Trump” card yet. Starting on October 7, 2023, did you not witness thousands of your supposed allies, the educated secular progressives on “the right side of history,” cheer the sadistic mass-murder of Jews—or at least, make endless excuses for those who did? Did this not destabilize your entire worldview? Will you actually vote for a party half of which seems at peace with the prospect of your family members’ physical annihilation? Or will you finally see who your real friends now are: Arkansas MAGA hillbillies who pray for your people’s survival?
A: Ah, this is your first slash that’s actually drawn blood. I won’t pretend that the takeover of part of the US progressive coalition by literal Hamasniks hasn’t been one of the most terrifying experiences of my life. Yes, if I had to be ruled by either (a) a corrupt authoritarian demagogue or (b) an idiot college student chanting for “Intifada Revolution,” I’d be paralyzed. So it’s lucky that I don’t face that choice! I get to vote, once more, for a rather boring mainstream Democrat—alongside at least 70% of American Jews. The idea of Harris as an antisemite would be ludicrous even if she didn’t have a Jewish husband or wasn’t strongly considering a pro-Israel Jew as her running mate.
Q11: Sure, Kamala Harris might mouth all the right platitudes about Israel having a right to defend itself, but she’ll constantly pressure Israel to make concessions to Hamas and Hezbollah. She’ll turn a blind eye to Iran’s imminent nuclearization. Why don’t you stay up at night worrying that, if you vote for a useful idiot like her, you’ll have Israel’s annihilation and a second Holocaust on your conscience forever?
A: Look, oftentimes—whenever, for example, I’m spending hours reading anti-Zionists on Twitter—I feel like there’s no limit to how intensely Zionist I am. On reflection, though, there is a limit. Namely, I’m not going to be more Zionist than the vast majority of my Israeli friends and colleagues—the ones who served in the IDF, who in some cases did reserve duty in Gaza, who prop up the Israeli economy with their taxes, and who will face the consequences of whatever happens more directly than I will. With few exceptions, these friends despise the Trump/Bibi alliance with white-hot rage, and they desperately want more moderate leadership in both countries.
Q12: Suppose I concede that Kamala is OK on Israel. We both know that she’s not the future of the Democratic Party, any more than Biden is. The future is what we all saw on campuses this spring. “Houthis Houthis make us proud, turn another ship around.” How can you vote for a party whose rising generation seems to want you and your family dead?
A: Let me ask you something. When Trump won in 2016, did that check the power of the campus radicals? Or as Scott Alexander prophesied at the time, did it energize and embolden them like nothing else, by dramatically confirming their theology of a planet held hostage by the bullying, misogynistic rich white males? I fundamentally reject your premise that, if I’m terrified of crazy left-wing extremists, then a good response is to vote for the craziest right-wing extremists I can find, in hopes that the two will somehow cancel each other out. Instead I should support a coherent Enlightenment alternative to radicalism, or the closest thing to that available.
Q13: Even leaving aside Israel, how can you not be terrified by what the Left has become? Which side denounced you on social media a decade ago, as a misogynist monster who wanted all women to be his sex slaves? Which side tried to ruin your life and career? Did we, the online rightists, do that? No. We did not. We did nothing worse to you than bemusedly tell you to man up, grow a pair, and stop pleading for sympathy from feminists who will hate you no matter what.
A: I’ll answer with a little digression. Back in 2017, when Kamala Harris was in the Senate, her office invited me to DC to meet with them to provide advice about the National Quantum Initiative Act, which Kamala was then spearheading. Kamala herself sent regrets that she couldn’t meet me, because she had to be at the Kavanaugh hearings. I have (nerdy, male) friends who did meet her about tech policy and came away with positive impressions.
And, I dunno, does that sound like someone who wants me dead for the crime of having been born a nerdy heterosexual male? Or having awkwardly and ineptly asked women on dates, including the one who became my wife? OK, maybe Amanda Marcotte wants me dead for those crimes. Maybe Arthur Chu does (is he still around?). Good that they’re not running for president then.
Q14: Let me try one more time to show you how much your own party hates you. Which side has been at constant war against the SAT and other standardized tests, and merit-based college admissions, and gifted programs, and academic tracking and acceleration, and STEM magnet schools, and every single other measure by which future young Scott Aaronsons (and Saket Agrawals) might achieve their dreams in life? Has that been our side, or theirs?
A: To be honest, I haven’t seen the Trump or Harris campaigns take any position on any of these issues. Even if they did, there’s very little that the federal government can do: these battles happen in individual states and cities and counties and universities. So I’ll vote for Harris while continuing to advocate for what I think is right in education policy.
Q15: Can you not see that Kamala Harris is a vapid, power-seeking bureaucratic machine—that she has no fixed principles at all? For godsakes, she all but condemned Biden as a racist in the 2020 primary, then agreed to serve as his running mate!
A: I mean, she surely has more principles than Vance does. As far as I can tell, for example, she’s genuinely for abortion rights (as I am). Even if she believed in nothing, though, better a cardboard cutout on which values I recognize are written, than a flesh-and-blood person shouting values that horrify me.
Q16: What, if anything, could Republicans do to get you to vote for them?
A: Reject all nutty conspiracy theories. Fully, 100% commit to the peaceful transfer of power. Acknowledge the empirical reality of human-caused climate change, and the need for both technological and legislative measures to slow it and mitigate its impacts. Support abortion rights, or at least a European-style compromise on abortion. Republicans can keep the anti-wokeness stuff, which actually seems to have become their defining issue. If they do all that, and also the Democrats are taken over by frothing radicals who want to annihilate the state of Israel and abolish the police … that’s, uh, probably the point when I start voting Republican.
Q17: Aha, so you now admit that there exist conceivable circumstances that would cause you to vote Republican! In that case, why did you style yourself “Never-Trump From Here to Eternity”?
A: Tell you what, the day the Republicans (and Trump himself?) repudiate authoritarianism and start respecting election outcomes, is the day I’ll admit my title was hyperbolic.
Q18: In the meantime, will you at least treat us Trump supporters with civility and respect?
A: Not only does civil disagreement not compromise any of my values, it is a value to which I think we should all aspire. And to whatever extent I’ve fallen short of that ideal—even when baited into it—I’m sorry and I’ll try to do better. Certainly, age and experience have taught me that there’s hardly anyone so far gone that I can’t find something on which I agree with them, while disagreeing with most of the rest of the world.
The news these days feels apocalyptic to me—as if we’re living through, if not the last days of humanity, then surely the last days of liberal democracy on earth.
All the more reason to ignore all of that, then, and blog instead about the notorious Busy Beaver function! Because holy moly, what news have I got today. For lovers of this super-rapidly-growing sequence of integers, I'm honored to announce the biggest Busy Beaver development that there's been since 1983, when I slept in a crib and you booted up your computer using a 5.25-inch floppy. That was the year when Allen Brady determined that BusyBeaver(4) was equal to 107. (Tibor Radó, who invented the Busy Beaver function in the 1960s, quickly proved with his student Shen Lin that the first three values were 1, 6, and 21 respectively. The fourth value was harder.)
Only now, after an additional 41 years, do we know the fifth Busy Beaver value. Today, an international collaboration called bbchallenge is announcing that it’s determined, and even formally verified using the Coq proof system, that BB(5) is equal to 47,176,870—the value that’s been conjectured since 1990, when Heiner Marxen and Jürgen Buntrock discovered a 5-state Turing machine that runs for exactly 47,176,870 steps before halting, when started on a blank tape. The new bbchallenge achievement is to prove that all 5-state Turing machines that run for more steps than 47,176,870, actually run forever—or in other words, that 47,176,870 is the maximum finite number of steps for which any 5-state Turing machine can run. That’s what it means for BB(5) to equal 47,176,870.
For more on this story, see Ben Brubaker’s superb article in Quanta magazine, or bbchallenge’s own announcement. For more background on the Busy Beaver function, see my 2020 survey, or my 2017 big numbers lecture, or my 1999 big numbers essay, or the Googology Wiki page, or Pascal Michel’s survey.
The difficulty in pinning down BB(5) was not just that there are a lot of 5-state Turing machines (16,679,880,978,201 of them to be precise, although symmetries reduce the effective number). The real difficulty is, how do you prove that some given machine runs forever? If a Turing machine halts, you can prove that by simply running it on your laptop until halting (at least if it halts after a “mere” ~47 million steps, which is child’s-play). If, on the other hand, the machine runs forever, via some never-repeating infinite pattern rather than a simple infinite loop, then how do you prove that? You need to find a mathematical reason why it can’t halt, and there’s no systematic method for finding such reasons—that was the great discovery of Gödel and Turing nearly a century ago.
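The halting direction really is that easy. Below is a minimal sketch of a step-bounded simulator in Python; the encoding, and the convention that an undefined transition counts as the halting step, are my simplifications rather than bbchallenge's actual tooling. As a sanity check it runs the standard 2-state champion, matching the BB(2) = 6 in the sequence above:

    def run_tm(delta, max_steps):
        """Simulate a Turing machine started on a blank tape.
        delta maps (state, symbol) -> (write, move, next_state); an undefined
        transition counts as the halting step. Returns the step count if the
        machine halts within max_steps, else None (which proves nothing!)."""
        tape, pos, state, steps = {}, 0, "A", 0
        while steps < max_steps:
            steps += 1
            action = delta.get((state, tape.get(pos, 0)))
            if action is None:
                return steps                 # halted
            write, move, state = action
            tape[pos] = write
            pos += 1 if move == "R" else -1
        return None                          # out of patience, no conclusion

    # The standard 2-state Busy Beaver champion (halting transition left undefined):
    bb2 = {("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
           ("B", 0): (1, "L", "A")}
    print(run_tm(bb2, 100))                  # -> 6, i.e. BB(2)

Getting a number back is a proof of halting; getting None back proves nothing, which is exactly the hard part.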
More precisely, the Busy Beaver function grows faster than any function that can be computed, and we know that because if a systematic method existed to compute arbitrary BB(n) values, then we could use that method to determine whether a given Turing machine halts (if the machine has n states, just check whether it runs for more than BB(n) steps; if it does, it must run forever). This is the famous halting problem, which Turing proved to be unsolvable by finite means. The Busy Beaver function is Turing-uncomputability made flesh, a finite function that scrapes the edge of infinity.
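Here is that reduction as code. For machines of at most 5 states it is, as of today, genuinely runnable, since the values quoted in this post stand in for the uncomputable oracle (run_tm is the bounded simulator sketched above):

    KNOWN_BB = {1: 1, 2: 6, 3: 21, 4: 107, 5: 47_176_870}  # values quoted in this post

    def halts(delta, n_states):
        """Decide the halting problem for machines of at most 5 states.
        For general n this would need BB(n) itself, an uncomputable oracle,
        which is exactly why no such procedure exists in general."""
        bound = KNOWN_BB[n_states]
        return run_tm(delta, max_steps=bound) is not None  # past BB(n) means forever

For n = 6 the table simply has no entry, and filling it in is precisely the uncomputable part.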
There’s also a more prosaic issue. Proofs that particular Turing machines run forever tend to be mind-numbingly tedious. Even supposing you’ve found such a “proof,” why should other people trust it, if they don’t want to spend days staring at the outputs of your custom-written software?
And so for decades, a few hobbyists picked away at the BB(5) problem. One, who goes by the handle “Skelet”, managed to reduce the problem to 43 holdout machines whose halting status was still undetermined. Or maybe only 25, depending who you asked? (And were we really sure about the machines outside those 43?)
The bbchallenge collaboration improved on the situation in two ways. First, it demanded that every proof of non-halting be vetted carefully. While this went beyond the original mandate, a participant named “mxdys” later upped the standard to fully machine-verifiable certificates for every non-halting machine in Coq, so that there could no longer be any serious question of correctness. (This, in turn, was done via “deciders,” programs that were crafted to recognize a specific type of parameterized behavior.) Second, the collaboration used an online forum and a Discord server to organize the effort, so that everyone knew what had been done and what remained to be done.
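For flavor, here is the most naive decider imaginable, purely my own illustration and far weaker than anything bbchallenge actually deployed: it certifies non-halting only for machines that eventually revisit an exact configuration, at which point determinism turns "runs forever" into a theorem:

    def naive_decider(delta, max_steps=10_000):
        """Return 'halts', 'runs forever', or 'undecided'.
        Sound but weak: it only catches machines that repeat an exact
        configuration (state, head position, written tape)."""
        tape, pos, state = {}, 0, "A"
        seen = set()
        for _ in range(max_steps):
            config = (state, pos, tuple(sorted(tape.items())))
            if config in seen:
                return "runs forever"        # deterministic repeat => provable loop
            seen.add(config)
            action = delta.get((state, tape.get(pos, 0)))
            if action is None:
                return "halts"
            write, move, state = action
            tape[pos] = write
            pos += 1 if move == "R" else -1
        return "undecided"                   # all the interesting machines land here

The real deciders recognize far subtler kinds of non-repeating infinite behavior; this sketch only conveys the shape of the idea.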
Despite this, it was far from obvious a priori that the collaboration would succeed. What if, for example, one of the 43 (or however many) Turing machines in the holdout set turned out to encode the Goldbach Conjecture, or one of the other great unsolved problems of number theory? Then the final determination of BB(5) would need to await the resolution of that problem. (We do know, incidentally, that there’s a 27-state Turing machine that encodes Goldbach.)
But apparently the collaboration got lucky. Coq proofs of non-halting were eventually found for all the 5-state holdout machines.
As a sad sidenote, Allen Brady, who determined the value of BB(4), apparently died just a few days before the BB(5) proof was complete. He was doubtful that BB(5) would ever be known. The reason, he wrote in 1988, was that “Nature has probably embedded among the five-state holdout machines one or more problems as illusive as the Goldbach Conjecture. Or, in other terms, there will likely be nonstopping recursive patterns which are beyond our powers of recognition.”
Maybe I should say a little at this point about what the 5-state Busy Beaver—i.e., the Marxen-Buntrock Turing machine that we now know to be the champion—actually does. Interpreted in English, the machine iterates a certain integer function g, which is defined by
g(x) = (5x+18)/3 if x = 0 (mod 3),
g(x) = (5x+22)/3 if x = 1 (mod 3),
g(x) = HALT if x = 2 (mod 3).
Starting from x=0, the machine computes g(0), g(g(0)), g(g(g(0))), and so forth, halting if and when it reaches … well, HALT. The machine runs for millions of steps because it so happens that this iteration eventually reaches HALT, but only after a while:
0 → 6 → 16 → 34 → 64 → 114 → 196 → 334 → 564 → 946 → 1584 → 2646 → 4416 → 7366 → 12284 → HALT.
(And also, at each iteration, the machine runs for a number of steps that grows like the square of the number x.)
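Transcribed into a few lines of Python (my own rendering of the map above), the iteration reproduces the quoted trajectory exactly:

    def g(x):
        if x % 3 == 0:
            return (5 * x + 18) // 3
        if x % 3 == 1:
            return (5 * x + 22) // 3
        return None                  # x = 2 (mod 3): HALT

    x, trajectory = 0, ["0"]
    while (x := g(x)) is not None:
        trajectory.append(str(x))
    print(" -> ".join(trajectory) + " -> HALT")
    # 0 -> 6 -> 16 -> 34 -> ... -> 7366 -> 12284 -> HALT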
Some readers might be reminded of the Collatz Conjecture, the famous unsolved problem about whether, if you repeatedly replace a positive integer x by x/2 if x is even or 3x+1 if x is odd, you’ll always eventually reach x=1. As Scott Alexander would say, this is not a coincidence because nothing is ever a coincidence. (Especially not in math!)
It’s a fair question whether humans will ever know the value of BB(6). Pavel Kropitz discovered, a couple years ago, that BB(6) is at least 10^10^10^10^10^10^10^10^10^10^10^10^10^10^10 (i.e., 10 raised to itself 15 times). Obviously Kropitz didn’t actually run a 6-state Turing machine for that number of steps until halting! Instead he understood what the machine did—and it turned out to apply an iterative process similar to the g function above, but this time involving an exponential function. And the process could be proven to halt after ~15 rounds of exponentiation.
Meanwhile Tristan Stérin, who coordinated the bbchallenge effort, tells me that a 6-state machine was recently discovered that “iterates the Collatz-like map {3x/2, (3x-1)/2} from the number 8 and halts if and only if the number of odd terms ever gets bigger than twice the number of even terms.” This shows that, in order to determine the value of BB(6), one would first need to prove or disprove the Collatz-like conjecture that that never happens.
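Here is that condition as a small experiment. Two assumptions are mine: the 3x/2 branch applies to even x and the (3x-1)/2 branch to odd x (the only assignment that keeps everything integral), and terms are counted from the first step after 8:

    def check_conjecture(steps=10_000):
        """Iterate the Collatz-like map from 8, watching the halting condition:
        the 6-state machine halts iff #odd terms ever exceeds 2 * #even terms."""
        x, odd, even = 8, 0, 0
        for _ in range(steps):
            if x % 2:
                odd += 1
                x = (3 * x - 1) // 2
            else:
                even += 1
                x = 3 * x // 2
            if odd > 2 * even:
                return "halting condition reached!"
        return f"no halt in {steps} steps (odd/even ratio {odd / even:.3f})"

    print(check_conjecture())

A run like this can only gather evidence; proving the ratio stays below 2 forever is the part that would have to precede any determination of BB(6).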
Basically, if and when artificial superintelligences take over the world, they can worry about the value of BB(6). And then God can worry about the value of BB(7).
I first learned about the BB function in 1996, when I was 15 years old, from a book called The New Turing Omnibus by A. K. Dewdney. From what I gather, Dewdney would go on to become a nutty 9/11 truther. But that’s irrelevant to the story. What matters was that his book provided my first exposure to many of the key concepts of computer science, and probably played a role in my becoming a theoretical computer scientist at all.
And of all the concepts in Dewdney’s book, the one I liked the most was the Busy Beaver function. What a simple function! You could easily explain its definition to Archimedes, or Gauss, or any of the other great mathematicians of the past. And yet, by using it, you could name definite positive integers (BB(10), for example) incomprehensibly larger than any that they could name.
It was from Dewdney that I learned that the first four Busy Beaver numbers were the unthreatening-looking 1, 6, 21, and 107 … but then that the fifth value was already unknown (!!), and at any rate at least 47,176,870. I clearly remember wondering whether BB(5) would ever be known for certain, and even whether I might be the one to determine it. That was almost two-thirds of my life ago.
As things developed, I played no role whatsoever in the determination of BB(5) … except for this. Tristan Stérin tells me that reading my survey article, The Busy Beaver Frontier, was what inspired him to start and lead the bbchallenge collaboration that finally cracked the problem. It’s hard to express how gratified that makes me.
Why care about determining particular values of the Busy Beaver function? Isn’t this just a recreational programming exercise, analogous to code golf, rather than serious mathematical research?
I like to answer that question with another question: why care about humans landing on the moon, or Mars? Those otherwise somewhat arbitrary goals, you might say, serve as a hard-to-fake gauge of human progress against the vastness of the cosmos. In the same way, the quest to determine the Busy Beaver numbers is one concrete measure of human progress against the vastness of the arithmetical cosmos, a vastness that we learned from Gödel and Turing won’t succumb to any fixed procedure. The Busy Beaver numbers are just … there, Platonically, as surely as 13 was prime long before the first caveman tried to arrange 13 rocks into a nontrivial rectangle and failed. And yet we might never know the sixth of these numbers and only today learned the fifth.
Anyway, huge congratulations to the bbchallenge team on their accomplishment. At a terrifying time for the world, I’m happy that, whatever happens, at least I lived to see this.
Dana, the kids, and I got back to the US last week after a month spent in England and then Israel. We decided to visit Israel because … uhh, we heard there’s never been a better time.
We normally go every year to visit Dana's family and our many friends there, and to give talks. Various well-meaning friends suggested that maybe we should cancel or postpone this year—given, you know, the situation. To me, though, the situation felt like all the more reason to go. To make Israel seem more and more embattled, dangerous, isolated, abnormal, like not an acceptable place to visit (much less live), in order to crater its economy, demoralize its population, and ultimately wipe it from the face of the earth … that is explicitly much of the world's game plan right now, laid out with shocking honesty since October 7 (a day that also showed us what the "decolonization" will, concretely, look like). So, if I oppose this plan, then how could I look myself in the mirror while playing my tiny part in it? Shouldn't I instead raise a middle finger to those who'd murder my family, and go?
Besides supporting our friends and relatives, though, I wanted to see the post-October-7 reality for myself, rather than just spending hours per day reading about it on social media. I wanted to form my own impression of the mood in Israel: fiercely determined? angry? hopeless? just carrying on like normal?
Anyway, in two meeting-packed weeks, mostly in Tel Aviv but also in Jerusalem, Haifa, and Be’er Sheva, I saw stuff that could support any of those narratives. A lot was as I’d expected, but not everything. In the rest of this post, I’ll share eleven observations:
(1) This presumably won’t shock anyone, but in post-October-7 Israel, you indeed can’t escape October 7. Everywhere you look, on every building, in every lobby, hanging from every highway overpass, there are hostage posters and “Bring Them Home Now” signs and yellow ribbons—starting at the airport, where every single passenger is routed through a long corridor of hostage posters, each one signed and decorated by the hostage’s friends and family. It sometimes felt as though Yad Vashem had expanded to encompass the entire country. Virtually everyone we talked to wanted to share their stories and opinions about the war, most of all their depression and anger. While there was also plenty of discussion about quantum error mitigation and watermarking of large language models and local family events, no one even pretended to ignore the war.
(2) Having said that, the morning after we landed, truthfully, the first thing that leapt out at me wasn’t anything to do with October 7, hostages, or Gaza. It was the sheer number of children playing outside, in any direction you looked. Full, noisy playgrounds on block after block. It’s one thing to know intellectually that Israel has by far the highest birthrate of any Western country, another to see it for yourself. The typical secular family probably has three kids; the typical Orthodox family has more. (The Arab population is of course also growing rapidly, both in Israel and in the West Bank and Gaza.) New apartment construction is everywhere you look in Tel Aviv, despite building delays caused by the war. And it all seems perfectly normal … unless you’ve lived your whole life in environments where 0.8 or 1.2 children per couple is the norm.
This, of course, has giant implications for anyone interested in Israel’s future. It’s like, a million Israeli leftists could get fed up and flee to the US or Canada or Switzerland, and Israel would still have a large and growing Jewish population—because having a big family is “just what people do” in a state that was founded to defy the Holocaust. In particular: anyone who dreams of dismantling the illegal, settler-colonial, fascist Zionist ethnostate, and freeing Palestine from river to sea, had better have some plan for what they’re going to do with all these millions of young Jews, who don’t appear to be going anywhere.
(3) The second thing I noticed was the heat—comparable to the Texas summer heat that we try to escape when possible. Because of the roasting sun, our own two pampered offspring mostly refused to go outside during daytime, and we mostly met friends indoors. I more than once had the dark thought that maybe Israel will survive Hamas, Hezbollah, Iran, and its own Jewish extremists … only to be finished off in the end (along with much of the rest of the planet) by global warming. I wonder whether Israel will manage to engineer its way out of the crisis, as it dramatically engineered its way out of its water crisis via desalination. The Arab petrostates have been trying to engineer their way out of the Middle East’s increasingly Mercury-like climate, albeit with decidedly mixed results.
(4) But nu, what did our Israeli friends say about the war? Of course it’s a biased sample, because our friends are mostly left-wing academics and tech workers. But, at risk of overgeneralizing: they’re unhappy. Very, very unhappy. As for Bibi and his far-right yes-men? Our friends’ rage at them was truly a sight to behold. American progressives are, like, mildly irked by Trump in comparison. Yes, our friends blame Bibi for the massive security and intelligence failures that allowed October 7 to happen. They blame him for dragging out the war to stave off elections. They blame him for empowering the contemptible Ben-Gvir and Smotrich. They blame him for his failure to bring back the remaining hostages. Most of all, they blame him for refusing even to meet with the hostage families, and more broadly, for evading responsibility for all that he did wrong, while arrogating credit for any victories (like the rescue of Noa Argamani).
(5) One Israeli friend offered to take me along to the giant anti-Bibi rally that now happens every Saturday night in Azrieli Center in Tel Aviv. (She added that, if I left before 9pm, it would reduce the chances of the police arresting me.) As the intrepid blogger-investigator I am, of course I agreed.
While many of the protesters simply called for new elections to replace Netanyahu (a cause that I 3000% support), others went further, demanding a deal to free the hostages and an immediate end to the war (even if, as they understood, that would leave Hamas in power).
Watching the protesters, smelling their pot smoke that filled the air, I was seized by a thought: these Israeli leftists actually see eye-to-eye with the anti-Israel American leftists on a huge number of issues. In a different world, they could be marching together as allies. Except, of course, for one giant difference: namely, the Tel Aviv protesters are proudly waving Israeli flags (sometimes modified to add anti-Bibi images, or to depict the Star of David “crying”), rather than burning or stomping on those flags. They’re marching to save the Israel that they know and remember, rather than to destroy it.
(6) We did meet one ultra-right-wing (and Orthodox) academic colleague. He was virtually the only person we met on this trip who seemed cheerful and optimistic about Israel’s future. He brought me to his synagogue to celebrate the holiday of Shavuot, while he himself stood guarding the door of the synagogue with a gargantuan rifle (his volunteer duty since October 7). He has six kids.
(7) Again and again, our secular liberal friends told us they’re thinking about moving from Israel, because if the Bibi-ists entrench their power (and of course the demographics are trending in that direction), then they don’t see that the country has any worthwhile future for them or their children. Should this be taken more seriously than the many Americans who promise that this time, for real, they’ll move to Canada if Trump wins? I’m not sure. I can only report what I heard.
(8) At the same time, again and again I got the following question from Israelis (including the leftist ones): how bad is the situation for Jews in the US? Have the universities been taken over by militant anti-Zionists, like it shows in the news? I had to answer: it’s complicated. Because I live my life enbubbled in the STEM field of computer science, surrounded by friends and colleagues of many backgrounds, ethnicities, religions, and political opinions who are thoughtful and decent (otherwise, why would they be my friends and colleagues?), I’m able to live a very nice life even in the midst of loud protesters calling to globalize the intifada against my family.
If, on the other hand, I were in a typical humanities department? Yeah, then I’d be pretty terrified. My basic options would be to (a) shut up about my (ironically) moderate, middle-of-the-road opinions on Israel/Palestine, such as support for the two-state solution; (b) live a miserable and embattled existence; or (c) pack up and move, for example to Israel.
An astounding irony right now is that, just as Israeli leftists are talking about moving from Israel, some of my American Jewish friends have talked to me about moving to Israel, to escape a prejudice that they thought died with their grandparents. I don’t know where the grass is actually greener (or is it brown everywhere?). Nor do I know how many worriers will actually follow through. What’s clear is that, both in Israel and in the diaspora, Jews are feeling an existential fear that they haven’t felt for generations.
(9) Did I fear for my own family’s safety during the trip? Not really. Maybe I should have. When we visited Haifa, we found that GPS was scrambled all across northern Israel, to make targeting harder for Hezbollah missiles. As a result, we couldn’t use Google Maps, got completely lost driving, and had to change plans with our friends. For the first time, now I really feel angry at Hezbollah: they made my life worse and it’s personal!
The funniest part, though, was how the scrambling was implemented: when you opened Google Maps anywhere in the north, it told you that you were in Beirut. It then dutifully gave you walking or driving directions to wherever you were going in Israel, passing through Syria close to Damascus (“warning: this route passes through multiple countries”).
(10) The most darkly comical thing that I heard on the entire trip: “oh, no, I don’t object in the slightest if the anti-Zionists want to kill us all. I only object if they want to kill us because of an incorrect understanding of the relevant history.” Needless to say, this was a professor.
(11) After my two-week investigation, what grand insight can I offer about Israel’s future? Not much, but maybe this: I think we can definitively rule out the scenario where Israel, having been battered by October 7, and bracing itself to be battered worse by Hezbollah, just sort of … withers away and disappears. Yes, Israel might get hotter, more crowded, more dangerous, more right-wing, and more Orthodox. But it will stay right where it is, unless and until its enemies destroy it in a cataclysmic war. You can’t scare people away, break their will, if they believe they have nowhere else on the planet to go. You can only kill them or else live next to them in peace, as the UN proposed in 1947 and as Oslo proposed in the 1990s. May we live to see peace.
Anyway, on that pleasant note, time soon to tune in to the Trump/Biden debate! I wonder who these two gentlemen are, and what they might stand for?
[Source: Igor Pak's blog, https://igorpak.wordpress.com/tag/peter-sarnak/]
The conjectures also vary in attitude. Like a finish line ribbon they all appear equally vulnerable to an outsider, but in fact differ widely from race to race. Some are eminently reachable, the only question being who will get there first (think 100 meter dash). Others are barely on the horizon, requiring both great effort, variety of tools, and an extended time commitment (think ironman triathlon). The most celebrated third type are like those Sci-Fi space expeditions in requiring hundreds of years multigenerational commitments, often losing contact with civilization it left behind. And we can’t forget the romantic fourth type — like the North Star, no one actually wants to reach them, as they are largely used for navigation, to find a direction in unchartered waters.
Now, conjectures famously provide a foundation of the scientific method, but that’s not at all how we actually think of them in mathematics. I argued back in this pointed blog post that citations are the most crucial for the day to day math development, so one should take utmost care in making references. While this claim is largely uncontroversial and serves as a raison d’être for most GoogleScholar profiles, conjectures provide a convenient idealistic way out. Thus, it’s much more noble and virtuous to say “I dedicated my life to the study of the XYZ Conjecture” (even if they never publish anything), than “I am working hard writing so many papers to gain respect of my peers, get a promotion, and provide for my family“. Right. Obviously…
But given this apparent (true or perceived) importance of conjectures, are you sure you are using them right? What if some/many of these conjectures are actually wrong, what then? Should you be flying that starship if there is no there there? An idealist would argue something like “it’s a journey, not a destination“, but I strongly disagree. Getting closer to the truth is actually kind of important, both as a public policy and on an individual level. It is thus pretty important to get it right where we are going.
What are conjectures in mathematics?
That's a stupid question, right? Conjectures are mathematical claims whose validity we are trying to ascertain. Is that all? Well, yes, if you don't care if anyone will actually work on the conjecture. In other words, something about the conjecture needs to be interesting and inspiring.
What makes a conjecture interesting?
This is a hard question to answer because it is as much psychological as it is mathematical. A typical answer would be “oh, because it’s old/famous/beautiful/etc.” Uhm, ok, but let’s try to be a little more formal.
One typically argues “oh, that’s because this conjecture would imply [a list of interesting claims and known results]”. Well, ok, but this is self-referential. We already know all those “known results”, so no need to prove them again. And these “claims” are simply other conjectures, so this is really an argument of the type “this conjecture would imply that conjecture”, so not universally convincing. One can argue: “look, this conjecture has so many interesting consequences”. But this is both subjective and unintuitive. Shouldn’t having so many interesting conjectural consequences suggest that perhaps the conjecture is too strong and likely false? And if the conjecture is likely to be false, shouldn’t this make it uninteresting?
Also, wouldn't it be interesting if you disprove a conjecture everyone believes to be true? In some sense, wouldn't it be even more interesting if until now everyone was simply wrong?
None of these are new ideas, of course. For example, faced with the need to justify the "great" BC conjecture, or rather 123 pages of survey on the subject (which is quite interesting and doesn't really need to be justified), the authors suddenly turned reflective. Mindful of the self-referential approach, which they quickly discard, they chose a different tactic:
We believe that the interest of a conjecture lies in the feeling of unity of mathematics that it entails. [M.P. Gomez Aparicio, P. Julg and A. Valette, “The Baum-Connes conjecture“, 2019]
Huh? Shouldn’t math be about absolute truths, not feelings? Also, in my previous blog post, I mentioned Noga Alon‘s quote that Mathematics is already “one unit“. If it is, why does it need a new “feeling of unity“? Or is that like one of those new age ideas which stop being true if you don’t reinforce them at every occasion?
If you are confused at this point, welcome to the club! There is no objective way to argue what makes certain conjectures interesting. It's all in our imagination. Nikolay Konstantinov once told me that "mathematics is a boring subject because every statement is equivalent to saying that some set is empty." He meant to be provocative rather than uninspiring. But the problem he is pointing to is quite serious.
What makes us believe a conjecture is true?
We already established that in order to argue that a conjecture is interesting we need to argue it's also true, or at least we want to believe it to be true to have all those consequences. Note, however, that we argue that a conjecture is true in exactly the same way we argue it's interesting: by showing that it holds in some special cases, and that it would imply other conjectures which are believed to be true because they are also checked in various special cases. So in essence, this gives "true = interesting" in most cases. Right?
This is where it gets complicated. Say, you are working on the “abc conjecture” which may or may not be open. You claim that it has many consequences, which makes it both likely true and interesting. One of them is the negative solution to the Erdős–Ulam problem about existence of a dense set in the plane with rational pairwise distances. But a positive solution to the E-U problem implies the Harborth’s conjecture (aka the “integral Fáry problem“) that every graph can be drawn in the plane with rational edge lengths. So, counterintuitively, if you follow the logic above shouldn’t you be working on a positive solution to Erdős–Ulam since it would both imply one conjecture and give a counterexample to another? For the record, I wouldn’t do that, just making a polemical point.
I am really hoping you see where I am going. Since there is no objective way to tell if a conjecture is true or not, and what exactly is so interesting about it, shouldn’t we discard our biases and also work towards disproving the conjecture just as hard as trying to prove it?
What do people say?
It’s worth starting with a general (if slightly poetic) modern description:
In mathematics, [..] great conjectures [are] sharply formulated statements that are most likely true but for which no conclusive proof has yet been found. These conjectures have deep roots and wide ramifications. The search for their solution guides a large part of mathematics. Eternal fame awaits those who conquer them first. Remarkably, mathematics has elevated the formulation of a conjecture into high art. [..] A well-chosen but unproven statement can make its author world-famous, sometimes even more so than the person providing the ultimate proof. [Robbert Dijkgraaf, The Subtle Art of the Mathematical Conjecture, 2019]
Karl Popper thought that conjectures are foundational to science, even if he somewhat idealized the efforts to disprove them:
[Great scientists] are men of bold ideas, but highly critical of their own ideas: they try to find whether their ideas are right by trying first to find whether they are not perhaps wrong. They work with bold conjectures and severe attempts at refuting their own conjectures. [Karl Popper, Heroic Science, 1974]
Here is how he reconciled somewhat the apparent contradiction:
On the pre-scientific level we hate the very idea that we may be mistaken. So we cling dogmatically to our conjectures, as long as possible. On the scientific level, we systematically search for our mistakes. [Karl Popper, quoted by Bryan Magee, 1971]
Paul Erdős was, of course, a champion of conjectures and open problems. He joked that the purpose of life is “proof and conjecture” and this theme is repeatedly echoed when people write about him. It is hard to overestimate his output, which included hundreds of talks titled “My favorite problems“. He wrote over 180 papers with collections of conjectures and open problems (nicely assembled by Zbl. Math.)
Peter Sarnak has a somewhat opposite point of view, as he believes one should be extremely cautious about stating a conjecture so people don’t waste time working on it. He said once, only half-jokingly:
Since we reward people for making a right conjecture, maybe we should punish those who make a wrong conjecture. Say, cut off their fingers. [Peter Sarnak, UCLA, c. 2012]
This is not an exact quote — I am paraphrasing from memory. Needless to say, I disagree. I don’t know how many fingers he wished Erdős should lose, since some of his conjectures were definitely disproved: one, two, three, four, five, and six. This is not me gloating, the opposite in fact. When you are stating hundreds of conjectures in the span of almost 50 years, having only a handful to be disproved is an amazing batting average. It would, however, make me happy if Sarnak’s conjecture is disproved someday.
Finally, there is a bit of a controversy whether conjectures are worth as much as theorems. This is aptly summarized in this quote about yet another champion of conjectures:
Louis J. Mordell [in his book review] questioned Hardy‘s assessment that Ramanujan was a man whose native talent was equal to that of Euler or Jacobi. Mordell [..] claims that one should judge a mathematician by what he has actually done, by which Mordell seems to mean, the theorems he has proved. Mordell’s assessment seems quite wrong to me. I think that a felicitous but unproved conjecture may be of much more consequence for mathematics than the proof of many a respectable theorem. [Atle Selberg, “Reflections Around the Ramanujan Centenary“, 1988]
So, what’s the problem?
Well, the way I see it, the efforts made towards proving vs. disproving conjectures are greatly out of balance. Despite all of Popper's high-minded claims about "severe attempts at refuting their own conjectures", I don't think there is much truth to that in modern math sciences. This does not mean that disproofs of famous conjectures aren't celebrated. Sometimes they are, see below. But it's clear to me that the proofs are celebrated more frequently, and to a much greater degree. I have only anecdotal evidence to support my claim, but bear with me.
Take prizes. Famously, Clay Math Institute gives $1 million for a solution of any of these major open problems. But look closely at the rules. According to item 5b, except for the P vs. NP problem and the Navier–Stokes Equation problem, it gives nothing ($0) for a disproof of these problems. Why, oh why?? Let's look into CMI's "primary objectives and purposes":
To recognize extraordinary achievements and advances in mathematical research.
So it sounds like CMI does not think that disproving the Riemann Hypothesis needs to be rewarded because this wouldn't "advance mathematical research". Surely, you are joking? Whatever happened to "the opposite of a profound truth may well be another profound truth"? Why does the CMI want to put its thumb on the scale and support only one side? Do they not want to find out the solution whatever it is? Shouldn't they be eager to dispense with the "wrong conjecture" so as to save numerous researchers from "advances to nowhere"?
I am sure you can see that my blood is boiling, but let’s proceed to the P vs. NP problem. What if it’s independent of ZFC? Clearly, CMI wouldn’t pay for proving that. Why not? It’s not like this kind of thing never happened before (see obligatory link to CH). Some people believe that (or at least they did in 2012), and some people like Scott Aaronson take this seriously enough. Wouldn’t this be a great result worthy of an award as much as the proof that P=NP, or at least a nonconstructive proof that P=NP?
If your head is not spinning hard enough, here is another amusing quote:
Of course, it’s possible that P vs. NP is unprovable, but that that fact itself will forever elude proof: indeed, maybe the question of the independence of P vs. NP is itself independent of set theory, and so on ad infinitum! But one can at least say that, if P vs. NP (or for that matter, the Riemann hypothesis, Goldbach’s conjecture, etc.) were proven independent of ZF, it would be an unprecedented development. [Scott Aaronson, P vs. NP, 2016].
Speaking of Goldbach’s Conjecture, the most talked about and the most intuitively correct statement in Number Theory that I know. In a publicity stunt, for two years there was a $1 million prize by a publishing house for the proof of the conjecture. Why just for the proof? I never heard of anyone not believing the conjecture. If I was the insurance underwriter for the prize (I bet they had one), I would allow them to use “for the proof or disproof” for a mere extra $100 in premium. For another $50 I would let them use “or independent of ZF” — it’s a free money, so why not? It’s such a pernicious idea of rewarding only one kind of research outcome!
Curiously, even for Goldbach’s Conjecture, there is a mild divergence of POVs on what the future holds. For example, Popper writes (twice in the same book!) that:
[On whether Goldbach’s Conjecture is ‘demonstrable’] We don’t know: perhaps we may never know, and perhaps we can never know. [Karl Popper, Conjectures and Refutations, 1963]
Ugh. Perhaps. I suppose anything can happen… For example, our civilizations can “perhaps” die out in the next 200 years. But is that likely? Shouldn’t the gloomy past be a warning, not a prediction of the future? The only thing more outrageously pessimistic is this theological gem of a quote:
Not even God knows the number of permutations of 1000 avoiding the 1324 pattern. [Doron Zeilberger, quoted here, 2005]
Thanks, Doron! What a way to encourage everyone! Since we know from numerical estimates that this number is ≈ 3.7 × 10^1017 (see this paper and this follow up), Zeilberger is suggesting that large pattern avoidance numbers are impossibly hard to compute precisely, already in the range of only about 1018 digits. I really hope he is proved wrong in his lifetime.
But I digress. What I mean to emphasize is that there are many ways a problem can be resolved. Yet some outcomes are considered more valuable than others. Shouldn't the research achievements be rewarded, not the desired outcome? Here is yet another colorful opinion on this:
Given a conjecture, the best thing is to prove it. The second best thing is to disprove it. The third best thing is to prove that it is not possible to disprove it, since it will tell you not to waste your time trying to disprove it. That’s what Gödel did for the Continuum Hypothesis. [Saharon Shelah, Rutgers Univ. Colloqium, 2001]
Why do I care?
For one thing, disproving conjectures is part of what I do. Sometimes people are a little shy to unambiguously state them as formal conjectures, so they phrase them as questions or open problems, but then clarify that they believe the answer is positive. This is a distinction without a difference, or at least I don’t see any (maybe they are afraid of Sarnak’s wrath?) Regardless, proving their beliefs wrong is still what I do.
For example, here is my old blog post on my disproof of the Noonan–Zeilberger Conjecture (joint with Scott Garrabrant). And in this recent paper (joint with Danny Nguyen), we disprove in one big swoosh Barvinok's Problem, Kannan's Problem, and Woods's Conjecture. Just this year I disproved three conjectures:
The Kirillov–Klyachko Conjecture (2004) that the reduced Kronecker coefficients satisfy the saturation property (this paper, joint with Greta Panova).
The Brandolini et al. Conjecture (2019) that concrete lattice polytopes can multitile the space (this paper, joint with Alexey Garber).
Kenyon's Problem (c. 2005) that every integral curve in R^3 is a boundary of a PL surface comprised of unit triangles (this paper, joint with Alexey Glazyrin).
On top of that, just two months ago in this paper (joint with Han Lyu), we showed that the remarkable independence heuristic by I. J. Good for the number of contingency tables fails badly even for nearly all uniform marginals. This is not exactly a disproof of a conjecture, but it's close, since the heuristic was introduced back in 1950 and continues to work well in practice.
In addition, I am currently working on disproving two more old conjectures which will remain unnamed until the time we actually resolve them (which might never happen, of course). In summary, I am deeply vested in disproving conjectures. The reasons why are somewhat complicated (see some of them below). But whatever my reasons, I demand and naively fully expect that my disproofs be treated on par with proofs, regardless of whether this expectation bears any relation to reality.
My favorite disproofs and counterexamples:
There are many. Here are just a few, some famous and some not-so-famous, in historical order:
Fermat‘s conjecture (letter to Pascal, 1640) on primality of Fermat numbers, disproved by Euler (1747)
Tait’s conjecture (1884) on hamiltonicity of graphs of simple 3-polytopes, disproved by W.T. Tutte (1946)
General Burnside Problem (1902) on finiteness of periodic groups, resolved negatively by E.S. Golod (1964)
Keller’s conjecture (1930) on tilings with unit hypercubes, disproved by Jeff Lagarias and Peter Shor (1992)
Borsuk’s Conjecture (1932) on partitions of convex sets into parts of smaller diameter, disproved by Jeff Kahn and Gil Kalai (1993)
Hirsch Conjecture (1957) on the diameter of graphs of convex polytopes, disproved by Paco Santos (2010)
Woods’s conjecture (1972) on the covering radius of certain lattices, disproved by Oded Regev, Uri Shapira and Barak Weiss (2017)
Connes embedding problem (1976), resolved negatively by Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright and Henry Yuen (2020)
In all these cases, the disproofs and counterexamples didn’t stop the research. On the contrary, they gave a push to further (sometimes numerous) developments in the area.
Why should you disprove conjectures?
There are three reasons, of different nature and importance.
First, disproving conjectures is opportunistic. As mentioned above, people seem to try proving much harder than they try disproving. This creates niches of opportunity for an open-minded mathematician.
Second, disproving conjectures is beautiful. Let me explain. Conjectures tend to be rigid, as in “objects of the type pqr satisfy property abc.” People like me believe in the idea of “universality“. Some might call it “completeness” or even “Murphy’s law“, but the general principle is always the same. Namely: it is not sufficient that one wishes that all pqr satisfy abc to actually believe in the implication; rather, there has to be a strong reason why abc should hold. Barring that, pqr can possibly be almost anything, so in particular non-abc. While some would argue that non-abc objects are “ugly” or at least “not as nice” as abc, the idea of universality means that your objects can be of every color of the rainbow — nice color, ugly color, startling color, quiet color, etc. That kind of palette has its own sense of beauty, but it’s an acquired taste I suppose.
Third, disproving conjectures is constructive. It depends on the nature of the conjecture, of course, but one is often faced with the necessity to construct a counterexample. Think of this as an engineering problem of building some pqr which at the same time is not abc. Such construction, if at all possible, might be difficult, time consuming and computer assisted. But so what? What would you rather do: build a mile-high skyscraper (none exist yet) or prove that this is impossible? Curiously, in CS Theory both algorithms and (many) complexity results are constructive (you need gadgets). Even the GCT is partially constructive, although explaining that would take us a while.
What should the institutions do?
If you are an institution which awards prizes, stop with the legal nonsense: “We award […] only for a publication of a proof in a top journal”. You need to set up a scientific committee anyway, since otherwise it’s hard to tell sometimes if someone deserves a prize. With mathematicians you can expect anything anyway. Some would post two arXiv preprints, give a few lectures and then stop answering emails. Others would publish only in a journal where they are Editor-in-Chief. It’s stranger than fiction, really.
What you should do is say in the official rules: "We have [this much money] and an independent scientific committee which will award any progress on [this problem] partially or in full as they see fit." Then a disproof or an independence result will receive just as much as the proof (what's done is done, what else are you going to do with the money?) This would also allow some flexibility for partial solutions. Say, somebody proves Goldbach's Conjecture for integers > exp(exp(10^100000)), way way beyond computational powers for the remaining integers to be checked. I would give this person at least 50% of the prize money, leaving the rest for future developments of possibly many people improving on the bound. However, under the old prize rules such a person gets bupkes for their breakthrough.
What should the journals do?
In short, become more open to results of computational and experimental nature. If this sounds familiar, that's because it's a summary of Zeilberger's Opinions, viewed charitably. He is correct on this. This includes publishing results of the type "Based on computational evidence we believe in the following UVW conjecture" or "We develop a new algorithm which confirms the UVW conjecture for n < 13". These are still contributions to mathematics, and the journals should learn to recognize them as such.
To put this in the context of our theme, it is clear that a lot more effort has been placed on proofs than on finding counterexamples. However, in many areas of mathematics there are no small counterexamples, so a heavy computational effort is crucial for any hope of finding one. Such work is not as glamorous as traditional papers. But really, when it comes to standards, if a journal is willing to publish the study of something like the "null graphs", the ship has sailed for you…
Let me give you a concrete example where a computational effort is indispensable. The curious Lovász conjecture states that every finite connected vertex-transitive graph contains a Hamiltonian path. This conjecture has got to be false. It hits every red flag — there is really no reason why pqr = "vertex transitive" should imply abc = "Hamiltonian". The best lower bound for the length of the longest (self-avoiding) path is only about the square root of the number of vertices. In fact, even the original wording by Lovász shows he didn't believe the conjecture is true (also, I asked him and he confirmed).
Unfortunately, proving that some potential counterexample is not Hamiltonian is computationally difficult. I once had an idea of one (a nice cubic Cayley graph on “only” 3600 vertices), but Bill Cook quickly found a Hamiltonian cycle dashing my hopes (it was kind of him to look into this problem). Maybe someday, when the TSP solvers are fast enough on much larger graphs, it will be time to return to this problem and thoroughly test it on large Cayley graphs. But say, despite long odds, I succeed and find a counterexample. Would a top journal publish such a paper?
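To make the difficulty concrete, here is a toy Python sketch (mine, just for illustration) of the naive backtracking search one would run, shown on the Petersen graph, the famous vertex-transitive graph on 10 vertices which has a Hamiltonian path but no Hamiltonian cycle. This finishes instantly on 10 vertices and is hopeless on 3600; serious attempts need TSP-grade technology.

```python
# Petersen graph: outer 5-cycle (0..4), inner pentagram (5..9), spokes i -- i+5.
adj = {v: set() for v in range(10)}
for i in range(5):
    for a, b in [(i, (i + 1) % 5), (i, i + 5), (5 + i, 5 + (i + 2) % 5)]:
        adj[a].add(b)
        adj[b].add(a)

def hamiltonian_path(path):
    # Naive backtracking: extend the path by one unvisited neighbor at a time.
    if len(path) == len(adj):
        return path
    for v in sorted(adj[path[-1]] - set(path)):
        extended = hamiltonian_path(path + [v])
        if extended:
            return extended
    return None

print(hamiltonian_path([0]))  # finds a Hamiltonian path starting at vertex 0
```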
Editor’s dilemma
There are three real criteria for evaluating a solution of an open problem by the journal:
Is this an old, famous, or well-studied problem?
Are the tools interesting or innovative enough to be helpful in future studies?
Are the implications of the solution to other problems important enough?
Now let’s make a hypothetical experiment. Let’s say a paper is submitted to a top math journal which solves a famous open problem in Combinatorics. Further, let’s say somebody already proved it is equivalent to a major problem in TCS. This checks criteria 1 and 3. Until not long ago it would be rejected regardless, so let’s assume this is happening relatively recently.
Now imagine two parallel worlds, where in the first world the conjecture is proved on 2 pages using beautiful but elementary linear algebra, and in the second world the conjecture is disproved on a 2 page long summary of a detailed computational search. So in neither world do we have much to satisfy criterion 2. Now, a quiz: in which world will the paper be published?
You may have recognized the first world as the story of Hao Huang's elegant proof of the induced subgraphs of hypercubes conjecture, which implies the sensitivity conjecture. The Annals published it, I am happy to learn, in a welcome break with the past. But unless we are talking about some 200 year old famous conjecture, I can't imagine the Annals accepting a short computational paper in the second world. Indeed, it took a bit of a scandal to accept even the 400 year old Kepler's conjecture which was proved in a remarkable computational work.
Now think about this. Is any of that fair? Shouldn’t we do better as a community on this issue?
What do other people do?
Over the years I asked a number of people about the uncertainty created by the conjectures and what they do about it. The answers surprised me. Here I am paraphrasing them:
Some were dumbfounded: "What do you mean this conjecture could be false? It has to be true, otherwise nothing I am doing makes much sense."
Others were simplistic: “It’s an important conjecture. Famous people said it’s true. It’s my job to prove it.”
Third were defensive: “Do you really think this conjecture could be wrong? Why don’t you try to disprove it then? We’ll see who is right.”
Fourth were biblical: “I tend to work 6 days a week towards the proof and one day towards the disproof.”
Fifth were practical: “I work on the proof until I hit a wall. I use the idea of this obstacle to try constructing potential counterexamples. When I find an approach to discard such counterexamples, I try to generalize the approach to continue working on the proof. Continue until either side wins.”
If the last two seem sensible to you, that's because they are. However, I bet the fourth are just grandstanding — no way they actually do that. The fifth sounds great when it's possible, but that's exceedingly rare, in my opinion. We live in a technical age when proving new results often requires a great deal of effort and technology. You likely have tools and intuition to work in only one direction. Why would you want to waste time working in another?
What should you do?
First, remember to make conjectures. Every time you write a paper, tell a story of what you proved. Then tell a story of what you wanted to prove but couldn't. State it in the form of a conjecture. Don't be afraid to be wrong, or be right but oversharing your ideas. It's a downside, sure. But the upside is that your conjecture might prove very useful to others, especially young researchers. It might advance the area, or help you find a collaborator to resolve it.
Second, learn to check your conjectures computationally in many small cases. It's important to give supporting evidence so that others take your conjectures seriously (a toy example of such checking follows this list).
Third, learn to make experiments, explore the area computationally. That’s how you make new conjectures.
Fourth, understand yourself. Your skill, your tools. Your abilities like problem solving, absorbing information from the literature, or making bridges to other fields. Faced with a conjecture, use this knowledge to understand whether at least in principle you might be able to prove or disprove a conjecture.
Fifth, actively look for collaborators. Those who have skills, tools, or abilities you are missing. More importantly, they might have a different POV on the validity of the conjecture and how one might want to attack it. Argue with them and learn from them. | |||||
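For the second and third points, here is the sort of toy check I have in mind (a deliberately naive example of mine, not from any paper): verifying Goldbach's Conjecture for all even numbers below ten thousand takes a few lines and a moment of computer time.

```python
def is_prime(n):
    # Trial division; perfectly fine at toy scale.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_ok(n):
    # Is the even number n a sum of two primes?
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

assert all(goldbach_ok(n) for n in range(4, 10000, 2))
print("Goldbach verified for all even n < 10000")
```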
[Source: Scientific American, "Searching for Easy Answers to Hard Computational Questions" by John Pavlus, September 2012. Deck: Whether or not machines can quickly answer yes-or-no questions could affect everything from national security to the limits of human knowledge.]

On a snowy day in Princeton, N.J., in March 1956, a short, owlish-looking man named Kurt Gödel wrote his last letter to a dying friend. Gödel addressed John von Neumann formally even though the two had known each other for decades as colleagues at the Institute for Advanced Study in Princeton. Both men were mathematical geniuses, instrumental in establishing the U.S.'s scientific and military supremacy in the years after World War II. Now, however, von Neumann had cancer, and there was little that even a genius like Gödel could do except express a few overoptimistic pleasantries and then change the subject:
Dear Mr. von Neumann:
With the greatest sorrow I have learned of your illness…. As I hear, in the last months you have undergone a radical treatment and I am happy that this treatment was successful as desired, and that you are now doing better….
Since you now, as I hear, are feeling stronger, I would like to allow myself to write you about a mathematical problem, of which your opinion would very much interest me….
Gödel's description of this problem is utterly unintelligible to nonmathematicians. (Indeed, he may simply have been trying to take von Neumann's mind off of his illness by engaging in an acutely specialized version of small talk.) He wondered how long it would take for a hypothetical machine to spit out answers to a problem. What he concluded sounds like something out of science fiction:
If there really were [such] a machine … this would have consequences of the greatest importance. Namely, it would obviously mean that … the mental work of a mathematician concerning Yes-or-No questions could be completely replaced by a machine.
By “mental work,” Gödel didn't mean trivial calculations like adding 2 and 2. He was talking about the intuitive leaps that mathematicians take to illuminate entirely new areas of knowledge. Twenty-five years earlier Gödel's now famous incompleteness theorems had forever transformed mathematics. Could a machine be made to churn out similar world-changing insights on demand?
A few weeks after Gödel sent his letter, von Neumann checked into Walter Reed Army Medical Center in Washington, D.C., where he died less than a year later, never having answered his friend. But the problem would outlive both of them. Now known as P versus NP, Gödel's question went on to become an organizing principle of modern computer science. It has spawned an entirely new area of research called computational complexity theory—a fusion of mathematics, science and engineering that seeks to prove, with total certainty, what computers can and cannot do under realistic conditions.
But P versus NP is about much more than just the plastic-and-silicon contraptions we call computers. The problem has practical implications for physics and molecular biology, cryptography, national security, evolution, the limits of mathematics and perhaps even the nature of reality. This one question sets the boundaries for what, in theory, we will ever be able to compute. And in the 21st century the limits of computation look more and more like the limits of human knowledge itself.
The Bet
Michael Sipser was only a graduate student, but he knew someone would solve the P versus NP problem soon. He even thought he might be the one to do it. It was the fall of 1975, and he was discussing the problem with Leonard Adleman, a fellow graduate student in the computer science department at the University of California, Berkeley. “I had a fascination with P versus NP, had this feeling that I was somehow able to understand it in a way that went beyond the way everyone else seemed to be approaching it,” says Sipser, who is now head of the mathematics department at the Massachusetts Institute of Technology. He was so sure of himself that he made a wager that day with Adleman: P versus NP would be solved by the end of the 20th century, if not sooner. The terms: one ounce of pure gold.
Sipser's bet made a kind of poetic sense because P versus NP is itself a problem about how quickly other problems can be solved. Sometimes simply following a checklist of steps will get you to the end result in relatively short order. Think of grocery shopping: you tick off the items one by one until you reach the end of the list. Complexity theorists label these problems P, for “polynomial time,” which is a mathematically precise way of saying that no matter how long the grocery list becomes, the amount of time that it will take to tick off all the items will never grow at an unmanageable rate.
In contrast, many more problems may or may not be practical to solve by simply ticking off items on a list, but checking the solution is easy. A jigsaw puzzle is a good example: even though it may take effort to put together, you can recognize the right solution just by looking at it. Complexity theorists call these quickly checkable, “jigsaw puzzle–like” problems NP.
Four years before Sipser made his bet, a mathematician named Stephen Cook had proved that these two kinds of problems are related: every quickly solvable P problem is also a quickly checkable NP problem. The P versus NP question that emerged from Cook's insight—and that has hung over the field ever since—asks if the reverse is also true: Are all quickly checkable problems quickly solvable as well? Intuitively speaking, the answer seems to be no. Recognizing a solved jigsaw puzzle (“Hey, you got it!”) is hardly the same thing as doing all the work to find the solution. In other words, P does not seem to equal NP.
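A standard textbook illustration of the asymmetry (not one from Cook's paper) is the subset-sum problem: given a list of numbers, is there a subset that adds up to a target? Checking a proposed answer takes one line; the obvious way of finding one tries exponentially many candidates.

```python
from itertools import combinations

nums, target = [3, 34, 4, 12, 5, 2], 9

def verify(subset):
    # The quick "jigsaw check": does this candidate actually work?
    return sum(subset) == target

def solve():
    # Brute-force search: up to 2**len(nums) subsets in the worst case.
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if verify(subset):
                return subset
    return None

print(solve())  # (4, 5): easy to confirm, much harder to have found
```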
What fascinated Sipser was that nobody had been able to mathematically prove this seemingly obvious observation. And without a proof, a chance remained, however unlikely or strange, that all NP problems might actually be P problems in disguise. P and NP might be equal—and because computers can make short work of any problem in P, P equals NP would imply that computers' problem-solving powers are vastly greater than we ever imagined. They would be exactly what Gödel described in his letter to von Neumann: mechanical oracles that could efficiently answer just about any question put to them, so long as they could be programmed to verify the solution.
Sipser knew this outcome was vanishingly improbable. Yet proving the opposite, much likelier, case—that P is not equal to NP—would be just as groundbreaking.
Like Gödel's incompleteness theorems, which revealed that mathematics must contain true but unprovable propositions, a proof showing that P does not equal NP would expose an objective truth concerning the limitations of knowledge. Solving a jigsaw puzzle and recognizing that one is solved are two fundamentally different things, and there are no shortcuts to knowledge, no matter how powerful our computers get.
Proving a negative is always difficult, but Gödel had done it. So to Sipser, making his bet with Adleman, 25 years seemed like more than enough time to get the job done. If he couldn't prove that P did not equal NP himself, someone else would. And he would still be one ounce of gold richer.
Complicated Fast
Adleman shared Sipser's fascination, if not his confidence, because of one cryptic mathematical clue. Cook's paper establishing that P problems are all NP had also proved the existence of a special kind of quickly checkable type of problem called NP-complete. These problems act like a set of magic keys: if you find a fast algorithm for solving one of them, that algorithm will also unlock the solution to every other NP problem and prove that P equals NP.
There was just one catch: NP-complete problems are among the hardest anyone in computer science had ever seen. And once discovered, they began turning up everywhere. Soon after Cook's paper appeared, one of Adleman's mentors at Berkeley, Richard M. Karp, published a landmark study showing that 21 classic computational problems were all NP-complete. Dozens, then hundreds, soon followed. “It was like pulling a finger out of a dike,” Adleman says. Scheduling air travel, packing moving boxes into a truck, solving a Sudoku puzzle, designing a computer chip, seating guests at a wedding reception, playing Tetris and thousands of other practical, real-world problems have been proved to be NP-complete.
How could this tantalizing key to solving P versus NP seem so commonplace and so uncrackable at the same time? “That's why I was interested in studying the P versus NP problem,” says Adleman, who is now a professor at the University of Southern California. “The power and breadth of these computational questions just seemed deeply awesome. But we certainly didn't understand them. And it didn't seem like we would be understanding them anytime soon.” (Adleman's pessimism about P versus NP led to a world-changing invention: a few years after making his bet, Adleman and his colleagues Ronald Rivest and Adi Shamir exploited the seeming incommensurability of P and NP to create their eponymous RSA encryption algorithm, which remains in wide use for online banking, communications and national security applications.)
NP-complete problems are hard because they get complicated fast. Imagine you are a backpacker planning a trip through a number of cities in Europe, and you want a route that takes you through each city while minimizing the total distance you will need to travel. How do you find the best route? The simplest method is just to try out each possibility. With five cities to visit, you need to check only 12 possible routes. With 10 cities, the number of possible routes mushrooms to more than 180,000. At 60 cities, the number of paths exceeds the number of atoms in the known universe. This computational nightmare is known as the traveling salesman problem, and in over 80 years of intense study, no one has ever found a general way to solve it that works better than trying every possibility one at a time.
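For readers who want to see the blowup firsthand, here is a bare-bones brute force in Python (with made-up random distances; any symmetric distance table would do). Fixing the starting city leaves (n-1)! orderings to try, and each route and its reversal have the same length, which is where the 12 distinct routes for five cities come from.

```python
from itertools import permutations
import random

random.seed(0)
n = 5  # raise this and watch the running time explode
dist = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        dist[i][j] = dist[j][i] = random.randint(1, 100)

def tour_length(order):
    # Length of the closed tour: city 0 -> order... -> back to city 0.
    stops = (0,) + order + (0,)
    return sum(dist[a][b] for a, b in zip(stops, stops[1:]))

best = min(permutations(range(1, n)), key=tour_length)
print(best, tour_length(best))  # 24 orderings checked, 12 distinct routes
```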
That is the perverse essence of NP-completeness—and of P versus NP: not only are all NP-complete problems equally impossible to solve except in the simplest cases—even if your computer has more memory than God and the entire lifetime of the universe to work with—they seem to pop up everywhere. In fact, these NP-complete problems don't just frustrate computer scientists. They seem to put limits on the capabilities of nature itself.
Nature's Code
The pioneering Dutch programmer Edsger Dijkstra understood that computational questions have implications beyond mathematics. He once remarked that “computer science is no more about computers than astronomy is about telescopes.” In other words, computation is a behavior exhibited by many systems besides those made by Google and Intel. Indeed, any system that transforms inputs into outputs by a set of discrete rules—including those studied by biologists and physicists—can be said to be computing.
In 1994 mathematician Peter Shor proved that cleverly arranged subatomic particles could break modern encryption schemes. In 2002 Adleman used strands of DNA to find an optimal solution to an instance of the traveling salesman problem. And in 2005 Scott Aaronson, an expert in quantum computing who is now at M.I.T.'s Computer Science and Artificial Intelligence Laboratory, used soap bubbles, of all things, to efficiently compute optimal solutions to a problem known as the Steiner tree. These are all exactly the kinds of NP problems that computers should choke their circuit boards on. Do these natural systems know something about P versus NP that computers don't?
“Of course not,” Aaronson says. His soap bubble experiment was actually a reductio ad absurdum of the claim that simple physical systems can somehow transcend the differences between P and NP problems. Although the soap bubbles did “compute” perfect solutions to the minimum Steiner tree in a few instances, they quickly failed as the size of the problem increased, just like a computer would. Adleman's DNA-strand experiment hit the same wall. Shor's quantum algorithm does work in all instances, but the factoring problem that it cracks is almost certainly not NP-complete. Therefore, the algorithm doesn't provide the key that would unlock every other NP problem. Biology, classical physics and quantum systems all seem to support the idea that NP-complete problems have no shortcuts. And that would only be true if P did not equal NP.
“Of course, we still can't prove it with airtight certainty,” Aaronson says. “But if we were physicists instead of complexity theorists, ‘P does not equal NP’ would have been declared a law of nature long ago—just like the fact that nothing can go faster than the speed of light.” Indeed, some physical theories about the fundamental nature of the universe—such as the holographic principle, suggested by Stephen Hawking's work on black holes—imply that the fabric of reality itself is not continuous but made of discrete bits, just like a computer [see “Is Space Digital?” by Michael Moyer; Scientific American, February]. Therefore, the apparent intractability of NP problems—and the limitations on knowledge that this implies—may be baked into the universe at the most fundamental level.
Brain Machine
So if the very universe itself is beholden to the computational limits imposed by P versus NP, how can it be that NP-complete problems seem to get solved all the time—even in instances where finding these solutions should take trillions of years or more?
For example, as a human fetus gestates in the womb, its brain wires itself up out of billions of individual neurons. Finding the best arrangement of these cells is an NP-complete problem—one that evolution appears to have solved. “When a neuron reaches out from one point to get to a whole bunch of other synapse points, it's basically a graph-optimization problem, which is NP-hard,” says evolutionary neurobiologist Mark Changizi. Yet the brain doesn't actually solve the problem—it makes a close approximation. (In practice, the neurons consistently get within 3 percent of the optimal arrangement.) The Caenorhabditis elegans worm, which has only 302 neurons, still doesn't have a perfectly optimal neural-wiring diagram, despite billions on billions of generations of natural selection acting on the problem. “Evolution is constrained by P versus NP,” Changizi says, “but it works anyway because life doesn't always require perfection to function well.”
And neither, it turns out, do computers. That modern computers can do anything useful at all—much less achieve the wondrous feats we all take for granted on our video-game consoles and smartphones—is proof that the problems in P encompass a great many of our computing needs. For the rest, often an imperfect approximating algorithm is good enough. In fact, these “good enough” algorithms can solve immensely complex search and pattern-matching problems, many of which are technically NP-complete. These solutions are not always mathematically optimal in every case, but that doesn't mean they aren't useful.
Take Google, for instance. Many complexity researchers consider NP problems to be, in essence, search problems. But according to Google's director of research Peter Norvig, the company takes pains to avoid dealing with NP problems altogether. “Our users care about speed more than perfection,” he says. Instead Google researchers optimize their algorithms for an even faster computational complexity category than P (referred to as linear time) so that search results appear nearly instantaneously. And if a problem comes up that cannot be solved in this way? “We either reframe it to be easier, or we don't bother,” Norvig says.
That is the legacy and the irony of P versus NP. Writing to von Neumann in 1956, Gödel thought the problem held the promise of a future filled with infallible reasoning machines capable of replacing “the mental work of a mathematician” and churning out bold new truths at the push of a button. Instead decades of studying P versus NP have helped build a world in which we extend our machines' problem-solving powers by embracing their limitations. Lifelike approximation, not mechanical perfection, is how Google's autonomous cars can drive themselves on crowded Las Vegas freeways and IBM's Watson can guess its way to victory on Jeopardy.
Gold Rush
The year 2000 came and went, and Sipser mailed Adleman his ounce of gold. “I think he wanted it to be embedded in a cube of Lucite, so he could put it on his desk or something,” Sipser says. “I didn't do that.” That same year the Clay Mathematics Institute in Cambridge, Mass., offered a new bounty for solving P versus NP: $1 million. The prize helped to raise the problem's profile, but it also attracted the attention of amateurs and cranks; nowadays, like many prominent complexity theorists, Sipser says, he regularly receives unsolicited e-mails asking him to review some new attempt to prove that P does not equal NP—or worse, the opposite.
Although P versus NP remains unsolved, many complexity researchers still think it will yield someday. “I never really gave up on it,” Sipser says. He claims to still pull out pencil and paper from time to time and work on it—almost for recreation, like a dog chewing on a favorite bone. P versus NP is, after all, an NP problem itself: the only way to find the answer is to keep searching. And while that answer may never come, if it does, we will know it when we see it.
Prestigious Gödel Prize for Ronald de Wolf
Ronald de Wolf (CWI, UvA, QuSoft) and his co-authors receive the prestigious Gödel Prize for outstanding papers in theoretical computer science. The Gödel Prize is jointly awarded by the ACM Special Interest Group on Algorithms and Computation Theory (ACM SIGACT) and the European Association for Theoretical Computer Science (EATCS). The prize will be awarded during STOC 2023, one of the most important conferences in theoretical computer science, which takes place on 20-23 June 2023 in Orlando, Florida. This year, there are two winning articles. The other paper receiving the 2023 Gödel Prize is by Thomas Rothvoss.
Ronald de Wolf says: “I am very proud and humbled to win this prize along with my co-authors, and to be listed among the amazing papers and amazing researchers that have received this prize before”. Earlier winners of the Gödel Prize include well-known researchers like Cynthia Dwork, Shafi Goldwasser, Johan Håstad, László Lovász, Peter Shor, Dan Spielman, Mario Szegedy and Avi Wigderson.
Travelling Salesman Problem
Authors Samuel Fiorini, Serge Massar, Sebastian Pokutta, Hans Raj Tiwary and Ronald de Wolf were given the award for their article ‘Exponential Lower Bounds for Polytopes in Combinatorial Optimization’. One of its main conclusions was that a particular attempt to solve the famous travelling salesman problem cannot possibly work. Ronald de Wolf explains: “This paper refutes an attempt to solve hard computational problems such as Travelling Salesman (TSP). We know how to solve so-called linear programs efficiently, so since the 1980s researchers have been trying to write down a small linear program for TSP. If successful, this approach would have momentous consequences for efficient algorithms. However, our paper - which generalizes work by Yannakakis from 1988 - definitively showed that the approach is doomed to fail, by proving that every linear program that describes TSP needs to be exponentially large. The proof combines geometry, combinatorics, and even a connection with quantum communication theory.”
At STOC 2012, Ronald de Wolf and the rest of the team already received a Best Paper Award for their work, and in 2022 they won the ACM STOC 10-year Test of Time Award. Ronald de Wolf won the ERCIM Cor Baayen Award in 2003.
What Makes Quantum Computing So Hard to Explain?

To understand what quantum computers can do — and what they can’t — avoid falling for overly simple explanations.

Quantum computers, you might have heard, are magical uber-machines that will soon cure cancer and global warming by trying all possible answers in different parallel universes. For 15 years, on my blog and elsewhere, I’ve railed against this cartoonish vision, trying to explain what I see as the subtler but ironically even more fascinating truth. I approach this as a public service and almost my moral duty as a quantum computing researcher. Alas, the work feels Sisyphean: The cringeworthy hype about quantum computers has only increased over the years, as corporations and governments have invested billions, and as the technology has progressed to programmable 50-qubit devices that (on certain contrived benchmarks) really can give the world’s biggest supercomputers a run for their money. And just as in cryptocurrency, machine learning and other trendy fields, with money have come hucksters.
In reflective moments, though, I get it. The reality is that even if you removed all the bad incentives and the greed, quantum computing would still be hard to explain briefly and honestly without math. As the quantum computing pioneer Richard Feynman once said about the quantum electrodynamics work that won him the Nobel Prize, if it were possible to describe it in a few sentences, it wouldn’t have been worth a Nobel Prize.
Not that that’s stopped people from trying. Ever since Peter Shor discovered in 1994 that a quantum computer could break most of the encryption that protects transactions on the internet, excitement about the technology has been driven by more than just intellectual curiosity. Indeed, developments in the field typically get covered as business or technology stories rather than as science ones.
That would be fine if a business or technology reporter could truthfully tell readers, “Look, there’s all this deep quantum stuff under the hood, but all you need to understand is the bottom line: Physicists are on the verge of building faster computers that will revolutionize everything.”
The trouble is that quantum computers will not revolutionize everything.
Yes, they might someday solve a few specific problems in minutes that (we think) would take longer than the age of the universe on classical computers. But there are many other important problems for which most experts think quantum computers will help only modestly, if at all. Also, while Google and others recently made credible claims that they had achieved contrived quantum speedups, this was only for specific, esoteric benchmarks (ones that I helped develop). A quantum computer that’s big and reliable enough to outperform classical computers at practical applications like breaking cryptographic codes and simulating chemistry is likely still a long way off.
But how could a programmable computer be faster for only some problems? Do we know which ones? And what does a “big and reliable” quantum computer even mean in this context? To answer these questions we have to get into the deep stuff.
Let’s start with quantum mechanics. (What could be deeper?) The concept of superposition is infamously hard to render in everyday words. So, not surprisingly, many writers opt for an easy way out: They say that superposition means “both at once,” so that a quantum bit, or qubit, is just a bit that can be “both 0 and 1 at the same time,” while a classical bit can be only one or the other. They go on to say that a quantum computer would achieve its speed by using qubits to try all possible solutions in superposition — that is, at the same time, or in parallel.
This is what I’ve come to think of as the fundamental misstep of quantum computing popularization, the one that leads to all the rest. From here it’s just a short hop to quantum computers quickly solving something like the traveling salesperson problem by trying all possible answers at once — something almost all experts believe they won’t be able to do.
The thing is, for a computer to be useful, at some point you need to look at it and read an output. But if you look at an equal superposition of all possible answers, the rules of quantum mechanics say you’ll just see and read a random answer. And if that’s all you wanted, you could’ve picked one yourself.
What superposition really means is “complex linear combination.” Here, we mean “complex” not in the sense of “complicated” but in the sense of a real plus an imaginary number, while “linear combination” means we add together different multiples of states. So a qubit is a bit that has a complex number called an amplitude attached to the possibility that it’s 0, and a different amplitude attached to the possibility that it’s 1. These amplitudes are closely related to probabilities, in that the further some outcome’s amplitude is from zero, the larger the chance of seeing that outcome; more precisely, the probability equals the distance squared.
But amplitudes are not probabilities. They follow different rules. For example, if some contributions to an amplitude are positive and others are negative, then the contributions can interfere destructively and cancel each other out, so that the amplitude is zero and the corresponding outcome is never observed; likewise, they can interfere constructively and increase the likelihood of a given outcome. The goal in devising an algorithm for a quantum computer is to choreograph a pattern of constructive and destructive interference so that for each wrong answer the contributions to its amplitude cancel each other out, whereas for the right answer the contributions reinforce each other. If, and only if, you can arrange that, you’ll see the right answer with a large probability when you look. The tricky part is to do this without knowing the answer in advance, and faster than you could do it with a classical computer.
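For readers who want to see the arithmetic, here is a small numpy sketch of this idea (not any particular hardware API; states are just complex vectors). Applying a Hadamard gate twice to |0⟩ shows the contributions to the |1⟩ amplitude canceling:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)                        # the state |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

plus = H @ ket0                      # equal superposition: amplitudes (1/sqrt2, 1/sqrt2)
print(np.abs(plus) ** 2)             # -> [0.5 0.5]: a fair coin when measured

back = H @ plus                      # apply H a second time
print(np.round(np.abs(back) ** 2, 12))   # -> [1. 0.]
# The two contributions to the |1> amplitude are +1/2 and -1/2, so they
# cancel (destructive interference); the contributions to |0> reinforce.
```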
Twenty-seven years ago, Shor showed how to do all this for the problem of factoring integers, which breaks the widely used cryptographic codes underlying much of online commerce. We now know how to do it for some other problems, too, but only by exploiting the special mathematical structures in those problems. It’s not just a matter of trying all possible answers at once.
Compounding the difficulty is that, if you want to talk honestly about quantum computing, then you also need the conceptual vocabulary of theoretical computer science. I’m often asked how many times faster a quantum computer will be than today’s computers. A million times? A billion?
This question misses the point of quantum computers, which is to achieve better “scaling behavior,” or running time as a function of n, the number of bits of input data. This could mean taking a problem where the best classical algorithm needs a number of steps that grows exponentially with n, and solving it using a number of steps that grows only as n2. In such cases, for small n, solving the problem with a quantum computer will actually be slower and more expensive than solving it classically. It’s only as n grows that the quantum speedup first appears and then eventually comes to dominate.
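A toy illustration of that crossover, with a deliberately made-up cost model in which each quantum step carries a million-fold overhead:

```python
# Made-up cost model: each "quantum step" is a million times more expensive,
# yet the quadratic scaling eventually beats the exponential one.
for n in (10, 20, 30, 40, 60, 80):
    classical = 2 ** n
    quantum = 10 ** 6 * n ** 2
    winner = "quantum" if quantum < classical else "classical"
    print(f"n={n:2d}  classical={classical:.2e}  quantum={quantum:.2e}  -> {winner}")
```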
But how can we know that there’s no classical shortcut — a conventional algorithm that would have similar scaling behavior to the quantum algorithm’s? Though typically ignored in popular accounts, this question is central to quantum algorithms research, where often the difficulty is not so much proving that a quantum computer can do something quickly, but convincingly arguing that a classical computer can’t. Alas, it turns out to be staggeringly hard to prove that problems are hard, as illustrated by the famous P versus NP problem (which asks, roughly, whether every problem with quickly checkable solutions can also be quickly solved). This is not just an academic issue, a matter of dotting i’s: Over the past few decades, conjectured quantum speedups have repeatedly gone away when classical algorithms were found with similar performance.
Note that, after explaining all this, I still haven’t said a word about the practical difficulty of building quantum computers. The problem, in a word, is decoherence, which means unwanted interaction between a quantum computer and its environment — nearby electric fields, warm objects, and other things that can record information about the qubits. This can result in premature “measurement” of the qubits, which collapses them down to classical bits that are either definitely 0 or definitely 1. The only known solution to this problem is quantum error correction: a scheme, proposed in the mid-1990s, that cleverly encodes each qubit of the quantum computation into the collective state of dozens or even thousands of physical qubits. But researchers are only now starting to make such error correction work in the real world, and actually putting it to use will take much longer. When you read about the latest experiment with 50 or 60 physical qubits, it’s important to understand that the qubits aren’t error-corrected. Until they are, we don’t expect to be able to scale beyond a few hundred qubits.
Once someone understands these concepts, I’d say they’re ready to start reading — or possibly even writing — an article on the latest claimed advance in quantum computing. They’ll know which questions to ask in the constant struggle to distinguish reality from hype. Understanding this stuff really is possible — after all, it isn’t rocket science; it’s just quantum computing! | ||||
Evaluation and Comparison of Lattice-Based Cryptosystems for a Secure Quantum Computing Era
by Maria E. Sabani 1,*,†, Ilias K. Savvas 1,†, Dimitrios Poulakis 2, Georgia Garani 1 and Georgios C. Makris 1

1 Department of Digital Systems, University of Thessaly, Geopolis Campus, Larissa-Trikala Ring-Road, 415 00 Larissa, Greece
2 Department of Mathematics, Aristotle University of Thessaloniki, 541 24 Thessaloniki, Greece
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Electronics 2023, 12(12), 2643; https://doi.org/10.3390/electronics12122643
Submission received: 5 May 2023 / Revised: 6 June 2023 / Accepted: 7 June 2023 / Published: 12 June 2023
(This article belongs to the Special Issue Quantum Computation and Its Applications)
Abstract:
The rapid development of quantum computing devices promises powerful machines with the potential to confront a variety of problems that conventional computers cannot. Therefore, quantum computers generate new threats at unprecedented speed and scale and specifically pose an enormous threat to encryption. Lattice-based cryptography is regarded as the rival to a quantum computer attack and the future of post-quantum cryptography. So, cryptographic protocols based on lattices have a variety of benefits, such as security, efficiency, lower energy consumption, and speed. In this work, we study the most well-known lattice-based cryptosystems while a systematic evaluation and comparison is also presented.
1. Introduction
Quantum computing constitutes a critical issue, as the impact of its advent and development will be present in every cell of our technology and, therefore, our life. Quantum computational systems use the qubit (QUantum BIT) instead of the typical bit, which has a unique property; it can be in the basis states ∣0〉 and ∣1〉, but it can also be in a state that is a linear combination of these two states, a∣0〉 + b∣1〉 with a, b ∈ C and |a|^2 + |b|^2 = 1 [1]. This is an algebraic-mathematical expression of quantum superposition, which claims that two quantum states can be added and their sum can also be a valid quantum state [2]. Beyond superposition, quantum computers' power and capability are based on quantum physics, and specifically on the phenomenon of quantum entanglement and the no-cloning theorem. The odd phenomenon of quantum entanglement states that there are particles that are generated, interact, and remain connected, regardless of the distance or the obstacles that separate them [3]. This fundamental law of quantum physics allows us to know or to measure the state of one particle if we know or measure the other particles.
Programmable quantum devices are capable of solving and overcoming problems that typical computers cannot solve in reasonable time. A quantum computer can perform operations with enormous speed and, in the flash of an eye, can process and store an extensive amount of information. This huge computational power, which makes quantum computers superior to classical computers, was described in 2012 by John Preskill with the term quantum supremacy [4]. Quantum mechanics provides us with a fascinating theorem, the no-cloning theorem. Proposed as an evolution of the no-go theorem by James Park, the no-cloning theorem is a fundamental theorem of quantum physics and quantum cryptography. According to this theorem, the independent and identical replication of any unknown quantum state is impossible [2].
Cryptography is one of the oldest sciences and was developed out of the human necessity for secure communication [5]. Cryptographic protocols and algorithms are based on complex mathematics, and cryptosystems appear in every electronic transaction and communication in our everyday life. The security, efficiency, and speed of these cryptographic methods and schemes are a main issue of interest and study. Contemporary cryptosystems are considered to be vulnerable to a quantum computer attack. In 1994, the American mathematician Peter Shor presented an algorithm [6] which dumbfounded scientists. Shor argued in his work that, with the implementation of the proposed algorithm on a quantum device, there would be no more security in current computational systems. This was a real revolution for the science of computing and a great motivator for the design and construction of quantum computational devices. The science that studies and develops cryptographic algorithms resistant to attacks by quantum computers is well known as post-quantum cryptography [7]. By bringing mathematically based algorithms and standards up to date, post-quantum cryptography examines and studies how to prepare the world for the era of quantum computing [8,9].
Lattice-based cryptographic protocols attract the interest of researchers for a number of reasons. Firstly, the algorithms that are applied to lattice-based protocols are simple and efficient. Additionally, they have proven to be secure protocols and create a multitude of applications.
In this review, we examine the cryptographic schemes that are developed for a quantum computer. The following research questions are answered:
How much is the science of cryptography affected by quantum computers?
Which cryptosystems are efficient and secure for the quantum era?
Which are the most known lattice-based cryptographic schemes and how do they function?
How can we evaluate the NTRU, LWE, and GGH cryptosystems?
What are their strengths and weaknesses?
The rest of the paper is organized as follows. In Section 2, we present the changes and challenges that quantum devices bring to cryptography, and in Section 3, cryptographic schemes for the quantum era are described. In Section 4, we present some basic issues of lattice theory. In Section 5 and Section 6, we present the lattice-based cryptographic schemes NTRU and LWE, respectively, together with a discrete implementation of each, and the GGH cryptosystem is described in Section 7. Results and comparisons are given in Section 8, while some future work directions are presented in Section 9. Finally, Section 10 concludes this work.
2. The Evolution of Quantum Computing in Cryptography
Cryptography is an indispensable tool for protecting information in computer systems, and difficult mathematical problems such as the discrete logarithm problem and the factorization of large prime numbers are the basis of current cryptographic protocols. We can divide the cryptographic protocols into two broad categories: symmetric cryptosystems and asymmetric (public key cryptosystems) cryptosystems [5].
The same key for both encryption and decryption is being used in symmetric cryptosystems, and despite their speed and their easy implementation, they have certain disadvantages. One main issue of this type of cryptosystem is the secret key distribution between two parties that want to communicate safely. Another drawback of symmetric cryptographic schemes is that the private keys which are being used must be changed frequently in order not to be known by a fraudulent user. If we can ensure the existence of an efficient method to generate and exchange keys, symmetric encryption and decryption methods are considered to be secure [10,11].
Asymmetric cryptographic schemes use a pair of keys, a private and a public key, for encryption and decryption. This type of cryptosystem relies on mathematical problems that are characterized as hard to solve [12]. Some of the most widely known and implemented public key cryptosystems are RSA [13], the Diffie–Hellman protocol, ECDSA, and others. Since the early 1990s, all these cryptographic schemes were believed to be effective and secure, but Shor's algorithm changed things.
Peter Shor proved with his algorithm that a quantum computer could quickly and easily compute the period of a periodic function in polynomial time [14]. Since 1994, when Shor's protocol was presented, there has been a great amount of study, analysis, and implementation of the algorithm on both classical and quantum computing devices. Shor's method solves both the discrete logarithm problem and the factorization problem that are the basis of current cryptographic schemes; therefore, the public key cryptosystems are insecure and vulnerable to a quantum attack [6].
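To see why period finding is the whole battle, here is a hedged classical sketch of the number-theoretic reduction at the core of Shor's method. The order r is found by brute force below, and that is exactly the step a quantum computer performs exponentially faster:

```python
from math import gcd

def order(a, N):
    """Smallest r >= 1 with a**r = 1 (mod N), found by brute force here."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N, a):
    assert gcd(a, N) == 1
    r = order(a, N)            # the quantum subroutine replaces only this line
    if r % 2 == 1:
        return None            # unlucky choice of a: retry with another one
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None            # another unlucky case
    return gcd(y - 1, N), gcd(y + 1, N)

print(shor_classical(15, 7))   # -> (3, 5): the order of 7 mod 15 is 4
```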
2.1. Quantum Cryptography
In 1982, for the first time the term “Quantum Cryptography” was recommended but the idea of quantum information appeared for the first time in the decade of the 1970s, from Stephen Wiesner and his work about quantum money [15]. The science of quantum cryptography uses the fundamental laws of quantum physics to securely transfer or store data. In general, in quantum cryptography, the transmission and the encryption procedure is performed with the aid of quantum mechanics [16]. Quantum cryptography exploits the fundamental laws of quantum mechanics such as superposition and quantum entanglement, and constructs cryptographic protocols in a more advanced and efficient way.
A basic problem in classical cryptographic schemes is the key generation and exchange, as this process is endangered and unsafe when it takes place in an insecure environment [17]. When two different parties want to communicate and transfer data, they exchange information (i.e., key, message) and this procedure occurs in a public channel, so their communication could be vulnerable to an attack by a third party [18]. The most fascinating and also the most useful discovery and a widely used method of quantum cryptography is quantum key distribution.
2.2. Quantum Key Distribution
Quantum key distribution (QKD) utilizes the laws of quantum physics in the creation of a secret key through a quantum channel. With the principles of quantum physics, in QKD a secret key is generated and a secure communication between two (or more) parties is established. The inherent randomness of quantum states, and of the results that accrue from their measurements, yields total randomness in the generation of the key. Quantum mechanics solves the problem of key distribution, the main challenge in cryptographic schemes, with the aid of quantum superposition, quantum entanglement, and the Uncertainty Principle of Heisenberg. Heisenberg's Principle argues that two conjugate quantum observables cannot be measured simultaneously [3]. This principle has as a consequence the detection of someone who tries to eavesdrop on the communication between two parties. If a fraudulent user tries to change the quantum system, he will be detected and the users abort the protocol.
Let us suppose that we have two parties who want to communicate and use a quantum key distribution protocol to generate a secret key. A quantum key distribution scheme has two phases, and for its implementation the existence of both a classical and a quantum channel is necessary. In the quantum channel, the private key is generated and reproduced, and in the classical channel the communication of the two parties takes place. Polarized photons are sent into the quantum channel, and each one of the photons has a random quantum state [17]. Both parties have in their possession a device that collects and measures the polarization of these photons. Due to Heisenberg's principle, the measurement of the polarized photons can reveal a possible eavesdropper: in his effort to elicit information, the state of the quantum system changes and the fraudulent user is detected [19].
The BB84 protocol, named after its creators and the year it was published, was the first quantum key distribution protocol; it was proposed in 1984 by Charles Bennett and Gilles Brassard [20]. BB84 is the most studied, analyzed, and implemented QKD protocol, and since then various QKD protocols have been proposed. B92 and SARG04, which are known as variants of BB84, and E91, which exploits the phenomenon of quantum entanglement, are a few of the widely known quantum key distribution protocols [1]. All these QKD protocols are in theory well designed and structured and are proven to be secure, but in practice there are imperfections in their implementation. Loopholes, such as poorly constructed detectors or defective optical fibers, and general imperfections in the devices and the practical QKD system make QKD protocols vulnerable to attacks. By exploiting these weaknesses of the system, one can perform certain types of attacks, and this is the basic issue of research and study: QKD security.
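The sifting step of BB84 can be simulated in a few lines. This is a minimal, noiseless sketch with no eavesdropper, where a qubit is represented only by its preparation basis and bit, and measuring in the wrong basis yields a random bit:

```python
import random
random.seed(42)

n = 32
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("+x") for _ in range(n)]   # + rectilinear, x diagonal
bob_bases   = [random.choice("+x") for _ in range(n)]

bob_bits = [a_bit if a_basis == b_basis else random.randint(0, 1)
            for a_bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases)]

# Public discussion: keep only positions where the bases matched (~half of them).
key_alice = [b for b, a, bb in zip(alice_bits, alice_bases, bob_bases) if a == bb]
key_bob   = [b for b, a, bb in zip(bob_bits,  alice_bases, bob_bases) if a == bb]
assert key_alice == key_bob   # holds here only because the channel is noiseless
print("shared key:", "".join(map(str, key_alice)))
```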
Significant progress has been made in the implementation of the quantum phase of communication and the development of quantum systems. Entanglement dynamics in CV quantum channels, for both common and independent reservoirs, have received a lot of attention recently [21]. As the security of QKD is the main goal, interesting experiments have shown that non-Markovian features can be used to improve security and/or locate an eavesdropper along the transmission line [22]. Additionally, the entanglement dynamics have been studied, and recent experiments have shown that photonic band gap media are promising for acquiring non-Markovian behaviour and that materials with a photonic bandgap may be able to transmit entanglement reliably over long distances [23]. Moreover, the study of the phase modulation of coherent states in channels where the quantum communication phase takes place has turned into a subject of interest. Very interesting and useful studies and experiments have shown that phase diffusion is the most damaging kind of noise in a phase modulation scheme, where the information is encoded in the phase of a quantum seed signal [24]. Additionally, time-independent Markovian noise, specifically when the seed state is coherent, has been shown to be detrimental to information transfer and may compromise the channel's overall performance [25,26]. The environment's spectral structure, on the other hand, may lead to non-Markovian damping or diffusion channels in quantum optical communications [22,26]. It has also been demonstrated that phase channels preserve the transfer of information better, above a threshold on the loss and phase-noise parameters, compared with the lossy amplitude-based scheme using coherent states. So, in the presence of time-correlated noise, which results in dynamical non-Markovian phase diffusion, the interaction between the use of NLA and the memory effects results in a pronounced rise in performance [24].
3. Cryptographic Schemes in Quantum Era
The advances in computer processing power and the evolution of quantum computers seem for many people to be a threat in the distant future. On the other hand, researchers and security technologists are anxious about the capabilities of a quantum computational device to threaten the security of contemporary cryptographic algorithms. Shor’s algorithm consists of two parts, a classical part and a quantum part, and with the aid of a quantum routine could break modern cryptographic schemes, such as RSA and the Diffie–Hellman cryptosystem [27]. The factorization problem and the discrete logarithm problem are the fundamental basis for modern cryptographic schemes and serve as the foundation for these kinds of cryptosystems.
From that moment and after, it has been widely known in the scientific and technological community that with the arrival of a sufficiently large quantum computer, there is no more security in our encryption schemes. Therefore, post-quantum data encryption protocols are a basic topic of research and work, with the main goal being to construct cryptosystems resistant to quantum computers’ attacks [7,8]. Subsequently, we present certain cryptographic schemes that have been developed and that are secure under an attack of a quantum computer.
3.1. Code-Based Cryptosystems
Coding Theory is an important scientific field which studies and analyzes linear codes that are being used for digital communication. The main subject of research in coding theory is finding a secure and efficient data transmission method. In the process of data transmission, data are often lost due to errors owing to noise, interference, or other reasons, and the main subject of study of coding theory is to minimize this data loss [28]. When two discrete parties want to communicate and transfer data, they add extra information to each message which is transferred to enable the message to be decoded despite the existing errors.
Code-based cryptographic schemes are based on the theory of error-correcting codes and are considered to be prominent for the quantum computing era. These cryptosystems are considered to be reliable, and their hardness relies on hard problems of coding theory, such as syndrome decoding (SD) and learning parity with noise (LPN).
The first code-based cryptosystem was proposed by Robert McEliece in 1978. It was based on the difficulty of decoding random linear codes, a problem which is considered to be NP-hard [29]. The main idea of McEliece is to generate the secret key from an error-correcting code for which a decoding algorithm is known and which is capable of correcting up to t errors. The public key is constructed from the private key, disguising the selected code as a general linear code. The sender uses the public key to create a codeword that is disturbed by up to t errors. The receiver performs error correction and efficient decoding of the codeword and decrypts the message.
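As a rough illustration of this structure, and not the real scheme, the following toy sketch replaces the large binary Goppa codes of McEliece with the tiny [7,4] Hamming code, which corrects a single error; all numbers are made up:

```python
import numpy as np

G = np.array([[1,0,0,0,1,1,0],          # systematic generator of Hamming(7,4)
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],          # parity checks: H @ c = 0 for codewords
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])
rng = np.random.default_rng(7)

def inv_gf2(M):
    """Gauss-Jordan inversion over GF(2)."""
    n = len(M)
    A = np.hstack([M % 2, np.eye(n, dtype=int)])
    for c in range(n):
        p = next(r for r in range(c, n) if A[r, c])
        A[[c, p]] = A[[p, c]]
        for r in range(n):
            if r != c and A[r, c]:
                A[r] = (A[r] + A[c]) % 2
    return A[:, n:]

while True:                              # random invertible scrambler S
    S = rng.integers(0, 2, (4, 4))
    if round(abs(np.linalg.det(S))) % 2 == 1:
        break
P = np.eye(7, dtype=int)[rng.permutation(7)]   # random permutation matrix
G_pub = S @ G @ P % 2                    # public key hides the code structure

m = np.array([1, 0, 1, 1])               # encrypt: codeword + one random error
e = np.zeros(7, dtype=int); e[rng.integers(7)] = 1
c = (m @ G_pub + e) % 2

c1 = c @ P.T % 2                         # decrypt: undo P (P^-1 = P^T)
s = H @ c1 % 2                           # Hamming syndrome locates the error
if s.any():
    c1[next(j for j in range(7) if (H[:, j] == s).all())] ^= 1
m_rec = c1[:4] @ inv_gf2(S) % 2          # data bits are systematic; undo S
assert (m_rec == m).all()
print("recovered:", m_rec)
```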
McEliece’s cryptosystem and the Niederreiter cryptosystem that was proposed by Harald Niederreiter in 1986 [30] can be suitable and efficient for encryption, hashing, and signature generation. The McEliece cryptosystem has a basic disadvantage, which is the large size of the keys and ciphertexts. In modern variants of the McEliece cryptosystem, there has been an effort to reduce the size of the keys. However, these types of cryptographic schemes are considered to withstand attacks by quantum computers and this makes them prominent for post-quantum cryptography.
3.2. Hash-Based Cryptosystems
Hash-based cryptographic schemes in general generate digital signatures and rely on cryptographic hash functions’ security, such as SHA-3. In 1979, Ralph Merkle proposed an asymmetric signature scheme based on one-time signature (OTS) and the Merkle signature scheme is considered to be the simplest and the most widely known hash-based cryptosystem [31]. This digital signature cryptographic scheme converts a weak signature with the aid of a hash function to a strong one.
The Merkle signature scheme is a practical development of Leslie Lamport's idea of OTS, turning it into a many-time signature scheme, i.e., a signature process that can be used multiple times. The generated signatures are based on hash functions, and their security is guaranteed even against quantum attacks.
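A minimal sketch of Lamport's one-time signature, the primitive on which Merkle's construction builds (SHA-256 via Python's hashlib; the parameters are illustrative):

```python
import hashlib, secrets

def H(b): return hashlib.sha256(b).digest()

def keygen(bits=256):
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(bits)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(msg, sk):
    d = H(msg)
    msg_bits = [(d[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return [sk[i][bit] for i, bit in enumerate(msg_bits)]   # one preimage per bit

def verify(msg, sig, pk):
    d = H(msg)
    msg_bits = [(d[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return all(H(s) == pk[i][bit] for i, (s, bit) in enumerate(zip(sig, msg_bits)))

sk, pk = keygen()
sig = sign(b"post-quantum", sk)
print(verify(b"post-quantum", sig, pk))   # True
print(verify(b"tampered", sig, pk))       # False
# One-time only: signing a second message would reveal further preimages.
```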
Many of the reliable signature schemes based on hash functions have the drawback that the signer must keep a record of the precise number of messages signed so far, and any error in this record will create a gap in their security [32]. Another disadvantage of these schemes is that only a certain number of digital signatures can be generated, and if this number is pushed very high, the size of the digital signatures becomes exceptionally big. However, hash-based algorithms for digital signatures are regarded as safe and strong against a quantum attack and can be used for post-quantum cryptography.
3.3. Multivariate Cryptosystems
In 1988, T. Matsumoto and H. Imai [33] presented a cryptographic scheme that relied on degree-two multivariate polynomials over a finite field for encryption and for signature verification. In 1996, J. Patarin [34] implemented a cryptosystem whose security relied on the fact that multivariate polynomial systems over finite fields are difficult to solve.
The multivariate quadratic polynomial problem states that, given m quadratic polynomials f_1, …, f_m in n variables x_1, …, x_n with coefficients chosen from a field F, one must find a solution z ∈ F^n such that f_i(z) = 0 for i ∈ [m]. This problem is considered to be NP-hard, and a careful choice of the parameters makes the cryptosystem reliable and safe against attacks.
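A brute-force sketch over GF(2) makes the problem statement concrete; the system below is made up, and exhaustive search already costs 2^n evaluations:

```python
from itertools import product

n = 4
# each polynomial: (quadratic terms (i, j), linear terms, constant)
system = [
    ([(0, 1), (2, 3)], [0, 2], 1),   # x0*x1 + x2*x3 + x0 + x2 + 1 = 0
    ([(0, 2)],         [1, 3], 0),   # x0*x2 + x1 + x3 = 0
    ([(1, 3), (0, 3)], [2],    1),   # x1*x3 + x0*x3 + x2 + 1 = 0
]

def evaluate(poly, x):
    quad, lin, c = poly
    return (sum(x[i] * x[j] for i, j in quad) + sum(x[i] for i in lin) + c) % 2

solutions = [x for x in product((0, 1), repeat=n)
             if all(evaluate(p, x) == 0 for p in system)]
print(solutions)   # -> [(0, 0, 1, 0), (1, 1, 1, 0)]
```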
These types of cryptographic schemes are believed to be efficient and fast, with high-speed computation processes, and suitable for implementation on smaller devices. The need for new, stronger cryptosystems with the evolution of quantum computers has created various candidates for secure cryptographic schemes based on the multivariate quadratic polynomial problem [8]. These types of cryptosystems are considered to be an active issue of research due to their quantum resilience.
3.4. Lattice-Based Cryptosystems
Cryptographic algorithms that are based on lattice theory have gained the interest of researchers and are perhaps the most famous of all candidates for post-quantum cryptography. A lattice can be imagined as a set of points in an n-dimensional space with periodic structure. The algorithms which are implemented in lattice-based cryptosystems are characterized by simplicity and efficiency and are highly parallelizable [35].
Lattice-based cryptographic protocols are proven to be secure, as their strong security relies on well-known lattice problems such as the Shortest Vector Problem (SVP) and the Learning with Errors problem (LWE) [36]. Additionally, they create powerful and efficient cryptographic primitives, such as functional encryption and fully homomorphic encryption [37]. Moreover, lattice-based cryptosystems create several applications, such as key exchange protocols and digital signature schemes. For all these reasons, lattice-based cryptographic schemes are believed to be the most dynamic field of exploration in post-quantum cryptography and the most prominent and promising one.
4. Lattices
Lattices are considered to be a typical subject in both cryptography and cryptanalysis and an essential tool for future cryptography, especially with the transition to the quantum computing era. The study and analysis of lattices go back to the 18th century, when C.F. Gauss and J.L. Lagrange used lattices in number theory, and H. Minkowski, with his great work on the "geometry of numbers", sparked the study of lattice theory [38]. In the late 1990s, a lattice was used for the first time in a cryptographic scheme, and in recent years the evolution in this scientific field has been enormous, as there are lattice-based cryptographic schemes for encryption, digital signatures, trapdoor functions, and much more.
A lattice is a discrete subgroup of points in n-dimensional space with periodic structure. Any subgroup of Z^n is a lattice, called an integer lattice. It is appropriate to describe a lattice using its basis [35]. The basis of a lattice is a set of independent vectors in R^n, and by combining them, the lattice can be generated.
Definition 1.
A set of vectors { b 1 , b 2 , … , b n } ⊂ R m is linearly independent if the equation
c 1 b 1 + c 2 b 2 + ⋯ + c n b n = 0 , where c i ∈ R ( i = 1 , … , n )
admits only the trivial solution c_1 = c_2 = ⋯ = c_n = 0.
Definition 2.
Given n linearly independent vectors b 1 , b 2 , … , b n ∈ R m , the lattice generated by them is defined as
L(b_1, b_2, …, b_n) = { ∑ x_i b_i : x_i ∈ Z }.
Therefore, a lattice consists of all integral linear combinations of a set of linearly independent vectors and this set of vectors { b 1 , b 2 , … , b n } is called a lattice basis. So, a lattice can be generated by different bases as can be seen in Figure 1.
Definition 3.
All bases of a lattice L have the same number of elements, dim(L), which is called the dimension (or rank) of the lattice, since it matches the dimension of the vector subspace span(L) spanned by L.
Definition 4.
Let L be a lattice with dimension n and B = { b_1, b_2, …, b_n } a basis of the lattice. We define the fundamental parallelepiped as the set:
P(b_1, b_2, …, b_n) = { t_1 b_1 + t_2 b_2 + ⋯ + t_n b_n : 0 ≤ t_i < 1 } = ∑_{j=1}^{n} [0, 1) b_j
Not every given set of vectors forms a basis of a lattice and the following theorem gives us a criterion.
Theorem 1.
Let L be a lattice with rank n and { b 1 , b 2 , … , b n } ∈ L , n linearly independent lattice vectors. The vectors { b 1 , b 2 , … , b n } form a basis of L if and only if P ( b 1 , b 2 , … , b n ) ∩ L = { 0 } .
Definition 5.
A matrix U ∈ Z n × n is called unimodular if d e t U = ± 1 .
For example, the matrix
U = ( 4  5
     13 16 )
is unimodular, since det(U) = 4·16 − 5·13 = −1.
Theorem 2.
Two bases B_1, B_2 ∈ R^{m×n} generate the same lattice if and only if there is a unimodular matrix U ∈ Z^{n×n} such that B_2 = B_1 U.
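A quick numerical check of Theorem 2 with the unimodular matrix from the example above (the basis B1 itself is made up):

```python
import numpy as np

U = np.array([[4, 5], [13, 16]])
print(round(np.linalg.det(U)))          # -1, so U is unimodular
U_inv = np.linalg.inv(U)
print(U_inv)                            # integer entries: [[-16, 5], [13, -4]]

B1 = np.array([[2, 0], [1, 3]])         # an arbitrary basis (invented numbers)
B2 = B1 @ U
# B2's columns are integer combinations of B1's, and vice versa via U_inv,
# which is exactly why L(B1) = L(B2).
print(np.allclose(B2 @ U_inv, B1))      # True
```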
Definition 6.
Let L = L(B) be a lattice of rank n and let B be a basis of L. We define the determinant of L, denoted det(L), as the n-dimensional volume of P(B).
We can write
det(L(B)) = vol(P(B)), and in general
det(L) = √(det(BᵀB)).
An interesting property of the lattices is that the smaller the determinant of the lattice is, the denser the lattice is.
Definition 7.
For any lattice L = L ( B ) , the minimum distance of L is the smallest distance between any two lattice points:
λ(L) = inf { ∥x − y∥ : x, y ∈ L, x ≠ y }
It is obvious that the minimum distance can be equivalently defined as the length of the shortest nonzero lattice vector:
λ(L) = inf { ∥v∥ : v ∈ L ∖ {0} }
4.1. Shortest Vector Problem (SVP)
The Shortest Vector Problem (SVP) is a very interesting and extensively studied computational problem on lattices. The Shortest Vector Problem states that given a lattice L , the shortest nonzero vector in L should be found.
That is to say, given a basis B = { b_1, b_2, …, b_n } ∈ R^{m×n}, the Shortest Vector Problem is to find a vector v satisfying
∥v∥ = min_{u ∈ L(B)∖{0}} ∥u∥ = λ(L(B)).
A variant of the Shortest Vector Problem is computing the length of the shortest nonzero vector in L (i.e., λ(L)) without necessarily finding the vector.
Theorem 3.
Minkowski's first theorem. The shortest nonzero vector in any n-dimensional lattice L has length at most √(γ_n) · det(L)^{1/n}, where γ_n is a constant (approximately equal to n) that depends only on the dimension n and det(L) is the determinant of the lattice.
Two great mathematicians, J. Lagrange and C.F. Gauss, were the first ones to study the lattices and they knew an algorithm to find the shortest nonzero vector in two dimensional lattices. In 1773, Lagrange proposed an efficient algorithm to find a shortest vector of a lattice and Gauss, working independently, made a publication with his proposal for this algorithm in 1801 [38].
A g-approximation algorithm for SVP is an algorithm that, on input a lattice L, outputs a nonzero lattice vector of length at most g times the length of the shortest vector in the lattice. The LLL lattice reduction algorithm is capable of approximating SVP within a factor g = O((2/√3)^n), where n is the dimension of the lattice. Micciancio proved that the Shortest Vector Problem is NP-hard even to approximate within any factor less than √2 [39]. SVP is considered to be a hard mathematical problem and can be used as a cornerstone for the construction of provably secure cryptographic schemes, such as lattice-based cryptography.
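In dimension two, however, SVP is easy; the following sketch implements the Lagrange-Gauss reduction mentioned above, a vector analogue of Euclid's gcd algorithm (the input basis is invented):

```python
import numpy as np

def lagrange_gauss(b1, b2):
    b1, b2 = np.array(b1, dtype=np.int64), np.array(b2, dtype=np.int64)
    if b1 @ b1 > b2 @ b2:
        b1, b2 = b2, b1
    while True:
        m = round((b1 @ b2) / (b1 @ b1))   # nearest-integer projection coefficient
        b2 = b2 - m * b1
        if b2 @ b2 >= b1 @ b1:
            return b1, b2                  # b1 is now a shortest nonzero vector
        b1, b2 = b2, b1

# (18, 8) and (13, 6) generate the same lattice as (2, 0) and (1, 2):
print(lagrange_gauss([18, 8], [13, 6]))    # a shortest vector, here (-2, 0)
```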
4.2. Closest Vector Problem (CVP)
The Closest Vector Problem (CVP) is a computational problem on lattices that relates closely to the Shortest Vector Problem. CVP states that, given a lattice L and a target point t, the lattice point closest to the target should be found.
Let L be a lattice and t ∈ R^n a fixed point; we define the distance:
d(t, L) = min_{x ∈ L} ∥x − t∥.
CVP can be formulated as follows: given a basis matrix B for the lattice L and a target t ∈ R^n, compute a vector v ∈ L such that ∥t − v∥ is minimal, i.e., a vector v ∈ L with ∥t − v∥ = d(t, L).
Another version of the CVP is computing the distance of the target from the lattice without finding the closest vector of the lattice, and many applications only demand to find a lattice vector that is not too far from the target, not necessarily the closest one [40].
The most famous polynomial-time algorithms to approximate the Closest Vector Problem are Babai's algorithm and Kannan's algorithm, which are based on lattice reduction. Below, in Algorithm 1, we present the first algorithm, which was proposed by Laszlo Babai in 1986 [41].
Algorithm 1 Babai's Round-off Algorithm.
Input: basis B = { b_1, b_2, …, b_n } ∈ Z^{n×n}, target vector c ∈ R^n
Output: approximate closest lattice point of c in L(B)
1: procedure RoundOff
2: Compute the inverse of B: B^{-1} ∈ Q^{n×n}
3: v := B ⌊B^{-1} c⌉ (round each coordinate of B^{-1} c to the nearest integer)
4: return v
5: end procedure
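A runnable sketch of Algorithm 1, assuming the basis vectors are the columns of B (all numbers are made up):

```python
import numpy as np

def babai_round_off(B, c):
    coords = np.linalg.solve(B, c)       # B^{-1} c in basis coordinates
    return B @ np.round(coords)          # B [B^{-1} c]

B = np.array([[2.0, 1.0],                # basis vectors as columns
              [0.0, 3.0]])
c = np.array([4.3, 5.1])
v = babai_round_off(B, c)
print(v, np.linalg.norm(c - v))          # a nearby lattice point, not always the closest
```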
CVP is the foundation of many cryptographic schemes of lattice cryptography, where the decryption procedure corresponds to a CVP computation. It is regarded as NP-hard to solve approximately within any constant factor [42]. Besides cryptography, the problem of finding a good CVP approximation algorithm with approximation factors that grow as a polynomial in the dimension of a lattice has numerous applications in computer science and is an active open problem in lattice theory.
4.3. Lattice Reduction
Lattice reduction, or lattice basis reduction, is about finding an interesting, useful basis of a lattice. Such a requested useful basis, from a mathematical point of view, satisfies a few strong properties. A lattice reduction algorithm is an algorithm that takes as input a basis of the lattice and returns a simpler basis which generates the same lattice. For computing science, we are interested in computing such bases in a reasonable time, given an arbitrary basis. In general, a reduced basis is composed from vectors with good properties, such as being short or being orthogonal.
A polynomial-time basis reduction algorithm developed by Arjen Lenstra, Hendrik Lenstra, and Laszlo Lovasz was published in 1982: the LLL algorithm, which took its name from the initials of their surnames [43]. Lattice reduction approaches the solution of the Shortest Vector Problem in small dimensions; in two dimensions, in particular, the shortest vector can be computed in polynomial time. On the contrary, in large dimensions no algorithm is known which solves the SVP in polynomial time. With the aid of the Gram–Schmidt orthogonalization method, the LLL basis reduction method is defined.
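The Gram–Schmidt subroutine just mentioned can be sketched in a few lines (no normalization, as used inside LLL; basis vectors as columns, with invented numbers):

```python
import numpy as np

def gram_schmidt(B):
    B = B.astype(float)
    Bs = np.zeros_like(B)
    mu = np.zeros((B.shape[1], B.shape[1]))
    for i in range(B.shape[1]):            # basis vectors are the columns of B
        Bs[:, i] = B[:, i]
        for j in range(i):
            mu[i, j] = (B[:, i] @ Bs[:, j]) / (Bs[:, j] @ Bs[:, j])
            Bs[:, i] -= mu[i, j] * Bs[:, j]   # remove projection on earlier b*_j
    return Bs, mu

B = np.array([[1, 3], [2, 1]])
Bs, mu = gram_schmidt(B)
print(np.round(Bs[:, 0] @ Bs[:, 1], 12))   # 0: the b*_i are pairwise orthogonal
```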
5. The NTRU Cryptosystem
A public key cryptosystem known as NTRU was presented in 1996 by Jeffrey Hoffstein, Jill Pipher, and Joseph H. Silverman [44]. Until 2013, the NTRU cryptosystem was only commercially available, but after that it was released into the public domain for public use. NTRU is based on the shortest vector problem in a lattice and is one of the fastest public key cryptographic schemes. It encrypts and decrypts data using polynomial rings. NTRU is more efficient than other current cryptosystems such as RSA, and it is believed to be resistant to quantum computer attacks, which makes it a prominent post-quantum cryptosystem.
To describe the way the NTRU cryptographic scheme operates, we first have to give some definitions.
Definition 8.
Fix a positive integer N. The ring of convolution polynomials (of rank N) is the quotient ring
R = Z[X] / (X^N − 1).
(1)
Definition 9.
The ring of convolution polynomials (modulo q) is the quotient ring
R_q = (Z/qZ)[X] / (X^N − 1).
(2)
Definition 10.
We consider a polynomial a(x) as an element of R_q by reducing its coefficients modulo q. For any positive integers d_1 and d_2, we let
L(d_1, d_2) = { a(x) ∈ R : a(x) has d_1 coefficients equal to 1, d_2 coefficients equal to −1, and all other coefficients equal to 0 }
(3)
Polynomials in L ( d 1 , d 2 ) are called ternary (or trinary) polynomials. They are analogous to binary polynomials, which have only 0’s and 1’s as coefficients.
We assume we have two polynomials a(x) and b(x). The product of these two polynomials is given by the formula
a(x) × b(x) = c(x) with c_k = ∑_{i=0}^{k} a_i b_{k−i} + ∑_{i=k+1}^{N−1} a_i b_{N+k−i} = ∑_{i+j ≡ k (mod N)} a_i b_j
(4)
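This convolution product is easy to sketch as code (polynomials as coefficient lists, exponents wrapping around modulo N):

```python
def conv_mult(a, b, N, q):
    """Multiply two polynomials in R_q = Z_q[x]/(x^N - 1)."""
    c = [0] * N
    for i in range(N):
        for j in range(N):
            c[(i + j) % N] = (c[(i + j) % N] + a[i] * b[j]) % q
    return c

# e.g. with N = 3, q = 7:  (1 + x)(x + x^2) = x + 2x^2 + x^3 = 1 + x + 2x^2
print(conv_mult([1, 1, 0], [0, 1, 1], N=3, q=7))   # -> [1, 1, 2]
```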
We will denote the inverses by F q and F p , such that
F q × f ≡ 1 ( mod q ) and F p × f ≡ 1 ( mod p )
(5)
5.1. Description
The NTRU cryptographic scheme is based, firstly, on three well-chosen parameters (N, p, q), where N is a fixed large positive integer and p and q need not be prime but must be relatively prime, i.e., gcd(p, q) = 1, with q always larger than p [44]. Secondly, NTRU depends on four sets of polynomials L_f, L_g, L_φ and L_m with integer coefficients of degree N − 1 and works on the ring R = Z[X]/(X^N − 1).
Every element f ∈ R is written as a polynomial or as a vector f = ∑_{i=0}^{N−1} f_i x^i = [f_0, f_1, …, f_{N−1}]. We assume that Alice and Bob are two parties who want to exchange data and communicate securely. A trusted party (or the first party) selects public parameters (N, p, q, d) such that N and p are prime numbers, gcd(p, q) = gcd(N, q) = 1 and q > (6d + 1)p.
Alice chooses randomly two polynomials f ( x ) ∈ L ( d + 1 , d ) and g ( x ) ∈ L ( d , d ) . These two polynomials are Alice’s private key.
Alice computes the inverse polynomials
F_q(x) = f(x)^{−1} ∈ R_q and F_p(x) = f(x)^{−1} ∈ R_p
(6)
Alice computes h(x) = F_q(x) × g(x) ∈ R_q, and the polynomial h(x) is Alice's public key. Alice's private key is the pair (f(x), F_p(x)), and only with this key can she decrypt messages. Alternatively, she can store only f(x) and compute F_p(x) when she needs it.
Alice publishes her key h.
Bob wants to encrypt a message and chooses his plaintext m(x) ∈ R_p. The plaintext m(x) is a polynomial with coefficients m_i such that −p/2 ≤ m_i ≤ p/2.
Bob chooses a random polynomial r(x) ∈ L(d, d), which is called the ephemeral key, and computes
e ( x ) ≡ p h ( x ) × r ( x ) + m ( x ) ( mod q )
(7)
and this is the encrypted message that Bob sends to Alice.
Alice computes
a ( x ) ≡ f ( x ) × e ( x ) ( mod q )
(8)
Alice chooses the coefficients of a in the interval from − q / 2 to q / 2 (center lifts a ( x ) to an element of R).
Alice computes
b ( x ) ≡ F p ( x ) × a ( x ) ( mod p )
(9)
and she recovers the message m: if the parameters have been chosen correctly, the polynomial b(x) equals the plaintext m(x).
Depending on the choice of the ephemeral key r(x), the plaintext m(x) can be encrypted in many ways, as its possible encryptions are p h(x) × r(x) + m(x). The ephemeral key should be used only once, i.e., it should not be used to encrypt two different plaintexts. Additionally, Bob should not encrypt the same plaintext using two different ephemeral keys.
5.2. Discrete Implementation
Assume the trusted party chooses the parameters (N, p, q, d) = (11, 3, 61, 2). As we can see, N = 11 and p = 3 are prime numbers, gcd(3, 61) = gcd(11, 61) = 1, and the condition q > (6d + 1)p is satisfied, since 61 > (6 · 2 + 1) · 3 = 39.
Alice chooses the polynomials
f(x) = x^10 − x^8 − x^6 + x^4 + x^2 + x + 1 ∈ L(3, 2)
g(x) = x^9 − x^8 − x^6 + x^4 + x^2 + 1 ∈ L(2, 2)
These polynomials, f , g are the private key of Alice.
Alice computes the inverses
F_61(x) = f(x)^{−1} mod 61 = 45x^10 + 49x^9 + 26x^8 + 40x^7 + 53x^6 + 47x^5 + 21x^4 + 24x^3 + 60x^2 + 32x + 31 ∈ R_61
F_3(x) = f(x)^{−1} mod 3 = x^9 + x^7 + x^5 + 2x^4 + 2x^3 + 2x^2 + x ∈ R_3
Alice can store ( f ( x ) , F 3 ( x ) ) as her private key.
Alice computes
h(x) = F_61(x) × g(x) = 11x^10 + 49x^9 + 26x^8 + 46x^7 + 28x^6 + 53x^5 + 31x^4 + 36x^3 + 30x^2 + 5x + 50
and publishes her public key h ( x ) .
Bob decides to encrypt the message m(x) = x^7 − x^4 + x^3 + x + 1 and uses the ephemeral key r(x) = x^9 + x^7 + x^4 − x^3 + 1.
Bob computes and sends to Alice the encrypted message
e ( x ) ≡ p h ( x ) × r ( x ) + m ( x ) ( mod q )
that is
e(x) = 11x^10 + 49x^9 + 52x^8 + 35x^7 + 30x^6 + 25x^5 + 35x^4 + 32x^3 + 18x^2 + 56x + 28 (mod 61).
Alice receives the ciphertext e ( x ) and computes
f(x) × e(x) = 58x^10 + 60x^9 + 60x^8 + 4x^7 + 56x^5 + 6x^4 + 55x^2 + 3x + 6 ∈ R_61
Therefore, Alice center-lifts modulo 61 to obtain
a(x) = −3x^10 − x^9 − x^8 + 4x^7 − 5x^5 + 6x^4 − 6x^2 + 3x + 6 ∈ R
She reduces a ( x ) modulo 3 and computes
F_3(x) × a(x) = x^7 + 2x^4 + x^3 + x + 1 ∈ R_3
and recovers Bob's message m(x) = x^7 − x^4 + x^3 + x + 1.
5.3. Security
Lattice-based NTRU is one of the fastest public key cryptosystems and it is used for encryption (NTRU-Encrypt) and digital signatures (NTRUSign). From the moment that NTRU was presented in 1996, NTRU security has been a main issue of interest and research. NTRU hardness relies on the hard mathematical problems in a lattice, such as the Shortest Vector Problem [35].
The authors of NTRU in their paper [44] argue that the secret key can be recovered from the public key by finding a sufficiently short vector of the lattice that is generated in the NTRU algorithm. D. Coppersmith and A. Shamir proposed a simple attack against the NTRU cryptosystem. In their work, they argued that the target vector f||g ∈ Z^{2N} (the symbol || denotes vector concatenation) belongs to the natural lattice:
L_CS = { F||G ∈ Z^{2N} : F ≡ h × G (mod q), where F, G ∈ R }.
It is obvious that L_CS is a full-dimension lattice in Z^{2N} with volume q^N. The target vector f||g is among the shortest vectors of L_CS, so an SVP oracle applied to L_CS should heuristically output the private key polynomials f and g. Hoffstein et al. claimed that if one chooses the number N reasonably, NTRU is sufficiently secure, as all these types of attacks are exponential in N. These types of attacks are based on the difficulty of solving certain lattice problems, such as SVP and CVP [45]. Lattice attacks can be used to recover the private key of an NTRU system, but they are generally considered to be infeasible for the current parameters of NTRU. It is important that the key size of the NTRU protocol is O(N log q), and this fact makes NTRU a promising cryptographic scheme for post-quantum cryptography [46].
Furthermore, the cryptanalysis of NTRU is an active area of research, and other types of attacks against the NTRU cryptosystem have been developed [47,48,49]. We describe some of them below.
Brute-Force Attack. In this type of attack, all possible values of the private key are tested until the correct one is found. Brute-force attacks are generally not practical for NTRU, as the size of the key space is very large [50].
Key Recovery Attack. This type of attack relies on exploiting vulnerabilities in the key-generation process of NTRU. For example, if the random number generator used to create the private key is weak, an attacker may be able to recover the private key [51].
Side-channel Attack. This type of attack takes advantage of weaknesses in the implementation of NTRU, such as timing attacks, power analysis attacks, and fault attacks. Side-channel attacks require physical access to the device running the implementation [52,53].
To protect NTRU against these types of attacks and avoid leaking secret data, researchers use various techniques to ensure its security, such as parameter selection, randomization, and error-correcting codes.
6. The LWE Cryptosystem
In 2005, O. Regev presented a new public key cryptographic scheme, the Learning with Errors cryptosystem, and for this work Regev won the Gödel Prize in 2018 [54]. LWE is one of the most famous lattice-based cryptosystems and one of the most widely studied in recent years. It is based on the Learning with Errors problem and the hardness of inverting a noisy random linear function of a secret vector modulo a prime number. The LWE public key cryptosystem is probabilistic: decryption succeeds with high probability. Since LWE proved to be secure and efficient, it has become one of the most contemporary and innovative research topics in both lattice-based cryptography and computer science.
6.1. The Learning with Errors Problem
First, we have to introduce the Learning with Errors (LWE) problem. Assume that we have a secret vector $s = (s_1, s_2, \ldots, s_n) \in \mathbb{Z}^n$ with integer coefficients and a system of linear equations such that
$$\begin{aligned}
a_{11} s_1 + a_{12} s_2 + \ldots + a_{1n} s_n &\approx b_1 \\
a_{21} s_1 + a_{22} s_2 + \ldots + a_{2n} s_n &\approx b_2 \\
&\;\;\vdots \\
a_{m1} s_1 + a_{m2} s_2 + \ldots + a_{mn} s_n &\approx b_m
\end{aligned}$$
We use the symbol "$\approx$" to indicate that each equation holds only up to some small error. This is a difficult problem because adding and multiplying rows together increases the error in each equation, so the final row-reduced system is worthless and the answer lies far from the real value.
Definition 11.
Let $s \in \mathbb{Z}_q^n$ be a secret vector and $\chi$ be a given distribution on $\mathbb{Z}_q$. An LWE distribution $A_{s,n,q,\chi}$ generates a sample $(a, b) \in \mathbb{Z}_q^n \times \mathbb{Z}_q$ or $(A, b) \in \mathbb{Z}_q^{m \times n} \times \mathbb{Z}_q^m$, where $a \in \mathbb{Z}_q^n$ is uniformly distributed and $b = \langle a, s \rangle + e$, where $e \leftarrow \chi$ and $\langle a, s \rangle$ is the inner product of $a$ and $s$ in $\mathbb{Z}_q$.
We call $A_{s,n,q,\chi} = (a, b) \in \mathbb{Z}_q^n \times \mathbb{Z}_q$ the LWE distribution; $s$ is called the private key, and $\chi$ the error distribution. If $b \in \mathbb{Z}_q$ is uniformly distributed, then it is called the uniform LWE distribution.
Definition 12.
Fix $n \geq 1$, $q \geq 2$, and an error probability distribution $\chi$ on $\mathbb{Z}_q$. Let $s$ be a vector with $n$ coefficients in $\mathbb{Z}_q$. Let $A_{s,\chi}$ on $\mathbb{Z}_q^n \times \mathbb{Z}_q$ be the probability distribution obtained by choosing a vector $a \in \mathbb{Z}_q^n$ uniformly at random, choosing $e \in \mathbb{Z}_q$ according to $\chi$, and outputting $(a, \langle a, s \rangle + e)$, where additions are performed in $\mathbb{Z}_q$. We say an algorithm solves LWE with modulus $q$ and error distribution $\chi$ if, for any $s \in \mathbb{Z}_q^n$, given enough samples from $A_{s,\chi}$ it outputs $s$ with high probability.
Definition 13.
Suppose we have a way of generating samples from $A_{s,\chi}$ as above, and also uniformly distributed samples $(a, b)$ from $\mathbb{Z}_q^n \times \mathbb{Z}_q$; call this uniform distribution $U$. The decision-LWE problem is to determine, after a polynomial number of samples, whether the samples come from $A_{s,\chi}$ or from $U$.
Simplifying the definition and formulating it in more compact matrix notation: generate a uniformly random matrix $A$ with coefficients between $0$ and $q$ and two secret vectors $s$, $e$ with coefficients drawn from a distribution with small variance; the LWE sample is then $(A, b = As + e \bmod q)$. According to the LWE problem, it is hard to locate the secret $s$ from such a sample.
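As a quick illustration of this matrix form, the following Python sketch (assuming NumPy; the parameters are toy values, not secure ones) generates an LWE sample.

```python
import numpy as np

rng = np.random.default_rng(0)

def lwe_sample(n, m, q, sigma):
    """Return (A, b, s) with b = A s + e mod q, e drawn from a small Gaussian."""
    s = rng.integers(0, q, size=n)              # secret vector in Z_q^n
    A = rng.integers(0, q, size=(m, n))         # uniform public matrix
    e = np.rint(rng.normal(0.0, sigma, size=m)).astype(np.int64)
    b = (A @ s + e) % q
    return A, b, s

A, b, s = lwe_sample(n=8, m=16, q=97, sigma=1.0)
print(A.shape, b.shape)  # (16, 8) (16,)
```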
Definition 14.
For $a > 0$, the family $\Psi_a$ is the (uncountable) set of all elliptical Gaussian distributions $D_r$ over a number field $K_{\mathbb{R}}$ in which $r \geq a$.
The choice of the parameters is crucial for the hardness of this problem. The distribution is a Gaussian distribution or a binomial distribution with variance 1 to 3; the length of the secret vector $n$ is such that $2^9 < n < 2^{10}$, and the modulus $q$ is in the range $2^8$ to $2^{16}$.
6.2. Description
Assume $n \geq 1$ and $q \geq 2$ are positive integers and $\chi$ is a given probability distribution on $\mathbb{Z}_q$. The LWE cryptographic scheme is based on the LWE distribution $A_{s,\chi}$ and is described below.
The parameters of the LWE cryptosystem are crucial to the protocol's security. So, let $n$ be the security parameter of the system, let $m$ and $q$ be two integers, and let $\chi$ be a probability distribution on $\mathbb{Z}_q$.
The security and the correctness of the cryptosystem are based on the following parameters, which must be chosen appropriately.
Choose $q$, a prime number between $n^2$ and $2n^2$.
Let $m = (1 + \epsilon)(n + 1) \log q$ for some arbitrary constant $\epsilon > 0$.
The probability distribution is chosen to be $\chi = \Psi_{\alpha(n)}$ for $\alpha(n) \in o(1/(\sqrt{n} \log n))$.
We suppose that there are two parties, Alice and Bob, who want to transfer information securely. The LWE cryptosystem has the typical structure of a cryptographic scheme and its steps are the following.
Alice chooses $s \in \mathbb{Z}_q^n$ uniformly at random; $s$ is the private key.
Alice generates a public key by choosing $m$ vectors $a_1, a_2, \ldots, a_m \in \mathbb{Z}_q^n$ independently from the uniform distribution. She also chooses error offsets $e_1, e_2, \ldots, e_m \in \mathbb{Z}_q$ independently according to $\chi$. The public key is $(a_i, b_i)_{i=1}^m$, where $b_i = \langle a_i, s \rangle + e_i$.
In matrix form, the public key is the LWE sample $(A, b = As + e \bmod q)$, where $s$ is the secret vector.
Bob, in order to encrypt a bit, chooses a set $S$ uniformly at random among all $2^m$ subsets of $[m]$. The encryption is $(\sum_{i \in S} a_i, \sum_{i \in S} b_i)$ if the bit is 0 and $(\sum_{i \in S} a_i, \lfloor \frac{q}{2} \rfloor + \sum_{i \in S} b_i)$ if the bit is 1.
In matrix form, Bob can encrypt a bit $m$ by computing two LWE samples: one using $A$ as the random public element and one using $b$. Bob generates his own secret vectors $s'$, $e'$, and $e''$ and forms the LWE samples $(A, b' = A^T s' + e' \bmod q)$ and $(b, v' = b^T s' + e'' \bmod q)$, where $v'$ is an integer between $0$ and $q$. Bob adds the message he wants to encrypt to the second sample, so the encrypted message consists of the two samples $(A, b' = A^T s' + e' \bmod q)$ and $(b, v' = b^T s' + e'' + \lfloor \frac{q}{2} \rfloor m \bmod q)$.
Alice wants to decrypt Bob's ciphertext. The decryption of a pair $(a, b)$ is 0 if $b - \langle a, s \rangle$ is closer to 0 than to $\lfloor \frac{q}{2} \rfloor$ modulo $q$; otherwise, the decryption is 1.
In matrix form, Alice first calculates $\Delta v = v' - b'^T s$. As long as $e^T s' + e'' - s^T e'$ is small enough, Alice recovers the message as $mes = \lfloor \frac{2}{q} \Delta v \rceil$.
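A compact sketch of this matrix-form bit encryption follows (assuming NumPy; toy parameters). One deviation worth flagging: we draw the secret $s$ from the small-error distribution so that the noise term $s^T e'$ provably stays small, as practical LWE schemes do.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, q = 16, 32, 3329
chi = lambda size: rng.integers(-2, 3, size=size)  # small error distribution

# Key generation: public key (A, b = A s + e mod q), small secret s.
s = chi(n)
A = rng.integers(0, q, size=(m, n))
b = (A @ s + chi(m)) % q

# Encryption of one bit: one sample built from A and one built from b.
bit = 1
s2, e1, e2 = chi(m), chi(n), int(chi(1)[0])
b2 = (A.T @ s2 + e1) % q                       # the sample (A, b')
v = (int(b @ s2) + e2 + (q // 2) * bit) % q    # the sample (b, v')

# Decryption: Delta v = v' - b'^T s equals (q/2)*bit plus small noise.
dv = (v - int(b2 @ s)) % q
print(int(round(2 * dv / q)) % 2)  # recovers the bit
```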
6.3. Discrete Implementation
We choose $n = 4$ and $q = 13$.
Alice chooses the private key $s = [2, 5, 0, 6]$.
Let $m = 3$, so Alice generates the public key with the aid of three vectors $a_i$ and three error terms $e_i$, $i = 1, 2, 3$. She chooses $a_1 = [1, 6, 2, 4]$ and $e_1 = 0$, $a_2 = [0, 3, 5, 1]$ and $e_2 = 0$, and $a_3 = [2, 1, 6, 3]$ and $e_3 = -1$. Therefore, Alice's public key is:
$$\{([1, 6, 2, 4], 4),\; ([0, 3, 5, 1], 8),\; ([2, 1, 6, 3], 0)\}$$
Bob wants to encrypt 0, so he takes the subset $S = \{1, 2\}$ and computes
$$\Big(\sum_{i \in S} a_i, \sum_{i \in S} b_i\Big) = ([1, 6, 2, 4] + [0, 3, 5, 1],\; 4 + 8) = ([1, 9, 7, 5], 12)$$
Alice performs the decryption algorithm by computing
$$b - \langle a, s \rangle = 12 - \langle [1, 9, 7, 5], [2, 5, 0, 6] \rangle = 12 - 77 \equiv 12 - 12 = 0 \pmod{13}$$
and obviously the decryption is 0, since the output value is closer to 0 (in this case, equal to 0) than to $\lfloor \frac{13}{2} \rfloor = 6$ modulo 13.
Therefore, the encryption scheme worked correctly.
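The toy example can also be checked mechanically; here is a short sketch (assuming NumPy) that reproduces the computation, with the error terms as chosen above.

```python
import numpy as np

q = 13
s = np.array([2, 5, 0, 6])                                # private key
A = np.array([[1, 6, 2, 4], [0, 3, 5, 1], [2, 1, 6, 3]])  # vectors a_1..a_3
e = np.array([0, 0, -1])                                  # error terms
b = (A @ s + e) % q                                       # [4, 8, 0]

S = [0, 1]               # Bob's subset S = {1, 2} (0-indexed)
bit = 0
a_sum = A[S].sum(axis=0) % q                              # [1, 9, 7, 5]
b_sum = (b[S].sum() + (q // 2) * bit) % q                 # 12

d = (b_sum - a_sum @ s) % q
# Decrypt to 0 if d is closer to 0 (mod q) than to floor(q/2).
print(0 if min(d, q - d) <= abs(d - q // 2) else 1)       # prints 0
```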
6.4. Implementations and Variants
The Learning with Errors (LWE) cryptosystem is a popular post-quantum cryptographic scheme that relies on the hardness of certain lattice-based computational problems. There are several variants of the LWE cryptosystem, including Ring-LWE, Dual LWE, Module-LWE, Binary-LWE, multilinear LWE, and others [55,56,57].
The Ring-LWE Cryptosystem
This variant of LWE uses polynomial rings instead of the more general lattices used in standard LWE. Ring-LWE has a simpler structure, which improves execution speed and memory utilization efficiency. In 2013, Lyubashevsky et al. [46] presented a new public key cryptographic scheme based on the LWE problem over rings.
The Ring-LWE cryptosystem structure.
Lyubashevsky et al. proposed a well-analyzed cryptosystem that uses two ring elements for both the public key and the ciphertext; it is a plain lattice-based version of the public key cryptographic system.
The two parties that want to communicate agree on the complexity value $n$, the highest coefficient power to be used. Let $R = \mathbb{Z}[X]/(X^n + 1)$ be the fixed ring, and let an integer $q$ be chosen, such as $q = 2n - 1$. The steps of the Ring-LWE protocol are described below.
A secret element $s$ of length $n$ is chosen with integer entries modulo $q$ in the ring $R_q$, where $q \in \mathbb{Z}^+$. This is the private key of the system.
An element $a \in R_q$ is chosen together with a random small element $e \in R$ from the error distribution, and we compute $b = a \cdot s + e$.
The public key of the system is the pair $(a, b)$.
Let $m$ be the $n$-bit message that is meant for encryption.
The message $m$ is considered an element of $R$, its bits being used as coefficients of a polynomial of degree less than $n$.
The elements $e_1, e_2, r \in R$ are generated from the error distribution.
$u = a \cdot r + e_1 \bmod q$ is computed.
$v = b \cdot r + e_2 + \lfloor \frac{q}{2} \rceil \cdot m \bmod q$ is computed, and $(u, v) \in R_q^2$ is sent to the receiver.
The second party receives the payload $(u, v) \in R_q^2$ and computes $d = v - u \cdot s = (r \cdot e - s \cdot e_1 + e_2) + \lfloor \frac{q}{2} \rceil \cdot m \bmod q$. Each coefficient $d_i$ is evaluated: if $d_i \approx \frac{q}{2}$, the corresponding bit is recovered as 1, and otherwise as 0.
The Ring-LWE cryptographic scheme is similar to the LWE cryptosystem that was proposed by Regev. The difference is that the inner products are replaced with ring products; the resulting ring structure increases the efficiency of the operations.
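The following minimal sketch (assuming NumPy; the parameters are illustrative and insecure) runs the Ring-LWE flow above, with multiplication in $\mathbb{Z}_q[X]/(X^n + 1)$ implemented as negacyclic convolution.

```python
import numpy as np

def negacyclic_mul(a, b, n, q):
    """Multiply a, b in Z_q[X]/(X^n + 1); wrap-around picks up a minus sign."""
    res = np.zeros(n, dtype=np.int64)
    for i in range(n):
        for j in range(n):
            if i + j < n:
                res[i + j] += a[i] * b[j]
            else:
                res[i + j - n] -= a[i] * b[j]  # X^n = -1
    return res % q

rng = np.random.default_rng(2)
n, q = 8, 257
small = lambda: rng.integers(-1, 2, size=n)   # ternary noise polynomials

s, e = small(), small()                       # private key s
a = rng.integers(0, q, size=n)
b = (negacyclic_mul(a, s, n, q) + e) % q      # public key (a, b)

m = rng.integers(0, 2, size=n)                # message bits as coefficients
r, e1, e2 = small(), small(), small()
u = (negacyclic_mul(a, r, n, q) + e1) % q
v = (negacyclic_mul(b, r, n, q) + e2 + (q // 2) * m) % q

d = (v - negacyclic_mul(u, s, n, q)) % q      # = r*e - s*e1 + e2 + (q/2) m
bits = ((d > q // 4) & (d < 3 * q // 4)).astype(int)
print(np.array_equal(bits, m))                # True: the bits are recovered
```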
6.5. Security
Learning with Errors (LWE) is a computational problem that forms the basis of cryptosystems, especially cryptographic schemes for post-quantum cryptography. It is considered a hard mathematical problem, and as a consequence, cryptosystems based on the LWE problem offer high security as well. LWE cryptographic protocols are a contemporary and active field of research, and their security is therefore studied and analyzed continually.
Various attacks can be performed against cryptosystems based on the LWE problem. Broadly, these fall into two classes: attacks that exploit weaknesses in the LWE problem itself, and attacks that exploit weaknesses in the specific implementation of the cryptosystem. Below, we present some of the types of attacks that can be launched against LWE-based cryptographic schemes.
Dual Attack. This type of attack is based on the dual lattice and is most effective against LWE instances with small size of plaintext messages.
Thus, hybrid dual attacks are appropriate for sparse and small secrets; in a hybrid attack, one guesses part of the secret and performs a lattice attack on the leftover part [58]. The cost of attacking the remaining portion of the secret is decreased because guessing reduces the problem's size. Additionally, the lattice attack component can be utilized for multiple guesses. When the lattice attack component is a primal (respectively, dual) attack, we call it a hybrid primal (respectively, hybrid dual) attack, and the optimal attack is achieved when the cost of guessing equals the cost of the lattice attack.
Sieving Attack. This type of attack relies on the idea of sieving, which finds linear combinations of the LWE samples that reveal information about the secret. Sieving attacks can be used to solve the LWE problem with fewer samples than its original complexity suggests.
Algebraic attack. This type of attack is based on the idea of finding algebraic relations between the LWE samples that leak secret information. Algebraic attacks can likewise solve the LWE problem with fewer samples than the original complexity suggests.
Side-channel attack. This type of attack exploits weaknesses in the implementation of the LWE-based scheme, such as timing attacks and others. Side-channel attacks are generally easier to mount than attacks against the LWE problem itself, but they require physical access to the device running the implementation.
Attacks that use the BKW algorithm. This is a classic attack; it is considered sub-exponential and is most effective against small or specially structured LWE instances.
To mitigate these attacks, LWE-based schemes typically use various techniques such as parameter selection, randomization, and error-correcting codes. These techniques are designed to make the LWE problem harder to solve and to prevent attackers from taking advantage of vulnerabilities in the implementation [59,60].
7. The GGH Cryptosystem
In 1997, Oded Goldreich, Shafi Goldwasser, and Shai Halevi proposed a cryptosystem (GGH) [61] based on lattice reduction problems; it can be seen as a lattice analogue of the McEliece cryptosystem [29], which is based on algebraic coding theory. In both the GGH and McEliece schemes, the ciphertext is obtained by adding a random noise vector to the plaintext [35]. In the GGH cryptosystem, the public and the private key are representations of a lattice, while in McEliece they are representations of a linear code. The basic distinction between these two cryptographic schemes is that the domains in which the operations take place are different. The main idea and structure of the GGH cryptographic scheme are characterized by simplicity, and its security is based on the difficulty of reducing lattices.
7.1. Description
The GGH public key encryption scheme is formed by the key generation algorithm $K$, the encryption algorithm $E$, and the decryption algorithm $D$. It is based on lattices in $\mathbb{Z}^n$, a key derivation function $h : \mathbb{Z}^n \times \mathbb{Z}^n \to K_s$, and a symmetric cryptosystem $(K_s, P, C, E_s, D_s)$, where $K_s$ is the key space, $P$ the set of plaintexts, $C$ the set of ciphertexts, $E_s$ the encryption algorithm, and $D_s$ the decryption algorithm.
The key generation algorithm $K$ generates a lattice $L$ by choosing a basis matrix $V$ that is nearly orthogonal. An integer matrix $U$ is chosen with determinant $\det(U) = \pm 1$, and the algorithm computes $W = UV$. Then, the algorithm outputs $e_k = W$ and $d_k = V$.
The encryption algorithm $E$ receives as input an encryption key $e_k = W$ and a plain message $m \in P$. It chooses a random vector $u \in \mathbb{Z}^n$ and a random noise vector $r$. Then it computes $x = uW$ and $z = x + r$ and encrypts the message as $w = E_s(h(x, r), m)$. It outputs the ciphertext $c = (z, w)$.
The decryption algorithm $D$ takes as input a decryption key $d_k = V$ and a ciphertext $c = (z, w)$. It computes $x = \lfloor z V^{-1} \rceil V$ and $r = z - x$ and decrypts as $m = D_s(h(x, r), w)$. If the $D_s$ algorithm outputs the symbol $\perp$, the decryption fails and $D$ outputs $\perp$; otherwise, the algorithm outputs $m$.
We assume that there exist two users, Alice and Bob, who want to communicate secretly. The main (classical) process of the GGH cryptosystem is described below.
Alice chooses a set of linearly independent vectors $v_1, v_2, \ldots, v_n \in \mathbb{Z}^n$ which form the matrix $V = [v_1, v_2, \ldots, v_n]$, $v_i \in \mathbb{Z}^n$, $1 \leq i \leq n$. Alice validates her choice of vectors by calculating the Hadamard ratio of the matrix $V$ and verifying that it is not too small. This is Alice's private key, and we let $L$ be the lattice generated by these vectors.
Alice chooses an $n \times n$ unimodular matrix $U$ with integer coefficients that satisfies $\det(U) = \pm 1$.
Alice computes a bad basis $w_1, w_2, \ldots, w_n$ for the lattice $L$ as the rows of $W = UV$; this is Alice's public key. Then, she publishes the key $w_1, w_2, \ldots, w_n$.
Bob chooses a small vector $m$ (e.g., a binary vector) as the plaintext he wants to encrypt. Then, he chooses a small random "noise" vector $r$, which acts as a random element, with entries chosen randomly between $-\delta$ and $\delta$, where $\delta$ is a fixed public parameter.
Bob computes the vector $e = mW + r = \sum_{i=1}^n m_i w_i + r$ using Alice's public key and sends the ciphertext $e$ to Alice.
Alice, with the aid of Babai's algorithm, uses the basis $v_1, v_2, \ldots, v_n$ to find a vector in $L$ that is close to $e$. This vector is $a = mW$, since the "noise" vector $r$ is small and she uses a good basis. Then, she computes $aW^{-1} = mWW^{-1}$ and recovers $m$.
Suppose there is an eavesdropper, Eve, who wants to obtain information about the communication between Alice and Bob. Eve has in her possession the message $e$ that Bob sent to Alice and therefore tries to find the closest vector to $e$, solving the CVP using the public basis $W$. As she uses vectors that are not reasonably orthogonal, Eve will recover a message $\hat{e}$ which will probably not be near $m$.
7.2. Discrete Implementation
Alice chooses a private basis $\vec{v}_1 = (48, 1)$ and $\vec{v}_2 = (-1, 48)$, which is a good basis since $\vec{v}_1$ and $\vec{v}_2$ are orthogonal vectors, i.e., $\langle \vec{v}_1, \vec{v}_2 \rangle = 0$. The rows of the matrix $V = \begin{pmatrix} 48 & 1 \\ -1 & 48 \end{pmatrix}$ are Alice's private key. The lattice $L$ spanned by $\vec{v}_1$ and $\vec{v}_2$ has determinant $\det(L) = 2305$, and the Hadamard ratio of the basis is $H = (\det(L)/(|\vec{v}_1|\,|\vec{v}_2|))^{1/2} \simeq 1$.
Alice chooses a unimodular matrix $U$ whose determinant equals 1, namely $U = \begin{pmatrix} 5 & 8 \\ 3 & 5 \end{pmatrix}$ with $\det(U) = +1$.
Alice computes the matrix $W = UV = \begin{pmatrix} 232 & 389 \\ 139 & 243 \end{pmatrix}$. Its rows are Alice's bad basis $\vec{w}_1 = (232, 389)$ and $\vec{w}_2 = (139, 243)$; since $\cos(\vec{w}_1, \vec{w}_2) \simeq 0.99948$, these vectors are nearly parallel, so they are suitable for a public key.
It is very important that the noise vector is selected carefully, so that the perturbed point is not shifted past the nearest lattice point. For Alice's basis that generates the lattice $L$, $\vec{r}$ is chosen so that $|\vec{r}| < 20$. So, the vector $\vec{r} = (r_x, r_y)$ is chosen with $-10 \leq r_x, r_y \leq 10$.
Bob wants to encrypt the message $m = (35, 27)$. The message can be seen as a linear combination of the basis $\vec{w}_1, \vec{w}_2$, namely $35\vec{w}_1 + 27\vec{w}_2$, to which the noise vector $\vec{r}$ is added.
The corresponding ciphertext is $e = mW + r = (35, 27)\begin{pmatrix} 232 & 389 \\ 139 & 243 \end{pmatrix} + (-9, 1) = (19285, 17064) + (-9, 1) = (19276, 17065)$, and Bob sends it to Alice.
Alice, using the private basis, applies Babai's algorithm and finds the closest lattice point. So, she solves the equation $a_1(48, 1) + a_2(-1, 48) = (19276, 17065)$ and finds $a_1 \simeq 463.02$ and $a_2 \simeq 345.8$. The closest lattice point is then $463(48, 1) + 346(-1, 48) = (21878, 17071)$, and this lattice vector is close to $e$.
Alice realizes that Bob must have computed $(21878, 17071)$ as a linear combination of the public basis vectors, and then, solving the linear combination $m_1(232, 389) + m_2(139, 243) = (21878, 17071)$, she finds $m_1 = 35$ and $m_2 = 27$ and recovers the message $m = (m_1, m_2) = (35, 27)$.
Eve has in her possession the encrypted message $(19276, 17065)$ that Bob sent to Alice, and she tries to solve the CVP using the public basis. Solving the equation $m_1(232, 389) + m_2(139, 243) = (19276, 17065)$, she finds the incorrect values $m_1 \simeq 1003.1$, $m_2 \simeq -1535.5$ and recovers the incorrect message $m' = (m_1, m_2) = (1003, -1535)$.
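As a cross-check of the mechanics (though not of the specific numbers quoted above), here is a short sketch (assuming NumPy) of GGH encryption and Babai round-off decryption with Alice's bases; the message and noise are drawn at random.

```python
import numpy as np

V = np.array([[48, 1], [-1, 48]])   # private (good) basis, rows
U = np.array([[5, 8], [3, 5]])      # unimodular matrix, det(U) = 1
W = U @ V                           # public (bad) basis, rows

rng = np.random.default_rng(3)
m = rng.integers(0, 50, size=2)     # small message vector
r = rng.integers(-10, 11, size=2)   # noise with entries in [-10, 10]

e = m @ W + r                       # ciphertext

# Babai round-off: express e in the good basis, round, map back.
a = np.rint(e @ np.linalg.inv(V))   # coefficients of the closest lattice point
recovered = np.rint(a @ V @ np.linalg.inv(W)).astype(int)
print(np.array_equal(recovered, m))  # True: the good basis absorbs the noise
```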
In 1999 and 2001, D. Micciancio proposed a simple technique to reduce both the key size and the ciphertext size of the GGH cryptosystem without decreasing its level of security [62,63].
7.3. Security
In the GGH cryptographic scheme, if a security parameter $n$ is chosen, the time required for encryption and the size of the key are $O(n^2 \log n)$, which makes it more efficient than other cryptosystems such as Ajtai–Dwork (AD).
There are some natural ways to perform an attack on the GGH cryptographic scheme.
Leak information and obtain the private key V from the public key W.
For this type of attack, a lattice basis reduction algorithm (such as LLL) is performed on the public key, the matrix $W$. It is possible that the output is a basis $W'$ good enough to enable the effective solution of the necessary closest vector instances. This attack becomes extremely difficult if the dimension of the lattice is sufficiently large.
Assuming the error vector $r$ is small, try to extract information about the message from the ciphertext $e$.
For this type of attack, it is useful that in the ciphertext $e = mW + r$ the error vector $r$ has small entries. One idea is to compute $eW^{-1} = mWW^{-1} + rW^{-1} = m + rW^{-1}$ and try to deduce possible values for some entries of $rW^{-1}$. For example, if the $j$-th column of $W^{-1}$ has a particularly small norm, then one can deduce that the $j$-th entry of $rW^{-1}$ is always small and hence get an accurate estimate of the $j$-th entry of $m$. To defeat this attack, one should only use some low-order bits of some entries of $m$ to carry information, or use an appropriate randomized padding scheme.
Try to solve the Closest Vector Problem for $e$ with respect to the lattice generated by $W$, for example by performing Babai's nearest plane algorithm or the embedding technique.
Moreover, certain types of attacks that can be performed against GGH are discussed below, such as Nguyen's attack and the Lee–Hahn attack.
Goldreich, Goldwasser, and Halevi claimed that increasing the key size compensates for the decrease in computation time [35]. When presenting their paper, the three authors published five numerical challenges that corresponded to increasing values of the parameter $n$, with the aim of supporting their algorithm. In each challenge, a public key and a ciphertext were given, and it was requested to recover the plaintext.
In 1999, P. Nguyen exploited a weakness specific to the way the parameters are chosen and developed an attack against the GGH cryptographic scheme [64]. The first four challenges, for $n = 200, 250, 300, 350$, were broken; since then, GGH has been considered partially broken in its original form. Nguyen argued that the choice of the error vector is its weakness and makes it vulnerable to attack. The error vectors used in the encryption of the GGH algorithm must be shorter than the vectors that generate the lattice. This weakness makes Closest Vector Problem instances arising from GGH easier than general CVP instances [35].
The other weakness of the GGH cryptosystem is the choice of the error vector $e$ in the encryption procedure. The vector $e$ lies in $\{\pm\sigma\}^n$ and is chosen to maximize the Euclidean norm under requirements on the infinity norm. Nguyen takes the ciphertext $c = mB + e$ modulo $\sigma$, where $m$ is the plaintext and $B$ the public key, and $e$ disappears from the equation, because $e \in \{\pm\sigma\}^n$ and every entry is $0 \bmod \sigma$. This leaks information about the message $m \pmod{\sigma}$; increasing the modulus to $2\sigma$ and adding the all-$\sigma$ vector $s$ to the equation leaks information about $m \pmod{2\sigma}$. Nguyen also demonstrated that in most cases this equation can easily be solved for $m$.
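A tiny numeric illustration (assuming NumPy; the matrix here is a random stand-in, not a real GGH key) of why the error vanishes: every entry of $e + s$ is $0$ or $2\sigma$, so reducing $c + s$ modulo $2\sigma$ exposes $mB \bmod 2\sigma$.

```python
import numpy as np

rng = np.random.default_rng(4)
n, sigma = 6, 3
B = rng.integers(-50, 50, size=(n, n))   # stand-in for a public basis
m = rng.integers(-5, 6, size=n)
e = sigma * rng.choice([-1, 1], size=n)  # error vector in {+-sigma}^n

c = m @ B + e
s = sigma * np.ones(n, dtype=int)        # the all-sigma vector

lhs = (c + s) % (2 * sigma)
rhs = (m @ B) % (2 * sigma)
print(np.array_equal(lhs, rhs))          # True: e has vanished mod 2*sigma
```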
In 2006, Nguyen and Regev performed an attack on the GGH signature scheme, transforming a geometric problem into a multivariate optimization problem [65]. The final numerical challenge, for $n = 400$, was solved by M.S. Lee and S.G. Hahn in 2010 [66]. GGH therefore has weaknesses and trapdoors that make it vulnerable to certain types of attacks, such as one attack that allows a fraudulent user to recover the secret key using a small amount of information about the ciphertext. Specifically, if an attacker can obtain the two smallest vectors in the lattice, they can recover the secret key using Coppersmith's algorithm [67]. As a result, GGH has limited practical use and has been largely superseded by newer and more secure lattice-based cryptosystems. So, while GGH made an important early contribution to the field of lattice-based cryptography, it is not currently considered a practical choice for secure communication due to its limitations in security.
8. Evaluation, Comparison and Discussion
We have presented a few of the main cryptographic schemes that are based on the hardness of lattice problems, and especially on the Closest Vector Problem. GGH is a public key cryptosystem built in the spirit of algebraic coding theory: a noise vector is added to the plaintext, and the result of this addition is the ciphertext. Both the private and the public keys are representations of a lattice, and the private key has a specific structure. Nguyen's attack [64] revealed the weakness and vulnerability of the GGH cryptosystem, and many researchers thereafter considered GGH to be unusable [64,68].
Later, in 2010, M.S. Lee and S.G. Hahn presented a method that solved the numerical challenge of the highest dimension, 400 [66]. Applying this specific method, Lee and Hahn came to the conclusion that the decryption of the ciphertext could be accomplished using partial information about the plaintext. Thus, this method requires some knowledge of the plaintext and cannot be performed in realistic cryptanalysis circumstances. On the other hand, M. Yoshino and N. Kunihiro in 2012, and C. Gu et al. in 2015, presented a few modifications and improvements to the GGH cryptosystem, claiming that they made it more resistant to these attacks [67,69].
In 2014, C.F. de Barros and L.M. Schechter, in their paper "GGH may not be dead after all", proposed certain improvements to GGH and, finally, a variation of the GGH cryptographic scheme [70]. De Barros and Schechter attack GGH directly by reducing the public key in order to find a basis suitable for Babai's algorithm. They increase the length of the noise vector $\vec{r}$, introducing a new parameter $k$ that modifies the GGH cryptographic algorithm. Their modifications resulted in a variation of GGH more resistant to cryptanalysis, but with a slower decryption process. In 2015, Brakerski et al. described certain types of attacks against some variations of the GGH cryptosystem that rely on the linearity of the zero-testing procedure [71].
GGH was a milestone in the evolution of post-quantum cryptography; it was one of the earliest lattice-based cryptographic schemes, and its security rests on the difficulty of the Closest Vector Problem. Even though it is viewed as one of the most significant lattice-based cryptosystems and still has theoretical interest, it is not recommended for practical use due to its security weaknesses. GGH is also less efficient than other lattice-based cryptosystems [72]. The process of encrypting and decrypting a message requires a large amount of computation, and this fact makes the GGH cryptosystem noticeably slower and less practical than other lattice-based cryptosystems.
Moreover, the GGH protocol is vulnerable to certain attacks, such as Coppersmith's attack and attacks via Babai's nearest plane algorithm, and is considered not strong enough. These attacks disputed the security of GGH and made it less preferable than newer, stronger, and more secure lattice-based cryptosystems. Evaluating the efficiency of the GGH cryptographic protocol, GGH is relatively inefficient compared with other lattice-based cryptosystems such as NTRU and LWE, especially in key generation and for large key lengths. As the GGH cryptosystem is based on matrix multiplications, choosing large keys requires a computationally expensive basis reduction algorithm for the encryption and decryption procedures.
Moreover, GGH is considered a complex cryptographic scheme, requiring concepts and knowledge of lattices and linear algebra to study, analyze, and implement. GGH also has one more drawback, the lack of standardization, which makes it hard to compare its functionality, security, and interoperability with other cryptographic schemes. GGH was one of the first cryptographic schemes developed from lattice theory. In spite of the fact that GGH certainly has an interesting theoretical basis and properties, it is not used in practice due to its limitations in security, efficiency, and complexity.
NTRU is a public key cryptographic scheme that is based on the Shortest Vector Problem in a lattice and was first presented in the 1990s. It is one of the most well studied and analyzed lattice-based cryptosystems and there have been many cryptanalysis studies of NTRU algorithms, including NTRU signatures. NTRU has a high level of security and efficiency and it is a promising protocol for post-quantum cryptography. Moreover, the NTRU cryptographic algorithm uses polynomial multiplication as its basic operation and it is notable for its simplicity.
A main advantage of the NTRU cryptosystem is its speed and it has been used in certain commercial applications where speed is a priority. NTRU has a fast implementation compared with other lattice-based cryptosystems, such as GGH, LWE, and Ajtai-Dwork. For this reason, NTRU is preferable for applications that require fast encryptions and decryption, such as in IoT devices or in embedded systems. In addition to its speed, NTRU uses smaller key sizes than other public key cryptosystems, but the same level of security is maintained. This makes it ideal for applications or environments with limited memory and processing power.
NTRU is considered to be a secure cryptographic scheme against various types of attacks. It is designed to be resistant against attacks such as lattice basis reduction, meet-in-the-middle attacks, and chosen ciphertext attacks. NTRU is believed to be a strong cryptographic scheme for the quantum era, meaning that it is considered to be resistant against attacks by quantum computers.
NTRU became famous and widely usable only after 2017 because, before then, it was under patent and it was difficult for researchers to use and modify it. Still, NTRU is not widely used or standardized in the industry, making it difficult to assess its interoperability with other cryptosystems. Furthermore, NTRU is considered a public key cryptographic protocol of relative complexity, and its analysis and implementation require a good understanding of lattice-based cryptography and ring theory. Overall, NTRU is a promising lattice-based cryptosystem for post-quantum cryptography that offers fast implementation and strong security guarantees [73].
Learning with Errors (LWE) is a widely used and well-studied public key cryptographic scheme that is based on lattice theory [74]. LWE is considered secure against both quantum and classical attacks; indeed, it is considered to be among the most secure and efficient of these schemes, while NTRU has limitations in terms of its security [75]. LWE bases its hardness on the difficulty of recovering a secret from a noisy matrix-vector product, and this makes it resistant to various types of attacks, the same types of attacks as with NTRU. It is considered a strongly secure and post-quantum-secure cryptosystem, which means that it is resistant to attacks by a quantum computer [76].
LWE uses keys of small size compared with other cryptographic schemes designed for the quantum era, such as code-based and hash-based cryptosystems [77]. Just like NTRU, LWE is appropriate for implementation in resource-constrained environments, such as IoT devices or embedded systems. A basic advantage of the LWE cryptosystem is its flexibility: it is a versatile cryptographic scheme suitable for a variety of cryptographic methods, such as digital signatures, key exchange, and encryption. LWE also serves as a foundation for more advanced cryptographic protocols and for the variants developed from it.
LWE can be vulnerable to certain types of attacks, such as side-channel attacks (e.g., timing attacks or power analysis attacks), if the right countermeasures are not taken [78]. Just like NTRU, LWE is not yet standardized and widely adopted by the computing industry, and this makes it difficult to assess its interoperability with other cryptosystems and to compare it with them. Moreover, the LWE cryptographic protocol is characterized by complexity, and understanding and modifying it can be challenging.
Undoubtedly, both NTRU and LWE are fast, efficient, and secure cryptographic schemes. NTRU uses smaller key sizes, which makes it suitable for applications where memory and computational power are limited. Both LWE and NTRU are considered strong and resistant to various types of attacks and are regarded as prominent candidates for post-quantum cryptography. Additionally, LWE is an adaptable cryptographic protocol and can be used in a wide range of cryptographic tasks and methods, while NTRU is primarily used for encryption and decryption.
In summary, LWE and NTRU are both promising lattice-based cryptosystems that offer strong security guarantees and are resistant to quantum attacks. NTRU is known for its fast implementation and smaller key sizes, while LWE offers more flexibility in cryptographic primitives and is currently undergoing standardization. Ultimately, the choice between LWE and NTRU will depend on specific use cases and implementation requirements.
Overall, each lattice-based cryptosystem has its own strengths and weaknesses depending on the specific use case. Choosing the right one requires careful consideration of factors such as security, efficiency, and ease of implementation.
9. Lattice-Based Cryptographic Implementations and Future Research
Quantum research over the past few years has been particularly transformative, with scientific breakthroughs that will allow exponential increases in computing speed and precision. In 2016, the National Institute of Standards and Technology (NIST) announced an invitation to researchers to submit proposals for public-key post-quantum cryptographic algorithms. By the initial submission deadline at the end of 2017, 23 signature schemes and 59 encryption/key-encapsulation mechanism (KEM) schemes had been submitted, for a total of 82 candidate proposals.
In July 2022, NIST finished the third round of selection and chose a set of encryption tools designed to be secure against attacks by future quantum computers. The four selected cryptographic algorithms are regarded as an important milestone in securing sensitive data against possible cyberattacks from a future quantum computer [79].
The algorithms are designed for the two primary purposes for which encryption is commonly employed: general encryption, which is used to secure data transferred over a public network, and digital signatures, which are used to verify an individual's identity. Experts from several institutions and nations collaborated to develop the four algorithms, which are presented below.
CRYSTALS-Kyber
This cryptographic scheme was selected by NIST for general encryption and is based on the module Learning with Errors problem. CRYSTALS-Kyber is similar to the Ring-LWE cryptographic scheme but is considered more secure and flexible. The communicating parties can use small encryption keys and exchange them easily and at high speed.
CRYSTALS-Dilithium
This algorithm is recommended for digital signatures and bases its security on the difficulty of lattice problems over module lattices. Like other digital signature schemes, the Dilithium signature scheme allows a sender to sign a message with their private key, and a recipient uses the sender's public key to verify the signature; Dilithium, moreover, has the smallest public key and signature size of any lattice-based signature scheme that only uses uniform sampling.
FALCON
FALCON is a cryptographic protocol proposed for digital signatures. The FALCON cryptosystem is based on the theoretical framework of Gentry et al. [80]. It is a promising post-quantum algorithm, as it provides quick signature generation and verification. The FALCON cryptographic algorithm has strong advantages such as security, compactness, speed, scalability, and RAM economy.
SPHINCS+
SPHINCS+ is the third digital signature algorithm that was selected by NIST. SPHINCS+ uses hash functions and is considered somewhat larger and slower than FALCON and Dilithium. It is regarded as an improvement of the SPHINCS signature scheme, presented in 2015, as it reduces the size of the signature. One of the key points of interest of SPHINCS+ over other signature schemes is its resistance to quantum attacks, which depends only on the hardness of an underlying one-way function.
10. Conclusions
In recent years, significant progress has been made, taking us beyond classical computing and into the new era of quantum computing. Research on post-quantum algorithms is active, and huge sums of money are being invested in it, because strong cryptosystems are necessary.
It is considered almost certain that both symmetric key algorithms and hash functions will continue to be used as tools of post-quantum cryptography. A variety of cryptographic schemes have been proposed for the quantum era of computing, and this is a topic of ongoing research. The development and standardization of an efficient post-quantum algorithm is the challenge facing the academic community. What was once considered a science fiction fantasy is now a technological reality. The quantum age is coming, and it will bring enormous changes; therefore, we have to be prepared.
Author Contributions
Investigation, G.C.M.; Writing—original draft, M.E.S.; Supervision, I.K.S., D.P. and G.G. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Conflicts of Interest
The authors declare no conflict of interest.
References
1. Sabani, M.; Savvas, I.K.; Poulakis, D.; Makris, G. Quantum Key Distribution: Basic Protocols and Threats. In Proceedings of the 26th Pan-Hellenic Conference on Informatics (PCI 2022), Athens, Greece, 25–27 November 2022; ACM: New York, NY, USA, 2022; pp. 383–388.
2. Nielsen, M.; Chuang, I. Quantum Computation and Quantum Information; Cambridge University Press: Cambridge, UK, 2011.
3. Sabani, M.; Savvas, I.K.; Poulakis, D.; Makris, G.; Butakova, M. The BB84 Quantum Key Protocol and Potential Risks. In Proceedings of the 8th International Congress on Information and Communication Technology (ICICT 2023), London, UK, 20–23 February 2023.
4. Preskill, J. Quantum computing and the entanglement frontier. In Proceedings of the 25th Solvay Conference on Physics, Brussels, Belgium, 19–25 October 2011. Available online: https://arxiv.org/abs/1203.5813 (accessed on 26 March 2012).
5. Poulakis, D. Cryptography, the Science of Secure Communication, 1st ed.; Ziti Publications: Thessaloniki, Greece, 2004.
6. Shor, P.W. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM J. Comput. 1997, 26, 1484–1509.
7. Alkim, E.; Ducas, L.; Pöppelmann, T.; Schwabe, P. Post-Quantum Key Exchange—A New Hope. In Proceedings of the USENIX Security Symposium 2016, Austin, TX, USA, 10–12 August 2016. Available online: https://eprint.iacr.org/2015/1092.pdf (accessed on 11 November 2015).
8. Bernstein, D.J.; Buchmann, J.; Brassard, G.; Vazirani, U. Post-Quantum Cryptography; Springer: Berlin/Heidelberg, Germany, 2009.
9. Zheng, Z.; Tian, K.; Liu, F. Modern Cryptography Volume 2: A Classical Introduction to Informational and Mathematical Principle; Springer: Singapore, 2023.
10. Silverman, J.H.; Pipher, J.; Hoffstein, J. An Introduction to Mathematical Cryptography, 1st ed.; Springer: New York, NY, USA, 2008.
11. Galbraith, S. Mathematics of Public Key Cryptography; Cambridge University Press: Cambridge, UK, 2012.
12. Kannan, R. Algorithmic Geometry of Numbers. In Annual Reviews of Computer Science; Annual Review Inc.: Palo Alto, CA, USA, 1987; pp. 231–267.
13. Rivest, R.L.; Shamir, A.; Adleman, L. A Method for Obtaining Digital Signatures and Public-Key Cryptosystems. Commun. ACM 1978, 21, 120–126.
14. Sabani, M.; Galanis, I.P.; Savvas, I.K.; Garani, G. Implementation of Shor's Algorithm and Some Reliability Issues of Quantum Computing Devices. In Proceedings of the 25th Pan-Hellenic Conference on Informatics (PCI 2021), Volos, Greece, 26–28 November 2021; ACM: New York, NY, USA, 2021; pp. 296–392.
15. Wiesner, S. Conjugate coding. SIGACT News 1983, 15, 78–88.
16. Van Assche, G. Quantum Cryptography and Secret-Key Distillation, 3rd ed.; Cambridge University Press: New York, NY, USA, 2006.
17. Scherer, W. Mathematics of Quantum Computing: An Introduction; Springer: Berlin/Heidelberg, Germany, 2019.
18. Bennett, C.H.; Brassard, G.; Ekert, A. Quantum cryptography. Sci. Am. 1992, 50–57. Available online: https://www.jstor.org/stable/e24939235 (accessed on 2 October 1992).
19. Bennett, C.H.; Brassard, G.; Breidbart, S.; Wiesner, S. Quantum cryptography, or Unforgeable subway tokens. In Advances in Cryptology; Springer Science + Business Media: New York, NY, USA, 1982; pp. 267–275.
20. Bennett, C.H.; Brassard, G. Quantum Cryptography: Public Key Distribution and Coin Tossing. In Proceedings of the International Conference on Computer Systems and Signal Processing, Bangalore, India, 10–12 December 1984.
21. Teklu, B. Continuous-variable entanglement dynamics in Lorentzian environment. Phys. Lett. A 2022, 432, 128022.
22. Vasile, R.; Olivares, S.; Paris, M.G.A.; Maniscalco, S. Continuous variable quantum key distribution in non-Markovian channels. Phys. Rev. A 2011, 83, 042321.
23. Teklu, B.; Bina, M.; Paris, M.G.A. Noisy propagation of Gaussian states in optical media with finite bandwidth. Sci. Rep. 2022, 12, 11646. Available online: https://www.nature.com/articles/s41598-022-15865-5 (accessed on 8 June 2023).
24. Adnane, H.; Teklu, B.; Paris, M.G. Quantum phase communication assisted by non-deterministic noiseless amplifiers. J. Opt. Soc. Am. B 2019, 36, 2938–2945.
25. Teklu, B.; Trapani, J.; Olivares, S.; Paris, M.G.A. Noisy quantum phase communication channels. Phys. Scr. 2015, 90, 074027.
26. Trapani, J.; Teklu, B.; Olivares, S.; Paris, M.G.A. Quantum phase communication channels in the presence of static and dynamical phase diffusion. Phys. Rev. A 2015, 92, 012317.
27. Diffie, W.; Hellman, M. New Directions in Cryptography. IEEE Trans. Inf. Theory 1976, 22, 644–654.
28. Trappe, W.; Washington, L.C. Introduction to Cryptography with Coding Theory; Pearson Education: New York, NY, USA, 2006.
29. McEliece, R. A public key cryptosystem based on algebraic coding theory. DSN Prog. Rep. 1978, 42–44, 114–116.
30. Niederreiter, H. Knapsack-type cryptosystems and algebraic coding theory. Probl. Control Inf. Theory 1986, 15, 159–166.
31. Merkle, R. A certified digital signature. In Advances in Cryptology—CRYPTO '89, Proceedings of the 9th Annual International Cryptology Conference, Santa Barbara, CA, USA, 20–24 August 1989; Springer: Berlin/Heidelberg, Germany, 1989; pp. 218–238.
32. Bai, S.; Gong, Z.; Hu, L. Revisiting the Security of Full Domain Hash. In Proceedings of the 6th International Conference on Security, Privacy and Anonymity in Computation, Communication and Storage, Nanjing, China, 18–20 December 2013.
33. Matsumoto, T.; Imai, H. Public quadratic polynomial-tuples for efficient signature verification and message encryption. Adv. Cryptol. EUROCRYPT '88 1988, 330, 419–453.
34. Patarin, J. Hidden field equations and isomorphisms of polynomials. In Proceedings of EUROCRYPT '96, Zaragoza, Spain, 12–16 May 1996.
35. Nguyen, P.Q.; Stern, J. The two faces of lattices in cryptology. In Proceedings of the International Cryptography and Lattices Conference, Providence, RI, USA, 29–30 March 2001; pp. 146–180.
36. Micciancio, D.; Regev, O. Lattice-based cryptography. In Post-Quantum Cryptography; Springer: Berlin/Heidelberg, Germany, 2009.
37. Lyubashevsky, V. A Decade of Lattice Cryptography. In Advances in Cryptology—EUROCRYPT 2015; Springer: Berlin/Heidelberg, Germany, 2015.
38. Peikert, C. Lattice-Based Cryptography: A Primer. IACR Cryptol. ePrint Arch. 2016. Available online: https://eprint.iacr.org/2015/939.pdf (accessed on 17 February 2016).
39. Micciancio, D. On the Hardness of the Shortest Vector Problem. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 1998.
40. Micciancio, D. The shortest vector problem is NP-hard to approximate within some constant. In Proceedings of the 39th FOCS, Palo Alto, CA, USA, 8–11 November 1998.
41. Babai, L. On Lovász' lattice reduction and the nearest lattice point problem. Combinatorica 1986, 6, 1–13.
42. Micciancio, D. The hardness of the closest vector problem with preprocessing. IEEE Trans. Inf. Theory 2001, 47, 1212–1215.
43. Lenstra, A.K.; Lenstra, H.W., Jr.; Lovász, L. Factoring polynomials with rational coefficients. Math. Ann. 1982, 261, 513–534.
44. Hoffstein, J.; Pipher, J.; Silverman, J. NTRU: A ring-based public key cryptosystem. In Algorithmic Number Theory (Lecture Notes in Computer Science); Springer: New York, NY, USA, 1998; Volume 1423, pp. 267–288.
45. Faugère, J.C.; Otmani, A.; Perret, L.; Tillich, J.P. On the Security of NTRU Encryption. In Advances in Cryptology—EUROCRYPT 2010; Springer: Berlin/Heidelberg, Germany, 2010.
46. Lyubashevsky, V.; Peikert, C.; Regev, O. On Ideal Lattices and Learning with Errors over Rings. J. ACM 2013, 60, 43:1–43:35.
47. Albrecht, M.; Ducas, L. Lattice Attacks on NTRU and LWE: A History of Refinements; Cambridge University Press: Cambridge, UK, 2021.
48. Ashur, T.; Tromer, E. Key Recovery Attacks on NTRU and Schnorr Signatures with Partially Known Nonces. In Proceedings of the 38th Annual International Cryptology Conference, Santa Barbara, CA, USA, 19–23 August 2018.
49. Coppersmith, D.; Shamir, A. Lattice attacks on NTRU. In Advances in Cryptology—EUROCRYPT '97; Springer: Berlin/Heidelberg, Germany, 1997.
50. Buchmann, J.; Dahmen, E.; Vollmer, U. Cryptanalysis of the NTRU Signature Scheme. In Proceedings of the 6th IMA International Conference on Cryptography and Coding, Cirencester, UK, 17–19 December 1997.
51. Singh, S.; Padhye, S. Cryptanalysis of NTRU with n public keys. IEEE, 2017. Available online: https://ieeexplore.ieee.org/document/7976980 (accessed on 13 July 2017).
52. May, A.; Peikert, C. Lattice Reduction and NTRU. In Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science, Pittsburgh, PA, USA, 23–25 October 2005.
53. Buchmann, J.; Dahmen, E.; Hulsing, A. XMSS—A Practical Forward Secure Signature Scheme Based on Minimal Security Assumptions. In Post-Quantum Cryptography; Springer: Berlin/Heidelberg, Germany, 2011.
54. Regev, O. On lattices, learning with errors, random linear codes, and cryptography. J. ACM 2009, 56, 1–40.
55. Komano, Y.; Miyazaki, S. On the Hardness of Learning with Rounding over Small Modulus. In Proceedings of the 21st Annual International Conference on the Theory and Application of Cryptology and Information Security, Auckland, New Zealand, 29 November–3 December 2015.
56. Regev, O. Learning with Errors over Rings. In Algorithmic Number Theory: 9th International Symposium, ANTS-IX, Nancy, France, 19–23 July 2010. Available online: https://link.springer.com/chapter/10.1007/978-3-642-14518-6_3 (accessed on 10 July 2010).
57. Brakerski, Z.; Gentry, C.; Vaikuntanathan, V. New Constructions of Strongly Unforgeable Signatures Based on the Learning with Errors Problem. In Proceedings of the 48th Annual ACM Symposium on Theory of Computing, Cambridge, MA, USA, 19–21 June 2016.
58. Bi, L.; Lu, X.; Luo, J.; Wang, K.; Zhang, Z. Hybrid Dual Attack on LWE with Arbitrary Secrets. Cryptol. ePrint Arch. 2022. Available online: https://eprint.iacr.org/2021/152 (accessed on 25 February 2021).
59. Bos, J.W.; Costello, C.; Ducas, L.; Mironov, I.; Naehrig, M.; Nikolaenko, V.; Raghunathan, A.; Stebila, D. Frodo: Take off the ring! Practical, quantum-secure key exchange from LWE. In Proceedings of CCS 2016, Vienna, Austria, 24–28 October 2016. Available online: https://eprint.iacr.org/2016/659.pdf (accessed on 28 June 2016).
60. Chunsheng, G. Integer Version of Ring-LWE and Its Applications. Cryptol. ePrint Arch. 2017. Available online: https://eprint.iacr.org/2017/641.pdf (accessed on 24 October 2019).
61. Goldreich, O.; Goldwasser, S.; Halevi, S. Public-key cryptosystems from lattice reduction problems. CRYPTO '97 1997, 10, 112–131.
62. Micciancio, D. Lattice based cryptography: A global improvement. Technical Report 99-05, Theory of Cryptography Library, 1999. Available online: https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=9591bda3813b0d09522eff2ba17c3665b530ebb9 (accessed on 4 March 1999).
63. Micciancio, D. Improving Lattice Based Cryptosystems Using the Hermite Normal Form. In Cryptography and Lattices Conference; Springer: Berlin/Heidelberg, Germany, 2001.
64. Nguyen, P.Q. Cryptanalysis of the Goldreich–Goldwasser–Halevi cryptosystem from Crypto '97. In Annual International Cryptology Conference; Springer: Santa Barbara, CA, USA, 1999; pp. 288–304.
65. Nguyen, P.Q.; Regev, O. Learning a parallelepiped: Cryptanalysis of GGH and NTRU signatures. J. Cryptol. 2009, 22, 139–160.
66. Lee, M.S.; Hahn, S.G. Cryptanalysis of the GGH Cryptosystem. Math. Comput. Sci. 2010, 3, 201–208.
67. Gu, C.; Yu, Z.; Jing, Z.; Shi, P.; Qian, J. Improvement of GGH Multilinear Map. In Proceedings of the IEEE Conference on P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC), Krakow, Poland, 4–6 November 2015; pp. 407–411.
68. Minaud, B.; Fouque, P.A. Cryptanalysis of the New Multilinear Map over the Integers. IACR Cryptol. ePrint Arch. 2015, 941. Available online: https://eprint.iacr.org/2015/941 (accessed on 28 September 2015).
69. Yoshino, M.; Kunihiro, N. Improving GGH Cryptosystem for Large Error Vector. In Proceedings of the International Symposium on Information Theory and Its Applications, Honolulu, HI, USA, 28–31 October 2012; pp. 416–420.
70. De Barros, C.F.; Schechter, L.M. GGH may not be dead after all. In Proceedings of the Congresso Nacional de Matemática Aplicada e Computacional, São Paulo, Brazil, 8–12 September 2014.
71. Brakerski, Z.; Gentry, C.; Halevi, S.; Lepoint, T.; Sahai, A.; Tibouchi, M. Cryptanalysis of the Quadratic Zero-Testing of GGH. IACR Cryptol. ePrint Arch. 2015. Available online: https://eprint.iacr.org/2015/845 (accessed on 21 September 2015).
72. Susilo, W.; Mu, Y. Information Security and Privacy; Springer: Berlin/Heidelberg, Germany, 2014; Volume 845.
73. Bonte, C.; Iliashenko, I.; Park, J.; Pereira, H.V.; Smart, N. FINAL: Faster FHE Instantiated with NTRU and LWE. Cryptol. ePrint Arch. 2022. Available online: https://eprint.iacr.org/2022/074 (accessed on 20 January 2022).
74. Bai, S.; Chen, Y.; Hu, L. Efficient Algorithms for LWE and LWR. In Proceedings of the 10th International Conference on Applied Cryptography and Network Security, Singapore, 26–29 June 2012.
75. Brakerski, Z.; Langlois, A.; Regev, O.; Stehlé, D. Classical Hardness of Learning with Errors. In Proceedings of the 45th Annual ACM Symposium on Theory of Computing (STOC), Palo Alto, CA, USA, 2–4 June 2013; pp. 575–584.
76. Lyubashevsky, V.; Micciancio, D. Generalized Compact Knapsacks Are Collision Resistant. In Proceedings of the 33rd International Colloquium on Automata, Languages and Programming, Venice, Italy, 10–14 July 2006; pp. 144–155.
77. Takagi, T.; Kiyomoto, S. Improved Sieving Algorithms for Shortest Lattice Vector Problem and Its Applications to Security Analysis of LWE-based Cryptosystems. In Proceedings of the 23rd Annual International Conference on the Theory and Applications of Cryptographic Techniques, Lyon, France, 2–6 May 2004.
78. Balbás, D. The Hardness of LWE and Ring-LWE: A Survey. Cryptol. ePrint Arch. 2021. Available online: https://eprint.iacr.org/2021/1358.pdf (accessed on 8 October 2021).
79. Post-Quantum Cryptography. Available online: https://csrc.nist.gov/Projects/post-quantum-cryptography/selected-algorithms-2022 (accessed on 8 June 2023).
80. Gentry, C.; Peikert, C.; Vaikuntanathan, V. Trapdoors for Hard Lattices and New Cryptographic Constructions. Cryptol. ePrint Arch. 2007. Available online: https://eprint.iacr.org/2007/432 (accessed on 24 November 2007).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Public Relations Office, School of Computer Science, Carnegie Mellon University
Pittsburgh PA 15213-3891
(412) 268-3830 · (412) 268-5016 (fax)
5 November 1999
Quantum Computing Expert Peter Shor Receives Carnegie Mellon's 1998 Dickson Prize in Science
Computer scientist and mathematician Peter Shor, a principal research scientist at AT&T Labs (Florham Park, N.J.), is the recipient of Carnegie Mellon University's 1998 Dickson Prize in Science for his pioneering research in the theory of quantum computing. The Dickson Prize is funded through a trust left to the university by Pittsburgh physician Dr. Joseph Z. Dickson and his wife Agnes Fisher Dickson to honor people who make outstanding contributions to the field of science in the United States. The 1998 prize includes a $40,000 award.
Shor is the 25th recipient of this prize. He will give a lecture titled "Quantum Computing" at 4:30 p.m., Monday, November 8, in the auditorium of Carnegie Mellon's Mellon Institute Building. The lecture and reception which follows are open to the public.
A quantum computer (which is hypothetical today) operates on a different system of logic than a conventional computer, using the special properties of atoms and light rather than electronics to carry out a calculation. It has the potential to far surpass the speed of digital computers and solve problems that are intractable even for today's fastest parallel machines. However, the possibilities of such a device were not appreciated until 1994, when Shor discovered an algorithm for factoring large integers or whole numbers into prime numbers on a quantum computer. His breakthrough unleashed a flurry of research among physicists and computer scientists that is helping to bring quantum computers out of the realm of theory and into the prototype stage.
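To make the factoring idea concrete, here is a toy classical sketch (my illustration, not anything from the release) of the number-theoretic skeleton of Shor's algorithm: factoring N reduces to finding the multiplicative order r of a random base a modulo N. The order-finding loop below is brute force; that is exactly the step a quantum computer performs exponentially faster.

```python
import math
import random

def order(a: int, n: int) -> int:
    """Smallest r >= 1 with a**r = 1 (mod n), found by brute force.
    (Order finding is the step Shor's algorithm does quantumly.)"""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_via_order(n: int) -> int:
    """Classical skeleton of Shor's reduction, for odd composite n
    that is not a prime power."""
    while True:
        a = random.randrange(2, n)
        g = math.gcd(a, n)
        if g > 1:
            return g                   # lucky: a already shares a factor with n
        r = order(a, n)
        if r % 2 == 0:
            y = pow(a, r // 2, n)      # y * y = 1 (mod n)
            if y != n - 1:             # nontrivial square root of 1
                return math.gcd(y - 1, n)
        # otherwise retry with a fresh random base

print(factor_via_order(15))  # prints 3 or 5
```

For 15 this returns instantly; for the thousand-bit moduli used in real cryptosystems, the order loop is hopeless classically, which is the whole point.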
The difficulty of factoring long numbers using conventional computers is the basis of some widely used methods for encrypting information on the Internet. For this reason a quantum computer could, at least potentially, compromise the security of electronic cash and on-line signatures. However, a device which could actually carry out Shor's algorithm for large numbers is still many years away, because numerous technical difficulties have to be overcome. So, while security organizations are watching developments in the field, there is not yet any serious concern.
Shor has received several awards and honors for his work, including the Nevanlinna Prize, awarded at the 1998 International Congress of Mathematicians, and the 1998 International Quantum Communication Award. He is an AT&T fellow and a recipient of the 1999 Gödel Prize. Most recently, he was honored with the 1999 MacArthur Foundation Fellowship.
Shor, 40, was born in New York City. He grew up in Washington, D.C. and in the San Francisco Bay area. He received a bachelor's degree in mathematics from the California Institute of Technology in 1981 and a doctorate in applied mathematics from the Massachusetts Institute of Technology in 1985. He joined AT&T Labs (then AT&T Bell Laboratories) in 1986 after a postdoctoral fellowship at the Mathematical Sciences Research Institute in Berkeley, California. For several years, he worked mainly on algorithms for conventional computers while also researching probability and combinatorics. In 1994, he says, after thinking about the problem on and off for nearly a year, he made his discovery. Since then, he has spent most of his research time investigating quantum computing and quantum information theory.
Shor and his wife Jennifer have a two-year-old daughter, Valerie.
Contact:
2453 | dbpedia | 2 | 25 | https://scottaaronson.blog/%3Fp%3D4450 | en | Optimized | The Blog of Scott Aaronson | Shtetl-Optimized | https://scottaaronson.blog/
This morning I was pondering one of the anti-Israel protesters’ favorite phrases—I promise, out of broad philosophical curiosity rather than just parochial concern for my extended family’s survival.
“We’re on the right side of history. Don’t put yourself on the wrong side by opposing us.”
Why do the protesters believe they shouldn’t face legal or academic sanction for having blockaded university campuses, barricaded themselves in buildings, shut down traffic, or vandalized Jewish institutions? Because, just like the abolitionists and Civil Rights marchers and South African anti-apartheid heroes, they’re on the right side of history. Surely the rules and regulations of the present are of little concern next to the vindication of future generations?
The main purpose of this post is not to adjudicate whether their claim is true or false, but to grapple with something much more basic: what kind of claim are they even making, and who is its intended audience?
One reading of “we’re on the right of history” is that it’s just a fancy way to say “we’re right and you’re wrong.” In which case, fair enough! Few people passionately believe themselves to be wrong.
But there’s a difficulty: if you truly believe your side to be right, then you should believe it’s right win or lose. For example, an anti-Zionist should say that, even if Israel continues existing, and even if everyone else on the planet comes to support it, still eliminating Israel would’ve been the right choice. Conversely, a Zionist should say that if Israel is destroyed and the whole rest of the world celebrates its destruction forevermore—well then, the whole world is wrong. (That, famously, is more-or-less what the Jews did say, each time Israel and Judah were crushed in antiquity.)
OK, but if the added clause “of history” is doing anything in the phrase “the right side of history,” that extra thing would appear to be an empirical prediction. The protesters are saying: “just like the entire world looks back with disgust at John Calhoun, Bull Connor, and other defenders of slavery and then segregation, so too will the world look back with disgust at anyone who defends Israel now.”
Maybe this is paired with a theory about the arc of the moral universe bending toward justice: “we’ll win the future and then look back with disgust on you, and we’ll be correct to do so, because morality inherently progresses over time.” Or maybe it has merely the character of a social threat: “we’ll win the future and then look back with disgust on you, so regardless of whether we’ll be right or wrong, you’d better switch to our side if you know what’s good for you.”
Either way, the claim of winning the future is now the kind of thing that could be wagered about in a prediction market. And, in essence, the Right-Side-of-History people are claiming to be able to improve on today’s consensus estimate: to have a hot morality tip that beats the odds. But this means that they face the same problem as anyone who claims it’s knowable that, let’s say, a certain stock will increase a thousandfold. Namely: if it’s so certain, then why hasn’t the price shot up already?
The protesters and their supporters have several possible answers. Many boil down to saying that most people—because they need to hold down a job, earn a living, etc.—make all sorts of craven compromises, preventing them from saying what they know in their hearts to be true. But idealistic college students, who are free from such burdens, are virtually always right.
Does that sound like a strawman? Then recall the comedian Sarah Silverman’s famous question from eight years ago:
PLEASE tell me which times throughout history protests from college campuses got it wrong. List them for me
Crucially, lots of people happily took Silverman up on her challenge. They pointed out that, in the Sixties and Seventies, thousands of college students, with the enthusiastic support of many of their professors, marched for Ho Chi Minh, Mao, Castro, Che Guevara, Pol Pot, and every other murderous left-wing tyrant to sport a green uniform and rifle. Few today would claim that these students correctly identified the Right Side of History, despite the students’ certainty that they’d done so.
(There were also, of course, moderate protesters, who merely opposed America’s war conduct—just like there are moderate protesters now who merely want Israel merely to end its Gaza campaign rather than its existence. But then as now, the revolutionaries sucked up much of the oxygen, and the moderates rarely disowned them.)
What’s really going on, we might say, is reference class tennis. Implicitly or explicitly, the anti-Israel protesters are aligning themselves with Gandhi and MLK and Nelson Mandela and every other celebrated resister of colonialism and apartheid throughout history. They ask: what are the chances that all those heroes were right, and we’re the first ones to be wrong?
The trouble is that someone else could just as well ask: what are the chances that Hamas is the first group in history to be morally justified in burning Jews alive in their homes … even though the Assyrians, Babylonians, Romans, Crusaders, Inquisitors, Cossacks, Nazis, and every other group that did similar things to the Jews over 3000 years is now acknowledged by nearly every educated person to have perpetrated an unimaginable evil? What are the chances that, with Israel’s establishment in 1948, this millennia-old moral arc of Western civilization suddenly reversed its polarity?
We should admit from the outset that such a reversal is possible. No one, no matter how much cruelty they’ve endured, deserves a free pass, and there are certainly many cases where victims turned into victimizers. Still, one could ask: shouldn’t the burden be on those who claim that today‘s campaign against Jewish self-determination is history’s first justified one?
It’s like, if I were a different person, born to different parents in a different part of the world, maybe I’d chant for Israel’s destruction with the best of them. Even then, though, I feel like the above considerations would keep me awake at night, would terrify me that maybe I’d picked the wrong side, or at least that the truth was more complicated. The certainty implied by the “right side of history” claim is the one part I don’t understand, as far as I try to stretch my sympathetic imagination.
For all that, I, too, have been moved by rhetorical appeals to “stand on the right side of history”—say, for the cause of Ukraine, or slowing down climate change, or saving endangered species, or defeating Trump. Thinking it over, this has happened when I felt sure of which side was right (and would ultimately be seen to be right), but inertia or laziness or inattention or whatever else prevented me from taking action.
When does this happen for me? As far as I can tell, the principles of the Enlightenment, of reason and liberty and progress and the flourishing of sentient life, have been on the right side of every conflict in human history. My abstract commitment to those principles doesn’t always tell me which side of the controversy du jour is correct, but whenever it does, that’s all I ever need cognitively; the rest is “just” motivation and emotion.
(Amusingly, I expect some people to say that my “reason and Enlightenment” heuristic is vacuous, that it works only because I define those ideals to be the ones that pick the right side. Meanwhile, I expect others to say that the heuristic is wrong and to offer counterexamples.)
Anyway, maybe this generalizes. Sure, a call to “stand on the right side of history” could do nontrivial work, but only in the same way that a call to buy Bitcoin in 2011 could—namely, for those who’ve already concluded that buying Bitcoin is a golden opportunity, but haven’t yet gotten around to buying it. Such a call does nothing for anyone who’s already considered the question and come down on the opposite side of it. The abuse of “arc of the moral universe” rhetoric—i.e., the calling down of history’s judgment in favor of X, even though you know full well that your listeners see themselves as having consulted history’s judgment just as earnestly as you did, and gotten back not(X) instead—yeah, that’s risen to be one of my biggest pet peeves. If I ever slip up and indulge in it, please tell me and I’ll stop.
Q1: Who will you be voting for in November?
A: Kamala Harris (and mainstream Democrats all down the ballot), of course.
Q2: Of course?
A: If the alternative is Trump, I would’ve voted for Biden’s rotting corpse. Or for Hunter Biden. Or for…
Q3: Why can’t you see this is just your Trump Derangement Syndrome talking?
A: Look, my basic moral commitments remain pretty much as they’ve been since childhood. Namely, that I’m on the side of reason, Enlightenment, scientific and technological progress, secular government, pragmatism, democracy, individual liberty, justice, intellectual honesty, an American-led peaceful world order, preservation of the natural world, mitigation of existential risks, and human flourishing. (Crazy and radical, I know.)
Only when choosing between candidates who all espouse such values, do I even get the luxury of judging them on any lower-order bits. Sadly, I don’t have that luxury today. Trump’s values, such as they are, would seem to be “America First,” protectionism, vengeance, humiliation of enemies, winning at all costs, authoritarianism, the veneration of foreign autocrats, and the veneration of himself. No amount of squinting can ever reconcile those with the values I listed before.
Q4: Is that all that’s wrong with him?
A: No, there are also the lies, and worst of all the “Big Lie.” Trump is the first president in US history to incite a mob to try to overturn the results of an election. He was serious! He very nearly succeeded, and probably would have, had Mike Pence been someone else. It’s now inarguable that Trump rejects the basic rules of our system, or “accepts” them only when he wins. We’re numb from having heard it so many times, but it’s a big deal, as big a deal as the Civil War was.
Q5: Oh, so this is about your precious “democracy.” Why do you care? Haven’t you of all people learned that the masses are mostly idiots and bullies, who don’t deserve power? As Curtis Yarvin keeps trying to explain to you, instead of “democracy,” you should want a benevolent king or dictator-CEO, who could offer a privileged position to the competent scientists like yourself.
A: Yeah, so how many examples does history furnish where that worked out well? I suppose you might make a partial case for Napoleon, or Ataturk? More to the point: even if benevolent, science-and-reason-loving authoritarian strongmen are possible in theory, do you really expect me to believe that Trump could be one of them? I still love how Scott Alexander put it in 2016:
Can anyone honestly say that Trump or his movement promote epistemic virtue? That in the long-term, we’ll be glad that we encouraged this sort of thing, that we gave it power and attention and all the nutrients it needed to grow? That the road to whatever vision of a just and rational society we imagine, something quiet and austere with a lot of old-growth trees and Greek-looking columns, runs through LOCK HER UP?
I don’t like having to vote for the lesser of two evils. But at least I feel like I know who it is.
Q6: But what about J. D. Vance? He got his start in Silicon Valley, was championed by Peter Thiel, and is obviously highly intelligent. Doesn’t he seem like someone who might listen to and empower tech nerds like yourself?
A: Who can say what J. D. Vance believes? Here are a few choice quotes of his from eight years ago:
I’m obviously outraged at Trump’s rhetoric, and I worry most of all about how welcome Muslim citizens feel in their own country. But I also think that people have always believed crazy shit (I remember a poll from a few years back suggesting that a near majority of democratic voters blame ‘the Jews’ for the financial crisis). And there have always been demagogues willing to exploit the people who believe crazy shit.
The more white people feel like voting for trump, the more black people will suffer. I really believe that.
[Trump is] just a bad man. A morally reprehensible human being.
To get from that to being Trump’s running mate is a Simone-Biles-like feat of moral acrobatics. Vance reminds me of the famous saying by L. Ron Hubbard from his pre-Dianetics days: “If a man really wants to make a million dollars, the best way would be to start his own religion.” (And I feel like Harris’s whole campaign strategy should just be to replay Vance’s earlier musings in wall-to-wall ads while emphasizing her agreement with them.) No, Vance is not someone I trust to share my values, if he has values at all.
Q7: What about the other side’s values, or lack thereof? I mean, don’t you care that the whole Democratic establishment—including Harris—colluded to cover up that Biden was senile and cognitively unfit to be president now, let alone for another term?
A: Look, we’ve all seen what happens as a relative gets old. It’s gradual. It’s hard for anyone to say at which specific moment they can no longer drive a car, or be President of the United States, or whatever. This means that I don’t necessarily read evil intent into the attempts to cover up Biden’s decline—merely an epic, catastrophic failure of foresight. That failure of foresight itself would’ve been a huge deal in normal circumstances, but these are not normal circumstances—not if you believe, as I do, that the alternative is the beginning of the end of a 250-year-old democratic experiment.
Q8: Oh stop being so melodramatic. What terrible thing happened to you because of Trump’s first term? Did you lose your job? Did fascist goons rough you up in the street?
A: Well, my Iranian PhD student came close to having his visa revoked, and it became all but impossible to recruit PhD students from China. That sucked, since I care about my students’ welfare like I care about my own. Also, the downfall of Roe v. Wade, which enabled Texas’ draconian new abortion laws, made it much harder for us to recruit faculty at UT Austin. But I doubt any of that will impress you. “Go recruit American students,” you’ll say. “Go recruit conservative faculty who are fine with abortion being banned.”
The real issue is that Trump was severely restrained in his first term, by being surrounded by people who (even if, in many cases, they started out loyal to him) were also somewhat sane and valued the survival of the Republic. Alas, he learned from that, and he won’t repeat that mistake the next time.
Q9: Why do you care so much about Trump’s lies? Don’t you realize that all politicians lie?
A: Yes, but there are importantly different kinds of lies. There are white lies. There are scheming, 20-dimensional Machiavellian lies, like a secret agent’s cover story (or is that only in fiction?). There are the farcical, desperate, ever-shifting lies of the murderer to the police detective or the cheating undergrad to the professor. And then there are the lies of bullies and mob bosses and populist autocrats, which are special and worse.
These last, call them power-lies, are distinguished by the fact that they aren’t even helped by plausibility. Often, as with conspiracy theories (which strongly overlap with power-lies), the more absurd the better. Obama was born in Kenya. Trump’s crowd was the biggest in history. The 2020 election was stolen by a shadowy conspiracy involving George Soros and Dominion and Venezuela.
The central goal of a power-lie is just to demonstrate your power to coerce others into repeating it, much like with the Party making Winston Smith affirm 2+2=5, or Petruchio making Katharina call the sun the moon in The Taming of the Shrew. A closely-related goal is as a loyalty test for your own retinue.
It’s Trump’s embrace of the power-lie that puts him beyond the pale for me.
Q10: But Scott, we haven’t even played our “Trump” card yet. Starting on October 7, 2023, did you not witness thousands of your supposed allies, the educated secular progressives on “the right side of history,” cheer the sadistic mass-murder of Jews—or at least, make endless excuses for those who did? Did this not destabilize your entire worldview? Will you actually vote for a party half of which seems at peace with the prospect of your family members’ physical annihilation? Or will you finally see who your real friends now are: Arkansas MAGA hillbillies who pray for your people’s survival?
A: Ah, this is your first slash that’s actually drawn blood. I won’t pretend that the takeover of part of the US progressive coalition by literal Hamasniks hasn’t been one of the most terrifying experiences of my life. Yes, if I had to be ruled by either (a) a corrupt authoritarian demagogue or (b) an idiot college student chanting for “Intifada Revolution,” I’d be paralyzed. So it’s lucky that I don’t face that choice! I get to vote, once more, for a rather boring mainstream Democrat—alongside at least 70% of American Jews. The idea of Harris as an antisemite would be ludicrous even if she didn’t have a Jewish husband or wasn’t strongly considering a pro-Israel Jew as her running mate.
Q11: Sure, Kamala Harris might mouth all the right platitudes about Israel having a right to defend itself, but she’ll constantly pressure Israel to make concessions to Hamas and Hezbollah. She’ll turn a blind eye to Iran’s imminent nuclearization. Why don’t you stay up at night worrying that, if you vote for a useful idiot like her, you’ll have Israel’s annihilation and a second Holocaust on your conscience forever?
A: Look, oftentimes—whenever, for example, I’m spending hours reading anti-Zionists on Twitter—I feel like there’s no limit to how intensely Zionist I am. On reflection, though, there is a limit. Namely, I’m not going to be more Zionist than the vast majority of my Israeli friends and colleagues—the ones who served in the IDF, who in some cases did reserve duty in Gaza, who prop up the Israeli economy with their taxes, and who will face the consequences of whatever happens more directly than I will. With few exceptions, these friends despise the Trump/Bibi alliance with white-hot rage, and they desperately want more moderate leadership in both countries.
Q12: Suppose I concede that Kamala is OK on Israel. We both know that she’s not the future of the Democratic Party, any more than Biden is. The future is what we all saw on campuses this spring. “Houthis Houthis make us proud, turn another ship around.” How can you vote for a party whose rising generation seems to want you and your family dead?
A: Let me ask you something. When Trump won in 2016, did that check the power of the campus radicals? Or as Scott Alexander prophesied at the time, did it energize and embolden them like nothing else, by dramatically confirming their theology of a planet held hostage by the bullying, misogynistic rich white males? I fundamentally reject your premise that, if I’m terrified of crazy left-wing extremists, then a good response is to vote for the craziest right-wing extremists I can find, in hopes that the two will somehow cancel each other out. Instead I should support a coherent Enlightenment alternative to radicalism, or the closest thing to that available.
Q13: Even leaving aside Israel, how can you not be terrified by what the Left has become? Which side denounced you on social media a decade ago, as a misogynist monster who wanted all women to be his sex slaves? Which side tried to ruin your life and career? Did we, the online rightists, do that? No. We did not. We did nothing worse to you than bemusedly tell you to man up, grow a pair, and stop pleading for sympathy from feminists who will hate you no matter what.
A: I’ll answer with a little digression. Back in 2017, when Kamala Harris was in the Senate, her office invited me to DC to meet with them to provide advice about the National Quantum Initiative Act, which Kamala was then spearheading. Kamala herself sent regrets that she couldn’t meet me, because she had to be at the Kavanaugh hearings. I have (nerdy, male) friends who did meet her about tech policy and came away with positive impressions.
And, I dunno, does that sound like someone who wants me dead for the crime of having been born a nerdy heterosexual male? Or having awkwardly and ineptly asked women on dates, including the one who became my wife? OK, maybe Amanda Marcotte wants me dead for those crimes. Maybe Arthur Chu does (is he still around?). Good that they’re not running for president then.
Q14: Let me try one more time to show you how much your own party hates you. Which side has been at constant war against the SAT and other standardized tests, and merit-based college admissions, and gifted programs, and academic tracking and acceleration, and STEM magnet schools, and every single other measure by which future young Scott Aaronsons (and Saket Agrawals) might achieve their dreams in life? Has that been our side, or theirs?
A: To be honest, I haven’t seen the Trump or Harris campaigns take any position on any of these issues. Even if they did, there’s very little that the federal government can do: these battles happen in individual states and cities and counties and universities. So I’ll vote for Harris while continuing to advocate for what I think is right in education policy.
Q15: Can you not see that Kamala Harris is a vapid, power-seeking bureaucratic machine—that she has no fixed principles at all? For godsakes, she all but condemned Biden as a racist in the 2020 primary, then agreed to serve as his running mate!
A: I mean, she surely has more principles than Vance does. As far as I can tell, for example, she’s genuinely for abortion rights (as I am). Even if she believed in nothing, though, better a cardboard cutout on which values I recognize are written, than a flesh-and-blood person shouting values that horrify me.
Q16: What, if anything, could Republicans do to get you to vote for them?
A: Reject all nutty conspiracy theories. Fully, 100% commit to the peaceful transfer of power. Acknowledge the empirical reality of human-caused climate change, and the need for both technological and legislative measures to slow it and mitigate its impacts. Support abortion rights, or at least a European-style compromise on abortion. Republicans can keep the anti-wokeness stuff, which actually seems to have become their defining issue. If they do all that, and also the Democrats are taken over by frothing radicals who want to annihilate the state of Israel and abolish the police … that’s, uh, probably the point when I start voting Republican.
Q17: Aha, so you now admit that there exist conceivable circumstances that would cause you to vote Republican! In that case, why did you style yourself “Never-Trump From Here to Eternity”?
A: Tell you what, the day the Republicans (and Trump himself?) repudiate authoritarianism and start respecting election outcomes, is the day I’ll admit my title was hyperbolic.
Q18: In the meantime, will you at least treat us Trump supporters with civility and respect?
A: Not only does civil disagreement not compromise any of my values, it is a value to which I think we should all aspire. And to whatever extent I’ve fallen short of that ideal—even when baited into it—I’m sorry and I’ll try to do better. Certainly, age and experience have taught me that there’s hardly anyone so far gone that I can’t find something on which I agree with them, while disagreeing with most of the rest of the world.
The news these days feels apocalyptic to me—as if we’re living through, if not the last days of humanity, then surely the last days of liberal democracy on earth.
All the more reason to ignore all of that, then, and blog instead about the notorious Busy Beaver function! Because holy moly, what news have I got today. For lovers of this super-rapidly-growing sequence of integers, I'm honored to announce the biggest Busy Beaver development that there's been since 1983, when I slept in a crib and you booted up your computer using a 5.25-inch floppy. That was the year when Allen Brady determined that BusyBeaver(4) was equal to 107. (Tibor Radó, who invented the Busy Beaver function in the 1960s, quickly proved with his student Shen Lin that the first three values were 1, 6, and 21 respectively. The fourth value was harder.)
Only now, after an additional 41 years, do we know the fifth Busy Beaver value. Today, an international collaboration called bbchallenge is announcing that it’s determined, and even formally verified using the Coq proof system, that BB(5) is equal to 47,176,870—the value that’s been conjectured since 1990, when Heiner Marxen and Jürgen Buntrock discovered a 5-state Turing machine that runs for exactly 47,176,870 steps before halting, when started on a blank tape. The new bbchallenge achievement is to prove that all 5-state Turing machines that run for more steps than 47,176,870, actually run forever—or in other words, that 47,176,870 is the maximum finite number of steps for which any 5-state Turing machine can run. That’s what it means for BB(5) to equal 47,176,870.
For more on this story, see Ben Brubaker’s superb article in Quanta magazine, or bbchallenge’s own announcement. For more background on the Busy Beaver function, see my 2020 survey, or my 2017 big numbers lecture, or my 1999 big numbers essay, or the Googology Wiki page, or Pascal Michel’s survey.
The difficulty in pinning down BB(5) was not just that there are a lot of 5-state Turing machines (16,679,880,978,201 of them to be precise, although symmetries reduce the effective number). The real difficulty is, how do you prove that some given machine runs forever? If a Turing machine halts, you can prove that by simply running it on your laptop until halting (at least if it halts after a “mere” ~47 million steps, which is child’s-play). If, on the other hand, the machine runs forever, via some never-repeating infinite pattern rather than a simple infinite loop, then how do you prove that? You need to find a mathematical reason why it can’t halt, and there’s no systematic method for finding such reasons—that was the great discovery of Gödel and Turing nearly a century ago.
More precisely, the Busy Beaver function grows faster than any function that can be computed, and we know that because if a systematic method existed to compute arbitrary BB(n) values, then we could use that method to determine whether a given Turing machine halts (if the machine has n states, just check whether it runs for more than BB(n) steps; if it does, it must run forever). This is the famous halting problem, which Turing proved to be unsolvable by finite means. The Busy Beaver function is Turing-uncomputability made flesh, a finite function that scrapes the edge of infinity.
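The reduction in that parenthetical is short enough to write out. A minimal sketch in Python, with the caveat that both helpers are hypothetical (that is the point: no real bb() can exist):

```python
def would_halt(machine, n_states: int) -> bool:
    """Decide the halting problem, GIVEN a Busy Beaver oracle.

    Hypothetical helpers: bb(n) returns the value BB(n), and
    run_for(machine, t) simulates the machine for t steps on a
    blank tape, reporting whether it halted within that budget.
    """
    bound = bb(n_states)            # the impossible step: BB is uncomputable
    return run_for(machine, bound)  # halts by step BB(n), or runs forever
```

Since Turing proved that no such would_halt can exist, no algorithm computes bb, and indeed no computable function can even upper-bound BB(n) for all n.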
There’s also a more prosaic issue. Proofs that particular Turing machines run forever tend to be mind-numbingly tedious. Even supposing you’ve found such a “proof,” why should other people trust it, if they don’t want to spend days staring at the outputs of your custom-written software?
And so for decades, a few hobbyists picked away at the BB(5) problem. One, who goes by the handle “Skelet”, managed to reduce the problem to 43 holdout machines whose halting status was still undetermined. Or maybe only 25, depending on who you asked? (And were we really sure about the machines outside those 43?)
The bbchallenge collaboration improved on the situation in two ways. First, it demanded that every proof of non-halting be vetted carefully. While this went beyond the original mandate, a participant named “mxdys” later upped the standard to fully machine-verifiable certificates for every non-halting machine in Coq, so that there could no longer be any serious question of correctness. (This, in turn, was done via “deciders,” programs that were crafted to recognize a specific type of parameterized behavior.) Second, the collaboration used an online forum and a Discord server to organize the effort, so that everyone knew what had been done and what remained to be done.
Despite this, it was far from obvious a priori that the collaboration would succeed. What if, for example, one of the 43 (or however many) Turing machines in the holdout set turned out to encode the Goldbach Conjecture, or one of the other great unsolved problems of number theory? Then the final determination of BB(5) would need to await the resolution of that problem. (We do know, incidentally, that there’s a 27-state Turing machine that encodes Goldbach.)
But apparently the collaboration got lucky. Coq proofs of non-halting were eventually found for all the 5-state holdout machines.
As a sad sidenote, Allen Brady, who determined the value of BB(4), apparently died just a few days before the BB(5) proof was complete. He was doubtful that BB(5) would ever be known. The reason, he wrote in 1988, was that “Nature has probably embedded among the five-state holdout machines one or more problems as illusive as the Goldbach Conjecture. Or, in other terms, there will likely be nonstopping recursive patterns which are beyond our powers of recognition.”
Maybe I should say a little at this point about what the 5-state Busy Beaver—i.e., the Marxen-Buntrock Turing machine that we now know to be the champion—actually does. Interpreted in English, the machine iterates a certain integer function g, which is defined by
g(x) = (5x+18)/3 if x = 0 (mod 3),
g(x) = (5x+22)/3 if x = 1 (mod 3),
g(x) = HALT if x = 2 (mod 3).
Starting from x=0, the machine computes g(0), g(g(0)), g(g(g(0))), and so forth, halting if and only if it ever reaches … well, HALT. The machine runs for millions of steps because it so happens that this iteration eventually reaches HALT, but only after a while:
0 → 6 → 16 → 34 → 64 → 114 → 196 → 334 → 564 → 946 → 1584 → 2646 → 4416 → 7366 → 12284 → HALT.
(And also, at each iteration, the machine runs for a number of steps that grows like the square of the number x.)
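The trajectory is easy to check for yourself; a few lines of Python (my transcription of the map above, not bbchallenge's code) reproduce it:

```python
def g(x: int):
    """The Collatz-like map iterated by the 5-state Busy Beaver champion."""
    if x % 3 == 0:
        return (5 * x + 18) // 3
    if x % 3 == 1:
        return (5 * x + 22) // 3
    return None  # x = 2 (mod 3): HALT

x, trajectory = 0, [0]
while (x := g(x)) is not None:
    trajectory.append(x)

print(" -> ".join(map(str, trajectory)) + " -> HALT")
# 0 -> 6 -> 16 -> 34 -> 64 -> 114 -> 196 -> 334 -> 564 -> 946
#   -> 1584 -> 2646 -> 4416 -> 7366 -> 12284 -> HALT
```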
Some readers might be reminded of the Collatz Conjecture, the famous unsolved problem about whether, if you repeatedly replace a positive integer x by x/2 if x is even or 3x+1 if x is odd, you’ll always eventually reach x=1. As Scott Alexander would say, this is not a coincidence because nothing is ever a coincidence. (Especially not in math!)
It’s a fair question whether humans will ever know the value of BB(6). Pavel Kropitz discovered, a couple years ago, that BB(6) is at least 10^10^10^10^10^10^10^10^10^10^10^10^10^10^10 (i.e., 10 raised to itself 15 times). Obviously Kropitz didn’t actually run a 6-state Turing machine for that number of steps until halting! Instead he understood what the machine did—and it turned out to apply an iterative process similar to the g function above, but this time involving an exponential function. And the process could be proven to halt after ~15 rounds of exponentiation.
Meanwhile Tristan Stérin, who coordinated the bbchallenge effort, tells me that a 6-state machine was recently discovered that “iterates the Collatz-like map {3x/2, (3x-1)/2} from the number 8 and halts if and only if the number of odd terms ever gets bigger than twice the number of even terms.” This shows that, in order to determine the value of BB(6), one would first need to prove or disprove the Collatz-like conjecture that that never happens.
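That halting condition is simple to state in code, even though nobody can prove whether it ever fires. Here is a quick empirical probe (my reading of the description above; the machine's exact bookkeeping, such as whether the starting term counts, may differ in detail):

```python
def bb6_conjecture_probe(steps: int) -> bool:
    """Iterate x -> 3x/2 (x even) or (3x-1)/2 (x odd), from x = 8.
    Return True if the count of odd terms ever exceeds twice the
    count of even terms within `steps` iterations, i.e., the
    condition under which the 6-state machine would halt."""
    x, odd, even = 8, 0, 0
    for _ in range(steps):
        if x % 2 == 0:
            even += 1
            x = 3 * x // 2
        else:
            odd += 1
            x = (3 * x - 1) // 2
        if odd > 2 * even:
            return True
    return False

print(bb6_conjecture_probe(10_000))  # False, unless the conjecture fails early
```

The iterates grow like (3/2)^t, so even this modest probe ends up shuffling integers roughly 1,800 digits long.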
Basically, if and when artificial superintelligences take over the world, they can worry about the value of BB(6). And then God can worry about the value of BB(7).
I first learned about the BB function in 1996, when I was 15 years old, from a book called The New Turing Omnibus by A. K. Dewdney. From what I gather, Dewdney would go on to become a nutty 9/11 truther. But that’s irrelevant to the story. What matters was that his book provided my first exposure to many of the key concepts of computer science, and probably played a role in my becoming a theoretical computer scientist at all.
And of all the concepts in Dewdney’s book, the one I liked the most was the Busy Beaver function. What a simple function! You could easily explain its definition to Archimedes, or Gauss, or any of the other great mathematicians of the past. And yet, by using it, you could name definite positive integers (BB(10), for example) incomprehensibly larger than any that they could name.
It was from Dewdney that I learned that the first four Busy Beaver numbers were the unthreatening-looking 1, 6, 21, and 107 … but then that the fifth value was already unknown (!!), and at any rate at least 47,176,870. I clearly remember wondering whether BB(5) would ever be known for certain, and even whether I might be the one to determine it. That was almost two-thirds of my life ago.
As things developed, I played no role whatsoever in the determination of BB(5) … except for this. Tristan Stérin tells me that reading my survey article, The Busy Beaver Frontier, was what inspired him to start and lead the bbchallenge collaboration that finally cracked the problem. It’s hard to express how gratified that makes me.
Why care about determining particular values of the Busy Beaver function? Isn’t this just a recreational programming exercise, analogous to code golf, rather than serious mathematical research?
I like to answer that question with another question: why care about humans landing on the moon, or Mars? Those otherwise somewhat arbitrary goals, you might say, serve as a hard-to-fake gauge of human progress against the vastness of the cosmos. In the same way, the quest to determine the Busy Beaver numbers is one concrete measure of human progress against the vastness of the arithmetical cosmos, a vastness that we learned from Gödel and Turing won’t succumb to any fixed procedure. The Busy Beaver numbers are just … there, Platonically, as surely as 13 was prime long before the first caveman tried to arrange 13 rocks into a nontrivial rectangle and failed. And yet we might never know the sixth of these numbers and only today learned the fifth.
Anyway, huge congratulations to the bbchallenge team on their accomplishment. At a terrifying time for the world, I’m happy that, whatever happens, at least I lived to see this.
Dana, the kids, and I got back to the US last week after a month spent in England and then Israel. We decided to visit Israel because … uhh, we heard there’s never been a better time.
We normally go every year to visit Dana’s family and our many friends there, and to give talks. Various well-meaning friends suggested that maybe we should cancel or postpone this year—given, you know, the situation. To me, though, the situation felt like all the more reason to go. To make Israel seem more and more embattled, dangerous, isolated, abnormal, like not an acceptable place to visit (much less live), in order to crater its economy, demoralize its population, and ultimately wipe it from the face of earth … that is explicitly much of the world’s game plan right now, laid out with shocking honesty since October 7 (a day that also showed us what the “decolonization” will, concretely, look like). So, if I oppose this plan, then how could I look myself in the mirror while playing my tiny part in it? Shouldn’t I instead raise a middle finger to those who’d murder my family, and go?
Besides supporting our friends and relatives, though, I wanted to see the post-October-7 reality for myself, rather than just spending hours per day reading about it on social media. I wanted to form my own impression of the mood in Israel: fiercely determined? angry? hopeless? just carrying on like normal?
Anyway, in two meeting-packed weeks, mostly in Tel Aviv but also in Jerusalem, Haifa, and Be’er Sheva, I saw stuff that could support any of those narratives. A lot was as I’d expected, but not everything. In the rest of this post, I’ll share eleven observations:
(1) This presumably won’t shock anyone, but in post-October-7 Israel, you indeed can’t escape October 7. Everywhere you look, on every building, in every lobby, hanging from every highway overpass, there are hostage posters and “Bring Them Home Now” signs and yellow ribbons—starting at the airport, where every single passenger is routed through a long corridor of hostage posters, each one signed and decorated by the hostage’s friends and family. It sometimes felt as though Yad Vashem had expanded to encompass the entire country. Virtually everyone we talked to wanted to share their stories and opinions about the war, most of all their depression and anger. While there was also plenty of discussion about quantum error mitigation and watermarking of large language models and local family events, no one even pretended to ignore the war.
(2) Having said that, the morning after we landed, truthfully, the first thing that leapt out at me wasn’t anything to do with October 7, hostages, or Gaza. It was the sheer number of children playing outside, in any direction you looked. Full, noisy playgrounds on block after block. It’s one thing to know intellectually that Israel has by far the highest birthrate of any Western country, another to see it for yourself. The typical secular family probably has three kids; the typical Orthodox family has more. (The Arab population is of course also growing rapidly, both in Israel and in the West Bank and Gaza.) New apartment construction is everywhere you look in Tel Aviv, despite building delays caused by the war. And it all seems perfectly normal … unless you’ve lived your whole life in environments where 0.8 or 1.2 children per couple is the norm.
This, of course, has giant implications for anyone interested in Israel’s future. It’s like, a million Israeli leftists could get fed up and flee to the US or Canada or Switzerland, and Israel would still have a large and growing Jewish population—because having a big family is “just what people do” in a state that was founded to defy the Holocaust. In particular: anyone who dreams of dismantling the illegal, settler-colonial, fascist Zionist ethnostate, and freeing Palestine from river to sea, had better have some plan for what they’re going to do with all these millions of young Jews, who don’t appear to be going anywhere.
(3) The second thing I noticed was the heat—comparable to the Texas summer heat that we try to escape when possible. Because of the roasting sun, our own two pampered offspring mostly refused to go outside during daytime, and we mostly met friends indoors. I more than once had the dark thought that maybe Israel will survive Hamas, Hezbollah, Iran, and its own Jewish extremists … only to be finished off in the end (along with much of the rest of the planet) by global warming. I wonder whether Israel will manage to engineer its way out of the crisis, as it dramatically engineered its way out of its water crisis via desalination. The Arab petrostates have been trying to engineer their way out of the Middle East’s increasingly Mercury-like climate, albeit with decidedly mixed results.
(4) But nu, what did our Israeli friends say about the war? Of course it’s a biased sample, because our friends are mostly left-wing academics and tech workers. But, at risk of overgeneralizing: they’re unhappy. Very, very unhappy. As for Bibi and his far-right yes-men? Our friends’ rage at them was truly a sight to behold. American progressives are, like, mildly irked by Trump in comparison. Yes, our friends blame Bibi for the massive security and intelligence failures that allowed October 7 to happen. They blame him for dragging out the war to stave off elections. They blame him for empowering the contemptible Ben-Gvir and Smotrich. They blame him for his failure to bring back the remaining hostages. Most of all, they blame him for refusing even to meet with the hostage families, and more broadly, for evading responsibility for all that he did wrong, while arrogating credit for any victories (like the rescue of Noa Argamani).
(5) One Israeli friend offered to take me along to the giant anti-Bibi rally that now happens every Saturday night in Azrieli Center in Tel Aviv. (She added that, if I left before 9pm, it would reduce the chances of the police arresting me.) As the intrepid blogger-investigator I am, of course I agreed.
While many of the protesters simply called for new elections to replace Netanyahu (a cause that I 3000% support), others went further, demanding a deal to free the hostages and an immediate end to the war (even if, as they understood, that would leave Hamas in power).
Watching the protesters, smelling their pot smoke that filled the air, I was seized by a thought: these Israeli leftists actually see eye-to-eye with the anti-Israel American leftists on a huge number of issues. In a different world, they could be marching together as allies. Except, of course, for one giant difference: namely, the Tel Aviv protesters are proudly waving Israeli flags (sometimes modified to add anti-Bibi images, or to depict the Star of David “crying”), rather than burning or stomping on those flags. They’re marching to save the Israel that they know and remember, rather than to destroy it.
(6) We did meet one ultra-right-wing (and Orthodox) academic colleague. He was virtually the only person we met on this trip who seemed cheerful and optimistic about Israel’s future. He brought me to his synagogue to celebrate the holiday of Shavuot, while he himself stood guarding the door of the synagogue with a gargantuan rifle (his volunteer duty since October 7). He has six kids.
(7) Again and again, our secular liberal friends told us they’re thinking about moving from Israel, because if the Bibi-ists entrench their power (and of course the demographics are trending in that direction), then they don’t see that the country has any worthwhile future for them or their children. Should this be taken more seriously than the many Americans who promise that this time, for real, they’ll move to Canada if Trump wins? I’m not sure. I can only report what I heard.
(8) At the same time, again and again I got the following question from Israelis (including the leftist ones): how bad is the situation for Jews in the US? Have the universities been taken over by militant anti-Zionists, like it shows in the news? I had to answer: it’s complicated. Because I live my life enbubbled in the STEM field of computer science, surrounded by friends and colleagues of many backgrounds, ethnicities, religions, and political opinions who are thoughtful and decent (otherwise, why would they be my friends and colleagues?), I’m able to live a very nice life even in the midst of loud protesters calling to globalize the intifada against my family.
If, on the other hand, I were in a typical humanities department? Yeah, then I’d be pretty terrified. My basic options would be to (a) shut up about my (ironically) moderate, middle-of-the-road opinions on Israel/Palestine, such as support for the two-state solution; (b) live a miserable and embattled existence; or (c) pack up and move, for example to Israel.
An astounding irony right now is that, just as Israeli leftists are talking about moving from Israel, some of my American Jewish friends have talked to me about moving to Israel, to escape a prejudice that they thought died with their grandparents. I don’t know where the grass is actually greener (or is it brown everywhere?). Nor do I know how many worriers will actually follow through. What’s clear is that, both in Israel and in the diaspora, Jews are feeling an existential fear that they haven’t felt for generations.
(9) Did I fear for my own family’s safety during the trip? Not really. Maybe I should have. When we visited Haifa, we found that GPS was scrambled all across northern Israel, to make targeting harder for Hezbollah missiles. As a result, we couldn’t use Google Maps, got completely lost driving, and had to change plans with our friends. For the first time, now I really feel angry at Hezbollah: they made my life worse and it’s personal!
The funniest part, though, was how the scrambling was implemented: when you opened Google Maps anywhere in the north, it told you that you were in Beirut. It then dutifully gave you walking or driving directions to wherever you were going in Israel, passing through Syria close to Damascus (“warning: this route passes through multiple countries”).
(10) The most darkly comical thing that I heard on the entire trip: “oh, no, I don’t object in the slightest if the anti-Zionists want to kill us all. I only object if they want to kill us because of an incorrect understanding of the relevant history.” Needless to say, this was a professor.
(11) After my two-week investigation, what grand insight can I offer about Israel’s future? Not much, but maybe this: I think we can definitively rule out the scenario where Israel, having been battered by October 7, and bracing itself to be battered worse by Hezbollah, just sort of … withers away and disappears. Yes, Israel might get hotter, more crowded, more dangerous, more right-wing, and more Orthodox. But it will stay right where it is, unless and until its enemies destroy it in a cataclysmic war. You can’t scare people away, break their will, if they believe they have nowhere else on the planet to go. You can only kill them or else live next to them in peace, as the UN proposed in 1947 and as Oslo proposed in the 1990s. May we live to see peace.
Anyway, on that pleasant note, time soon to tune in to the Trump/Biden debate! I wonder who these two gentlemen are, and what they might stand for?
2453 | dbpedia | 1 | 66 | https://quantumfrontiers.com/2013/01/03/ignacio-cirac-and-peter-zoller-get-what-they-deserve-2/ | en | Ignacio Cirac and Peter Zoller get what they deserve | 2013-01-03 | Quantum Frontiers
A good thing about a blog is that when my friends win prizes I have the opportunity to say nice things about them. This seems to be happening a lot lately (Kitaev, Wineland, Kimble, Hawking, Polchinski, …).
Today’s very exciting news is that Ignacio Cirac and Peter Zoller have won the 2013 Wolf Prize in Physics “for groundbreaking theoretical contributions to quantum information processing, quantum optics, and the physics of quantum gases.”
Spurred by the recent discovery of Shor’s algorithm, in mid-1994 Jeff Kimble and I started what we called the “Quantum computing club” at Caltech, a series of informal meetings in which our groups got together to talk about quantum information processing. Seth Lloyd, in the midst of a transition from Los Alamos to MIT, was often in town and took part in the discussions. Not knowing much about the topic at the time, I did more listening than talking, learning especially from Seth and Jeff’s amazingly erudite grad student Hideo Mabuchi.
In January 1995, Peter Zoller stopped by for a brief visit, and I suppose that must have been when we first met. Peter told us about his latest work with Ignacio, in which they proposed a method for executing a universal set of quantum gates acting on qubits encoded in the internal states of trapped ions. Their crucial new idea was that the quantized vibrations of the ions in the trap (“phonons”) could serve as a quantum bus, enabling highly controllable interactions between ions. Much of the quantum optics background behind the proposal was new to me, but after hearing these ideas I decided I better find out what an ion trap is.
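The bus idea can be caricatured at the matrix level. Below is a toy numpy sketch (my illustration, ignoring the pulse phases and the auxiliary level of the actual proposal): swap ion 1's qubit into a shared phonon mode, apply a phase conditioned jointly on ion 2 and the phonon, swap back, and the net effect is a controlled-Z gate between the two ions.

```python
import numpy as np

# Basis |q1, q2, n>, index = 4*q1 + 2*q2 + n, phonon number n truncated to {0, 1}.
def swap_states(i: int, j: int) -> np.ndarray:
    """Unitary exchanging basis states i and j (toy stand-in for a sideband pulse)."""
    U = np.eye(8)
    U[[i, j]] = U[[j, i]]
    return U

# Step 1: map ion 1's qubit onto the phonon bus: |1, q2, 0> <-> |0, q2, 1>.
S1 = swap_states(4, 1) @ swap_states(6, 3)
# Step 2: phase of -1 exactly when ion 2 is |1> AND one phonon is present.
CP = np.diag([1.0, 1.0, 1.0, -1.0, 1.0, 1.0, 1.0, -1.0])
# Step 3: map the phonon back onto ion 1.
gate = S1 @ CP @ S1

# Restricted to the phonon-vacuum subspace, this is controlled-Z on (q1, q2).
n0 = [0, 2, 4, 6]                       # indices with n = 0
CZ = np.diag([1.0, 1.0, 1.0, -1.0])
assert np.allclose(gate[np.ix_(n0, n0)], CZ)
print("phonon-bus circuit acts as CZ on the two ion qubits")
```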
The Cirac-Zoller proposal was electrifying to me and others, because it made quantum computing not just an abstract concept but something we could envision as a concrete and plausible physical system. Just months passed before Dave Wineland’s group achieved the first laboratory demonstration of the proposal, launching advances in ion-trap quantum computing that continue to this day. And this pioneering work by Cirac and Zoller stirred many other physicists to make further ingenious proposals for realizing quantum computing in other physical systems.
I think I first met Ignacio at a workshop in Santa Barbara in the fall of 1996, where he told me about his work in progress with Zoller, Kimble, and Mabuchi on using photons to distribute entanglement in a quantum network. The tricky part of their proposal concerned how to ensure that a photon arriving at a node in the network would be absorbed rather than reflected. I found it delightful that this problem could be solved by arranging for the photon wave packet to be time symmetric, so its absorption would be related to photon emission by time reversal. This paper was one of the first and most important concerning photons as “flying qubits” in a network.
It would be hard to overestimate the dominant influence of Cirac and Zoller on the many developments since 1994 at the interface of quantum optics, quantum information, and quantum many-body physics, but to avoid boring you I will mention only one more tremendously seminal paper: with co-authors, Cirac and Zoller proposed using ultracold atomic gases in optical lattices to explore correlated quantum matter and quantum phase transitions, an idea that has energized and transformed experimental atomic physics.
Ignacio and Peter have written over 70 papers together, many of them now classics. Though they don’t collaborate with one another nearly as often as they used to, both continue to produce highly original papers at a staggering pace.
Ignacio and Peter are both exceptionally nice people, each with hardly a shred of arrogance. Yet both are also blessed with supreme (and very well justified) self-confidence as well as unbounded ambition. Perhaps their intellectual fearlessness, as much as extraordinary ability, has enabled them to exert such a deep influence over such a broad swath of physics.
Congratulations, Ignacio and Peter!
[Update: In an earlier version of this post I said that Peter Zoller visited Caltech in “late fall of 1994.” I checked my notebook — it was actually January 9-10, 1995.]
2453 | dbpedia | 0 | 1 | https://ercim-news.ercim.eu/en134/announcements/prestigious-goedel-prize-for-ronald-de-wolf | en | Prestigious Gödel Prize for Ronald de Wolf | Peter Kunz | 2023-06-23 | ERCIM News
Ronald de Wolf from CWI and his co-authors receive the 2023 Gödel Prize for outstanding papers in theoretical computer science.
Ronald de Wolf (CWI, UvA, QuSoft) and his co-authors receive the prestigious Gödel Prize for outstanding papers in theoretical computer science. The Gödel Prize is jointly awarded by the ACM Special Interest Group on Algorithms and Computation Theory (ACM SIGACT) and the European Association for Theoretical Computer Science (EATCS). The prize will be awarded during STOC 2023, one of the most important conferences in theoretical computer science, which takes place on 20-23 June 2023 in Orlando, Florida. This year, there are two winning articles. The other paper receiving the 2023 Gödel Prize is by Thomas Rothvoss.
Ronald de Wolf says: “I am very proud and humbled to win this prize along with my co-authors, and to be listed among the amazing papers and amazing researchers that have received this prize before”. Earlier winners of the Gödel Prize include well-known researchers like Cynthia Dwork, Shafi Goldwasser, Johan Håstad, László Lovász, Peter Shor, Dan Spielman, Mario Szegedy and Avi Wigderson.
Travelling Salesman Problem
Authors Samuel Fiorini, Serge Massar, Sebastian Pokutta, Hans Raj Tiwary and Ronald de Wolf were given the award for their article ‘Exponential Lower Bounds for Polytopes in Combinatorial Optimization’. One of its main conclusions was that a particular attempt to solve the famous travelling salesman problem cannot possibly work. Ronald de Wolf explains: “This paper refutes an attempt to solve hard computational problems such as Travelling Salesman (TSP). We know how to solve so-called linear programs efficiently, so since the 1980s researchers have been trying to write down a small linear program for TSP. If successful, this approach would have momentous consequences for efficient algorithms. However, our paper - which generalizes work by Yannakakis from 1988 - definitively showed that the approach is doomed to fail, by proving that every linear program that describes TSP needs to be exponentially large. The proof combines geometry, combinatorics, and even a connection with quantum communication theory.”
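To make concrete what "writing down a linear program" involves, here is a minimal sketch (our illustration, not taken from the paper; the toy instance is arbitrary) of solving a small linear program with SciPy's off-the-shelf solver. The refuted hope was that TSP could be captured by such a program with only polynomially many inequalities.

```python
# A toy linear program, solved with SciPy (illustrative instance only):
# maximize x0 + 2*x1 subject to x0 + x1 <= 4, x0 <= 3, x0 >= 0, x1 >= 0.
# linprog minimizes, so the objective is negated.
from scipy.optimize import linprog

c = [-1, -2]
A_ub = [[1, 1], [1, 0]]
b_ub = [4, 3]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimal point [0. 4.] with objective value 8.0
```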
At STOC 2012, Ronald de Wolf and the rest of the team already received a Best Paper Award for their work, and in 2022 they won the ACM STOC 10-year Test of Time Award. Ronald de Wolf won the ERCIM Cor Baayen Award in 2003. | |||
[Source: "The Gödel Prize", Academia.edu (Mogens Nielsen), 30 September 2021. https://www.academia.edu/54370415/The_G%C3%B6del_Prize]
This article constructs a Turing Machine which can solve for β which is RE-complete. Such a machine is only possible if there is something wrong with the foundations of computer science and mathematics. We therefore check our work by looking very closely at Cantor's diagonalization and construct a novel formal language as an Abelian group which allows us, through equivalence relations, to provide a non-trivial counterexample to Cantor's argument. As if that wasn't enough, we then discover that the impredicative nature of Gödel's diagonalization lemma leads to logical tautology, invalidating any meaning behind the method, leaving no doubt that diagonalization is flawed. Our discovery in regards to these foundational arguments opens the door to solving the P vs NP problem.
G��del's incompleteness results are two of the most fundamental and important contributuions to logic and the foundations of mathematics. G��del showed that no axiomatizable formal system strong enough to capture elementary number theory can prove every true sentence in its language. This theorem is an important limiting result regarding the power of formal axiomatics, but has also been of immense importance in other areas, eg, the theory of computability.
This article presents the stimulating lectures delivered by the world's most prominent computer scientists upon their receipt of the ACM Turing Award. This collection appeal to everyone interested in the history and development of computer science, and in the perspective and thoughts, which remain relevant today. | |||||
[Source: "Why Quantum Computers Cannot Work: The Movie!", Combinatorics and more (Gil Kalai), 18 March 2014. https://gilkalai.wordpress.com/2014/03/18/why-quantum-computers-cannot-work-the-movie/ Only the page summary survived extraction: "Update (April 2016): Here is a link to a new post on my May 2016 Notices AMS paper. Here are links to a videotaped lecture in two parts entitled 'why quantum computers cannot work' recorded at the Simons Institute for the Theory of Computing on December 2013 and two additional videos: a short talk on…"]
[Source: "TCS Awards List", Theory Matters, 25 May 2018. https://thmatters.wordpress.com/tcs-awards-list/]
Honorary Lectures
Lifetime Awards
Major Paper or Book, Set of Papers, etc.
Major Prizes
Mid-Career Awards
Service
Student Prizes
Thesis Prizes
Young Career Awards
Honorary Lectures
John von Neumann Lecture
For outstanding and distinguished contributions to the field of applied mathematical sciences and for the effective communication of these ideas to the community.
Some recipients in TCS: Jennifer Chayes, Donald Knuth, Ingrid Daubechies, Richard Karp
link
Lifetime Awards
Claude Shannon Award
To honor consistent and profound contributions to the field of information theory.
Some recipients in TCS: Robert Calderbank, Abbas El Gamal
link
Richard Hamming Medal
To honor an individual or team, up to three in number for exceptional contributions to information sciences, systems, and technology.
Some recipients in TCS: Robert Calderbank, Abbas El Gamal, Michael Luby, Whitfield Diffie, Martin Hellman, Ralph Merkle
link
Kyoto Prize
To honor those who have contributed significantly to the scientific, cultural, and spiritual betterment of mankind in the area of advanced technology.
Some recipients in TCS: Laszlo Lovasz, Donald Knuth
link
Simons Investigator
To support outstanding scientists in their most productive years, when they are establishing creative new research directions, providing leadership to the field and effectively mentoring junior scientists.
Some recipients in TCS: Scott Aaronson, Boaz Barak, James Lee, Madhu Sudan, David Zuckerman, Dan Boneh, Subhash Khot, Chris Umans (2-3/year)
link
Vannevar Bush Faculty Fellows program given by DoD (formerly National Security Science and Engineering Faculty Fellowship)
Seeks outstanding researchers to propose basic research that is potentially transformative in topics areas of interest to the DoD.
Some recipients in TCS: Scott Aaronson, Sean Hallgren, Umesh Vazirani
link
INFORMS Khachiyan Prize
Honors an individual or a team for lifetime achievements in the area of optimization; recognizes a sustained career of scholarship from nominees who are still active in the year of the nomination. The prize serves as an esteemed recognition of innovativeness and impact in the area of optimization, including theory and applications.
Some recipients in TCS: Jean Bernard Lasserre
link
ACM Fellow
ACM’s most prestigious member grade recognizes the top 1% of ACM members for their outstanding accomplishments in computing and information technology and/or outstanding service to ACM and the larger computing community.
Some recipients in TCS: Edith Cohen, Shafi Goldwasser, Martin Grohe, Venkatesan Guruswami, Mor Harchol-Balter, Silvio Micali, Tal Rabin, Michael Sipser, R. Ravi, David Peleg, Erik Demaine, Mike Saks, Noga Alon, Monika Henzinger, Cynthia Dwork, Piotr Indyk
link
SIAM Fellow
To honor SIAM members who are recognized by their peers as distinguished for their contributions to the discipline.
Some recipients in TCS: Ronald DeVore, Pablo Parrilo, Robin Thomas, Joel Spencer, Cynthia Phillips, David Williamson, Fan Chung, Charles Leiserson, Moshe Vardi, Dorit Hochbaum, Michel Goemans, Andrew Goldberg, Alan Frieze
link
IEEE Fellow
A distinction reserved for select IEEE members whose extraordinary accomplishments in any of the IEEE fields of interest are deemed fitting of this prestigious grade elevation.
Some recipients in TCS: Emmanuel Candes, Andrea Montanari, Rebecca Wright
link
AMS Fellow
Recognizes members who have made outstanding contributions to the creation, exposition, advancement, communication, and utilization of mathematics.
Some recipients in TCS: Emmanuel Candes, Peter Sarnak, Noga Alon, Michael Sipser, Victor Pan, Jennifer Chayes, Lenore Blum, Michel Goemans, Tom Leighton, Nick Pippenger, Dana Randall, Madhu Sudan, Joel Spencer, Eva Tardos, Prasad Tetali, Alan Frieze
link
IACR Fellow
To recognize outstanding IACR members for technical and professional contributions that: advance the science, technology, and practice of cryptology and related fields; promote the free exchange of ideas and information about cryptology and related fields; develop and maintain the professional skill and integrity of individuals in the cryptologic community; advance the standing of the cryptologic community in the wider scientific and technical world and promote fruitful relationships between the IACR and other scientific and technical organizations.
Some recipients in TCS: Juan Garay, Yuval Ishai, Paul Kocher, Stafford Tavares, Victor Shoup, Joe Kilian, Tal Rabin, Ran Canetti, Eyal Kushilevitz, Moti Yung
link
INFORMS Fellow
For distinguished individuals who have demonstrated outstanding and exceptional accomplishments and experience in operations research and the management sciences (OR/MS).
Some recipients in TCS: R. Ravi, Dick Karp, Nimrod Megiddo, David Shmoys, Eva Tardos
link
CS-Can/Info-Can Lifetime Achievement Award
Recognizes current or former faculty members in Canadian Computer Science Departments, Schools, and Faculties who have made outstanding and sustained contributions to computing over their careers. The awards can be for achievement in research, teaching, service, or any combination of these.
Some recipients in TCS: David Kirkpatrick, Ian Munro, Steve Cook, Derek Corneil
link
EATCS Award
Given to acknowledge extensive and widely recognized contributions to theoretical computer science over a life long scientific career.
Some recipients in TCS: Noam Nisan, Eva Tardos, Dexter Kozen, Christos Papadimitriou, Gordon Plotkin, Martin Dyer, Moshe Vardi, Boris (Boaz) Trakhtenbrot, Kurt Mehlhorn
link
Major Paper or Book, Set of Papers, etc.
Gödel Prize
For outstanding papers in the area of theoretical computer science; sponsored jointly by the European Association for Theoretical Computer Science (EATCS) and the ACM Special Interest Group on Algorithms and Computation Theory (ACM SIGACT).
Some recipients in TCS: Cynthia Dwork, Frank McSherry, Kobbi Nissim, Adam Smith, Stephen Brookes, Peter W. O’Hearn, Daniel Spielman, Shang-Hua Teng, Ronald Fagin, Amnon Lotem, Moni Naor, Antoine Joux, Dan Boneh, Matthew K. Franklin, Elias Koutsoupias, Christos H. Papadimitriou, Tim Roughgarden, Eva Tardos, Noam Nisan, Amir Ronen
link
Delbert Ray Fulkerson Prize
For outstanding papers in the area of discrete mathematics. The term “discrete mathematics” is interpreted broadly and is intended to include graph theory, networks, mathematical programming, applied combinatorics, applications of discrete mathematics to computer science, and related subjects.
Some recipients in TCS: Francisco Santos, Sanjeev Arora, Satish Rao, Umesh Vazirani, Anders Johansson, Jeff Kahn, Van H. Vu, Laszlo Lovasz, Balazs Szegedy
link
Prize in Game Theory and Computer Science in Honour of Ehud Kalai
Awarded to the person (or persons) who have published the best paper at the interface of game theory and computer science in the last decade.
Some recipients in TCS: Tim Roughgarden, Christos Papadimitriou, Costis Daskalakis, Paul Goldberg
link
ACM SigEcom Test of Time Award
Recognizes the author or authors of an influential paper or series of papers published between ten and twenty-five years ago that has significantly impacted research or applications exemplifying the interplay of economics and computation.
Some recipients in TCS: Gagan Aggarwal, Ashish Goel, Rajeev Motwani
link
INFORMS Computing Society Prize
For the best English language paper or group of related papers dealing with the Operations Research/Computer Science interface.
Some recipients in TCS: David Johnson, Pablo Parrilo, Peter Shor
link
INFORMS Applied Probability Society Best Publication Award
Recognizes outstanding contributions (a book, paper, or set of papers) to Applied Probability.
Some recipients in TCS: Andrea Montanari, Devavrat Shah, David Gamarnik
link
INFORMS Applied Probability Society Best Student Paper Prize
To identify and honor outstanding papers in the field of applied probability that are written primarily by a student. We define applied probability broadly, as any paper related to the modeling, analysis, and control of stochastic systems.
Some recipients in TCS: Guido Lagos
link
SIAM SIAG/Optimization Prize
Awarded to the author(s) of the most outstanding paper, as determined by the prize committee, on a topic in optimization published in English in a peer-reviewed journal.
Some recipients in TCS: Michel Goemans, David Williamson, Yinyu Ye
link
SIAM George Polya Prize for Mathematical Exposition
Awarded every two years to an outstanding expositor of the mathematical sciences. The prize may be awarded for a specific work or for the cumulative impact of multiple expository works that communicate mathematics effectively.
Some recipients in TCS: Nick Trefethen
link
SIAM George Polya Prize in Mathematics
Awarded every four years for a significant contribution, as evidenced by a refereed publication, in an area of mathematics of interest to George Pólya not covered by the George Pólya Prize in Combinatorics or the George Pólya Prize for Mathematical Exposition.
Some recipients in TCS: Emmanuel Candes, Terence Tao, Adam Marcus, Daniel Spielman, Nikhil Srivastava
link
SIAM George Polya Prize in Combinatorics
Awarded every four years for a notable application of combinatorial theory. The prize is broadly intended to recognize specific recent work.
Some recipients in TCS: Van Vu, Neil Robertson, Paul Seymour, Noga Alon
link
SIAM SIAG/CST Best SICON Paper Prize
Awarded to the author(s) of the two most outstanding papers, as determined by the prize committee, published in the SIAM Journal on Control and Optimization (SICON) in the three calendar years before the award year.
Some recipients in TCS: Bernard Chazelle, Pablo Parrilo
link
SIAM Ralph E. Kleinman Prize
Awarded to one individual for outstanding research, or other contributions, that bridge the gap between mathematics and applications. Work that uses high-level mathematics and/or invents new mathematical tools to solve applied problems from engineering, science, and technology is particularly appropriate.
Some recipients in TCS: Emmanuel Candes, Anna Gilbert
link
AMS Levi L. Conant Prize
To recognize the best expository paper published in either the Notices of the AMS or the Bulletin of the AMS in the preceding five years.
Some recipients in TCS: Henry Cohn, Shlomo Hoory, Nati Linial, Avi Wigderson
link
Michael and Sheila Held Prize from the National Academy of Sciences
Honors outstanding, innovative, creative, and influential research in the areas of combinatorial and discrete optimization, or related parts of computer science, such as the design and analysis of algorithms and complexity theory. This $100,000 prize is intended to recognize recent work (defined as published within the last eight years).
Some recipients in TCS: Prasad Raghavendra and David Steurer
link
Edsger W. Dijkstra Prize in Distributed Computing
The prize is given for outstanding papers on the principles of distributed computing, whose significance and impact on the theory and/or practice of distributed computing has been evident for at least a decade.
Some recipients in TCS: Elizabeth Borowsky, Eli Gafni, Noga Alon, Laszlo Babai, Alon Itai, Michael Luby, Michael Ben-Or, Michael Rabin
link
The Alonzo Church Award for Outstanding Contributions to Logic and Computation
The prize is awarded for an outstanding contribution represented by a paper or small group of papers within the past 25 years. This time span allows the contribution to have established evidence of lasting impact and depth.
Some recipients in TCS: Tomas Feder, Moshe Vardi
link
Major Prizes
Kanellakis Prize
Honors specific theoretical accomplishments that have had a significant and demonstrable effect on the practice of computing.
Some recipients in TCS: Amos Fiat, Moni Naor, Michael Luby, James Demmel
link
Knuth Prize
Awarded for major research accomplishments and contributions to the foundations of computer science over an extended period of time.
Some recipients in TCS: Oded Goldreich, Noam Nisan, Laszlo Babai, Richard Lipton, Gary Miller
link
IMU Nevanlinna Prize
Awarded once every 4 years at the International Congress of Mathematicians, for outstanding contributions in Mathematical Aspects of Information Sciences.
Some recipients in TCS: Subhash Khot, Daniel Spielman, Jon Kleinberg
link
Turing Award
To recognize contributions of lasting and major technical importance to the computing field.
Some recipients in TCS: Whitfield Diffie, Martin Hellman, Shafi Goldwasser, Silvio Micali
link
Mid-Career Awards
Shapley Lecture
Given at each World Congress of the Game Theory Society by a distinguished game theorist aged 40 or under at the time of the Lecture.
Some recipients in TCS: Tim Roughgarden
link
INFORMS Farkas prize
Awarded annually to a mid-career researcher for outstanding contributions to the field of optimization, over the course of their career. Such contributions could include papers (published or submitted and accepted), books, monographs, and software. The awardee will be within 25 years of their terminal degree as of January 1 of the year of the award.
Some recipients in TCS: Pablo Parrilo, Michel Goemans
link
ACM Grace Murray Hopper Award
Awarded to the outstanding young computer professional of the year, selected on the basis of a single recent major technical or service contribution. The candidate must have been 35 years of age or less at the time the qualifying contribution was made.
Some recipients in TCS: Craig Gentry, Omer Reingold, Tim Roughgarden, Brent Waters
link
EATCS Presburger Award
To a young scientist (in exceptional cases to several young scientists) for outstanding contributions in theoretical computer science, documented by a published paper or a series of published papers. Nominated scientists must be at most 35 years at the time of the deadline of nomination.
Some recipients in TCS: Mark Braverman, Xi Chen, Erik Demaine, Venkatesan Guruswami, Mihai Patrascu, Alexandra Silva, David Woodruff
link
NSF Alan T. Waterman Award
The National Science Foundation’s (NSF) annual Alan T. Waterman Award honors an outstanding young U.S. scientist or engineer. The awardee receives a grant of $1 million over five years for scientific research or advanced study in any field of science, plus a medal and other recognition. Public Law 94-86 of the 94th Congress established the Waterman Award in 1975 to mark the 25th anniversary of the NSF and to honor its first director, Alan T. Waterman. Criteria: a candidate must be a U.S. citizen or permanent resident. He or she must be 40 years of age or younger, OR not more than 10 years beyond receipt of the Ph.D. degree, by December 31st of the year in which they are nominated. The candidate should have demonstrated exceptional individual achievements in scientific or engineering research of sufficient quality to be placed at the forefront of his or her peers. Criteria also include originality, innovation and a significant impact on the individual’s field.
Some recipients in TCS: Scott Aaronson, Subhash Khot, Herbert Edelsbrunner, Emmanuel Candes, Mung Chiang
link
ACM Prize in Computing
The award recognizes an early to mid-career fundamental innovative contribution in computing that, through its depth, impact and broad implications, exemplifies the greatest achievements in the discipline.
Some recipients in TCS: Dina Katabi, Alexei A. Efros, Stefan Savage, Dan Boneh
link
Service
ACM SIGACT Distinguished Service Prize
Awarded to an individual who has made substantial service contributions to the Theoretical Computer Science community.
Some recipients in TCS: Alistair Sinclair, Laszlo Babai, Avi Wigderson, Lance Fortnow
link
INFORMS Computing Society Harvey J. Greenberg Service Award
Given to an individual in recognition of their impact on the INFORMS Computing Society (ICS).
Some recipients in TCS: Richard S. Barr
link
SIAM Prize for Distinguished Service to the Profession
Awarded to an applied mathematician who has made distinguished contributions to the furtherance of applied mathematics on the national or international level.
Some recipients in TCS: John Hopcroft
link
EATCS Fellow
Awarded to a person having a track record of intellectual and organizational leadership within the EATCS community. Fellows are expected to be “model citizens” of the TCS community, helping to develop the standing of TCS beyond the frontiers of the community.
Some recipients in TCS: Jim Adamek, Susanne Albers, Artur Czumaj, Mike Fellows, Monika Henzinger, Thomas Henzinger, Giuseppe Italiano, Stefano Leonardi, Kurt Mehlhorn, Paul Spirakis, Aravind Srinivasan, Moshe Vardi, Moti Yung
link
Student Prizes
INFORMS George Nicholson Student Paper Competition
To identify and honor outstanding papers in the field of operations research and the management sciences written by a student.
Some recipients in TCS: Andrew Li, Bradley Sturt, Ramesh Johari
link
INFORMS ICS Student Paper Prize
Is given annually to the best paper on computing and operations research by a student author.
Some recipients in TCS: Guido Lagos
link
INFORMS ICS Student Paper Award
Is given annually to the best paper on computing and operations research by a student author.
Some recipients in TCS: Berk Ustun
link
SIAM Frank and Brennie Morgan Award
Awarded each year to an undergraduate student (or students for joint work) for outstanding research in mathematics.
Some recipients in TCS: Jacob Fox, Daniel Kane
link
SIAM Student Paper Prize
Awarded every year to the student author(s) of the most outstanding SIAM paper(s) submitted to the SIAM Student Paper Competition.
Some recipients in TCS: Emanuele Viola
link
SIAM Student Travel Awards
Given to help students gain the experience and exposure that comes from attending and presenting at SIAM conferences. The travel awards are intended to help students defray some of the costs of their attendance.
Awarded for SIAM conferences each year
link
Thesis Prizes
Mathematical Optimization Society Tucker Prize
Awarded at each International Symposium on Mathematical Programming for an outstanding doctoral thesis.
Some recipients in TCS: Daniel Dadush, Oliver Friedmann, Mohit Singh, Uday Shanbhag, Tim Roughgarden, Andrew Goldberg, Fabian Chudak
link
INFORMS Dantzig Prize
Given for the best dissertation in any area of operations research and the management sciences that is innovative and relevant to practice.
Some recipients in TCS: Negin Golrezaei, Ian Osband, Devavrat Shah, Alberto Caprara
link
ACM SIGecom Doctoral Dissertation Award
Recognizes an outstanding dissertation in the field of economics and computation.
Some recipients in TCS: Aviad Rubinstein, Inbal Talgam-Cohen, Matt Weinberg, Balu Sivan
link
ACM Doctoral Dissertation Award
Presented annually to the author(s) of the best doctoral dissertation(s) in computer science and engineering.
Some recipients in TCS: Haitham Hassanieh, Aaron Sidford, Sanjam Garg, John Duchi
link
EATCS Distinguished Dissertation Award
Given to promote and recognize outstanding dissertations in the field of theoretical computer science.
Some recipients in TCS: Bas Ketsman, Ilya Razenshteyn, Aviad Rubinstein, Vincent Cohen-Addad, Mika Goos, Steen Vester, Radu Curticapean, Heng Guo, Georg Zetzsche, Karl Bringmann, Michal Skrzypczak, Mary Wootters
link
Young Career Awards
Presidential Early Career Awards for Scientists and Engineers (PECASE)
To recognize and honor outstanding scientists and engineers at the outset of their independent research careers.
Some TCS recipients: Scott Aaronson, Sean Hallgren, Adam Smith
link
SIAM Richard C. DiPrima Prize
Awarded to an early career researcher who has done outstanding research in applied mathematics (defined as those topics covered by SIAM journals) and who has completed his/her doctoral dissertation and completed all other requirements for his/her doctorate during the period running from three years prior to the award date to one year prior to the award date.
Some recipients in TCS: David Williamson
link
INFORMS Young Researchers Prize
Awarded annually at the fall INFORMS Annual Meeting to one or more young researcher(s) for an outstanding paper in optimization that is published in, or submitted to and accepted by, a refereed professional journal within the four calendar years preceding the year of the award.
Some recipients in TCS: Kamal Jain, Tim Roughgarden
link
INFORMS Applied Probability Society Erlang Prize
A single prize will be awarded once every two years during even calendar years by the Applied Probability Society (APS) of INFORMS to an early career researcher who has made a significant contribution to applied probability.
Some recipients in TCS: Devavrat Shah
link
SIAM Early Career Travel Awards
Support for early career participants affiliated with U.S. institutions for travel to SIAM conferences through a grant from the U.S. National Science Foundation (NSF).
Awarded for SIAM conferences each year
link
SIAM Denes Konig Prize
Awarded biennially to an early career researcher or early career researchers for outstanding research, as determined by the prize committee, in an area of discrete mathematics, based on a publication by the candidate(s) in a peer-reviewed journal published in the three calendar years prior to the year of the award.
Some recipients in TCS: Adam Wade Marcus, Zeev Dvir, Jacob Fox
link
CS-Can/Info-Can Outstanding Young Computer Science Researcher Award
Recognizes excellence in research; awarded to top young faculty members in Canadian Computer Science Departments, Schools, and Faculties who are within the first ten years of their career beyond the completion of their PhD.
Some recipients in TCS: Lap Chi Lau, Nick Harvey, Kevin Leyton-Brown
link
Maintained by David Woodruff (dwoodruf@cs.cmu.edu) | ||||
[Source: "History of Computer Science", Jeffrey Shallit, University of Waterloo. https://cs.uwaterloo.ca/~shallit/Courses/134/history.html]
Written by Jeffrey Shallit for CS 134 at the University of Waterloo in the summer of 1995.
This little web page was hastily stitched together in a few days. Perhaps eventually I will get around to doing a really good job. Suggestions are always welcome.
Before 1900
People have been using mechanical devices to aid calculation for thousands of years. For example, the abacus probably existed in Babylonia (present-day Iraq) about 3000 B.C.E. The ancient Greeks developed some very sophisticated analog computers. In 1901, an ancient Greek shipwreck was discovered off the island of Antikythera. Inside was a salt-encrusted device (now called the Antikythera mechanism) that consisted of rusted metal gears and pointers. When this c. 80 B.C.E. device was reconstructed, it produced a mechanism for predicting the motions of the stars and planets.
John Napier (1550-1617), the Scottish inventor of logarithms, invented Napier's rods (sometimes called "Napier's bones") c. 1610 to simplify the task of multiplication.
In 1641 the French mathematician and philosopher Blaise Pascal (1623-1662) built a mechanical adding machine. Similar work was done by Gottfried Wilhelm Leibniz (1646-1716). Leibniz also advocated use of the binary system for doing calculations.
Recently it was discovered that Wilhelm Schickard (1592-1635), a graduate of the University of Tübingen (Germany), constructed such a device in 1623-4, before both Pascal and Leibniz. A brief description of the device is contained in two letters to Johannes Kepler. Unfortunately, at least one copy of the machine burned up in a fire, and Schickard himself died of bubonic plague in 1635, during the Thirty Years' War.
Joseph-Marie Jacquard (1752-1834) invented a loom that could weave complicated patterns described by holes in punched cards. Charles Babbage (1791-1871) worked on two mechanical devices: the Difference Engine and the far more ambitious Analytical Engine (a precursor of the modern digital computer), but neither worked satisfactorily. (Babbage was a bit of an eccentric -- one biographer calls him an "irascible genius" -- and was probably the model for Daniel Doyce in Charles Dickens' novel, Little Dorrit. A little-known fact about Babbage is that he invented the science of dendrochronology -- tree-ring dating -- but never pursued his invention. In his later years, Babbage devoted much of his time to the persecution of street musicians (organ-grinders).) The Difference Engine can be viewed nowadays in the Science Museum in London, England.
One of Babbage's friends, Ada Augusta Byron, Countess of Lovelace (1815-1852), sometimes is called the "first programmer" because of a report she wrote on Babbage's machine. (The programming language Ada was named for her.)
William Stanley Jevons (1835-1882), a British economist and logician, built a machine in 1869 to solve logic problems. It was "the first such machine with sufficient power to solve a complicated problem faster than the problem could be solved without the machine's aid." (Gardner) It is now in the Oxford Museum of the History of Science.
Herman Hollerith (1860-1929) invented the modern punched card for use in a machine he designed to help tabulate the 1890 census.
1900 - 1939: The Rise of Mathematics
Work on calculating machines continued. Some special-purpose calculating machines were built. For example, in 1919, E. O. Carissan (1880-1925), a lieutenant in the French infantry, designed and had built a marvelous mechanical device for factoring integers and testing them for primality. The Spaniard Leonardo Torres y Quevedo (1852-1936) built some electromechanical calculating devices, including one that played simple chess endgames.
In 1928, the German mathematician David Hilbert (1862-1943) addressed the International Congress of Mathematicians. He posed three questions: (1) Is mathematics complete; i.e. can every mathematical statement be either proved or disproved? (2) Is mathematics consistent, that is, is it true that statements such as "0 = 1" cannot be proved by valid methods? (3) Is mathematics decidable, that is, is there a mechanical method that can be applied to any mathematical assertion and (at least in principle) will eventually tell whether that assertion is true or not? This last question was called the Entscheidungsproblem.
In 1931, Kurt Gödel (1906-1978) answered two of Hilbert's questions. He showed that every sufficiently powerful formal system is either inconsistent or incomplete. Also, if an axiom system is consistent, this consistency cannot be proved within itself. The third question remained open, with 'provable' substituted for 'true'.
In 1936, Alan Turing (1912-1954) provided a solution to Hilbert's Entscheidungsproblem by constructing a formal model of a computer -- the Turing machine -- and showing that there were problems such a machine could not solve. One such problem is the so-called "halting problem": given a Pascal program, does it halt on all inputs?
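A hedged sketch of the argument, in Python rather than Pascal: assume a hypothetical decider halts(prog, input) existed and feed a contrarian program to itself. The function halts below is the assumption being refuted, not real code.

```python
# Hypothetical: halts(prog, inp) returns True iff prog(inp) eventually halts.
# The program below shows no such function can exist.

def paradox(prog):
    if halts(prog, prog):   # the assumed oracle (deliberately undefined)
        while True:         # ...then loop forever
            pass
    return                  # ...otherwise halt at once

# halts(paradox, paradox) can be neither True nor False: either answer
# contradicts what paradox(paradox) actually does.
```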
1940's: Wartime brings the birth of the electronic digital computer
The calculations required for ballistics during World War II spurred the development of the general-purpose electronic digital computer. At Harvard, Howard H. Aiken (1900-1973) built the Mark I electromechanical computer in 1944, with the assistance of IBM.
Military code-breaking also led to computational projects. Alan Turing was involved in the breaking of the code behind the German machine, the Enigma, at Bletchley Park in England. The British built a computing device, the Colossus, to assist with code-breaking.
At Iowa State University in 1939, John Vincent Atanasoff (1904-1995) and Clifford Berry designed and built an electronic computer for solving systems of linear equations, but it never worked properly.
Atanasoff discussed his invention with John William Mauchly (1907-1980), who later, with J. Presper Eckert, Jr. (1919-1995), designed and built the ENIAC, a general-purpose electronic computer originally intended for artillery calculations. Exactly what ideas Mauchly got from Atanasoff is not completely clear, and whether Atanasoff or Mauchly and Eckert deserve credit as the originators of the electronic digital computer was the subject of legal battles and ongoing historical debate. The ENIAC was built at the Moore School at the University of Pennsylvania, and was finished in 1946.
In 1944, Mauchly, Eckert, and John von Neumann (1903-1957) were already at work designing a stored-program electronic computer, the EDVAC. Von Neumann's report, "First Draft of a Report on the EDVAC", was very influential and contains many of the ideas still used in most modern digital computers, including a mergesort routine. Eckert and Mauchly went on to build UNIVAC.
Meanwhile, in Germany, Konrad Zuse (1910-1995) built the first operational, general-purpose, program-controlled calculator, the Z3, in 1941.
In 1945, Vannevar Bush published a surprisingly prescient article in the Atlantic Monthly about the ways information processing would affect the society of the future.
Maurice Wilkes (b. 1913), working in Cambridge, England, built the EDSAC, a computer based on the EDVAC. F. C. Williams (b. 1911) and others at Manchester University built the Manchester Mark I, one version of which was working as early as June 1948. This machine is sometimes called the first stored-program digital computer.
The invention of the transistor in 1947 by John Bardeen (1908-1991), Walter Brattain (1902-1987), and William Shockley (1910-1989) transformed the computer and made possible the microprocessor revolution. For this discovery they won the 1956 Nobel Prize in physics. (Shockley later became notorious for his racist views.)
Jay Forrester (b. 1918) invented magnetic core memory c. 1949.
1950's
Grace Murray Hopper (1906-1992) invented the notion of a compiler, at Remington Rand, in 1951. Earlier, in 1947, Hopper found the first computer "bug" -- a real one -- a moth that had gotten into the Harvard Mark II. (Actually, the use of "bug" to mean defect goes back to at least 1889.)
John Backus and others developed the first FORTRAN compiler in April 1957. LISP, a list-processing language for artificial intelligence programming, was invented by John McCarthy about 1958. Alan Perlis, John Backus, Peter Naur and others developed Algol.
In hardware, Jack Kilby (Texas Instruments) and Robert Noyce (Fairchild Semiconductor) invented the integrated circuit in 1959.
Edsger Dijkstra invented an efficient algorithm for shortest paths in graphs as a demonstration of the ARMAC computer in 1956. He also invented an efficient algorithm for the minimum spanning tree in order to minimize the wiring needed for the X1 computer. (Dijkstra is famous for his caustic, opinionated memos. For example, see his opinions of some programming languages).
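For readers who have not seen it, a minimal sketch of Dijkstra's shortest-path algorithm (the toy graph below is our illustration; Dijkstra's 1956 demonstration of course predates such library code):

```python
import heapq

def dijkstra(graph, source):
    # graph: {node: [(neighbor, weight), ...]} with non-negative weights
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

print(dijkstra({"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}, "a"))
# {'a': 0, 'b': 2, 'c': 3}
```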
In a famous paper that appeared in the journal Mind in 1950, Alan Turing introduced the Turing Test, one of the first efforts in the field of artificial intelligence. He proposed a definition of "thinking" or "consciousness" using a game: a tester would have to decide, on the basis of written conversation, whether the entity in the next room responding to the tester's queries was a human or a computer. If this distinction could not be made, then it could be fairly said that the computer was "thinking".
In 1952, Alan Turing was arrested for "gross indecency" after a burglary led to the discovery of his affair with Arnold Murray. Overt homosexuality was taboo in 1950's England, and Turing was forced to take estrogen "treatments" which rendered him impotent and caused him to grow breasts. On June 7, 1954, despondent over his situation, Turing committed suicide by eating an apple laced with cyanide.
1960's
In the 1960's, computer science came into its own as a discipline. In fact, the term was coined by George Forsythe, a numerical analyst. The first computer science department was formed at Purdue University in 1962. The first person to receive a Ph. D. from a computer science department was Richard Wexelblat, at the University of Pennsylvania, in December 1965.
Operating systems saw major advances. Fred Brooks at IBM designed System/360, a line of different computers with the same architecture and instruction set, from small machine to top-of-the-line. Edsger Dijkstra at Eindhoven designed the THE multiprogramming system.
At the end of the decade, ARPAnet, a precursor to today's Internet, began to be constructed.
Many new programming languages were invented, such as BASIC (developed c. 1964 by John Kemeny (1926-1992) and Thomas Kurtz (b. 1928)).
The 1960's also saw the rise of automata theory and the theory of formal languages. Big names here include Noam Chomsky and Michael Rabin. Chomsky later became well-known for his theory that language is "hard-wired" in human brains, and for his criticism of American foreign policy.
Proving correctness of programs using formal methods also began to be more important in this decade. The work of Tony Hoare played an important role. Hoare also invented Quicksort.
Douglas C. Engelbart invents the computer mouse c. 1968, at SRI.
Ted Hoff (b. 1937) and Federico Faggin at Intel designed the first microprocessor (computer on a chip) in 1969-1971.
A rigorous mathematical basis for the analysis of algorithms began with the work of Donald Knuth (b. 1938), author of the 3-volume treatise entitled The Art of Computer Programming.
1970's
The theory of databases saw major advances with the work of Edgar F. Codd on relational databases. Codd won the Turing award in 1981.
Unix, a very influential operating system, was developed at Bell Laboratories by Ken Thompson (b. 1943) and Dennis Ritchie (b. 1941). Brian Kernighan and Ritchie together developed C, an influential programming language.
Other new programming languages, such as Pascal (invented by Niklaus Wirth) and Ada (developed by a team led by Jean Ichbiah), arose.
The first RISC architecture was begun by John Cocke in 1975, at the Thomas J. Watson Laboratories of IBM. Similar projects started at Berkeley and Stanford around this time.
The 1970's also saw the rise of the supercomputer. Seymour Cray (b. 1925) designed the CRAY-1, which was first shipped in March 1976. It could perform 160 million operations in a second. The Cray XMP came out in 1982. Cray Research was taken over by Silicon Graphics.
There were also major advances in algorithms and computational complexity. In 1971, Steve Cook published his seminal paper on NP-completeness, and shortly thereafter, Richard Karp showed that many natural combinatorial problems were NP-complete. Whit Diffie and Martin Hellman published a paper that introduced the theory of public-key cryptography, and a public-key cryptosystem known as RSA was invented by Ronald Rivest, Adi Shamir, and Leonard Adleman.
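To illustrate the public-key idea just mentioned, here is a toy RSA computation with tiny primes (a standard textbook instance, hopelessly insecure, offered only as a sketch):

```python
# Toy RSA: anyone may encrypt with the public pair (n, e);
# only the holder of d can decrypt.
p, q = 61, 53
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e mod phi

m = 42                     # a message encoded as a number below n
c = pow(m, e, n)           # encryption
assert pow(c, d, n) == m   # decryption recovers the message
```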
In 1979, three graduate students in North Carolina developed a distributed news server which eventually became Usenet.
1980's
This decade also saw the rise of the personal computer, thanks to Steve Wozniak and Steve Jobs, founders of Apple Computer.
The first computer viruses are developed c. 1981. The term was coined by Leonard Adleman, now at the University of Southern California.
In 1981, the first truly successful portable computer was marketed, the Osborne I. In 1984, Apple first marketed the Macintosh computer.
In 1987, the US National Science Foundation started NSFnet, precursor to part of today's Internet.
1990's and Beyond
Parallel computers continue to be developed.
Biological computing, with the recent work of Len Adleman on doing computations via DNA, has great promise. The Human Genome Project is attempting to sequence all the DNA in a single human being.
Quantum computing gets a boost with the discovery by Peter Shor that integer factorization can be performed efficiently on a (theoretical) quantum computer (see the sketch after this list).
The "Information Superhighway" links more and more computers worldwide.
Computers get smaller and smaller; the birth of nano-technology.
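A classical sketch of the reduction Shor's algorithm exploits (our illustration, with a brute-force order-finding step standing in for the quantum part):

```python
from math import gcd

def order(a, N):
    # Smallest r with a**r = 1 (mod N); found here by slow brute force.
    # This is exactly the step a quantum computer performs efficiently.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

N, a = 15, 7               # toy instance; requires gcd(a, N) == 1
r = order(a, N)            # r = 4
if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
    print(gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N))  # 3 5
```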
[Source: "Who inspired the poem 'Addicted to Proof'?", Literature Stack Exchange (asked by Rand al'Thor), 24 January 2024. https://literature.stackexchange.com/questions/26103/who-inspired-the-poem-addicted-to-proof Question summary, truncated in extraction: "One of Peter Shor's poems was published in The Mathematical Intelligencer. It's called 'Addicted to Proof' and it's a short (16-line) poem about a mathematician who won the Abel Prize for a…"]
In this poem, Shor faces and resoundingly overcomes his fear that his best poetry is behind him. The "scientist" who inspired the poem is Shor himself.
Deets
Peter Shor's limericks, sonnets, nature poems, and philosophical verses are justly celebrated for their keen eye, deft touch, formal discipline, and deep insight. Denizens of Literature Stack Exchange also know him as a scholar, chiefly of Tagore; a prosodist; a close reader of great acumen; a translator; a literary historian; and a grammarian.
While literature is the governing pursuit of his life, the sad fact remains that poetry is unremunerative. Even a poet of Shor's distinction needs a day job. Luckily, if somewhat surprisingly, this great man of letters maintains an active interest in the physical and applied sciences. He has parlayed this into a sideline as a mathematics teacher at a small liberal arts college in a New England suburb. I forget which one precisely, but I would hazard Wellesley. In any case, being a man of integrity, Shor undertakes his academic responsibilities with at least as much rigor (though understandably not as much passion) as he does his literary vocation, with the result that he is no mere dilettante in his secondary field; indeed, his achievements therein have garnered some measure of respectability.
This poem reflects on his dual identity as poet and mathematician, and expresses his fear that the intellectual and physical energy that have sustained him through his youthful achievements may be dwindling. The first stanza itself contains clues that the subject of the poem is Shor himself, rather thinly disguised. Like the unnamed hero, Shor is "quite famous" and has won a prestigious award at the age of forty. The opening lines of the second stanza reinforce this identification. Judging by his poems, he is no doubt an engaging and effective teacher, since he communicates complex ideas with clarity and wit. Besides, he has been granted tenure, an honor typically granted only to those with demonstrated excellence in teaching. So one can safely assume "the students love his classes." Also, his ongoing contributions here at LitSE attest that "his mind seems sound." I shall refrain from lengthy exegetics over the well-chosen word "seems", except to note that it puts on display Shor's unerring sense for the mot juste.
The "But" in line six announces the volta. Shor presents himself as losing his ability to mathemate. Given that Shor continues to win important prizes in his ancillary pursuit, we can dismiss all this talk of innumeracy as a red herring. The true anxiety here concerns his poetic output. The theme of writer's block or the loss of poetic inspiration has engaged ageing poets all over the globe, from Yeats ("The Circle Animal's Desertion") to B R Tambe (मधु मागशि माझ्या सख्या, परि). Shor takes up their mantle, presenting himself as a Casaubon-like figure, insisting to anyone who will listen that his magnum opus is yet to come, while he is as well aware as his listener that the epic will never be written.
The poignant last stanza, with its talk of "addiction" and "proof", suggests that the poet is considering turning to alcohol. If strong drink does not unleash his creativity, it might at least drown his sorrows. The last two lines encapsulate both a sense of having a unique message to communicate, and despair at one's ability to communicate it. They are truly heartbreaking. However, there is also a sense of hope, because the lines suggest that in fact the poet's diagnosis is wrong. Up to these concluding lines, the poet has been lamenting his own loss of creativity. But here, there is a hint that in fact the poet's vision is intact. It is the readers who are at fault, because their narrow outlook is what keeps them from being able to understand the poet's message.
The word "conviction" is key here. To what does it refer? On the surface, obviously to the poet's unshakeable belief. But it also points to the poet's having been convicted. The unsympathetic "colleagues," i.e., the listeners or readers, condemn the poet for insisting on his beliefs, but their finding him guilty of delusion or senility might just as well indicate their own limitations. Both the poet and his interlocutors claim to be right, and claim the other party is wrong. But as Shor reminds us, "it doesn't matter if you're right or wrong," because poetic truth is not a matter of correctness.
"Addicted to Proof" could easily have been just another self-indulgent rant from a drunken versifier, bemoaning the depradations of his advancing years. But as the above analysis shows, it is poetry of the highest order. The poem is supremely ironic. He is looking ahead to his declining years, worried that perhaps his inspiration will flag, and that his poetry will no longer be as well received as before. Yet this poem itself is testimony that the worries are misplaced. It shows Shor grappling with his fears and defiantly dealing them a knockout blow.
The irony extends to the "Notes" Shor appended to the poem at first publication. As Eliot did in The Waste Land, Shor extends a tongue-in-cheek simulacrum of apologia and scholarly apparatus to (mis)guide his readers. He identifies with the poem's subject: "if I ever start acting this way, please accept this poem as my apology in advance." He promptly rescinds this identification: "you may believe that you can identify the person who inspiried it. However, you are likely to be only partly right." Since the poem itself tells us that "it doesn't matter if you're right or wrong," the mischievous intent here is clear. But there is also an acknowledgment that any reading of a poem can be no more than "partly right," since no single reading can exhaust the meaning of a poem, particularly not one as richly allusive and superbly crafted as this.
Finally, Shor says "I had more than one person in mind when I wrote it." Here at last is the unvarnished truth, for he has in mind the entire poetic tradition: poets like Yeats and Tambe, his dual identity as poet and mathematician, the distinction between autobiographical self and poetic persona, are the multiple figures (the mathematical allusion is intentional) Shor dazzles before our eyes. | ||||
[Source: "Quantum computing", Wikipedia. https://en.wikipedia.org/wiki/Quantum_computing Short description: "Technology that uses quantum mechanics".]
The basic unit of information in quantum computing, the qubit (or "quantum bit"), serves the same function as the bit in classical computing. However, unlike a classical bit, which can be in one of two states (a binary), a qubit can exist in a superposition of its two "basis" states, which loosely means that it is in both states simultaneously. When measuring a qubit, the result is a probabilistic output of a classical bit. If a quantum computer manipulates the qubit in a particular way, wave interference effects can amplify the desired measurement results. The design of quantum algorithms involves creating procedures that allow a quantum computer to perform calculations efficiently and quickly.
Physically engineering high-quality qubits has proven challenging. If a physical qubit is not sufficiently isolated from its environment, it suffers from quantum decoherence, introducing noise into calculations. National governments have invested heavily in experimental research that aims to develop scalable qubits with longer coherence times and lower error rates. Two of the most promising technologies[citation needed] are superconductors (which isolate an electrical current by eliminating electrical resistance) and ion traps (which confine a single atomic particle using electromagnetic fields).
In principle, a classical computer can solve the same computational problems as a quantum computer, given enough time. Quantum advantage comes in the form of time complexity rather than computability, and quantum complexity theory shows that some quantum algorithms are exponentially more efficient than the best known classical algorithms. A large-scale quantum computer could in theory solve computational problems unsolvable by a classical computer in any reasonable amount of time. While claims of such quantum supremacy have drawn significant attention to the discipline, near-term practical use cases remain limited.
History
For a chronological guide, see Timeline of quantum computing and communication.
For many years, the fields of quantum mechanics and computer science formed distinct academic communities. Modern quantum theory developed in the 1920s to explain the wave–particle duality observed at atomic scales,[3] and digital computers emerged in the following decades to replace human computers for tedious calculations.[4] Both disciplines had practical applications during World War II; computers played a major role in wartime cryptography,[5] and quantum physics was essential for the nuclear physics used in the Manhattan Project.[6]
As physicists applied quantum mechanical models to computational problems and swapped digital bits for qubits, the fields of quantum mechanics and computer science began to converge. In 1980, Paul Benioff introduced the quantum Turing machine, which uses quantum theory to describe a simplified computer.[7] When digital computers became faster, physicists faced an exponential increase in overhead when simulating quantum dynamics,[8] prompting Yuri Manin and Richard Feynman to independently suggest that hardware based on quantum phenomena might be more efficient for computer simulation.[9][10] In a 1984 paper, Charles Bennett and Gilles Brassard applied quantum theory to cryptography protocols and demonstrated that quantum key distribution could enhance information security.[12][13]
Quantum algorithms then emerged for solving oracle problems, such as Deutsch's algorithm in 1985,[14] the Bernstein–Vazirani algorithm in 1993,[15] and Simon's algorithm in 1994.[16] These algorithms did not solve practical problems, but demonstrated mathematically that one could gain more information by querying a black box with a quantum state in superposition, sometimes referred to as quantum parallelism.
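A small NumPy simulation of Deutsch's algorithm conveys the idea (the oracle encoding below is one standard convention, written out as an illustration rather than taken from the cited papers): a single query made in superposition decides whether f is constant or balanced.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)

def deutsch(f):
    # Oracle U_f |x, y> = |x, y XOR f(x)> as a 4x4 permutation matrix,
    # with basis states indexed by 2*x + y.
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1.0
    state = np.kron([1.0, 0.0], [0.0, 1.0])    # start in |0>|1>
    state = np.kron(H, H) @ state              # superpose both qubits
    state = U @ state                          # one oracle query
    state = np.kron(H, I) @ state              # interference on first qubit
    p1 = state[2] ** 2 + state[3] ** 2         # P(first qubit reads 1)
    return "balanced" if p1 > 0.5 else "constant"

print(deutsch(lambda x: 0))  # constant
print(deutsch(lambda x: x))  # balanced
```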
Peter Shor built on these results with his 1994 algorithms for breaking the widely used RSA and Diffie–Hellman encryption protocols, which drew significant attention to the field of quantum computing. In 1996, Grover's algorithm established a quantum speedup for the widely applicable unstructured search problem.[20] The same year, Seth Lloyd proved that quantum computers could simulate quantum systems without the exponential overhead present in classical simulations,[22] validating Feynman's 1982 conjecture.[23]
Over the years, experimentalists have constructed small-scale quantum computers using trapped ions and superconductors. In 1998, a two-qubit quantum computer demonstrated the feasibility of the technology,[25][26] and subsequent experiments have increased the number of qubits and reduced error rates.
In 2019, Google AI and NASA announced that they had achieved quantum supremacy with a 54-qubit machine, performing a computation that is infeasible for any classical computer.[27][28][29] However, the validity of this claim is still being actively researched.[30][31]
The threshold theorem shows how increasing the number of qubits can mitigate errors, yet fully fault-tolerant quantum computing remains "a rather distant dream".[33] According to some researchers, noisy intermediate-scale quantum (NISQ) machines may have specialized uses in the near future, but noise in quantum gates limits their reliability.[33]
Investment in quantum computing research has increased in the public and private sectors.[34][35] As one consulting firm summarized,[36]
... investment dollars are pouring in, and quantum-computing start-ups are proliferating. ... While quantum computing promises to help businesses solve problems that are beyond the reach and speed of conventional high-performance computers, use cases are largely experimental and hypothetical at this early stage.
From a business-management point of view, the potential applications of quantum computing fall into four major categories: cybersecurity, data analytics and artificial intelligence, optimization and simulation, and data management and searching.[37]
In December 2023, physicists reported, for the first time, the entanglement of individual molecules, which may have significant applications in quantum computing.[38] Also in December 2023, scientists at Harvard University successfully created "quantum circuits" that correct errors more efficiently than alternative methods, potentially removing a major obstacle to practical quantum computers.[39][40] The Harvard research team was supported by MIT, QuEra Computing, Caltech, and Princeton University and funded by DARPA's Optimization with Noisy Intermediate-Scale Quantum devices (ONISQ) program.[41][42] Research efforts are ongoing to jumpstart quantum computing through topological and photonic approaches as well.[43]
In July 2024, quantum computing company Quantinuum announced that its new 56-qubit H2-1 computer had broken a world record in "quantum supremacy," topping the benchmark performance set by Google's Sycamore machine by a factor of 100 while consuming 30,000 times less power.[44]
Quantum information processing
See also: Introduction to quantum mechanics
Computer engineers typically describe a modern computer's operation in terms of classical electrodynamics. Within these "classical" computers, some components (such as semiconductors and random number generators) may rely on quantum behavior, but these components are not isolated from their environment, so any quantum information quickly decoheres. While programmers may depend on probability theory when designing a randomized algorithm, quantum mechanical notions like superposition and interference are largely irrelevant for program analysis.
Quantum programs, in contrast, rely on precise control of coherent quantum systems. Physicists describe these systems mathematically using linear algebra. Complex numbers model probability amplitudes, vectors model quantum states, and matrices model the operations that can be performed on these states. Programming a quantum computer is then a matter of composing operations in such a way that the resulting program computes a useful result in theory and is implementable in practice.
As physicist Charlie Bennett describes the relationship between quantum and classical computers,[45]
A classical computer is a quantum computer ... so we shouldn't be asking about "where do quantum speedups come from?" We should say, "well, all computers are quantum. ... Where do classical slowdowns come from?"
Quantum information
Just as the bit is the basic concept of classical information theory, the qubit is the fundamental unit of quantum information. The same term qubit is used to refer to an abstract mathematical model and to any physical system that is represented by that model. A classical bit, by definition, exists in either of two physical states, which can be denoted 0 and 1. A qubit is also described by a state, and two states often written |0⟩ and |1⟩ serve as the quantum counterparts of the classical states 0 and 1. However, the quantum states |0⟩ and |1⟩ belong to a vector space, meaning that they can be multiplied by constants and added together, and the result is again a valid quantum state. Such a combination is known as a superposition of |0⟩ and |1⟩.
A two-dimensional vector mathematically represents a qubit state. Physicists typically use Dirac notation for quantum mechanical linear algebra, writing |ψ⟩ ('ket psi') for a vector labeled ψ. Because a qubit is a two-state system, any qubit state takes the form α|0⟩ + β|1⟩, where |0⟩ and |1⟩ are the standard basis states,[a] and α and β are the probability amplitudes, which are in general complex numbers. If either α or β is zero, the qubit is effectively a classical bit; when both are nonzero, the qubit is in superposition. Such a quantum state vector acts similarly to a (classical) probability vector, with one key difference: unlike probabilities, probability amplitudes are not necessarily positive numbers. Negative amplitudes allow for destructive wave interference.
When a qubit is measured in the standard basis, the result is a classical bit. The Born rule describes the norm-squared correspondence between amplitudes and probabilities: when measuring a qubit α|0⟩ + β|1⟩, the state collapses to |0⟩ with probability |α|², or to |1⟩ with probability |β|². Any valid qubit state has coefficients α and β such that |α|² + |β|² = 1. As an example, measuring the qubit (1/√2)|0⟩ + (1/√2)|1⟩ would produce either |0⟩ or |1⟩ with equal probability.
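To make the amplitude-probability correspondence concrete, here is a minimal NumPy sketch (an illustration of the math above, not code from any particular quantum library):

```python
import numpy as np

# Standard basis states |0> and |1> as two-dimensional complex vectors.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# The equal superposition (1/sqrt(2))|0> + (1/sqrt(2))|1>.
psi = (ket0 + ket1) / np.sqrt(2)

# Born rule: measurement probabilities are the squared magnitudes of amplitudes.
probs = np.abs(psi) ** 2
print(probs)                       # [0.5 0.5]
assert np.isclose(probs.sum(), 1)  # valid states satisfy |alpha|^2 + |beta|^2 = 1
```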
Each additional qubit doubles the dimension of the state space. As an example, the vector (1/√2)|00⟩ + (1/√2)|01⟩ represents a two-qubit state, a tensor product of the qubit |0⟩ with the qubit (1/√2)|0⟩ + (1/√2)|1⟩. This vector inhabits a four-dimensional vector space spanned by the basis vectors |00⟩, |01⟩, |10⟩, and |11⟩. The Bell state (1/√2)|00⟩ + (1/√2)|11⟩ is impossible to decompose into the tensor product of two individual qubits; the two qubits are entangled because their probability amplitudes are correlated. In general, the vector space for an n-qubit system is 2^n-dimensional, and this makes it challenging for a classical computer to simulate a quantum one: representing a 100-qubit system requires storing 2^100 classical values.
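The tensor-product structure can be illustrated the same way; a small sketch, with np.kron standing in for the tensor product (again purely illustrative):

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Product state: |0> tensor (1/sqrt(2))(|0> + |1>) lives in 4 dimensions.
product = np.kron(ket0, (ket0 + ket1) / np.sqrt(2))
print(product)   # amplitudes on |00>, |01>, |10>, |11>: [0.707, 0.707, 0, 0]

# The Bell state (1/sqrt(2))(|00> + |11>) is entangled: no pair of
# single-qubit vectors a, b satisfies np.kron(a, b) == bell.
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
print(bell)      # [0.707, 0, 0, 0.707]

# The dimension doubles with each qubit: n qubits need 2**n complex amplitudes.
print(2 ** 100)  # the storage count quoted above for a 100-qubit system
```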
Unitary operators
See also: Unitarity (physics)
The state of a one-qubit quantum memory can be manipulated by applying quantum logic gates, analogous to how classical memory can be manipulated with classical logic gates. One important gate for both classical and quantum computation is the NOT gate, which can be represented by the matrix

X := \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.

Mathematically, the application of such a logic gate to a quantum state vector is modelled with matrix multiplication. Thus
X|0⟩ = |1⟩ and X|1⟩ = |0⟩.
The mathematics of single qubit gates can be extended to operate on multi-qubit quantum memories in two important ways. One way is simply to select a qubit and apply that gate to the target qubit while leaving the remainder of the memory unaffected. Another way is to apply the gate to its target only if another part of the memory is in a desired state. These two choices can be illustrated using another example. The possible states of a two-qubit quantum memory are

|00⟩ := \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}; \quad |01⟩ := \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix}; \quad |10⟩ := \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix}; \quad |11⟩ := \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix}.

The controlled NOT (CNOT) gate can then be represented using the following matrix:

CNOT := \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}.

As a mathematical consequence of this definition, CNOT|00⟩ = |00⟩, CNOT|01⟩ = |01⟩, CNOT|10⟩ = |11⟩, and CNOT|11⟩ = |10⟩. In other words, the CNOT applies a NOT gate (X from before) to the second qubit if and only if the first qubit is in the state |1⟩. If the first qubit is |0⟩, nothing is done to either qubit.
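Since gate application is ordinary matrix-vector multiplication, the behavior described above can be checked directly; a brief NumPy sketch (illustrative only):

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

X = np.array([[0, 1],
              [1, 0]], dtype=complex)           # the NOT gate

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)  # controlled-NOT

print(X @ ket0)                    # |1>: X flips the basis state
print(CNOT @ np.kron(ket1, ket0))  # |11>: control qubit is |1>, so the target flips
print(CNOT @ np.kron(ket0, ket0))  # |00>: control qubit is |0>, nothing happens

# "Apply a single-qubit gate to one qubit of a larger memory" is a tensor
# product with the identity on the untouched qubit:
I2 = np.eye(2)
print(np.kron(X, I2) @ np.kron(ket0, ket1))  # X on the first qubit: |01> -> |11>
```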
In summary, quantum computation can be described as a network of quantum logic gates and measurements. However, any measurement can be deferred to the end of quantum computation, though this deferment may come at a computational cost, so most quantum circuits depict a network consisting only of quantum logic gates and no measurements.
Quantum parallelism
Quantum parallelism is the heuristic that quantum computers can be thought of as evaluating a function for multiple input values simultaneously. This can be achieved by preparing a quantum system in a superposition of input states, and applying a unitary transformation that encodes the function to be evaluated. The resulting state encodes the function's output values for all input values in the superposition, allowing for the computation of multiple outputs simultaneously. This property is key to the speedup of many quantum algorithms. However, "parallelism" in this sense is insufficient to speed up a computation, because the measurement at the end of the computation gives only one value. To be useful, a quantum algorithm must also incorporate some other conceptual ingredient.
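A toy sketch of this heuristic (our own construction, not a standard API): encode a one-bit function f into the two-qubit unitary U_f|x⟩|y⟩ = |x⟩|y ⊕ f(x)⟩ and apply it to a superposition of both inputs.

```python
import numpy as np

def oracle(f):
    """Permutation matrix implementing U_f |x, y> = |x, y XOR f(x)>."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1.0
    return U

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
ket0 = np.array([1.0, 0.0])

# Input qubit in superposition of 0 and 1; output qubit starts at |0>.
state = np.kron(H @ ket0, ket0)

f = lambda x: 1 - x                # an example one-bit function
state = oracle(f) @ state
print(state)  # equal amplitude on |0, f(0)> and |1, f(1)>: f evaluated on both inputs

# But measuring yields only one random (x, f(x)) pair, so another ingredient
# (typically interference) is needed before this becomes a genuine speedup.
```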
Quantum programming
Further information: Quantum programming
There are a number of models of computation for quantum computing, distinguished by the basic elements in which the computation is decomposed.
Gate array
A quantum gate array decomposes computation into a sequence of few-qubit quantum gates; as noted above, measurements can be deferred to the end of the computation, so most quantum circuits depict a network consisting only of quantum logic gates and no measurements.
Any quantum computation (which is, in the above formalism, any unitary matrix of size 2^n × 2^n over n qubits) can be represented as a network of quantum logic gates from a fairly small family of gates. A choice of gate family that enables this construction is known as a universal gate set, since a computer that can run such circuits is a universal quantum computer. One common such set includes all single-qubit gates as well as the CNOT gate from above. This means any quantum computation can be performed by executing a sequence of single-qubit gates together with CNOT gates. Though this gate set is infinite, it can be replaced with a finite gate set by appealing to the Solovay–Kitaev theorem. Implementations of Boolean functions using few-qubit quantum gates have also been demonstrated.[52]
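As a small worked example of gate-array composition (a sketch reusing the gates above): a Hadamard followed by a CNOT already prepares an entangled state, and stacking circuit layers is just matrix multiplication.

```python
import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # a single-qubit gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Two-layer circuit on 2 qubits: H on the first qubit, then CNOT.
# The rightmost factor acts first, as with function composition.
circuit = CNOT @ np.kron(H, I2)

ket00 = np.array([1.0, 0.0, 0.0, 0.0])
print(circuit @ ket00)   # the Bell state (|00> + |11>)/sqrt(2)

# Any 2^n x 2^n unitary factors (approximately) into such layers, which is
# what makes {single-qubit gates, CNOT} a universal gate set.
```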
Measurement-based quantum computing
A measurement-based quantum computer decomposes computation into a sequence of Bell state measurements and single-qubit quantum gates applied to a highly entangled initial state (a cluster state), using a technique called quantum gate teleportation.
Adiabatic quantum computing
An adiabatic quantum computer, based on quantum annealing, decomposes computation into a slow continuous transformation of an initial Hamiltonian into a final Hamiltonian, whose ground states contain the solution.[53]
Neuromorphic quantum computing
Neuromorphic quantum computing (abbreviated as 'n.quantum computing') is an unconventional type of computing that uses neuromorphic computing to perform quantum operations. It has been suggested that quantum algorithms, which are algorithms that run on a realistic model of quantum computation, can be computed equally efficiently with neuromorphic quantum computing. Both traditional quantum computing and neuromorphic quantum computing are physics-based unconventional approaches to computation and do not follow the von Neumann architecture. They both construct a system (a circuit) that represents the physical problem at hand, and then leverage the respective physical properties of the system to seek the "minimum". Neuromorphic quantum computing and quantum computing share similar physical properties during computation.
Topological quantum computing
A topological quantum computer decomposes computation into the braiding of anyons in a 2D lattice.[54]
Quantum Turing machine
A quantum Turing machine is the quantum analog of a Turing machine.[7] All of these models of computation—quantum circuits,[55] one-way quantum computation,[56] adiabatic quantum computation,[57] and topological quantum computation[58]—have been shown to be equivalent to the quantum Turing machine; given a perfect implementation of one such quantum computer, it can simulate all the others with no more than polynomial overhead. This equivalence need not hold for practical quantum computers, since the overhead of simulation may be too large to be practical.
Quantum cryptography and cybersecurity
Quantum computing has significant potential applications in the fields of cryptography and cybersecurity. Quantum cryptography, which relies on the principles of quantum mechanics, offers the possibility of secure communication channels that are resistant to eavesdropping. Quantum key distribution (QKD) protocols, such as BB84, enable the secure exchange of cryptographic keys between parties, ensuring the confidentiality and integrity of communication. Moreover, quantum random number generators (QRNGs) can produce high-quality random numbers, which are essential for secure encryption.
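The sifting step of BB84 can be illustrated with a toy classical simulation (a sketch under strong simplifying assumptions: no eavesdropper and no channel noise are modeled, and all names are ours):

```python
import random

def bb84_sift(n_bits=1000, seed=0):
    """Toy BB84 sketch: random bits and bases for Alice, random bases for Bob.

    Only the basis-sifting step that yields the shared raw key is shown;
    eavesdropping detection would compare a sacrificial subset of the key.
    """
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.choice("XZ") for _ in range(n_bits)]
    bob_bases   = [rng.choice("XZ") for _ in range(n_bits)]

    # When Bob happens to measure in Alice's basis he recovers her bit exactly;
    # mismatched-basis rounds are discarded during public basis comparison.
    key = [b for b, ba, bb in zip(alice_bits, alice_bases, bob_bases) if ba == bb]
    return key

key = bb84_sift()
print(len(key))  # about half the rounds survive sifting (~500)
```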
However, quantum computing also poses challenges to traditional cryptographic systems. Shor's algorithm, a quantum algorithm for integer factorization, could potentially break widely used public-key cryptography schemes like RSA, which rely on the difficulty of factoring large numbers. Post-quantum cryptography, which involves the development of cryptographic algorithms that are resistant to attacks by both classical and quantum computers, is an active area of research aimed at addressing this concern.
Ongoing research in quantum cryptography and post-quantum cryptography is crucial for ensuring the security of communication and data in the face of evolving quantum computing capabilities. Advances in these fields, such as the development of new QKD protocols, the improvement of QRNGs, and the standardization of post-quantum cryptographic algorithms, will play a key role in maintaining the integrity and confidentiality of information in the quantum era.[59]
Communication
Further information: Quantum information science
Quantum cryptography enables new ways to transmit data securely; for example, quantum key distribution uses entangled quantum states to establish secure cryptographic keys.[60] When a sender and receiver exchange quantum states, they can guarantee that an adversary does not intercept the message, as any unauthorized eavesdropper would disturb the delicate quantum system and introduce a detectable change.[61] With appropriate cryptographic protocols, the sender and receiver can thus establish shared private information resistant to eavesdropping.[12][62]
Modern fiber-optic cables can transmit quantum information over relatively short distances. Ongoing experimental research aims to develop more reliable hardware (such as quantum repeaters), hoping to scale this technology to long-distance quantum networks with end-to-end entanglement. Theoretically, this could enable novel technological applications, such as distributed quantum computing and enhanced quantum sensing.[63][64]
Algorithms
Progress in finding quantum algorithms typically focuses on this quantum circuit model, though exceptions like the quantum adiabatic algorithm exist. Quantum algorithms can be roughly categorized by the type of speedup achieved over corresponding classical algorithms.[65]
Quantum algorithms that offer more than a polynomial speedup over the best-known classical algorithm include Shor's algorithm for factoring and the related quantum algorithms for computing discrete logarithms, solving Pell's equation, and more generally solving the hidden subgroup problem for abelian finite groups.[65] These algorithms depend on the primitive of the quantum Fourier transform. No mathematical proof has been found that shows that an equally fast classical algorithm cannot be discovered, but evidence suggests that this is unlikely.[66] Certain oracle problems like Simon's problem and the Bernstein–Vazirani problem do give provable speedups, though this is in the quantum query model, a restricted model where lower bounds are much easier to prove and which doesn't necessarily translate into speedups for practical problems.
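For intuition about the quantum Fourier transform primitive these algorithms share, here is a sketch that builds the QFT as a dense unitary matrix (didactic only; real devices implement it as O(n²) gates rather than one explicit matrix):

```python
import numpy as np

def qft_matrix(n_qubits):
    """Dense QFT unitary on n qubits (didactic; hardware uses O(n^2) gates)."""
    N = 2 ** n_qubits
    omega = np.exp(2j * np.pi / N)             # primitive N-th root of unity
    j, k = np.meshgrid(np.arange(N), np.arange(N))
    return omega ** (j * k) / np.sqrt(N)

F = qft_matrix(3)
print(np.allclose(F @ F.conj().T, np.eye(8)))  # True: the QFT is unitary
```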
Other problems, including the simulation of quantum physical processes from chemistry and solid-state physics, the approximation of certain Jones polynomials, and the quantum algorithm for linear systems of equations have quantum algorithms appearing to give super-polynomial speedups and are BQP-complete. Because these problems are BQP-complete, an equally fast classical algorithm for them would imply that no quantum algorithm gives a super-polynomial speedup, which is believed to be unlikely.
Some quantum algorithms, like Grover's algorithm and amplitude amplification, give polynomial speedups over corresponding classical algorithms.[65] Though these algorithms give comparably modest quadratic speedup, they are widely applicable and thus give speedups for a wide range of problems.
Simulation of quantum systems
Main article: Quantum simulation
Since chemistry and nanotechnology rely on understanding quantum systems, and such systems are impossible to simulate in an efficient manner classically, quantum simulation may be an important application of quantum computing.[68] Quantum simulation could also be used to simulate the behavior of atoms and particles at unusual conditions such as the reactions inside a collider.[69] In June 2023, IBM computer scientists reported that a quantum computer produced better results for a physics problem than a conventional supercomputer.[70][71]
About 2% of the annual global energy output is used for nitrogen fixation to produce ammonia for the Haber process in the agricultural fertilizer industry (even though naturally occurring organisms also produce ammonia). Quantum simulations might be used to understand this process and increase the energy efficiency of production.[72] It is expected that an early use of quantum computing will be modeling that improves the efficiency of the Haber–Bosch process[73] by the mid 2020s[74] although some have predicted it will take longer.[75]
Post-quantum cryptography
Main article: Post-quantum cryptography
A notable application of quantum computation is for attacks on cryptographic systems that are currently in use. Integer factorization, which underpins the security of public key cryptographic systems, is believed to be computationally infeasible with an ordinary computer for large integers if they are the product of few prime numbers (e.g., products of two 300-digit primes).[76] By comparison, a quantum computer could solve this problem exponentially faster using Shor's algorithm to find its factors. This ability would allow a quantum computer to break many of the cryptographic systems in use today, in the sense that there would be a polynomial time (in the number of digits of the integer) algorithm for solving the problem. In particular, most of the popular public key ciphers are based on the difficulty of factoring integers or the discrete logarithm problem, both of which can be solved by Shor's algorithm. In particular, the RSA, Diffie–Hellman, and elliptic curve Diffie–Hellman algorithms could be broken. These are used to protect secure Web pages, encrypted email, and many other types of data. Breaking these would have significant ramifications for electronic privacy and security.
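To see where the quantum speedup enters, here is a sketch of the number-theoretic core of Shor's algorithm with the quantum step replaced by brute force; only the order-finding below is exponential, and that is exactly the part the quantum Fourier transform accelerates (function names are ours):

```python
from math import gcd
import random

def order(a, N):
    """Multiplicative order of a mod N by brute force: the step Shor's
    algorithm replaces with quantum period finding."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_core(N, seed=1):
    """Factor N via period finding, with the 'quantum' part done classically."""
    rng = random.Random(seed)
    while True:
        a = rng.randrange(2, N)
        d = gcd(a, N)
        if d > 1:
            return d                      # a lucky guess already shares a factor
        r = order(a, N)
        if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
            # a^(r/2) is a nontrivial square root of 1 mod N, so the gcd splits N.
            return gcd(pow(a, r // 2, N) - 1, N)

print(shor_classical_core(15))  # 3 or 5
```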
Identifying cryptographic systems that may be secure against quantum algorithms is an actively researched topic under the field of post-quantum cryptography.[78][79] Some public-key algorithms are based on problems other than the integer factorization and discrete logarithm problems to which Shor's algorithm applies, like the McEliece cryptosystem based on a problem in coding theory.[78][80] Lattice-based cryptosystems are also not known to be broken by quantum computers, and finding a polynomial time algorithm for solving the dihedral hidden subgroup problem, which would break many lattice based cryptosystems, is a well-studied open problem.[81] It has been proven that applying Grover's algorithm to break a symmetric (secret key) algorithm by brute force requires time equal to roughly 2n/2 invocations of the underlying cryptographic algorithm, compared with roughly 2n in the classical case,[82] meaning that symmetric key lengths are effectively halved: AES-256 would have the same security against an attack using Grover's algorithm that AES-128 has against classical brute-force search (see Key size).
Search problems
Main article: Grover's algorithm
The most well-known example of a problem that allows for a polynomial quantum speedup is unstructured search, which involves finding a marked item out of a list of n items in a database. This can be solved by Grover's algorithm using O(√n) queries to the database, quadratically fewer than the Ω(n) queries required for classical algorithms. In this case, the advantage is not only provable but also optimal: it has been shown that Grover's algorithm gives the maximal possible probability of finding the desired element for any number of oracle lookups. Many examples of provable quantum speedups for query problems are based on Grover's algorithm, including Brassard, Høyer, and Tapp's algorithm for finding collisions in two-to-one functions,[83] and Farhi, Goldstone, and Gutmann's algorithm for evaluating NAND trees.[84]
Problems that can be efficiently addressed with Grover's algorithm have the following properties:[85][86]
There is no searchable structure in the collection of possible answers,
The number of possible answers to check is the same as the number of inputs to the algorithm, and
There exists a boolean function that evaluates each input and determines whether it is the correct answer.
For problems with all these properties, the running time of Grover's algorithm on a quantum computer scales as the square root of the number of inputs (or elements in the database), as opposed to the linear scaling of classical algorithms. A general class of problems to which Grover's algorithm can be applied[87] is the Boolean satisfiability problem, where the database through which the algorithm iterates is that of all possible answers. An example and possible application of this is a password cracker that attempts to guess a password. Breaking symmetric ciphers with this algorithm is of interest to government agencies.[88]
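The quadratic scaling can be seen in a minimal statevector sketch (illustrative, not an optimized implementation): with N = 2^10 items, roughly π/4 · √N ≈ 25 oracle queries suffice.

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Minimal statevector simulation of Grover's algorithm (didactic sketch)."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))   # uniform superposition over all N items

    phase = np.ones(N)
    phase[marked] = -1                   # oracle: phase-flip the marked item
    iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        state = phase * state            # one oracle query
        state = 2 * state.mean() - state # diffusion: inversion about the mean
    return np.abs(state) ** 2            # Born-rule measurement probabilities

probs = grover_search(n_qubits=10, marked=777)
print(probs[777])   # close to 1 after only ~25 queries, versus ~512 classically
```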
Quantum annealing
Quantum annealing relies on the adiabatic theorem to undertake calculations. A system is placed in the ground state for a simple Hamiltonian, which slowly evolves to a more complicated Hamiltonian whose ground state represents the solution to the problem in question. The adiabatic theorem states that if the evolution is slow enough the system will stay in its ground state at all times through the process. Adiabatic optimization may be helpful for solving computational biology problems.[89]
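A hedged numerical sketch of these ingredients, with toy two-qubit Hamiltonians of our own choosing: interpolate H(s) = (1 - s)H0 + sH1 and track the spectral gap, which governs how slowly the evolution must proceed.

```python
import numpy as np

# Toy adiabatic schedule on two qubits: H(s) = (1 - s) * H0 + s * H1.
sx = np.array([[0, 1], [1, 0]])
I2 = np.eye(2)

# Initial Hamiltonian: transverse field; its ground state is the uniform superposition.
H0 = -(np.kron(sx, I2) + np.kron(I2, sx))

# Problem Hamiltonian: a diagonal cost function; its ground state encodes the answer.
costs = np.array([3.0, 1.0, 2.0, 0.0])      # basis state |11> (index 3) is optimal
H1 = np.diag(costs)

for s in np.linspace(0.0, 1.0, 5):
    H = (1 - s) * H0 + s * H1
    energies = np.linalg.eigvalsh(H)
    print(f"s={s:.2f}  spectral gap={energies[1] - energies[0]:.3f}")

# Adiabatic theorem, informally: evolve slowly compared with 1/gap^2 and the
# system tracks the instantaneous ground state, finishing at the argmin of `costs`.
```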
Machine learning
Main article: Quantum machine learning
Since quantum computers can produce outputs that classical computers cannot produce efficiently, and since quantum computation is fundamentally linear algebraic, some express hope in developing quantum algorithms that can speed up machine learning tasks.[33][90]
For example, the HHL Algorithm, named after its discoverers Harrow, Hassidim, and Lloyd, is believed to provide speedup over classical counterparts.[33][91] Some research groups have recently explored the use of quantum annealing hardware for training Boltzmann machines and deep neural networks.[92][93][94]
Deep generative chemistry models emerge as powerful tools to expedite drug discovery. However, the immense size and complexity of the structural space of all possible drug-like molecules pose significant obstacles, which could be overcome in the future by quantum computers. Quantum computers are naturally good for solving complex quantum many-body problems[22] and thus may be instrumental in applications involving quantum chemistry. Therefore, one can expect that quantum-enhanced generative models[95] including quantum GANs[96] may eventually be developed into ultimate generative chemistry algorithms.
Engineering
As of 2023, classical computers outperform quantum computers for all real-world applications. While current quantum computers may speed up solutions to particular mathematical problems, they give no computational advantage for practical tasks. Scientists and engineers are exploring multiple technologies for quantum computing hardware and hope to develop scalable quantum architectures, but serious obstacles remain.[97][98]
Challenges
There are a number of technical challenges in building a large-scale quantum computer.[99] Physicist David DiVincenzo has listed these requirements for a practical quantum computer:[100]
Physically scalable to increase the number of qubits
Qubits that can be initialized to arbitrary values
Quantum gates that are faster than decoherence time
Universal gate set
Qubits that can be read easily.
Sourcing parts for quantum computers is also very difficult. Superconducting quantum computers, like those constructed by Google and IBM, need helium-3, a nuclear research byproduct, and special superconducting cables made only by the Japanese company Coax Co.[101]
The control of multi-qubit systems requires the generation and coordination of a large number of electrical signals with tight and deterministic timing resolution. This has led to the development of quantum controllers that enable interfacing with the qubits. Scaling these systems to support a growing number of qubits is an additional challenge.[102]
Decoherence
One of the greatest challenges involved with constructing quantum computers is controlling or removing quantum decoherence. This usually means isolating the system from its environment, as interactions with the external world cause the system to decohere. However, other sources of decoherence also exist. Examples include the quantum gates, and the lattice vibrations and background thermonuclear spin of the physical system used to implement the qubits. Decoherence is irreversible, as it is effectively non-unitary, and is usually something that should be highly controlled, if not avoided. Decoherence times for candidate systems (in particular, the transverse relaxation time T2, which for NMR and MRI technology is also called the dephasing time) typically range between nanoseconds and seconds at low temperature.[103] Currently, some quantum computers require their qubits to be cooled to 20 millikelvin (usually using a dilution refrigerator[104]) in order to prevent significant decoherence.[105] A 2020 study argues that ionizing radiation such as cosmic rays can nevertheless cause certain systems to decohere within milliseconds.[106]
As a result, time-consuming tasks may render some quantum algorithms inoperable, as attempting to maintain the state of qubits for a long enough duration will eventually corrupt the superpositions.[107]
These issues are more difficult for optical approaches as the timescales are orders of magnitude shorter and an often-cited approach to overcoming them is optical pulse shaping. Error rates are typically proportional to the ratio of operating time to decoherence time, hence any operation must be completed much more quickly than the decoherence time.
As described by the threshold theorem, if the error rate is small enough, it is thought to be possible to use quantum error correction to suppress errors and decoherence. This allows the total calculation time to be longer than the decoherence time if the error correction scheme can correct errors faster than decoherence introduces them. An often-cited figure for the required error rate in each gate for fault-tolerant computation is 10^-3, assuming the noise is depolarizing.
Meeting this scalability condition is possible for a wide range of systems. However, the use of error correction brings with it the cost of a greatly increased number of required qubits. The number required to factor integers using Shor's algorithm is still polynomial, and thought to be between L and L^2, where L is the number of digits in the number to be factored; error correction algorithms would inflate this figure by an additional factor of L. For a 1000-bit number, this implies a need for about 10^4 bits without error correction.[108] With error correction, the figure would rise to about 10^7 bits. Computation time is about L^2, or about 10^7 steps, which at 1 MHz is about 10 seconds. However, the encoding and error-correction overheads increase the size of a real fault-tolerant quantum computer by several orders of magnitude. Careful estimates[109][110] show that at least 3 million physical qubits would factor a 2,048-bit integer in 5 months on a fully error-corrected trapped-ion quantum computer. In terms of the number of physical qubits, this remains, to date, the lowest estimate[111] for a practically useful integer factorization problem of 1,024 bits or larger.
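The quoted figures follow from simple arithmetic; a back-of-the-envelope sketch (the constant factors here are assumptions chosen only to reproduce the rounded numbers above, not careful estimates):

```python
# Back-of-the-envelope reproduction of the figures above.
L = 1000                       # bits in the number to be factored
qubits_plain = 10 * L          # ~10^4 qubits without error correction
qubits_ec = qubits_plain * L   # extra factor of ~L: ~10^7 with error correction
steps = 10 * L**2              # ~L^2 scaling: ~10^7 gate steps
print(qubits_plain, qubits_ec, steps, steps / 1e6, "seconds at 1 MHz")
```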
Another approach to the stability-decoherence problem is to create a topological quantum computer with anyons, quasi-particles used as threads, and relying on braid theory to form stable logic gates.[112][113]
Quantum supremacy
Physicist John Preskill coined the term quantum supremacy to describe the engineering feat of demonstrating that a programmable quantum device can solve a problem beyond the capabilities of state-of-the-art classical computers.[114][115][116] The problem need not be useful, so some view the quantum supremacy test only as a potential future benchmark.[117]
In October 2019, Google AI Quantum, with the help of NASA, became the first to claim to have achieved quantum supremacy by performing calculations on the Sycamore quantum computer more than 3,000,000 times faster than they could be done on Summit, generally considered the world's fastest computer.[28][118][119] This claim has been subsequently challenged: IBM has stated that Summit can perform samples much faster than claimed,[120][121] and researchers have since developed better algorithms for the sampling problem used to claim quantum supremacy, giving substantial reductions to the gap between Sycamore and classical supercomputers[122][123][124] and even beating it.[125][126][127]
In December 2020, a group at USTC implemented a type of Boson sampling on 76 photons with a photonic quantum computer, Jiuzhang, to demonstrate quantum supremacy.[128][129][130] The authors claim that a classical contemporary supercomputer would require a computational time of 600 million years to generate the number of samples their quantum processor can generate in 20 seconds.[131]
Claims of quantum supremacy have generated hype around quantum computing,[132] but they are based on contrived benchmark tasks that do not directly imply useful real-world applications.[97][133]
In January 2024, a study published in Physical Review Letters provided direct verification of quantum supremacy experiments by computing exact amplitudes for experimentally generated bitstrings using a new-generation Sunway supercomputer, demonstrating a significant leap in simulation capability built on a multiple-amplitude tensor network contraction algorithm. This development underscores the evolving landscape of quantum computing, highlighting both the progress and the complexities involved in validating quantum supremacy claims.[134]
Skepticism
Despite high hopes for quantum computing, significant progress in hardware, and optimism about future applications, a 2023 Nature spotlight article summarised current quantum computers as being "For now, [good for] absolutely nothing".[97] The article elaborated that quantum computers are yet to be more useful or efficient than conventional computers in any case, though it also argued that in the long term such computers are likely to be useful. A 2023 Communications of the ACM article[98] found that current quantum computing algorithms are "insufficient for practical quantum advantage without significant improvements across the software/hardware stack". It argues that the most promising candidates for achieving speedup with quantum computers are "small-data problems", for example in chemistry and materials science. However, the article also concludes that a large range of the potential applications it considered, such as machine learning, "will not achieve quantum advantage with current quantum algorithms in the foreseeable future", and it identified I/O constraints that make speedup unlikely for "big data problems, unstructured linear systems, and database search based on Grover's algorithm".
This state of affairs can be traced to several current and long-term considerations.
Conventional computer hardware and algorithms are not only optimized for practical tasks, but are still improving rapidly, particularly GPU accelerators.
Current quantum computing hardware generates only a limited amount of entanglement before getting overwhelmed by noise.
Quantum algorithms provide speedup over conventional algorithms only for some tasks, and matching these tasks with practical applications proved challenging. Some promising tasks and applications require resources far beyond those available today.[135][136] In particular, processing large amounts of non-quantum data is a challenge for quantum computers.[98]
Some promising algorithms have been "dequantized", i.e., their non-quantum analogues with similar complexity have been found.
If quantum error correction is used to scale quantum computers to practical applications, its overhead may undermine speedup offered by many quantum algorithms.[98]
Complexity analysis of algorithms sometimes makes abstract assumptions that do not hold in applications. For example, input data may not already be available encoded in quantum states, and "oracle functions" used in Grover's algorithm often have internal structure that can be exploited for faster algorithms.
In particular, building computers with large numbers of qubits may be futile if those qubits are not connected well enough and cannot maintain a sufficiently high degree of entanglement for a long time. When trying to outperform conventional computers, quantum computing researchers often look for new tasks that can be solved on quantum computers, but this leaves open the possibility that efficient non-quantum techniques will be developed in response, as has happened with quantum supremacy demonstrations. It is therefore desirable to prove lower bounds on the complexity of the best possible non-quantum algorithms (which may be unknown) and to show that some quantum algorithms asymptotically improve upon those bounds.
Some researchers have expressed skepticism that scalable quantum computers could ever be built, typically because of the issue of maintaining coherence at large scales, but also for other reasons.
Bill Unruh doubted the practicality of quantum computers in a paper published in 1994.[137] Paul Davies argued that a 400-qubit computer would even come into conflict with the cosmological information bound implied by the holographic principle.[138] Skeptics like Gil Kalai doubt that quantum supremacy will ever be achieved.[139][140][141] Physicist Mikhail Dyakonov has expressed skepticism of quantum computing as follows:
"So the number of continuous parameters describing the state of such a useful quantum computer at any given moment must be... about 10300... Could we ever learn to control the more than 10300 continuously variable parameters defining the quantum state of such a system? My answer is simple. No, never."[142][143]
A practical quantum computer must use a physical system as a programmable quantum register.[144] Researchers are exploring several technologies as candidates for reliable qubit implementations. Superconductors and trapped ions are some of the most developed proposals, but experimentalists are considering other hardware possibilities as well.
Theory
Computability
Further information: Computability theory
Any computational problem solvable by a classical computer is also solvable by a quantum computer. Intuitively, this is because it is believed that all physical phenomena, including the operation of classical computers, can be described using quantum mechanics, which underlies the operation of quantum computers.
Conversely, any problem solvable by a quantum computer is also solvable by a classical computer. It is possible to simulate both quantum and classical computers manually with just some paper and a pen, if given enough time. More formally, any quantum computer can be simulated by a Turing machine. In other words, quantum computers provide no additional power over classical computers in terms of computability. This means that quantum computers cannot solve undecidable problems like the halting problem, and the existence of quantum computers does not disprove the Church–Turing thesis.
Complexity
Main article: Quantum complexity theory
While quantum computers cannot solve any problems that classical computers cannot already solve, it is suspected that they can solve certain problems faster than classical computers. For instance, it is known that quantum computers can efficiently factor integers, while this is not believed to be the case for classical computers.
The class of problems that can be efficiently solved by a quantum computer with bounded error is called BQP, for "bounded error, quantum, polynomial time". More formally, BQP is the class of problems that can be solved by a polynomial-time quantum Turing machine with an error probability of at most 1/3. As a class of probabilistic problems, BQP is the quantum counterpart to BPP ("bounded error, probabilistic, polynomial time"), the class of problems that can be solved by polynomial-time probabilistic Turing machines with bounded error. It is known that BPP ⊆ BQP, and it is widely suspected that BPP ⊊ BQP, which intuitively would mean that quantum computers are more powerful than classical computers in terms of time complexity.
The exact relationship of BQP to P, NP, and PSPACE is not known. However, it is known that P ⊆ BQP ⊆ PSPACE; that is, all problems that can be efficiently solved by a deterministic classical computer can also be efficiently solved by a quantum computer, and all problems that can be efficiently solved by a quantum computer can also be solved by a deterministic classical computer with polynomial space resources. It is further suspected that BQP is a strict superset of P, meaning there are problems that are efficiently solvable by quantum computers that are not efficiently solvable by deterministic classical computers. For instance, integer factorization and the discrete logarithm problem are known to be in BQP and are suspected to be outside of P. On the relationship of BQP to NP, little is known beyond the fact that some NP problems that are believed not to be in P are also in BQP (integer factorization and the discrete logarithm problem are both in NP, for example). It is suspected that NP ⊈ BQP; that is, it is believed that there are efficiently checkable problems that are not efficiently solvable by a quantum computer. As a direct consequence of this belief, it is also suspected that BQP is disjoint from the class of NP-complete problems (if an NP-complete problem were in BQP, then it would follow from NP-hardness that all problems in NP are in BQP).[151]
See also
D-Wave Systems – Canadian quantum computing company
Electronic quantum holography
Glossary of quantum computing
IARPA – American government agency
IonQ – US information technology company
List of emerging technologies – New technologies actively in development
List of quantum processors – List of quantum computer components
Magic state distillation – Quantum computing algorithm
Natural computing – terminology introduced to encompass three classes of methods
Optical computing – Computer that uses photons or light waves
Quantum bus – device which can be used to store or transfer information between independent qubits in a quantum computer
Quantum cognition – application of quantum mechanics to cognitive phenomena
Quantum volume – Metric for a quantum computer's capabilities
Quantum weirdness – Unintuitive aspects of quantum mechanics
Rigetti Computing – American quantum computing company
Supercomputer – Type of extremely powerful computer
Theoretical computer science – Subfield of computer science and mathematics
Unconventional computing – Computing by new or unusual methods
Valleytronics – Experimental area in semiconductors
Notes
References
Sources
Aaronson, Scott (2013). Quantum Computing Since Democritus. Cambridge University Press. doi:10.1017/CBO9780511979309. ISBN 978-0-521-19956-8. OCLC 829706638.
Grumbling, Emily; Horowitz, Mark, eds. (2019). Quantum Computing: Progress and Prospects. Washington, DC: The National Academies Press. doi:10.17226/25196. ISBN 978-0-309-47970-7. OCLC 1091904777. S2CID 125635007.
Mermin, N. David (2007). Quantum Computer Science: An Introduction. doi:10.1017/CBO9780511813870. ISBN 978-0-511-34258-5. OCLC 422727925.
Nielsen, Michael; Chuang, Isaac (2010). Quantum Computation and Quantum Information (10th anniversary ed.). doi:10.1017/CBO9780511976667. ISBN 978-0-511-99277-3. OCLC 700706156. S2CID 59717455.
Shor, Peter W. (1994). Algorithms for Quantum Computation: Discrete Logarithms and Factoring. Symposium on Foundations of Computer Science. Santa Fe, New Mexico: IEEE. pp. 124–134. doi:10.1109/SFCS.1994.365700. ISBN 978-0-8186-6580-6.
Further reading
Media related to Quantum computer at Wikimedia Commons
Learning materials related to Quantum computing at Wikiversity
Stanford Encyclopedia of Philosophy: "Quantum Computing" by Amit Hagar and Michael E. Cuffaro.
"Quantum computation, theory of", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Quantum computing for the very curious by Andy Matuschak and Michael Nielsen
Lectures
2453 | dbpedia | 0 | 11 | https://scottaaronson.blog/%3Fp%3D474 | en | Optimized | The Blog of Scott Aaronson | Shtetl-Optimized | https://scottaaronson.blog/

Q1: Who will you be voting for in November?
A: Kamala Harris (and mainstream Democrats all down the ballot), of course.
Q2: Of course?
A: If the alternative is Trump, I would’ve voted for Biden’s rotting corpse. Or for Hunter Biden. Or for…
Q3: Why can’t you see this is just your Trump Derangement Syndrome talking?
A: Look, my basic moral commitments remain pretty much as they’ve been since childhood. Namely, that I’m on the side of reason, Enlightenment, scientific and technological progress, secular government, pragmatism, democracy, individual liberty, justice, intellectual honesty, an American-led peaceful world order, preservation of the natural world, mitigation of existential risks, and human flourishing. (Crazy and radical, I know.)
Only when choosing between candidates who all espouse such values, do I even get the luxury of judging them on any lower-order bits. Sadly, I don’t have that luxury today. Trump’s values, such as they are, would seem to be “America First,” protectionism, vengeance, humiliation of enemies, winning at all costs, authoritarianism, the veneration of foreign autocrats, and the veneration of himself. No amount of squinting can ever reconcile those with the values I listed before.
Q4: Is that all that’s wrong with him?
A: No, there are also the lies, and worst of all the “Big Lie.” Trump is the first president in US history to incite a mob to try to overturn the results of an election. He was serious! He very nearly succeeded, and probably would have, had Mike Pence been someone else. It’s now inarguable that Trump rejects the basic rules of our system, or “accepts” them only when he wins. We’re numb from having heard it so many times, but it’s a big deal, as big a deal as the Civil War was.
Q5: Oh, so this is about your precious “democracy.” Why do you care? Haven’t you of all people learned that the masses are mostly idiots and bullies, who don’t deserve power? As Curtis Yarvin keeps trying to explain to you, instead of “democracy,” you should want a benevolent king or dictator-CEO, who could offer a privileged position to the competent scientists like yourself.
A: Yeah, so how many examples does history furnish where that worked out well? I suppose you might make a partial case for Napoleon, or Ataturk? More to the point: even if benevolent, science-and-reason-loving authoritarian strongmen are possible in theory, do you really expect me to believe that Trump could be one of them? I still love how Scott Alexander put it in 2016:
Can anyone honestly say that Trump or his movement promote epistemic virtue? That in the long-term, we’ll be glad that we encouraged this sort of thing, that we gave it power and attention and all the nutrients it needed to grow? That the road to whatever vision of a just and rational society we imagine, something quiet and austere with a lot of old-growth trees and Greek-looking columns, runs through LOCK HER UP?
I don’t like having to vote for the lesser of two evils. But at least I feel like I know who it is.
Q6: But what about J. D. Vance? He got his start in Silicon Valley, was championed by Peter Thiel, and is obviously highly intelligent. Doesn’t he seem like someone who might listen to and empower tech nerds like yourself?
A: Who can say what J. D. Vance believes? Here are a few choice quotes of his from eight years ago:
I’m obviously outraged at Trump’s rhetoric, and I worry most of all about how welcome Muslim citizens feel in their own country. But I also think that people have always believed crazy shit (I remember a poll from a few years back suggesting that a near majority of democratic voters blame ‘the Jews’ for the financial crisis). And there have always been demagogues willing to exploit the people who believe crazy shit.
The more white people feel like voting for trump, the more black people will suffer. I really believe that.
[Trump is] just a bad man. A morally reprehensible human being.
To get from that to being Trump’s running mate is a Simone-Biles-like feat of moral acrobatics. Vance reminds me of the famous saying by L. Ron Hubbard from his pre-Dianetics days: “If a man really wants to make a million dollars, the best way would be to start his own religion.” (And I feel like Harris’s whole campaign strategy should just be to replay Vance’s earlier musings in wall-to-wall ads while emphasizing her agreement with them.) No, Vance is not someone I trust to share my values, if he has values at all.
Q7: What about the other side’s values, or lack thereof? I mean, don’t you care that the whole Democratic establishment—including Harris—colluded to cover up that Biden was senile and cognitively unfit to be president now, let alone for another term?
A: Look, we’ve all seen what happens as a relative gets old. It’s gradual. It’s hard for anyone to say at which specific moment they can no longer drive a car, or be President of the United States, or whatever. This means that I don’t necessarily read evil intent into the attempts to cover up Biden’s decline—merely an epic, catastrophic failure of foresight. That failure of foresight itself would’ve been a huge deal in normal circumstances, but these are not normal circumstances—not if you believe, as I do, that the alternative is the beginning of the end of a 250-year-old democratic experiment.
Q8: Oh stop being so melodramatic. What terrible thing happened to you because of Trump’s first term? Did you lose your job? Did fascist goons rough you up in the street?
A: Well, my Iranian PhD student came close to having his visa revoked, and it became all but impossible to recruit PhD students from China. That sucked, since I care about my students’ welfare like I care about my own. Also, the downfall of Roe v. Wade, which enabled Texas’ draconian new abortion laws, made it much harder for us to recruit faculty at UT Austin. But I doubt any of that will impress you. “Go recruit American students,” you’ll say. “Go recruit conservative faculty who are fine with abortion being banned.”
The real issue is that Trump was severely restrained in his first term, by being surrounded by people who (even if, in many cases, they started out loyal to him) were also somewhat sane and valued the survival of the Republic. Alas, he learned from that, and he won’t repeat that mistake the next time.
Q9: Why do you care so much about Trump’s lies? Don’t you realize that all politicians lie?
A: Yes, but there are importantly different kinds of lies. There are white lies. There are scheming, 20-dimensional Machiavellian lies, like a secret agent’s cover story (or is that only in fiction?). There are the farcical, desperate, ever-shifting lies of the murderer to the police detective or the cheating undergrad to the professor. And then there are the lies of bullies and mob bosses and populist autocrats, which are special and worse.
These last, call them power-lies, are distinguished by the fact that they aren’t even helped by plausibility. Often, as with conspiracy theories (which strongly overlap with power-lies), the more absurd the better. Obama was born in Kenya. Trump’s crowd was the biggest in history. The 2020 election was stolen by a shadowy conspiracy involving George Soros and Dominion and Venezuela.
The central goal of a power-lie is just to demonstrate your power to coerce others into repeating it, much like with the Party making Winston Smith affirm 2+2=5, or Petruchio making Katharina call the sun the moon in The Taming of the Shrew. A closely-related goal is as a loyalty test for your own retinue.
It’s Trump’s embrace of the power-lie that puts him beyond the pale for me.
Q10: But Scott, we haven’t even played our “Trump” card yet. Starting on October 7, 2023, did you not witness thousands of your supposed allies, the educated secular progressives on “the right side of history,” cheer the sadistic mass-murder of Jews—or at least, make endless excuses for those who did? Did this not destabilize your entire worldview? Will you actually vote for a party half of which seems at peace with the prospect of your family members’ physical annihilation? Or will you finally see who your real friends now are: Arkansas MAGA hillbillies who pray for your people’s survival?
A: Ah, this is your first slash that’s actually drawn blood. I won’t pretend that the takeover of part of the US progressive coalition by literal Hamasniks hasn’t been one of the most terrifying experiences of my life. Yes, if I had to be ruled by either (a) a corrupt authoritarian demagogue or (b) an idiot college student chanting for “Intifada Revolution,” I’d be paralyzed. So it’s lucky that I don’t face that choice! I get to vote, once more, for a rather boring mainstream Democrat—alongside at least 70% of American Jews. The idea of Harris as an antisemite would be ludicrous even if she didn’t have a Jewish husband or wasn’t strongly considering a pro-Israel Jew as her running mate.
Q11: Sure, Kamala Harris might mouth all the right platitudes about Israel having a right to defend itself, but she’ll constantly pressure Israel to make concessions to Hamas and Hezbollah. She’ll turn a blind eye to Iran’s imminent nuclearization. Why don’t you stay up at night worrying that, if you vote for a useful idiot like her, you’ll have Israel’s annihilation and a second Holocaust on your conscience forever?
A: Look, oftentimes—whenever, for example, I’m spending hours reading anti-Zionists on Twitter—I feel like there’s no limit to how intensely Zionist I am. On reflection, though, there is a limit. Namely, I’m not going to be more Zionist than the vast majority of my Israeli friends and colleagues—the ones who served in the IDF, who in some cases did reserve duty in Gaza, who prop up the Israeli economy with their taxes, and who will face the consequences of whatever happens more directly than I will. With few exceptions, these friends despise the Trump/Bibi alliance with white-hot rage, and they desperately want more moderate leadership in both countries.
Q12: Suppose I concede that Kamala is OK on Israel. We both know that she’s not the future of the Democratic Party, any more than Biden is. The future is what we all saw on campuses this spring. “Houthis Houthis make us proud, turn another ship around.” How can you vote for a party whose rising generation seems to want you and your family dead?
A: Let me ask you something. When Trump won in 2016, did that check the power of the campus radicals? Or as Scott Alexander prophesied at the time, did it energize and embolden them like nothing else, by dramatically confirming their theology of a planet held hostage by the bullying, misogynistic rich white males? I fundamentally reject your premise that, if I’m terrified of crazy left-wing extremists, then a good response is to vote for the craziest right-wing extremists I can find, in hopes that the two will somehow cancel each other out. Instead I should support a coherent Enlightenment alternative to radicalism, or the closest thing to that available.
Q13: Even leaving aside Israel, how can you not be terrified by what the Left has become? Which side denounced you on social media a decade ago, as a misogynist monster who wanted all women to be his sex slaves? Which side tried to ruin your life and career? Did we, the online rightists, do that? No. We did not. We did nothing worse to you than bemusedly tell you to man up, grow a pair, and stop pleading for sympathy from feminists who will hate you no matter what.
A: I’ll answer with a little digression. Back in 2017, when Kamala Harris was in the Senate, her office invited me to DC to meet with them to provide advice about the National Quantum Initiative Act, which Kamala was then spearheading. Kamala herself sent regrets that she couldn’t meet me, because she had to be at the Kavanaugh hearings. I have (nerdy, male) friends who did meet her about tech policy and came away with positive impressions.
And, I dunno, does that sound like someone who wants me dead for the crime of having been born a nerdy heterosexual male? Or having awkwardly and ineptly asked women on dates, including the one who became my wife? OK, maybe Amanda Marcotte wants me dead for those crimes. Maybe Arthur Chu does (is he still around?). Good that they’re not running for president then.
Q14: Let me try one more time to show you how much your own party hates you. Which side has been at constant war against the SAT and other standardized tests, and merit-based college admissions, and gifted programs, and academic tracking and acceleration, and STEM magnet schools, and every single other measure by which future young Scott Aaronsons (and Saket Agrawals) might achieve their dreams in life? Has that been our side, or theirs?
A: To be honest, I haven’t seen the Trump or Harris campaigns take any position on any of these issues. Even if they did, there’s very little that the federal government can do: these battles happen in individual states and cities and counties and universities. So I’ll vote for Harris while continuing to advocate for what I think is right in education policy.
Q15: Can you not see that Kamala Harris is a vapid, power-seeking bureaucratic machine—that she has no fixed principles at all? For godsakes, she all but condemned Biden as a racist in the 2020 primary, then agreed to serve as his running mate!
A: I mean, she surely has more principles than Vance does. As far as I can tell, for example, she’s genuinely for abortion rights (as I am). Even if she believed in nothing, though, better a cardboard cutout on which values I recognize are written, than a flesh-and-blood person shouting values that horrify me.
Q16: What, if anything, could Republicans do to get you to vote for them?
A: Reject all nutty conspiracy theories. Fully, 100% commit to the peaceful transfer of power. Acknowledge the empirical reality of human-caused climate change, and the need for both technological and legislative measures to slow it and mitigate its impacts. Support abortion rights, or at least a European-style compromise on abortion. Republicans can keep the anti-wokeness stuff, which actually seems to have become their defining issue. If they do all that, and also the Democrats are taken over by frothing radicals who want to annihilate the state of Israel and abolish the police … that’s, uh, probably the point when I start voting Republican.
Q17: Aha, so you now admit that there exist conceivable circumstances that would cause you to vote Republican! In that case, why did you style yourself “Never-Trump From Here to Eternity”?
A: Tell you what, the day the Republicans (and Trump himself?) repudiate authoritarianism and start respecting election outcomes, is the day I’ll admit my title was hyperbolic.
Q18: In the meantime, will you at least treat us Trump supporters with civility and respect?
A: Not only does civil disagreement not compromise any of my values, it is a value to which I think we should all aspire. And to whatever extent I’ve fallen short of that ideal—even when baited into it—I’m sorry and I’ll try to do better. Certainly, age and experience have taught me that there’s hardly anyone so far gone that I can’t find something on which I agree with them, while disagreeing with most of the rest of the world.
The news these days feels apocalyptic to me—as if we’re living through, if not the last days of humanity, then surely the last days of liberal democracy on earth.
All the more reason to ignore all of that, then, and blog instead about the notorious Busy Beaver function! Because holy moly, what news have I got today. For lovers of this super-rapidly-growing sequence of integers, I'm honored to announce the biggest Busy Beaver development that there's been since 1983, when I slept in a crib and you booted up your computer using a 5.25-inch floppy. That was the year when Allen Brady determined that BusyBeaver(4) was equal to 107. (Tibor Radó, who invented the Busy Beaver function in 1962, quickly proved with his student Shen Lin that the first three values were 1, 6, and 21 respectively. The fourth value was harder.)
Only now, after an additional 41 years, do we know the fifth Busy Beaver value. Today, an international collaboration called bbchallenge is announcing that it’s determined, and even formally verified using the Coq proof system, that BB(5) is equal to 47,176,870—the value that’s been conjectured since 1990, when Heiner Marxen and Jürgen Buntrock discovered a 5-state Turing machine that runs for exactly 47,176,870 steps before halting, when started on a blank tape. The new bbchallenge achievement is to prove that all 5-state Turing machines that run for more steps than 47,176,870, actually run forever—or in other words, that 47,176,870 is the maximum finite number of steps for which any 5-state Turing machine can run. That’s what it means for BB(5) to equal 47,176,870.
For more on this story, see Ben Brubaker’s superb article in Quanta magazine, or bbchallenge’s own announcement. For more background on the Busy Beaver function, see my 2020 survey, or my 2017 big numbers lecture, or my 1999 big numbers essay, or the Googology Wiki page, or Pascal Michel’s survey.
The difficulty in pinning down BB(5) was not just that there are a lot of 5-state Turing machines (16,679,880,978,201 of them to be precise, although symmetries reduce the effective number). The real difficulty is, how do you prove that some given machine runs forever? If a Turing machine halts, you can prove that by simply running it on your laptop until it halts (at least if it halts after a “mere” ~47 million steps, which is child's play). If, on the other hand, the machine runs forever, via some never-repeating infinite pattern rather than a simple infinite loop, then how do you prove that? You need to find a mathematical reason why it can't halt, and there's no systematic method for finding such reasons—that was the great discovery of Gödel and Turing nearly a century ago.
More precisely, the Busy Beaver function grows faster than any function that can be computed, and we know that because if a systematic method existed to compute arbitrary BB(n) values, then we could use that method to determine whether a given Turing machine halts (if the machine has n states, just check whether it runs for more than BB(n) steps; if it does, it must run forever). This is the famous halting problem, which Turing proved to be unsolvable by finite means. The Busy Beaver function is Turing-uncomputability made flesh, a finite function that scrapes the edge of infinity.
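To make that reduction concrete, here is a minimal Python sketch; to be clear, bb() is a hypothetical oracle for Busy Beaver values and run_for() a hypothetical step-limited simulator, neither of which is a real library call:

```python
def halts(machine, n_states, bb, run_for):
    """Hypothetical halting decider, assuming an oracle bb(n) for the
    Busy Beaver function and a simulator run_for(machine, t) that returns
    True iff `machine` halts within t steps on a blank tape.

    If the machine is still running after bb(n_states) steps, then by the
    very definition of BB it can never halt -- so this one call would
    decide the halting problem, which Turing proved impossible. Hence no
    computable bb() can exist.
    """
    return run_for(machine, bb(n_states))
```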
There’s also a more prosaic issue. Proofs that particular Turing machines run forever tend to be mind-numbingly tedious. Even supposing you’ve found such a “proof,” why should other people trust it, if they don’t want to spend days staring at the outputs of your custom-written software?
And so for decades, a few hobbyists picked away at the BB(5) problem. One, who goes by the handle “Skelet”, managed to reduce the problem to 43 holdout machines whose halting status was still undetermined. Or maybe only 25, depending on who you asked? (And were we really sure about the machines outside those 43?)
The bbchallenge collaboration improved on the situation in two ways. First, it demanded that every proof of non-halting be vetted carefully; going beyond even that mandate, a participant named “mxdys” later upped the standard to fully machine-verifiable Coq certificates for every non-halting machine, so that there could no longer be any serious question of correctness. (This, in turn, was done via “deciders,” programs crafted to recognize a specific type of parameterized non-halting behavior.) Second, the collaboration used an online forum and a Discord server to organize the effort, so that everyone knew what had been done and what remained to be done.
Despite this, it was far from obvious a priori that the collaboration would succeed. What if, for example, one of the 43 (or however many) Turing machines in the holdout set turned out to encode the Goldbach Conjecture, or one of the other great unsolved problems of number theory? Then the final determination of BB(5) would need to await the resolution of that problem. (We do know, incidentally, that there’s a 27-state Turing machine that encodes Goldbach.)
But apparently the collaboration got lucky. Coq proofs of non-halting were eventually found for all the 5-state holdout machines.
As a sad sidenote, Allen Brady, who determined the value of BB(4), apparently died just a few days before the BB(5) proof was complete. He was doubtful that BB(5) would ever be known. The reason, he wrote in 1988, was that “Nature has probably embedded among the five-state holdout machines one or more problems as illusive as the Goldbach Conjecture. Or, in other terms, there will likely be nonstopping recursive patterns which are beyond our powers of recognition.”
Maybe I should say a little at this point about what the 5-state Busy Beaver—i.e., the Marxen-Buntrock Turing machine that we now know to be the champion—actually does. Interpreted in English, the machine iterates a certain integer function g, which is defined by
g(x) = (5x+18)/3 if x = 0 (mod 3),
g(x) = (5x+22)/3 if x = 1 (mod 3),
g(x) = HALT if x = 2 (mod 3).
Starting from x=0, the machine computes g(0), g(g(0)), g(g(g(0))), and so forth, halting if and when it ever reaches … well, HALT. The machine runs for about 47 million steps because it so happens that this iteration eventually reaches HALT, but only after a while:
0 → 6 → 16 → 34 → 64 → 114 → 196 → 334 → 564 → 946 → 1584 → 2646 → 4416 → 7366 → 12284 → HALT.
(And also, at each iteration, the machine runs for a number of steps that grows like the square of the number x.)
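Here is a short Python sketch of that iteration (the name g is from the text; the rest of the scaffolding is mine). It reproduces the trajectory above instantly, whereas the champion machine grinds through roughly 47 million Turing-machine steps to do the same:

```python
def g(x):
    """The Collatz-like map iterated by the 5-state Busy Beaver champion."""
    if x % 3 == 0:
        return (5 * x + 18) // 3
    if x % 3 == 1:
        return (5 * x + 22) // 3
    return None  # x = 2 (mod 3) means HALT

x, trajectory = 0, [0]
while (x := g(x)) is not None:
    trajectory.append(x)

print(" -> ".join(map(str, trajectory)) + " -> HALT")
# 0 -> 6 -> 16 -> 34 -> 64 -> 114 -> 196 -> 334 -> 564 -> 946
#   -> 1584 -> 2646 -> 4416 -> 7366 -> 12284 -> HALT
```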
Some readers might be reminded of the Collatz Conjecture, the famous unsolved problem about whether, if you repeatedly replace a positive integer x by x/2 if x is even or 3x+1 if x is odd, you’ll always eventually reach x=1. As Scott Alexander would say, this is not a coincidence because nothing is ever a coincidence. (Especially not in math!)
It’s a fair question whether humans will ever know the value of BB(6). Pavel Kropitz discovered, a couple of years ago, that BB(6) is at least 10^10^10^10^10^10^10^10^10^10^10^10^10^10^10 (i.e., a tower of fifteen 10s). Obviously Kropitz didn’t actually run a 6-state Turing machine for that number of steps until halting! Instead he understood what the machine did—and it turned out to apply an iterative process similar to the g function above, but this time involving an exponential function. And the process could be proven to halt after ~15 rounds of exponentiation.
Meanwhile Tristan Stérin, who coordinated the bbchallenge effort, tells me that a 6-state machine was recently discovered that “iterates the Collatz-like map {3x/2, (3x-1)/2} from the number 8 and halts if and only if the number of odd terms ever gets bigger than twice the number of even terms.” This shows that, in order to determine the value of BB(6), one would first need to prove or disprove the Collatz-like conjecture that that never happens.
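Here is a sketch of that iteration, under my reading of the quoted description (assumptions flagged: the iterates themselves are what get counted as odd or even, and the 10,000-step cap is mine, just to probe a finite prefix of the conjecturally never-halting orbit):

```python
def step(x):
    """One application of the Collatz-like map {3x/2, (3x-1)/2}."""
    return 3 * x // 2 if x % 2 == 0 else (3 * x - 1) // 2

x, odd, even = 8, 0, 0
for _ in range(10_000):        # the conjecture: the break never fires
    odd += x % 2
    even += 1 - x % 2
    if odd > 2 * even:         # the 6-state machine's halting condition
        print("halts!")
        break
    x = step(x)
else:
    print(f"no halt in 10,000 steps (odd={odd}, even={even})")
```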
Basically, if and when artificial superintelligences take over the world, they can worry about the value of BB(6). And then God can worry about the value of BB(7).
I first learned about the BB function in 1996, when I was 15 years old, from a book called The New Turing Omnibus by A. K. Dewdney. From what I gather, Dewdney would go on to become a nutty 9/11 truther. But that’s irrelevant to the story. What matters was that his book provided my first exposure to many of the key concepts of computer science, and probably played a role in my becoming a theoretical computer scientist at all.
And of all the concepts in Dewdney’s book, the one I liked the most was the Busy Beaver function. What a simple function! You could easily explain its definition to Archimedes, or Gauss, or any of the other great mathematicians of the past. And yet, by using it, you could name definite positive integers (BB(10), for example) incomprehensibly larger than any that they could name.
It was from Dewdney that I learned that the first four Busy Beaver numbers were the unthreatening-looking 1, 6, 21, and 107 … but then that the fifth value was already unknown (!!), and at any rate at least 47,176,870. I clearly remember wondering whether BB(5) would ever be known for certain, and even whether I might be the one to determine it. That was almost two-thirds of my life ago.
As things developed, I played no role whatsoever in the determination of BB(5) … except for this. Tristan Stérin tells me that reading my survey article, The Busy Beaver Frontier, was what inspired him to start and lead the bbchallenge collaboration that finally cracked the problem. It’s hard to express how gratified that makes me.
Why care about determining particular values of the Busy Beaver function? Isn’t this just a recreational programming exercise, analogous to code golf, rather than serious mathematical research?
I like to answer that question with another question: why care about humans landing on the moon, or Mars? Those otherwise somewhat arbitrary goals, you might say, serve as a hard-to-fake gauge of human progress against the vastness of the cosmos. In the same way, the quest to determine the Busy Beaver numbers is one concrete measure of human progress against the vastness of the arithmetical cosmos, a vastness that we learned from Gödel and Turing won’t succumb to any fixed procedure. The Busy Beaver numbers are just … there, Platonically, as surely as 13 was prime long before the first caveman tried to arrange 13 rocks into a nontrivial rectangle and failed. And yet we might never know the sixth of these numbers and only today learned the fifth.
Anyway, huge congratulations to the bbchallenge team on their accomplishment. At a terrifying time for the world, I’m happy that, whatever happens, at least I lived to see this.
Dana, the kids, and I got back to the US last week after a month spent in England and then Israel. We decided to visit Israel because … uhh, we heard there’s never been a better time.
We normally go every year to visit Dana’s family and our many friends there, and to give talks. Various well-meaning friends suggested that maybe we should cancel or postpone this year—given, you know, the situation. To me, though, the situation felt like all the more reason to go. To make Israel seem more and more embattled, dangerous, isolated, abnormal, like not an acceptable place to visit (much less live), in order to crater its economy, demoralize its population, and ultimately wipe it from the face of the earth … that is explicitly much of the world’s game plan right now, laid out with shocking honesty since October 7 (a day that also showed us what the “decolonization” will, concretely, look like). So, if I oppose this plan, then how could I look myself in the mirror while playing my tiny part in it? Shouldn’t I instead raise a middle finger to those who’d murder my family, and go?
Besides supporting our friends and relatives, though, I wanted to see the post-October-7 reality for myself, rather than just spending hours per day reading about it on social media. I wanted to form my own impression of the mood in Israel: fiercely determined? angry? hopeless? just carrying on like normal?
Anyway, in two meeting-packed weeks, mostly in Tel Aviv but also in Jerusalem, Haifa, and Be’er Sheva, I saw stuff that could support any of those narratives. A lot was as I’d expected, but not everything. In the rest of this post, I’ll share eleven observations:
(1) This presumably won’t shock anyone, but in post-October-7 Israel, you indeed can’t escape October 7. Everywhere you look, on every building, in every lobby, hanging from every highway overpass, there are hostage posters and “Bring Them Home Now” signs and yellow ribbons—starting at the airport, where every single passenger is routed through a long corridor of hostage posters, each one signed and decorated by the hostage’s friends and family. It sometimes felt as though Yad Vashem had expanded to encompass the entire country. Virtually everyone we talked to wanted to share their stories and opinions about the war, most of all their depression and anger. While there was also plenty of discussion about quantum error mitigation and watermarking of large language models and local family events, no one even pretended to ignore the war.
(2) Having said that, the morning after we landed, truthfully, the first thing that leapt out at me wasn’t anything to do with October 7, hostages, or Gaza. It was the sheer number of children playing outside, in any direction you looked. Full, noisy playgrounds on block after block. It’s one thing to know intellectually that Israel has by far the highest birthrate of any Western country, another to see it for yourself. The typical secular family probably has three kids; the typical Orthodox family has more. (The Arab population is of course also growing rapidly, both in Israel and in the West Bank and Gaza.) New apartment construction is everywhere you look in Tel Aviv, despite building delays caused by the war. And it all seems perfectly normal … unless you’ve lived your whole life in environments where 0.8 or 1.2 children per couple is the norm.
This, of course, has giant implications for anyone interested in Israel’s future. It’s like, a million Israeli leftists could get fed up and flee to the US or Canada or Switzerland, and Israel would still have a large and growing Jewish population—because having a big family is “just what people do” in a state that was founded to defy the Holocaust. In particular: anyone who dreams of dismantling the illegal, settler-colonial, fascist Zionist ethnostate, and freeing Palestine from river to sea, had better have some plan for what they’re going to do with all these millions of young Jews, who don’t appear to be going anywhere.
(3) The second thing I noticed was the heat—comparable to the Texas summer heat that we try to escape when possible. Because of the roasting sun, our own two pampered offspring mostly refused to go outside during daytime, and we mostly met friends indoors. I more than once had the dark thought that maybe Israel will survive Hamas, Hezbollah, Iran, and its own Jewish extremists … only to be finished off in the end (along with much of the rest of the planet) by global warming. I wonder whether Israel will manage to engineer its way out of the crisis, as it dramatically engineered its way out of its water crisis via desalination. The Arab petrostates have been trying to engineer their way out of the Middle East’s increasingly Mercury-like climate, albeit with decidedly mixed results.
(4) But nu, what did our Israeli friends say about the war? Of course it’s a biased sample, because our friends are mostly left-wing academics and tech workers. But, at risk of overgeneralizing: they’re unhappy. Very, very unhappy. As for Bibi and his far-right yes-men? Our friends’ rage at them was truly a sight to behold. American progressives are, like, mildly irked by Trump in comparison. Yes, our friends blame Bibi for the massive security and intelligence failures that allowed October 7 to happen. They blame him for dragging out the war to stave off elections. They blame him for empowering the contemptible Ben-Gvir and Smotrich. They blame him for his failure to bring back the remaining hostages. Most of all, they blame him for refusing even to meet with the hostage families, and more broadly, for evading responsibility for all that he did wrong, while arrogating credit for any victories (like the rescue of Noa Argamani).
(5) One Israeli friend offered to take me along to the giant anti-Bibi rally that now happens every Saturday night in Azrieli Center in Tel Aviv. (She added that, if I left before 9pm, it would reduce the chances of the police arresting me.) As the intrepid blogger-investigator I am, of course I agreed.
While many of the protesters simply called for new elections to replace Netanyahu (a cause that I 3000% support), others went further, demanding a deal to free the hostages and an immediate end to the war (even if, as they understood, that would leave Hamas in power).
Watching the protesters, smelling their pot smoke that filled the air, I was seized by a thought: these Israeli leftists actually see eye-to-eye with the anti-Israel American leftists on a huge number of issues. In a different world, they could be marching together as allies. Except, of course, for one giant difference: namely, the Tel Aviv protesters are proudly waving Israeli flags (sometimes modified to add anti-Bibi images, or to depict the Star of David “crying”), rather than burning or stomping on those flags. They’re marching to save the Israel that they know and remember, rather than to destroy it.
(6) We did meet one ultra-right-wing (and Orthodox) academic colleague. He was virtually the only person we met on this trip who seemed cheerful and optimistic about Israel’s future. He brought me to his synagogue to celebrate the holiday of Shavuot, while he himself stood guarding the door of the synagogue with a gargantuan rifle (his volunteer duty since October 7). He has six kids.
(7) Again and again, our secular liberal friends told us they’re thinking about moving from Israel, because if the Bibi-ists entrench their power (and of course the demographics are trending in that direction), then they don’t see that the country has any worthwhile future for them or their children. Should this be taken more seriously than the many Americans who promise that this time, for real, they’ll move to Canada if Trump wins? I’m not sure. I can only report what I heard.
(8) At the same time, again and again I got the following question from Israelis (including the leftist ones): how bad is the situation for Jews in the US? Have the universities been taken over by militant anti-Zionists, like it shows in the news? I had to answer: it’s complicated. Because I live my life enbubbled in the STEM field of computer science, surrounded by friends and colleagues of many backgrounds, ethnicities, religions, and political opinions who are thoughtful and decent (otherwise, why would they be my friends and colleagues?), I’m able to live a very nice life even in the midst of loud protesters calling to globalize the intifada against my family.
If, on the other hand, I were in a typical humanities department? Yeah, then I’d be pretty terrified. My basic options would be to (a) shut up about my (ironically) moderate, middle-of-the-road opinions on Israel/Palestine, such as support for the two-state solution; (b) live a miserable and embattled existence; or (c) pack up and move, for example to Israel.
An astounding irony right now is that, just as Israeli leftists are talking about moving from Israel, some of my American Jewish friends have talked to me about moving to Israel, to escape a prejudice that they thought died with their grandparents. I don’t know where the grass is actually greener (or is it brown everywhere?). Nor do I know how many worriers will actually follow through. What’s clear is that, both in Israel and in the diaspora, Jews are feeling an existential fear that they haven’t felt for generations.
(9) Did I fear for my own family’s safety during the trip? Not really. Maybe I should have. When we visited Haifa, we found that GPS was scrambled all across northern Israel, to make targeting harder for Hezbollah missiles. As a result, we couldn’t use Google Maps, got completely lost driving, and had to change plans with our friends. For the first time, now I really feel angry at Hezbollah: they made my life worse and it’s personal!
The funniest part, though, was how the scrambling was implemented: when you opened Google Maps anywhere in the north, it told you that you were in Beirut. It then dutifully gave you walking or driving directions to wherever you were going in Israel, passing through Syria close to Damascus (“warning: this route passes through multiple countries”).
(10) The most darkly comical thing that I heard on the entire trip: “oh, no, I don’t object in the slightest if the anti-Zionists want to kill us all. I only object if they want to kill us because of an incorrect understanding of the relevant history.” Needless to say, this was a professor.
(11) After my two-week investigation, what grand insight can I offer about Israel’s future? Not much, but maybe this: I think we can definitively rule out the scenario where Israel, having been battered by October 7, and bracing itself to be battered worse by Hezbollah, just sort of … withers away and disappears. Yes, Israel might get hotter, more crowded, more dangerous, more right-wing, and more Orthodox. But it will stay right where it is, unless and until its enemies destroy it in a cataclysmic war. You can’t scare people away, break their will, if they believe they have nowhere else on the planet to go. You can only kill them or else live next to them in peace, as the UN proposed in 1947 and as Oslo proposed in the 1990s. May we live to see peace.
Anyway, on that pleasant note, time soon to tune in to the Trump/Biden debate! I wonder who these two gentlemen are, and what they might stand for?
(See here for Boaz Barak’s obituary, and here for Lance Fortnow’s—they cover different aspects of Luca’s legacy from each other and from this post. Also, click here to register for a free online TCS4All talk that Luca was scheduled to give, and that will now be given in his memory, this Monday at 3:30pm Eastern time.)
Luca Trevisan, one of the world’s leading theoretical computer scientists, has succumbed to cancer in Italy, at only 52 years old. I was privileged to know Luca for a quarter-century, first as my complexity theory and cryptography professor at UC Berkeley and as a member of my dissertation committee, and then as a friend and colleague and fellow CS theory blogger.
I regret that I learned of the seriousness of Luca’s condition only a few days ago. So yesterday morning I wrote him a farewell email, under the impression that, while he was now in hospice care, he had at least a few more weeks. Alas, he probably never saw it. So I’m hereby making the email into a memorial post, with small changes mostly to protect people’s privacy.
Dear Luca,
Dana, the kids, and I were traveling in Israel for the past two weeks, when I received the shocking and sad news that this might be my last chance to write to you.
At risk of stating the obvious — you had a very large and positive effect on my life and career. Starting with the complexity theory summer school at the Institute for Advanced Study in 2000, which was the first time we met and also the first time I really experienced the glories of complexity at full blast. And then continuing at Berkeley, TA’ing your algorithms class, which you had to cancel on 9/11 (although students still somehow showed up for office hours lugging their CLRS books…), and dealing with that student who obviously cheated on the midterm although I had stupidly given back to her the evidence that would prove it.
And then your graduate complexity course, where I was very proud to get 100% on your exam, having handwritten it on a train while everyone else used LaTeX (which, embarrassingly, I was still learning). I was a bit less proud to present the Razborov-Rudich paper to the class, and to get questions from you that proved that I understood it less thoroughly than I thought. I emerged from your course far better prepared to do complexity theory than when I entered it.
Later I took your cryptography course, where I came to you afterwards one day to point out that with a quantum computer, you could pull out big Fourier coefficients without all the bother of the Goldreich-Levin theorem. And you said sure, but then you would need a quantum computer. Over 20 years later, Goldreich and Levin (and you?) can say with satisfaction that we still don’t have that scalable quantum computer … but we’re much much closer, I swear!
I still feel bad about the theory lunch talk I gave in 2003, on my complexity-theoretic version of Aumann’s agreement theorem, where I used you and Umesh as characters instead of Alice and Bob, and which then led to unintended references to “Luca’s posterior” (probability distribution, I meant).
I also feel bad about delaying so long the completion of my PhD thesis, until well after I’d started my postdoc in Princeton, so that my former officemate needed to meet you on a street corner in San Francisco to sign the signature page the night before the deadline.
But then a few years later, when Avi and I did the algebrization paper, the fact that you seemed to like it mattered more to me than just about anything else.
Thank you for the excellent dinner when I met you some years ago in Rome. Thank you for the Trevisan-Tulsiani-Vadhan paper, which answered a question we had about BosonSampling (and you probably didn’t even know you were doing quantum computing when you wrote that paper!). Thank you for your blog. Thank you for everything you did for me.
I always enjoyed your dry humor, much of which might sadly be lost to time, unless others wrote it down or it’s on YouTube or something. Two examples spring to my mind across the decades:
“From my previous lecture, you may have gotten the impression that everything in derandomization is due to Nisan and Wigderson, but this is not the case: Avi has been working with other people as well.”
After I’d explained that I’d be spending a semester in Jerusalem to work with Avi, despite (at that time) knowing only the most rudimentary Hebrew, such as how to say “please” and “excuse me”: “you mean there are words in Hebrew for ‘please’ and ‘excuse me’?”
Speaking of which, my current trip to Israel has given me many opportunities to reflect on mortality — for all the obvious war-related reasons of course, but also because while we were here, we unexpectedly had to attend two shivas of people in our social circle who died during our trip, one of them from cancer. And we learned about a close friend whose stepson has a brain tumor and might or might not make it. Cancer is a bitch.
Anyway, there’s much more I could write, but I imagine you’re getting flooded with emails right now from all the people whose lives you’ve touched, so I won’t take up more of your time. You’ve made a real difference to the world, to theoretical computer science, and to your friends and colleagues, one that many people would envy.
Best,
Scott | |||||
A Little Noise Makes Quantum Factoring Fail
Gödel's Lost Letter and P=NP, June 14, 2023 (https://rjlipton.com/2023/06/14/a-little-noise-makes-quantum-factoring-fail/)

Jin-Yi Cai is one of the top theory experts in the world. Both Ken and I have had the pleasure to work with him and interact with him over the years. We have discussed some of his previous work here and here.
Today we will talk about his new work on quantum computing.
Quantum Factoring
Peter Shor invented the quantum algorithm for finding the prime factors of an integer in 1994.
This is one of the great algorithms of all time. It shows, at least in theory, that quantum algorithms can be much more efficient than classical algorithms. The algorithm shows that the integer factorization problem can be efficiently solved on an idealized quantum computer and is consequently in the complexity class BQP. This is almost exponentially faster than the most efficient known classical factoring algorithm.
Quantum Factoring Possible?
Is it practically feasible to use Shor’s factoring method to break RSA? This leads to a major question:
Can cryptography survive quantum methods?
A paper by Daniel Bernstein, Nadia Heninger, Paul Lou, and Luke Valenta titled “Post-Quantum RSA” is a key one. They consider further systems including elliptic curve cryptography (ECC) and say:
The conventional wisdom among researchers in post-quantum cryptography is that quantum computers will kill RSA and ECC but will not kill hash-based cryptography, code-based cryptography, lattice-based cryptography, or multivariate- quadratic-equations cryptography.
…
Shor’s algorithm easily breaks RSA as used on the Internet today. The question is whether RSA parameters can be adjusted so that all known quantum attack algorithms are infeasible while encryption and decryption remain feasible.
See also this. A 2019 paper by Craig Gidney and Martin Ekerå argues that implementations of Shor on 2,048-bit integers are within reach of current technology using noisy qubits—needing some 20 million of them. However, this presumes an error-free implementation of the Quantum Fourier Transform (QFT). They say:
Note furthermore that when we analyze the success probabilities of Shor’s algorithms, and the various derivatives, we assume the use of an ideal QFT even though the implemented QFT is technically an approximation.
[Added 6/19: This quotation is taken somewhat out of context, because the paper’s main concern is optimizing and dealing with the much greater noise and precision issues in the superposed modular exponentiation step. See Craig Gidney’s comment below for more information on that and on how the QFT step is executed.]
Quantum Factoring Impossible?
Now enter Jin-Yi. He has a new paper that says:
We consider Shor’s quantum factoring algorithm in the setting of noisy quantum gates. Under a generic model of random noise for rotation gates, we prove that the algorithm does not factor integers of the form pq when the noise exceeds a vanishingly small level in terms of n (the number of bits of the integer to be factored), where p and q are chosen from a set of primes of positive density.
Jin-Yi essentially is saying that quantum algorithms fail to break RSA in the presence of noisy gates. He argues that they will not be able to work when quantum gates are not perfect.
This seems to contradict the previous section. Can it be that quantum algorithms break RSA in theory, but are not practically realizable? See these three recent discussions.
To our knowledge, this is the first hard-and-fast negative result about Shor’s algorithm. Let’s take a closer look.
Angles on Shor’s Algorithm
Given N to factor, Shor’s algorithm starts by choosing a relatively prime to N. The algorithm extends the domain of the function f(x) = a^x mod N to all x where 0 ≤ x < Q, Q = 2^m, and Q is the next power of 2 after N^2, so that N^2 ≤ Q < 2N^2. The quantum engine of Shor’s algorithm has just two main components (a small classical sketch of this setup follows the list):
A routine that computes the quantum state ψ = (1/√Q) Σ_x |x⟩|f(x)⟩, summing over 0 ≤ x < Q.

Put another way without the Dirac angle-bracket notation, ψ is a state of m + n qubits that has equal nonzero amplitude only on those components (x, y) where y = a^x mod N.

The QFT (or its inverse) on the first m qubits.
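Here is the promised classical sketch of the setup (the names are mine; the brute-force period loop is exactly what the quantum circuit exists to avoid, since it takes exponential time for large N):

```python
from math import gcd

def shor_setup(N, a):
    """Choose Q = 2^m, the least power of 2 with Q >= N^2, and find the
    period r of f(x) = a^x mod N by brute force (the quantum circuit's job)."""
    assert gcd(a, N) == 1
    m = (N * N - 1).bit_length()   # least m with 2^m >= N^2
    Q = 1 << m
    r, y = 1, a % N
    while y != 1:                  # multiplicative order of a modulo N
        y = (y * a) % N
        r += 1
    return m, Q, r

print(shor_setup(21, 2))           # (9, 512, 6): the period of 2 mod 21 is 6
```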
Quantum gates of the form R_k = diag(1, e^(2πi/2^k)), where 2 ≤ k ≤ m, when controlled from another qubit, are used in the “textbook” way to compute the QFT. The diagram for a small m suffices for the general pattern:

(standard QFT circuit diagram; image omitted, “source” link in the original post)

For all but a few small values of k, the rotation angle 2π/2^k in R_k is tinier than theoretical minimum units of space, let alone the smallest precision of angular or spatial resolution we have achieved in experiments such as LIGO. Call a circuit family using R_k for unbounded k “idealistic.”
Donald Coppersmith showed that Shor’s algorithm still works if R_k is replaced by the identity operator for k > b, where the threshold b equals c·log_2 n for a small constant c. The resulting circuits are still “idealistic” but at least not exponentially so. Coppersmith’s analysis is referenced in Shor’s original paper but not expounded further there.

Jin-Yi shows that Shor’s and Coppersmith’s circuits cannot tolerate a natural kind of noise that operates close to Coppersmith’s level of scaling. It stands concretely against any asymptotic claims of power via Shor’s algorithm that involve idealistic circuits. At the end we will discuss its implications also for circuits that implement Shor’s algorithm without using R_k gates.
The Noise
Call C a Shor circuit if it uses controlled R_k gates to compute the QFT (or its inverse) and can be sampled by a classical procedure to infer the period r of f in expected polynomial time.
Jin-Yi’s noise operation has parameters ε and b and maps a Shor circuit C to a distribution of circuits C′ defined as follows: For each controlled R_k gate in C (in one variant, only those with k up to the threshold b), replace it by

R_k(θ) = diag(1, e^(i(2π/2^k + θ)))

with the same control qubit and with an independent draw of Gaussian noise θ ~ N(0, ε²). The echo of Coppersmith’s “b” is on purpose, because he establishes the following fact, which we first state loosely:
Provided (roughly) that ε ≥ 2^(-b/3), the circuits C′ lose the Shor property, meaning that sampling cannot find r.

This says that the noise range brushes against the Coppersmith upper bound for the precision needed to implement Shor’s algorithm. Since b is exponentiated, one can say that noise on the order of the cube root of the precision needed for Shor’s algorithm is enough to destroy it.

The estimates in the paper allow replacing b/3 by b/2 with greater attention to additive constants, so lower noise, approaching the square root of the Coppersmith precision, suffices to destroy the Shor property. This may be improvable to almost linear. Exactly what does the noise attack? That’s next.
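Before that, here is a minimal NumPy sketch of the noise operation itself, just to fix ideas (the function and variable names are mine; this is the single-gate picture, not Jin-Yi's full analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_controlled_Rk(k, eps):
    """4x4 controlled-R_k matrix whose phase 2*pi/2^k is perturbed by an
    independent Gaussian draw theta ~ N(0, eps^2)."""
    theta = rng.normal(0.0, eps)
    U = np.eye(4, dtype=complex)
    U[3, 3] = np.exp(1j * (2 * np.pi / 2**k + theta))  # acts only on |11>
    return U

# For k in the tens, the nominal angle is already dwarfed by modest noise:
k, eps = 40, 1e-6
print(2 * np.pi / 2**k)                                  # ~5.7e-12
print(abs(np.angle(noisy_controlled_Rk(k, eps)[3, 3])))  # noise-dominated
```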
Long Periods
The noise most strongly affects cases where p−1 and q−1 have a large prime factor. The most extreme such case is (p−1)/2 being prime. Then (p−1)/2 is called a Sophie Germain prime. Ironically, p is then called a “safe prime” but those are the most unsafe under Jin-Yi’s noise.
It remains unknown whether infinitely many Sophie Germain primes exist, despite the quest winning a Tony Award and Pulitzer Prize. But a less-heralded property suffices. Étienne Fouvry proved in 1985 that the set of primes p for which p−1 has a prime factor greater than p^(2/3) is not only infinite, but has positive density in the set of primes. It follows that cases N = pq where both p and q have this “Fouvry property” have positive density among products of two primes. There can be only one such prime factor of p−1, likewise of q−1.
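For intuition, here is a small sketch that tests the Fouvry property numerically, using sympy's factorint (an assumption worth flagging: for cryptographic sizes no one can factor p−1 this way):

```python
from sympy import factorint, isprime

def is_fouvry(p):
    """Does p - 1 have a prime factor greater than p**(2/3)?"""
    largest = max(factorint(p - 1))     # largest prime factor of p - 1
    return largest**3 > p**2            # integer test for largest > p^(2/3)

# 23 is a safe prime: (23-1)/2 = 11 is a Sophie Germain prime, and
# 11^3 = 1331 > 23^2 = 529, so 23 has the Fouvry property.
print(isprime(23), is_fouvry(23))       # True True
```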
The upshot for such p and q is that most a have exponentially long periods r modulo N. The geometric sums that concentrate amplitudes on multiples of Q/r in the ideal situation, when the circuit is sampled via quantum measurement, have norm-squared proportional to 1/r. In the noisy situation, such length maximizes the perturbative effect of the noise so as to level out the amplitude. This destroys the ability to infer r.
We cut a few corners in the statements of Jin-Yi’s theorems, but they are reasonably close and the paper has full details. They hold also under the variant noise placement, and with-or-without removing the controlled R_k gates for k > b.
Theorem 1: Asymptotically as n → ∞, if N is an n-bit product of two Fouvry primes, and the noise level ε is above the threshold, then the probability that sampling C′ infers r is exponentially small.
Theorem 2: Asymptotically as n → ∞, for all but a vanishing fraction of n-bit primes p and q with N = pq, the probability over a and the noisy circuit C′ that sampling infers r is exponentially small.
Theorem 2, whose proof is in the paper’s appendix, says that Shor’s algorithm fails to survive the noise in all but a vanishing fraction of instances. It applies also under certain restrictions of the primes, such as p and q both being congruent to 3 modulo 4. Theorem 1 gives a substantial explicit set of cases on which the algorithm fails.
How General Is This?
The theorems are carefully stated in terms of the period-inferencing component of Shor’s algorithm. And they are asymptotic. They do not rule out:
possible quantum improvements on input sizes in the finite range of conceivable practical crypto;
quantum circuits that might factor by other means; nor
that error correction might restore the Shor property.
In particular, they do not define a general-purpose noise model that could apply to any quantum circuit C.
Now we discuss two means to implement Shor’s algorithm without using R_k gates beyond small fixed k:
The Hadamard gate H, the controlled-not gate CNOT, and the gate T = R_3 form a complete set that (by the Solovay-Kitaev theorem) can feasibly approximate the state produced by any feasible quantum circuit plus QFT. Then the minimum angle of any individual operation is π/4.
The Hadamard and Toffoli gates form a universal set in the weaker sense of encoding real and imaginary parts of quantum amplitudes separately. This suffices to compute the factoring function via polynomial-size circuits using only real entries.
Idea 1 may only mask the issue, insofar as the resulting circuits must still approximate angles down to Coppersmith’s unboundedly small magnitude 2^(-b). Both H and T are rotations of the Bloch sphere of periods 2 and 8, respectively. As such, each may be exactly physically realizable, along with their controlled versions and CNOT in higher-dimensional Bloch spheres.
However, H and T together generate an infinite subgroup of SU(2). The group has members that rotate through arbitrarily small angles. Jin-Yi says in his speculative concluding section:
It is true that using a fixed finite set of rotations of reasonable angles such as π/4 along various axes can compose to rotations of arbitrarily small angles. But my view is just that these compositional rules as specified by the group SU(2) must not be exact for physical reality.
Most in particular, let U = HT. If U can be exactly realized, then any power U^2, U^3, … should be. But the rotation angle of U is not a rational multiple of π, so the powers of U alone form an infinite state space and include arbitrarily tiny rotations. Please see Jin-Yi’s paper for other context and justifications on these points, plus related contentions by Mikhail Dyakonov.
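Here is a quick numerical sketch of that point (the SU(2) normalization and trace-to-angle formula are standard; the search bound and names are my own illustration):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])

M = H @ T
U = M / np.sqrt(np.linalg.det(M))   # normalize into SU(2) (up to a sign)
theta = 2 * np.arccos(np.clip(np.real(np.trace(U)) / 2, -1, 1))
print(theta / np.pi)                # not (numerically) a nice rational

# (HT)^j rotates the Bloch sphere by j*theta mod 2*pi; since theta/pi is
# irrational, some powers rotate by arbitrarily tiny angles:
def residual(j):
    return abs((j * theta + np.pi) % (2 * np.pi) - np.pi)

j = min(range(1, 200_000), key=residual)
print(j, residual(j))               # a power of HT that barely rotates at all
```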
The circuits in idea 2 cannot approximate any (feasible) quantum state metrically, but they can emulate Shor’s algorithm using only 0 and π as “angles.” They may, however, still involve quantum states with filigrees beyond physically realizable precision. In the coda to our own textbook, we speculate this already for the deterministic “functional superposition” component of Shor’s algorithm.
All this and more was discussed already twenty-plus years ago in the “Sure/Shor separator” debate. The difference now is having Jin-Yi’s new work as a linchpin for the skeptical side. Non-robustness to noise in the “Coppersmith range” may be a wider phenomenon than his current results show.
In his last paragraph, Jin-Yi argues that quantum computing makes a fundamental departure from Alan Turing’s condition that primitive steps are finite and fixed independent of the data size n. He mentions the free use of SU(2) but his point may apply as well to the step of placing a Toffoli gate anywhere in an n-qubit quantum circuit. This point is separate from issues of noise models, about which we have heard much from Gil Kalai including recently.
Open Problems
The issue is simple: Can quantum algorithms be made to work in the presence of gates that are making errors at Jin-Yi’s scaling? The obvious interesting open question is: As in classical computation, can we build circuits that can handle errors? See this and this on error-free computation.
This seems to be a wonderful question. Will the new results reshape debates on quantum computing and the polynomial Church-Turing thesis, or are they subsumed in matters already recently much discussed?
[added update about Gidney-Ekerå paper in third section] | |||||
ACM Selects: People of Computing #4: Quantum Computing
(https://selects.acm.org/selections/people-of-computing-4-quantum-computing)

Quantum computing and quantum information science is a rapidly growing field that has sparked a lot of interest in the world. While the field itself and its applications are yet to be considered as mature, quantum technologies are expected to profoundly impact technology by completely revolutionizing the foundations on which technology as we know it today is built. In this week's People in Computing ACM Selects, we spotlight key researchers and scientists making meaningful contributions to the development of quantum technologies. We hope that this first ACM Selects on quantum computing will help provide some context on the landscape of quantum technologies and some of the key directions that research in this area is taking.
We highly encourage learning more about their contributions, and invite you to consider participating in ACM's activities on these topics, be it through our professional community, global policy activities, ongoing work in professional ethics, and/or through our chapters, SIGs, local meetups and/or conferences.
We value your feedback and look forward to your guidance on how we can continue to improve ACM Selects together. Your suggestions and opinions on how we can do better are welcome via email through selects-feedback@acm.org.
How 'Spooky' is Quantum Physics? The Answer could be Incalculable
First published in Nature, Vol. 577, 23 January 2020.
Most of us associate quantum physics with science fiction; Albert Einstein himself dubbed a fundamental aspect of quantum mechanics “spooky action at a distance”. It is an area that is generally perceived as complex, difficult to understand, and weird, but also cool.
To kick off this round of Selects, we hope that this ACM News article, which conveys the complexity, weirdness and excitement behind quantum information science, will spark your curiosity and inspire you to explore this area.
[ Read more ]
“...spooky action at a distance.”
- Albert Einstein
Peter Shor
Peter Shor has been the Morss Professor of Applied Mathematics since 2003, and Chair of the Applied Mathematics Committee at MIT since 2015. His research interests are in theoretical computer science: currently algorithms, quantum computing, computational geometry and combinatorics. In 1998, Peter Shor received the Nevanlinna Prize and the International Quantum Communication Award. He also received the Dickson Prize in Science from Carnegie-Mellon in 1998. He was awarded the Gödel Prize of the ACM and a MacArthur Foundation Fellowship in 1999.
[ Read their bio ]
Algorithms for quantum computation: discrete logarithms and factoring
First published in SFCS '94: Proceedings of the 35th Annual Symposium on Foundations of Computer Science, November 1994.
Perhaps Shor's most outstanding contribution to the field of quantum computing is his discovery of a polynomial-time quantum algorithm, dubbed Shor's algorithm. Although nearly three decades old, his discovery holds great significance: given a sufficiently “large” quantum computer, one would have the capability of breaking the encryption techniques used on the Internet today.
[ Read more ]
Why haven't more quantum algorithms been found?
First published in Journal of the ACM, Vol. 50, No. 1, January 2003.
Nearly a decade after the discovery of Shor's algorithm, Peter Shor observed that few quantum algorithms had been discovered and set out to find out why. In this paper, he sheds light on possible reasons for this and shares his thoughts on the directions that research at the time could take to yield the discovery of more quantum algorithms.
[ Read more ]
Quantum Computing Pioneer Warns of Complacency over Internet Security
First published in Nature, Vol. 587, 12 November 2020.
In this article, Peter Shor looks back on the moment of Shor's algorithm's discovery and tells the story of the impact and perception that his discovery had on the world of physics and computer science at the time. Reviewing the current stage of development in quantum computing, he ponders the road to a post-quantum world, emphasising the importance of maintaining cybersecurity to minimize the risk of unpreparedness.
[ Read more ]
Elham Kashefi
Elham Kashefi is the Professor of Quantum Computing at the School of Informatics, University of Edinburgh, and Directeur de recherche au CNRS at LIP6, Sorbonne Université. Elham is a pioneer in the field of quantum technology: she co-founded the fields of quantum cloud computing and quantum computing verification, co-founded VeriQloud Ltd, and has carried her innovative, trans-disciplinary work on hybrid quantum-classical solutions from theoretical investigation to experimental and industrial commercialisation. Previously, she served as the Associate Director of the NQIT Hub before being elected as the lead for the software activities within the quantum computing and simulation hub. For her work in application development for quantum computing and communication, she has been awarded several UK, EU and US grants and fellowships.
[ Read their bio ]
Cyber security in the quantum era
First published in Communications of the ACM, Vol. 62, No. 4, March 2019.
The advent of quantum computers poses threats to cybersecurity and, equally, creates opportunities to bolster it. In this Communications of the ACM review article, Elham Kashefi and Petros Wallden debunk some myths associated with the capabilities of quantum computing and share their insights on cybersecurity in a quantum-enhanced world.
[ Read more ]
John Martinis
John Martinis, an alumnus of the University of California, Berkeley, is one of the world's foremost experts on quantum computing. John was a NIST fellow, and while working at NIST he used the Coulomb blockade phenomenon to develop a new fundamental electrical standard based on counting electrons. Since 2002, his research effort has focused on building a quantum computer using Josephson junctions. He has received numerous awards, including the “Science breakthrough of the year” in 2010 and the London Prize for low-temperature physics research on superconducting quantum bits in 2014. John is a Fellow of the American Physical Society and currently holds the Wooster Chair in experimental physics at the University of California, Santa Barbara.
[ Read about their research ]
People of ACM - John Martinis
First published in People of ACM, May 16, 2017.
In this 2017 People of ACM feature, John Martinis shares his outlook on the development of quantum computing, provides some context on the technical challenges yet to be addressed in the field, and describes his goals and expected achievements for 2017.
[ Read more ]
"Quantum Computing: Far Away? Around the Corner? Or Both at the Same Time?"
First presented at ACM's Celebration of 50 Years of the A.M. Turing Award, June 2017.
In this panel, John Martinis and co-panelists, Dorit Aharonov, Jay M. Gambetta and Andrew Chi-Chih Yao discuss the promise of quantum computing, sharing their insights on the state of research and development in the field and offering guidance to computer scientists on skills that they would need to be equipped with to venture into quantum computing.
[ Watch the panel discussion ]
[ Learn more about ACM's Celebration of 50 Years of the A.M. Turing Award ]
Stephanie Wehner
Stephanie Wehner is the Antoni van Leeuwenhoek Professor in quantum information, and Research Lead of the Quantum Internet Division at QuTech, Delft University of Technology. Her extensive work on quantum cryptography and communication is advancing research on quantum information theory and its applications to both computer science and physics. In the Quantum Internet Team at QuTech and as the coordinator of the European Quantum Internet Alliance, Stephanie is working towards achieving her goal of overcoming the challenges of building large scale quantum networks.Â
She has written numerous scientific articles in both physics and computer science, and is one of the founders of QCRYPT, which has become the largest conference in quantum cryptography.
[ Read their bio ]
Quantum Networks: From a Physics Experiment to a Quantum Network System with Stephanie Wehner

First published as an ACM Tech Talk, September 2020.

In this Tech Talk, Stephanie Wehner discusses the vision for the quantum internet and its underlying requirements, and showcases some of the efforts being made towards realizing scalable quantum networks. For those interested in “playing” with quantum technology, Stephanie shares some pointers and practical tools that allow you to interact with simulated quantum networks from the comfort of your home.
[ Watch the Tech Talk ]
Designing a quantum network protocol
First published in CoNEXT '20: Proceedings of the 16th International Conference on emerging Networking EXperiments and Technologies, November 2020.
We prefaced this ACM Selects by stating that quantum technologies will “completely revolutionize the foundations on which technology as we know it today is built”. As protocols are a core building block of today's networks, they will be required to evolve to support the functionality of, and communication in, quantum networks. In this paper, Wojciech Kozlowski, Axel Dahlberg and Stephanie Wehner present their design of a quantum network protocol developed to enable end-to-end quantum communication and serve as a building block for future quantum-based services and applications.
[ Read more ]
Mingsheng Ying
Mingsheng Ying is a Distinguished Professor and Research Director of the Center for Quantum Software and Information at the University of Technology Sydney, Australia. He is also Deputy Director for Research (adjunct position) at the Institute of Software, Chinese Academy of Sciences, and holds the Cheung Kong Chair Professorship at Tsinghua University, China. Mingsheng was the Director of the Scientific Committee of the National Key Laboratory of Intelligent Technology and Systems, Tsinghua University, China. He has received several awards, including the NSF China Distinguished Young Scholar Award (1997) and a China National Science Award (Natural Science, 2008). He has served on the editorial boards of several publications, including the Artificial Intelligence Journal, and was recently appointed Editor-in-Chief of the new journal, ACM Transactions on Quantum Computing.
[ Read their bio ]
People of ACM - Mingsheng Ying
First published in People of ACM, December 3, 2019. | |||||||
Haifa Seminars
IBM Research, Haifa (https://research.ibm.com/haifa/seminars/index.shtml)

Abstract:
Quantum communication allows two communicating parties (Alice and Bob) to share a secret at a distance: the encryption key. Relying on a basic principle of quantum mechanics (measurement affects the measured state), Alice and Bob can verify the security of the key, since any eavesdropper (Eve) will be revealed by the measurement traces she leaves behind.
I will discuss the current limits on the speed of quantum communication, which are primarily due to the limitations of the standard quantum measurement of optical states (single photons or very weak light). I will review the broad context of quantum measurement, and the standard homodyne methods that are limited by the electronic bandwidth of photo-detectors. I will then describe our recently demonstrated parallel optical homodyne measurement that allows us to overcome this limit completely [1]. Using optical parametric amplification, we could measure quantum optical squeezing simultaneously across a bandwidth of 55 THz.
Finally, I will discuss our implementation of this new measurement method for broadband, parallel quantum communication, where many quantum channels (up to 1000) can be multiplexed over a single broadband squeezer and read out using a single measurement device.
[1] Yaakov Shaked, Yoad Michael, Rafi Vered, Leon Bello, Michael Rosenbluh and Avi Pe'er, “Lifting the Bandwidth Limit of Optical Homodyne Measurement”, Nature Communications 9, 609 (2018).
Press releases
Quantum Technology Lab, The University of Queensland (https://quantum.info/news/press.html)
7 March 2016
Photonics, the technology that powers the global internet, uses bright lasers to function.
Quantum photonics will allow currently impossible capabilities in not only communication, but in sensing, metrology, and even computation, however it requires single particles of light—photons.
The problem is that to date there have been only approximate single-photon sources: these are physically large and hard to multiplex—making 6 independent photons at a time currently occurs at a lower rate than gravitational-wave detection!
A French-Australian team has cleared away this decades-long roadblock, developing a single-photon source that is a million times smaller in volume, and 20 times brighter, than existing sources.

The new sources use quantum dots—artificial atoms made of 10,000 or so semiconductor atoms—sandwiched between two microscopic mirrors housed inside a pillar a tenth the thickness of the finest human hair.
“The source is so bright because we use the mirrors to engineer the quantum vacuum, making the quantum-dot very likely to emit light in one direction, unlike natural light sources that emit light in all directions”, said Professor Pascale Senellart, lead of the team at the Centre National de la Recherche Scientifique, Paris.
“We have developed a technique that uses electric fields to ensure that the dot emits light at exactly the right colour for the mirrors it lies between”, she continues.
The new sources are a remarkable improvement over the current state-of-the-art sources, which use millimetre scale crystals, require expensive detectors to flag photon production, and mostly produce no light at all.
“The last twenty years have seen many proposals for exciting future technologies, all of which assumed single photon sources”, notes Juan Loredo, PhD student in the team at the University of Queensland, Brisbane, “We think the new devices will have the same effect in quantum photonics as moving from room-sized mainframes to personal computers did in computing”.
“These sources are the first scalable single-photon technology: literally, the future is looking bright”.
The paper can be found at http://dx.doi.org/10.1038/nphoton.2016.23
Caption: Three sources of single photons. The semiconductor quantum dot—about 10,000 atoms, represented by a red dot—is inserted in the centre of the cavity, which consists of a 3 µm pillar connected to a circular frame by guides that are 1.3 µm wide. By applying electrical voltage to the cavity, the wavelength of the emitted photons can be tuned and the charge noise totally eliminated. Image credit: Niccolo Somaschi – Laboratoire de photonique et de nanostructures (CNRS)
Media:
Mr Juan Loredo (+61 4 2140 8439, juan.loredo1@gmail.com) and Professor Andrew White (+61 4 6625 6329, andrew.white@uq.edu.au), School of Mathematics and Physics, University of Queensland, Australia.
Professor Pascale Senellart, Laboratoire de Photonique et de Nanostructures, Centre National de la Recherche Scientifique, France (+33 6 28 35 55 26, Pascale.Senellart@lpn.cnrs.fr).
Ms Tara Roberson, Communications officer for ARC Centre for Engineered Quantum Systems, Australia (+61 4 0451 6635, t.roberson@uq.edu.au).
Official press release.
Quantum reality check
9 February 2015
Schrödinger’s cat highlights a long-standing dilemma in quantum mechanics: is the cat really alive and dead, or is the weirdness just in our head?
Researchers at The University of Queensland have now made major progress in answering this question.
Using four-dimensional states of photons, and subjecting them to very precise measurements, they ruled out the popular view that describing the cat as dead and alive is just due to a lack of knowledge about its real state.
As with all objects in quantum physics, the cat is described by the quantum wavefunction.
Dr Alessandro Fedrizzi, from the UQ School of Mathematics and Physics (SMP), said that although the quantum wavefunction is our central tool for describing physical systems in quantum mechanics, it is still unclear what it actually is.
“Does it only represent our limited knowledge about the real state of a system, or is it in direct correspondence with this reality?” he said.
“And is there any objective reality at all?”
This debate has remained purely theoretical for decades, until three teams of quantum theorists — including co-authors UQ’s Dr Cyril Branciard and Dr Eric Cavalcanti from The University of Sydney — recently proposed experimental tests to answer this question.
Lead author and UQ PhD student Mr Martin Ringbauer said that the new approach tests whether the competing interpretations of the wavefunction can explain why we cannot tell quantum states apart with certainty, which is a central feature of quantum mechanics.
“Our results suggest that, if there is objective reality, the wavefunction corresponds to this reality,” Mr Ringbauer said.
In other words, Schrödinger’s cat really is in a state of being both alive and dead.
As measurements improve further, physicists will be left with two possible interpretations of the wavefunction: either the wavefunction is completely real, or nothing is.
The authors of the study, published in Nature Physics, are Mr Martin Ringbauer, Mr Benjamin Duffus, Dr Cyril Branciard, Dr Eric Cavalcanti, Professor Andrew White and Dr Alessandro Fedrizzi.
The study can be found at http://dx.doi.org/10.1038/nphys3233.
The work was supported by the Australian Research Council Centres of Excellence for Engineered Quantum Systems and Quantum Computation and Communication Technology, as well as the Templeton World Charity Foundation.
Watch Dr Alessandro Fedrizzi present a short video on the research here.
Media: Mr Martin Ringbauer (+61 7 3365 2444, m.ringbauer@uq.edu.au); Dr Alessandro Fedrizzi (+61 7 5336 7031, fedrizzi@physics.uq.edu.au); Professor Andrew White (+61 4 6625 6329, agx.white@gmail.com); Faculty of Science Engagement Officer Aarti Kapoor (+61 0449 863 208, a.kapoor@uq.edu.au)
Official press release from the University of Queensland.
Doctor Who meets Professor Heisenberg
19 June 2014
Lead author and PhD student Martin Ringbauer, from UQ’s School of Mathematics and Physics, said the study used photons – single particles of light – to simulate quantum particles travelling through time and study their behaviour, possibly revealing bizarre aspects of modern physics.
“The question of time travel features at the interface between two of our most successful yet incompatible physical theories – Einstein’s general relativity and quantum mechanics,” Mr Ringbauer said.
“Einstein’s theory describes the world at the very large scale of stars and galaxies, while quantum mechanics is an excellent description of the world at the very small scale of atoms and molecules.”
Einstein’s theory suggests the possibility of travelling backwards in time by following a space-time path that returns to the starting point in space, but at an earlier time: a closed timelike curve.
This possibility has puzzled physicists and philosophers alike since it was discovered by Kurt Gödel in 1949, as it seems to cause paradoxes in the classical world, such as the grandparents paradox, where a time traveller could prevent their grandparents from meeting, thus preventing the time traveller’s birth.
This would make it impossible for the time traveller to have set out in the first place.
UQ Physics Professor Tim Ralph said it was predicted in 1991 that time travel in the quantum world could avoid such paradoxes.
“The properties of quantum particles are ‘fuzzy’ or uncertain to start with, so this gives them enough wiggle room to avoid inconsistent time travel situations,” he said.
Professor Ralph said there was no evidence that nature behaved in ways other than standard quantum mechanics predicted, but this had not been tested in regimes where extreme effects of general relativity played a role, such as near a black hole.
“Our study provides insights into where and how nature might behave differently from what our theories predict.”
Examples of the intriguing possibilities in the presence of closed timelike curves include the violation of Heisenberg’s uncertainty principle, cracking of quantum cryptography and perfect cloning of quantum states.
Published in Nature Communications, the paper “Experimental Simulation of Closed Timelike Curves” includes Dr Matthew Broome, Dr Casey Myers, Professor Andrew White and Professor Timothy Ralph, all from The University of Queensland. http://www.nature.com/ncomms/2014/140619/ncomms5145/full/ncomms5145.html.
The work was supported by the Australian Research Council Centre of Excellence for Engineered Quantum Systems and Centre of Excellence for Quantum Computation and Communication Technology.
Media: Mr Martin Ringbauer (+61 7 3365 2444 or 0478 919 844 or m.ringbauer@uq.edu.au), Professor Tim Ralph (+61 7 3346 9693 or ralph@physics.uq.edu.au) and Professor Andrew White (+61 4 6625 6329 or agx.white@gmail.com).
Official press release from the University of Queensland.
UQ team challenges the limits of famous quantum principle
5 February 2014
A team of physicists is challenging the very limits of Heisenberg’s famous uncertainty principle by measuring quantum particles with unprecedented accuracy.
Physicists from The University of Queensland have performed joint measurements on single light particles with accuracy never seen before, and developed methods that could help improve the most sensitive quantum sensors.
Martin Ringbauer, PhD student at UQ’s School of Mathematics and Physics and lead author of the experimental study, said the findings help answer long-standing open questions in quantum mechanics.
“The uncertainty principle is one of the central features of quantum mechanics, which has been misunderstood for the longest time,” Mr Ringbauer said.
This “Heisenberg principle” states it is impossible to jointly measure two incompatible quantities, for example speed and location, of a quantum particle with perfect accuracy.
“This experimental work settles a decade-long debate — ‘Heisenberg-like’ relations do not hold for joint measurements,” he said.
“Now that we have a complete theory, as well as experimental evidence, it is probably time to update the textbooks.”
Almost a century ago, renowned quantum theorist Werner Heisenberg found fundamental limits on how well a quantum system can be prepared and measured, known as Heisenberg's uncertainty principle.
However, only the limit that pertains to the preparation of quantum systems has been quantified; the other two, relating to measurements, have long been a matter of debate, lacking a formal treatment.
These limits are: That it is impossible to jointly measure incompatible quantities, for instance, location and speed of a quantum object, with perfect accuracy; and that a measurement of one of these quantities necessarily disturbs the other.
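As an aside for technically minded readers: the first of those limits, the preparation relation, has a standard textbook form (the Robertson bound), and it is easy to check numerically. The sketch below is a generic Python/numpy illustration of that already-quantified limit, not the joint-measurement relations the experiment actually tested; the Pauli matrices stand in for "location and speed".

```python
# A minimal sketch: verify the *preparation* uncertainty relation
# (Robertson bound: dA * dB >= |<[A, B]>| / 2) for random single-qubit
# states. Illustrative only; not the Branciard joint-measurement relation.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
sy = np.array([[0, -1j], [1j, 0]])               # Pauli Y
comm = sx @ sy - sy @ sx                         # commutator [X, Y] = 2i Z

def expval(op, psi):
    return psi.conj() @ op @ psi

def stddev(op, psi):
    return np.sqrt((expval(op @ op, psi) - expval(op, psi) ** 2).real)

rng = np.random.default_rng(0)
for _ in range(1000):
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)                   # random pure qubit state
    lhs = stddev(sx, psi) * stddev(sy, psi)      # product of uncertainties
    rhs = 0.5 * abs(expval(comm, psi))           # |<[X, Y]>| / 2 = |<Z>|
    assert lhs >= rhs - 1e-12
print("Robertson preparation bound held for 1000 random states")
```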
Last year, UQ’s Cyril Branciard proposed a new set of “uncertainty relations”, for the joint measurement of incompatible quantities, which describe the minimal disturbance that will occur for a given measurement accuracy.
“Branciard’s relations quantify how accurately we can measure,” Mr Ringbauer said.
“Testing these relations, we are now able to show in the lab that we can actually reach this ultimate limit of accuracy,” he said.
The study was published in January in the journal Physical Review Letters. A related work by Kaneda et al. in the same journal, has found similar results.
Other authors of the paper include Mr Devon Biggerstaff, Dr Matthew Broome, Dr Alessandro Fedrizzi, Dr Cyril Branciard, and Professor Andrew White.
The research was supported by the Australian Research Council Centres of Excellence for Engineered Quantum Systems, and Quantum Computing and Communication Technology.
Media: Mr Martin Ringbauer (+61 7 3365 2444, m.ringbauer@uq.edu.au) or Prof. Andrew White (+61 4 6625 6329, agx.white@gmail.com) or Engagement Officer Aarti Kapoor (+61 0449 863 208, a.kapoor@uq.edu.au).
Official press release from the University of Queensland.
Advancing scientific insights into quantum systems
11 October 2012
A paper published last year by the Centre of Engineered Quantum Systems (EQuS) researchers has been selected for the New Journal of Physics (NJP): Highlights of 2011.
The paper, entitled Two-photon quantum walks in an elliptical direct-write waveguide array, looks at the evolution of two-photon states in an elliptic array of waveguides.
The paper was deemed by the NJP to be advancing scientific insight within the physics community, and worthy of note in their latest publication.
Paper co-author Matthew Broome from the Centre said this work highlighted the feasibility of emulation of coherent quantum phenomena in three-dimensional waveguide structures.
“Using integrated optics provides an ideal test-bed for the emulation of quantum systems via continuous-time quantum walks,” Mr Broome said.
“We characterise the photonic chip via coherent light tomography and use the results to predict distinct differences between two, two photon inputs. We then compare these with the experimental observations.”
Other EQuS researchers J. Owens, Devon Biggerstaff, M. Goggin, A. Fedrizzi, Trond Linjordet, Jason Twamley and Andrew White are named authors on the paper, which appears in the thirteenth volume of the publication.
This work supports the EQuS research into Synthetic Quantum Systems and Simulation, which aims to harness quantum mechanical phenomena to enhance the functionality and power of information and communication technologies.
Photons are indispensable for quantum communication, and work such as the research being conducted at EQuS is leading the approach to quantum information processing and simulation.
The realisation of future technologies in these areas will require miniaturization and integration of high performance components, including single photon sources and detectors, and photonic quantum circuits for manipulating and distributing photons.
EQuS is an Australia Research Centre of Excellence that seeks to initiate the Quantum Era in the 21st century by engineering designer quantum systems.
Through focused and visionary research EQuS will deliver new scientific insights and fundamentally new technical capabilities across a range of disciplines.
Impacts of this work will improve the lives of Australians and people all over the world by producing breakthroughs in physics, engineering, chemistry, biology and medicine.
New Journal of Physics has an impact factor of 3.849 and reached over 1.5 million full-text downloads in 2011.
The paper can be downloaded here.
For more information about Research at EQuS visit equs.org or contact Lynelle Ross (lynelle.ross@uq.edu.au) or Matthew Broome (m.a.broome@physics.uq.edu.au).
Official press release from the University of Queensland.
At the solstice: shining light on quantum computers
21 December 2012
An Australian-American team has shone light — literally — onto the question of whether quantum computers are actually more powerful than conventional counterparts.
“Famously, quantum computers promise a more efficient means of computation, for example using a technique known as ‘fast factoring’ to efficiently crack encryption codes that form the basis of today’s internet security,” said the study’s lead author, Dr Matthew Broome, of the University of Queensland.
Surprisingly it’s still not known whether quantum computers are the only way to do this efficiently, or whether conventional computers can solve the problem almost as quickly.
In a paper in Science this week, scientists from The University of Queensland and the Massachusetts Institute of Technology (MIT) described the first experimental steps towards answering this question, building a so-called ‘BosonSampling’ device.
The device implemented a form of quantum computation where a handful of single photons were sent through a photonic network.
The team then sampled how often the photons exited the network.
“Although this sounds simple, for large devices and many photons, it becomes extremely difficult to predict the outcomes using a conventional computer, whereas our measurements remain straightforward to do,” said Dr Broome.
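To see why prediction gets hard: in BosonSampling, each detection pattern occurs with a probability governed by the permanent of a submatrix of the network's transfer matrix, and the best known ways of computing permanents scale exponentially with photon number. The following generic Python sketch (an illustration, not the team's code) evaluates a permanent via Ryser's inclusion-exclusion formula, which already costs on the order of 2^n work for n photons.

```python
# Sketch: outcome probabilities in BosonSampling are proportional to
# |Perm(A)|^2 for submatrices A of the network's unitary. Ryser's formula
# needs ~2^n * n work for an n x n matrix, so even a few dozen photons
# puts exact prediction out of classical reach.
from itertools import combinations
import numpy as np

def permanent(a):
    """Permanent via Ryser's inclusion-exclusion formula."""
    n = a.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            rowsums = a[:, list(cols)].sum(axis=1)
            total += (-1) ** r * np.prod(rowsums)
    return (-1) ** n * total

# Toy example: 3 photons into a random 6-mode network (QR gives a unitary).
rng = np.random.default_rng(1)
u, _ = np.linalg.qr(rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6)))
sub = u[np.ix_([0, 1, 2], [3, 4, 5])]   # input modes 0-2, one output pattern
print(abs(permanent(sub)) ** 2)          # proportional to that pattern's probability
```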
Testing this device — proposed in late 2010 by co-author Associate-Professor Scott Aaronson, and his colleague Dr Alex Arkhipov, both from MIT — will provide strong evidence that quantum computers do indeed have an exponential advantage over conventional computers.
The experimental team leader at UQ Professor Andrew White said: “Scott and Alex’s proposal was a 94-page mathematical tour-de-force.”
“We genuinely didn’t know if it would implement nicely in the lab, where we have to worry about real-world effects like lossy circuits, and imperfect single photon sources and detectors.”
Confirming that the BosonSampling device behaves as expected paves the way for larger and larger instances of this experiment.
The prediction is that with just tens of photons it can outperform any of today’s supercomputers.
“I am excited to see that the first proof-of-principle demonstrations of BosonSampling have been shown — even if only with 3 photons, rather than the 30 or so required to outperform a classical computer,” said Associate-Professor Aaronson.
“I did not expect this to happen so quickly.”
Other researchers in this study are UQ’s Dr Alessandro Fedrizzi, PhD student Saleh Rahimi-Keshari, and Professor Tim Ralph, and MIT’s PhD student Justin Dove.
Financial support was provided by the Australian Research Council Centres of Excellence for Engineered Quantum Systems (EQuS) and Quantum Computing and Communication Technology (CQC2T), and the United States Government.
Related experimental work was published in the same issue of Science by J. Spring et al.
Media: Dr Matthew Broome (+61 4 0644 3479, m.a.broome@googlemail.com) or Prof. Andrew White (+61 4 6625 6329, agx.white@gmail.com) or School of Mathematics and Physics Communications & Marketing Officer Aarti Kapoor (+61 7 3346 9935, a.kapoor@uq.edu.au).
Official press release from the University of Queensland.
Australian-US collaboration leaps ahead in catching spooky light
11 January 2012
Experiments with entangled photons have led the way in the burgeoning fields of quantum information, communication and computation in the last decade.
Their biggest drawback has always been low photon-detection efficiencies, which has limited their potential applications.
Now, a joint experiment by Australian and US labs has fixed this problem, doubling the previous record for entangled-photon detection efficiency to 62 per cent, and closing the detection "loophole" in the strange phenomenon of quantum steering.
The experiment was conducted by researchers at The University of Queensland, Griffith University, the ARC Centre for Engineered Quantum Systems and the ARC Centre for Quantum Computation and Communication Technology in Australia; and the National Institute of Standards and Technology in Boulder, USA.
Austrian physicist Erwin Schrödinger first introduced the term steering in 1935 to highlight the ability of certain quantum particles to influence—or steer—each other no matter how far they are apart.
This striking effect is the result of quantum entanglement—a phenomenon that connects two particles in such a way that changes to one of the particles are instantly reflected in the other—something that Einstein famously described as “spooky action-at-a-distance”.
Steering allows two parties to verify if they have received quantum particles that share this quantum entanglement—even if one of the parties cannot be trusted.
However, if there are any loopholes—which occur due to problems with the experimental design or set-up—the parties will not be able to say that they have conclusively observed quantum steering.
“We overcame the detection loophole—where not all the photons can be detected—by combining a highly-efficient entangled photon source with state-of-the-art photon detectors,” said Dr Marcelo de Almeida of The University of Queensland.
These detectors—called transition edge sensors—were developed by Dr Sae Woo Nam and his team at the National Institute of Standards and Technology.
“The absorption of a single photon in such detectors causes a tiny change in the temperature which is sensed using superconducting effects,” Dr Almeida said.
“Closing the detection loophole requires efficiencies of above 50 per cent.
"The remarkably high efficiency of 62 per cent achieved in our experiment allows us to demonstrate conclusive steering.”
Dr Almeida’s UQ-based co-authors include PhD students Devin H. Smith, Geoff Gillett, Drs Alessandro Fedrizzi, Till J. Weinhold, and Cyril Branciard, and Professor Andrew G. White, all from the ARC Centre for Engineered Quantum Systems (EQuS) and the ARC Centre for Quantum Computation and Communication Technology (CQC2T), as well as Professor Howard M. Wiseman from Griffith University, also of CQC2T.
This record-breaking achievement, published in Nature Communications today, brings the researchers a step closer toward achieving even higher detection efficiency levels in the near future.
"If we can achieve 66 per cent, then we could perform secure quantum communication even if one party has untrustworthy equipment. Five years ago I would have thought that was impossible,” said Dr Almeida.
Media: Dr Marcelo de Almeida (07 334 67347, marcelo@physics.uq.edu.au) or School of Mathematics and Physics Communications & Marketing Officer Aarti Kapoor (07 3346 9935, a.kapoor@uq.edu.au)
Official press release from the University of Queensland.
Quantum engineers remove roadblock in developing next-generation technologies
Tuesday, March 8, 2011
An international team has removed a major obstacle to engineer quantum systems that will play a key role in the computers, communication networks, and even biomedical devices of the future.
With the process of miniaturisation advancing by the day, quantum effects will come to dominate our everyday lives.
At present it is extremely difficult to characterise quantum systems—the number of measurements required increases exponentially with the number of quantum parts. For example, an 8-qubit quantum computer would require over a billion measurements.
Caption: From just 18 randomly selected white tiles (representing measurements) out of a potential 576, the researchers were able to estimate the behaviour of a quantum device (illustrated by the yellow section). Image credit: Alessandro Fedrizzi
“Imagine that you're building a car but you can't test-drive it. This is the situation that quantum engineers are facing at the moment”, said UQ's Dr Alessandro Fedrizzi, co-author of the study that was recently published in Physical Review Letters.
“We have now found a way to test quantum devices efficiently, which will help transform them from small-scale laboratory experiments to real-world applications.”
The team also includes UQ collaborators Dr Marcelo de Almeida, Professor Andrew White and PhD student Matthew Broome, as well as researchers from Princeton University, the Massachusetts Institute of Technology (MIT), and SC Solutions, Inc. The researchers adapted techniques from "compressive sensing", a hugely successful mathematical data-compression method, and applied it to experimental quantum research for the first time.
"Audio signals have natural patterns which can be compressed to a vastly smaller size without a significant quality loss: this means we now store in a single CD what used to take hundreds. In the same way, compressive sensing now allows us to drastically simplify the measurement of quantum systems", said Dr Alireza Shabani, the study's main author from Princeton University.
"A common example for data compression is a Sudoku puzzle: only a few numbers will allow you to fill in the whole grid. Similarly, we can now estimate the behaviour of a quantum device from just a few key parameters", said co-author Dr Robert Kosut from SC Solutions, Inc., who developed the algorithm with Dr Shabani, Dr Masoud Mohseni (MIT) and Professor Hershel Rabitz (Princeton University).
The researchers tested their compressive sensing algorithm on a photonic two-qubit quantum computer built at UQ, and demonstrated they could obtain high-fidelity estimates from as few as 18 measurements, compared to the 240 normally required.
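For a feel of the underlying mathematics, here is a toy compressed-sensing recovery with the same numbers (18 random measurements, 240 unknowns), solved by plain iterative soft thresholding (ISTA). This is a generic sketch of the idea only; the Shabani et al. algorithm estimates a sparse quantum process matrix rather than a toy vector, and all names and parameters below are illustrative.

```python
# Toy compressed sensing with the article's numbers: 18 measurements,
# 240 unknowns. Works because the signal is sparse (here, 3 nonzeros).
# A generic sketch of the mathematics, not the Shabani et al. code.
import numpy as np

rng = np.random.default_rng(42)
n, m, k = 240, 18, 3
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
phi = rng.normal(size=(m, n)) / np.sqrt(m)    # random measurement matrix
y = phi @ x_true                              # the 18 measured values

def ista(phi, y, lam=1e-3, iters=20000):
    """L1-regularised least squares by iterative soft thresholding."""
    x = np.zeros(phi.shape[1])
    step = 1.0 / np.linalg.norm(phi, 2) ** 2  # 1 / Lipschitz constant
    for _ in range(iters):
        g = x - step * (phi.T @ (phi @ x - y))                    # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrink
    return x

x_hat = ista(phi, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```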
The team expects its technique could be applied in a wide range of architectures including quantum-based computers, communication networks, metrology devices and even biotechnology.
More information: The paper, "Efficient Measurement of Quantum Dynamics via Compressive Sensing," by A. Shabani et al., was published in the March 2011 edition of Physical Review Letters. DOI:10.1103/PhysRevLett.106.100401.
Media: Dr Alessandro Fedrizzi (07 3346 7715, a.fedrizzi@uq.edu.au) or Aarti Kapoor (07 3346 9935, a.kapoor@uq.edu.au)
Official press releases from the University of Queensland and Princeton University.
Quantum computers make light work of chemistry
Monday, January 11, 2010
Physicists have a problem.
They have an outstandingly successful theory of nature at the small scale—quantum mechanics—but have been unable to apply it exactly to situations more complicated than, say, 4 or 5 atoms—let alone a caffeine or cholesterol molecule.
Instead, they have developed a host of approximate methods to use quantum mechanics in fields such as biology, chemistry, and materials science, but this approach raises the concern that natural behaviours are being missed, and limits the development of new technologies.
Nearly thirty years ago Nobel Prize winning physicist Richard Feynman proposed a better solution: to use computers that are themselves quantum mechanical, a hypothetical device now known as a quantum computer.
This week an international team of scientists based in Australia and the US has done exactly that: they built a small quantum computer and used it to calculate the precise energy of molecular hydrogen.
This groundbreaking approach to molecular simulations could have profound implications not just for chemistry, but also for a range of fields from cryptography to materials science.
The work, described this week in Nature Chemistry, comes from a partnership between a group of physicists—led by Professor Andrew White at the University of Queensland in Brisbane, Australia—and a group of chemists—led by Professor Alán Aspuru-Guzik at Harvard University, Cambridge, USA.
White's team assembled the physical computer and ran the experiments, while Aspuru-Guzik's team coordinated experimental design and performed key calculations. "We were the software guys", says Aspuru-Guzik, "and they were the hardware guys".
"Our results agreed with those calculated using a traditional computer to within six parts to a million", says White, "which we were pretty happy with".
While modern supercomputers can perform approximate simulations, increasing the complexity of these systems results in an exponential increase in computational time. Quantum computers promise highly precise calculations while using a fraction of the resources of conventional computing.
This computational power derives from the way quantum computers manipulate information. In classical computers, information is encoded in bits, which have only two values: zero and one; quantum computers use quantum bits (qubits) that can take on infinitely many different values: zero, or one, or zero plus one, and so on.
Quantum computers also exploit the strange phenomena of entanglement, powerful correlations between qubits that Einstein once described as "spooky action at a distance".
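A minimal numpy sketch (illustrative only, unrelated to the experiment's code) makes both paragraphs concrete: the memory needed to describe n qubits doubles with every added qubit, and an entangled Bell state cannot be split into independent single-qubit descriptions.

```python
# Sketch of the two claims above: state space doubles per qubit, and
# entangled states are not products of single-qubit states.
import numpy as np

for n in (1, 2, 10, 30):
    print(f"{n} qubits -> {2**n} complex amplitudes")

zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)
plus = (zero + one) / np.sqrt(2)        # "zero plus one": a superposition

bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)
# Reshaped to 2x2, a product state kron(a, b) has rank 1; the Bell state
# has rank 2 -- the numerical fingerprint of entanglement.
print(np.linalg.matrix_rank(np.kron(plus, zero).reshape(2, 2)))  # -> 1
print(np.linalg.matrix_rank(bell.reshape(2, 2)))                 # -> 2
```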
When asked when quantum computers will leave the lab and appear on desktops, White smiles. "Later than I'd like but sooner than I think", he replies.
"It's very early days for quantum technology", he continues, "most quantum computer demonstrations have been limited to a handful of qubits. A colleague of mine in Canada says that any demonstration with less than ten qubits is cute but useless—which makes me think of a baby with an abacus."
"However Alan and his team at Harvard have shown that when we can build circuits of just a few hundred qubits, this will surpass the combined computing power of all the traditional computers in the world, each of which uses many billions of bits."
"It took standard computing 50 years to get to this point, I'm sure we can do it in much less time than that!"
White's University of Queensland co-authors on the Nature Chemistry paper are Benjamin P. Lanyon, Geoffrey G. Gillet, Michael E. Goggin, Marcelo P. Almeida, Benjamin J. Powell, and Marco Barbieri. Financial support was provided by the Australian Research Council Federation Fellow and Centre of Excellence programs, and the US Army Research Office (ARO) and Intelligence Advanced Research Projects Initiative (IARPA).
For more information contact Professor Andrew White by phone, office: +61 7 3365 7902 or by email: agx.white@gmail.com. Background information at http://quantum.info/news.
Official press releases from the University of Queensland and Harvard University.
The Secret World of Shrimps: Polarisation Vision at Its Best
Wednesday, May 14, 2008
A Swiss marine biologist and an Australian quantum physicist have found that a species of shrimp from the Great Barrier Reef, Australia, can see a world invisible to all other animals.
Dr Sonja Kleinlogel and Professor Andrew White have shown that mantis shrimp not only have the ability to see colours from the ultraviolet through to the infrared, but have optimal polarisation vision — a first for any animal and a capability that humanity has only achieved in the last decade using fast computer technology.
"The mantis shrimp is a delightfully weird beastie," said Professor White, of the University of Queensland. "They're multi-coloured, their order and genus names mean `mouth-feet' and `genital-fingers'; they can move each eye independently, they see the world in 11 or 12 primary colours as opposed to our humble three, and now we find that this species can see a world invisible to the rest of us."
Dr Kleinlogel is based at the Max Planck Institute for Biophysics in Frankfurt, and collected the shrimp from the reef. She notes that "...scuba divers know them as 'thumb-splitters', they've got wickedly strong claws and are very aggressive!"
Most animals can tell how fast the electric field in a light wave is oscillating, which is perceived as colour. (Blue light oscillates faster than green, which is faster than red). The direction of the oscillation is known as polarisation: many animals, from budgerigars to ants have some form of polarisation vision. Since the 1950s animals have been shown to use linear polarisation vision for navigation, for finding food, for evading hunters, and for sex, or as Professor White says "...for the four eff's: feeding, fighting, fleeing and ... flirting".
Commonly polarisation vision is quite restricted: in its simplest form different directions of polarisation show up as lighter or darker patches — you can see this yourself by looking at clear blue sky with polarising sunglasses. But polarisation is more subtle than this: the electric field of the light can oscillate back and forth in a line or around and around in a circle, or anywhere in between.
Video by Sonja Kleinlogel. An O. scyllarus is trained to grab a left-hand circular polarised feeding cube and not the two right-hand circular polarised feeding cubes. He is rewarded with a prawn for the right choice.
The two scientists have shown that shrimp of the species Gonodactylus smithii have eyes that simultaneously measure four linear and two circular polarisations, enabling them to determine both the direction of the oscillation, as well as how polarised the light is.
"This is very useful because natural light can vary from strongly polarised, like the glare off snow or water, to unpolarised, like the sun," Professor White said.
"Any changes to the amount of polarisation instantly tells the animal that something is going on."
Colleagues at The University of Queensland have recently found a related species where the males reflect circular polarisation from their bodies, and hypothesized that circular polarisation vision is used for sexual signalling. Professor White smiles and says, "I think of that as the `prawnographic' hypothesis".
He continues, "It can't be the whole story in our case, though. We found the same structures in the eyes of both boy and girl mantis shrimps, and yet neither have circularly polarised markings on their bodies. Each eye measures the six polarisation components that are precisely required for optimal polarisation vision. In fact, the physics we used to understand what was going on is the same physics that we use in quantum computing for optimal storage of information."
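For the record, the six components White mentions correspond to intensity measurements behind six standard analysers: horizontal, vertical, diagonal, anti-diagonal, and right- and left-circular. From those six numbers the full polarisation state (the Stokes vector) follows by simple arithmetic, as in the textbook sketch below (a generic illustration, not the paper's analysis pipeline):

```python
# Textbook Stokes reconstruction from six analyser intensities.
import numpy as np

def stokes(i_h, i_v, i_d, i_a, i_r, i_l):
    s0 = i_h + i_v                    # total intensity
    s1 = i_h - i_v                    # horizontal vs vertical
    s2 = i_d - i_a                    # diagonal vs anti-diagonal
    s3 = i_r - i_l                    # right vs left circular
    dop = np.sqrt(s1**2 + s2**2 + s3**2) / s0   # degree of polarisation
    return (s0, s1, s2, s3), dop

# Fully right-circular light: only the circular analysers see a difference.
print(stokes(i_h=0.5, i_v=0.5, i_d=0.5, i_a=0.5, i_r=1.0, i_l=0.0))
```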
"It is this unique talent — to measure linear and circular polarisation simultaneously — which presents a completely new concept of polarisation vision," Dr Kleinlogel continues. "There wouldn't be much point in only being able to see circular polarisation as it is extremely rare in nature. Even the polarized light reflected from some shrimp's bodies is only weakly circular polarised and often contains more linear polarisation."
"We doubt that circular polarisation is used exclusively as a secret shrimp sex signal! It makes more sense that mantis shrimp evolved both circular and linear polarisation receptors to work together so they can detect tiniest changes in any polarisation."
Prof. White notes, "Some of the animals they like to eat are transparent, and quite hard to see in sea-water - except they're packed full of polarising sugars - I suspect they light up like Christmas trees as far as these shrimp are concerned." "And of course", Dr Kleinlogel concludes, "they can still flirt with each other using fancy polarisation cues!"
Press release here.
Experimental demonstration of Shor's algorithm with quantum entanglement
Wednesday, December 19, 2007
In the December 21 issue of the scientific journal Physical Review Letters a combined Australian-Canadian research team from Brisbane and Toronto have reported the first-ever unambiguous execution of a quantum calculation. By manipulating quantum mechanically entangled photons—the fundamental particles of light—the prime factors of the number 15 were calculated.
Although the answer to this problem could have been obtained much more quickly by querying a bright 8 year old, the result is significant because it was calculated using a quantum-mechanical program called Shor's algorithm (named after its discoverer, Prof. Peter Shor of the Massachusetts Institute of Technology). Previous theoretical work has shown that this program, when applied to larger numbers, could be used to crack cryptographic codes that are unbreakable using conventional computers. An essential ingredient of the power of quantum computers is entanglement: the apparently nonsensical correlations between particles that Einstein famously called "spooky action at a distance". The Australian-Canadian team showed that entanglement was present throughout their calculation.
Shor's Algorithm
Think of a number—15, for example—what are its prime factors? Recalling from your school days that primes are numbers divisible only by themselves and 1, then the prime factors of 15 are 3 and 5. But as the number becomes bigger and bigger the problem becomes more and more difficult: what are the prime factors of 133, or 1633 or 2934331? (Answers: 133=7x19, 1633=23x71, and 2934331=911x3221). What is difficult for your brain is also difficult for conventional computers. This is not just a problem of interest to pure mathematicians: the computational difficulty of factoring very large numbers forms the basis of widely used internet encryption systems. An efficient solution to this problem will have very far-reaching implications for communications security—quantum computers will be able to crack these codes.
In any computer a problem must be broken down into manageable chunks: classical computers use two-level systems called bits (binary digits); quantum computers use two-level quantum-mechanical systems called qubits (quantum bits). A qubit is like a coin that can be heads (on), tails (off) or simultaneously heads AND tails (on and off) or any possible combination in-between! This is impossible with normal bits. One qubit is described by three pieces of information, two qubits by fifteen; three qubits by sixty-three, and so on: quantum memory sizes grow exponentially with the number of qubits. Performing an operation on just one of these qubits—for example swapping 1 and 0—simultaneously performs an operation on all possible configurations of the quantum memory. In effect, the combination of an exponentially large memory and massive quantum parallelism, provided by entanglement, allows simultaneous storage of all possible outcomes of a mathematical procedure, with clever down-selection giving the correct result.
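For readers who want to connect the dots: Shor's algorithm only uses the quantum hardware for one step, finding the period (order) r of a^x mod N; turning that period into factors is easy classical arithmetic. The sketch below brute-forces the order, which is precisely the step a quantum computer does exponentially faster, and recovers 3 x 5 from 15 just as in the experiment (generic Python, not the experimental team's code):

```python
# Given the order r of a random a modulo N (the period of a^x mod N),
# the factors of N fall out classically. The order is found here by
# brute force -- exactly the step Shor's algorithm speeds up.
from math import gcd

def order(a, n):
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_part(n, a):
    assert gcd(a, n) == 1
    r = order(a, n)                 # <- the quantum subroutine, simulated
    if r % 2 == 1 or pow(a, r // 2, n) == n - 1:
        return None                 # unlucky choice of a; pick another
    return gcd(pow(a, r // 2) - 1, n), gcd(pow(a, r // 2) + 1, n)

print(shor_classical_part(15, 7))   # -> (3, 5), via order r = 4
```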
The experiment
In the Brisbane experiment, single photons were used as qubits, with up to four being manipulated at once. Using a complex configuration of optical elements—by which photons can be created, sent into multiple paths simultaneously, then recombined—a simplified version of the factoring algorithm was performed, equivalent to factoring the number 15. The initial proposal for optical approach to quantum computing was made by Dr. Emmanuel Knill (of the National Institute of Standards and Technology, Boulder, Colorado), Prof. Raymond Laflamme (director of the Institute for Quantum Computing at the University of Waterloo, Ontario, Canada) and Prof. Gerard Milburn (University of Queensland, Australia).
In addition to executing the quantum computer program, for the first time the true quantum mechanical nature of the device was confirmed at every step of the experiment. Pushing this envelope and identifying the best, most scalable architecture for a quantum computer is a very active area of research, with teams around the world working on a diverse range of technologies: photons, ions in silicon, atoms or ions in vacuum chambers, and superconducting electrical circuits to name just a few. The Australian-Canadian team are part of two international efforts to make a quantum computer: the Australian Centre for Quantum Computer Technology, led by Prof. Robert Clark at the University of New South Wales, and the US program for Optical Quantum Computing, led by Prof. Paul Kwiat at the University of Illinois.
Almost sixty years ago to the day, the team of Bardeen, Brattain, and Shockley revealed the first transistor to the world, an ungainly device consisting of a wire whisker touching a chunk of metal. Now millions of times smaller, transistors are found by the billions in applications undreamed of by the original inventors: from cell phones in the middle of Africa to iPods at the local bus stop. Functional large-scale quantum computers may be as many years away as the transistor is from the modern computer, and it is equally hard to know how they will change the world—but change our world they will.
Demonstration of an all-optical quantum controlled-NOT gate
Thursday, November 20, 2003
Background.
Quantum computers potentially offer tremendous computational power. One of the key elements in a possible quantum computer is the controlled-NOT, or CNOT, gate. In the November 20, 2003, issue of Nature, a team of researchers report an experiment performed in UQ's Quantum Technology Laboratory that unambiguously demonstrates and comprehensively characterises an optical quantum CNOT gate. The team consists of Drs Jeremy O'Brien and Geoffrey Pryde, Assoc.-Prof. Timothy Ralph, and Dr Andrew White, of the University of Queensland, and Dr David Branning, now of the Rose-Hulman Institute of Technology, Indiana.
This image represents the operation of the CNOT gate. The pair of spots at the bottom left are output modes of the photon source used to generate control and target qubits in the experiment. This image has been repeated to form the truth table: the left column represents the ideal input qubits (dim = 0, bright = 1); the right column represents the qubit outputs measured in the lab, where the intensities are scaled by the probabilities measured in our experiment. The entangled state is not shown, as it would look very strange indeed!
Paper and press release
Nature paper web site; Local copy of paper (PDF); UQ media release and contact details.
Media reports of this work
Commsworld: Untangling Quantum Computing at Queensland Uni - An Open and Shut Gate
Commsworld : Interview: UQ's Dr Andrew White - Quanta, Photons, and Things Hard to See
New Scientist: Optical gate kick-starts quantum computer race, p. 27, November 22, 2003
ABC Science Online: Spooky bits propel quantum computer
ABC PM Programme: Quantum computer in development (RealAudio file)
Monitor: Light work of quantum computers, Vol. 28 Issue 5 page 2, December 2003
The R & D Review: Quantum computing breakthrough, December 2003/January 2004
The 7:30 Report: Brisbane team leads ground-breaking project
Australasian Science: "Quantum Leap for Computing" January/February 2004. | ||||||
Shtetl-Optimized: The Blog of Scott Aaronson (https://scottaaronson.blog/)
Q1: Who will you be voting for in November?
A: Kamala Harris (and mainstream Democrats all down the ballot), of course.
Q2: Of course?
A: If the alternative is Trump, I would’ve voted for Biden’s rotting corpse. Or for Hunter Biden. Or for…
Q3: Why can’t you see this is just your Trump Derangement Syndrome talking?
A: Look, my basic moral commitments remain pretty much as they’ve been since childhood. Namely, that I’m on the side of reason, Enlightenment, scientific and technological progress, secular government, pragmatism, democracy, individual liberty, justice, intellectual honesty, an American-led peaceful world order, preservation of the natural world, mitigation of existential risks, and human flourishing. (Crazy and radical, I know.)
Only when choosing between candidates who all espouse such values, do I even get the luxury of judging them on any lower-order bits. Sadly, I don’t have that luxury today. Trump’s values, such as they are, would seem to be “America First,” protectionism, vengeance, humiliation of enemies, winning at all costs, authoritarianism, the veneration of foreign autocrats, and the veneration of himself. No amount of squinting can ever reconcile those with the values I listed before.
Q4: Is that all that’s wrong with him?
A: No, there are also the lies, and worst of all the “Big Lie.” Trump is the first president in US history to incite a mob to try to overturn the results of an election. He was serious! He very nearly succeeded, and probably would have, had Mike Pence been someone else. It’s now inarguable that Trump rejects the basic rules of our system, or “accepts” them only when he wins. We’re numb from having heard it so many times, but it’s a big deal, as big a deal as the Civil War was.
Q5: Oh, so this is about your precious “democracy.” Why do you care? Haven’t you of all people learned that the masses are mostly idiots and bullies, who don’t deserve power? As Curtis Yarvin keeps trying to explain to you, instead of “democracy,” you should want a benevolent king or dictator-CEO, who could offer a privileged position to the competent scientists like yourself.
A: Yeah, so how many examples does history furnish where that worked out well? I suppose you might make a partial case for Napoleon, or Ataturk? More to the point: even if benevolent, science-and-reason-loving authoritarian strongmen are possible in theory, do you really expect me to believe that Trump could be one of them? I still love how Scott Alexander put it in 2016:
Can anyone honestly say that Trump or his movement promote epistemic virtue? That in the long-term, we’ll be glad that we encouraged this sort of thing, that we gave it power and attention and all the nutrients it needed to grow? That the road to whatever vision of a just and rational society we imagine, something quiet and austere with a lot of old-growth trees and Greek-looking columns, runs through LOCK HER UP?
I don’t like having to vote for the lesser of two evils. But at least I feel like I know who it is.
Q6: But what about J. D. Vance? He got his start in Silicon Valley, was championed by Peter Thiel, and is obviously highly intelligent. Doesn’t he seem like someone who might listen to and empower tech nerds like yourself?
A: Who can say what J. D. Vance believes? Here are a few choice quotes of his from eight years ago:
I’m obviously outraged at Trump’s rhetoric, and I worry most of all about how welcome Muslim citizens feel in their own country. But I also think that people have always believed crazy shit (I remember a poll from a few years back suggesting that a near majority of democratic voters blame ‘the Jews’ for the financial crisis). And there have always been demagogues willing to exploit the people who believe crazy shit.
The more white people feel like voting for trump, the more black people will suffer. I really believe that.
[Trump is] just a bad man. A morally reprehensible human being.
To get from that to being Trump’s running mate is a Simone-Biles-like feat of moral acrobatics. Vance reminds me of the famous saying by L. Ron Hubbard from his pre-Dianetics days: “If a man really wants to make a million dollars, the best way would be to start his own religion.” (And I feel like Harris’s whole campaign strategy should just be to replay Vance’s earlier musings in wall-to-wall ads while emphasizing her agreement with them.) No, Vance is not someone I trust to share my values, if he has values at all.
Q7: What about the other side’s values, or lack thereof? I mean, don’t you care that the whole Democratic establishment—including Harris—colluded to cover up that Biden was senile and cognitively unfit to be president now, let alone for another term?
A: Look, we’ve all seen what happens as a relative gets old. It’s gradual. It’s hard for anyone to say at which specific moment they can no longer drive a car, or be President of the United States, or whatever. This means that I don’t necessarily read evil intent into the attempts to cover up Biden’s decline—merely an epic, catastrophic failure of foresight. That failure of foresight itself would’ve been a huge deal in normal circumstances, but these are not normal circumstances—not if you believe, as I do, that the alternative is the beginning of the end of a 250-year-old democratic experiment.
Q8: Oh stop being so melodramatic. What terrible thing happened to you because of Trump’s first term? Did you lose your job? Did fascist goons rough you up in the street?
A: Well, my Iranian PhD student came close to having his visa revoked, and it became all but impossible to recruit PhD students from China. That sucked, since I care about my students’ welfare like I care about my own. Also, the downfall of Roe v. Wade, which enabled Texas’ draconian new abortion laws, made it much harder for us to recruit faculty at UT Austin. But I doubt any of that will impress you. “Go recruit American students,” you’ll say. “Go recruit conservative faculty who are fine with abortion being banned.”
The real issue is that Trump was severely restrained in his first term, by being surrounded by people who (even if, in many cases, they started out loyal to him) were also somewhat sane and valued the survival of the Republic. Alas, he learned from that, and he won’t repeat that mistake the next time.
Q9: Why do you care so much about Trump’s lies? Don’t you realize that all politicians lie?
A: Yes, but there are importantly different kinds of lies. There are white lies. There are scheming, 20-dimensional Machiavellian lies, like a secret agent’s cover story (or is that only in fiction?). There are the farcical, desperate, ever-shifting lies of the murderer to the police detective or the cheating undergrad to the professor. And then there are the lies of bullies and mob bosses and populist autocrats, which are special and worse.
These last, call them power-lies, are distinguished by the fact that they aren’t even helped by plausibility. Often, as with conspiracy theories (which strongly overlap with power-lies), the more absurd the better. Obama was born in Kenya. Trump’s crowd was the biggest in history. The 2020 election was stolen by a shadowy conspiracy involving George Soros and Dominion and Venezuela.
The central goal of a power-lie is just to demonstrate your power to coerce others into repeating it, much like with the Party making Winston Smith affirm 2+2=5, or Petruchio making Katharina call the sun the moon in The Taming of the Shrew. A closely-related goal is as a loyalty test for your own retinue.
It’s Trump’s embrace of the power-lie that puts him beyond the pale for me.
Q10: But Scott, we haven’t even played our “Trump” card yet. Starting on October 7, 2023, did you not witness thousands of your supposed allies, the educated secular progressives on “the right side of history,” cheer the sadistic mass-murder of Jews—or at least, make endless excuses for those who did? Did this not destabilize your entire worldview? Will you actually vote for a party half of which seems at peace with the prospect of your family members’ physical annihilation? Or will you finally see who your real friends now are: Arkansas MAGA hillbillies who pray for your people’s survival?
A: Ah, this is your first slash that’s actually drawn blood. I won’t pretend that the takeover of part of the US progressive coalition by literal Hamasniks hasn’t been one of the most terrifying experiences of my life. Yes, if I had to be ruled by either (a) a corrupt authoritarian demagogue or (b) an idiot college student chanting for “Intifada Revolution,” I’d be paralyzed. So it’s lucky that I don’t face that choice! I get to vote, once more, for a rather boring mainstream Democrat—alongside at least 70% of American Jews. The idea of Harris as an antisemite would be ludicrous even if she didn’t have a Jewish husband or wasn’t strongly considering a pro-Israel Jew as her running mate.
Q11: Sure, Kamala Harris might mouth all the right platitudes about Israel having a right to defend itself, but she’ll constantly pressure Israel to make concessions to Hamas and Hezbollah. She’ll turn a blind eye to Iran’s imminent nuclearization. Why don’t you stay up at night worrying that, if you vote for a useful idiot like her, you’ll have Israel’s annihilation and a second Holocaust on your conscience forever?
A: Look, oftentimes—whenever, for example, I’m spending hours reading anti-Zionists on Twitter—I feel like there’s no limit to how intensely Zionist I am. On reflection, though, there is a limit. Namely, I’m not going to be more Zionist than the vast majority of my Israeli friends and colleagues—the ones who served in the IDF, who in some cases did reserve duty in Gaza, who prop up the Israeli economy with their taxes, and who will face the consequences of whatever happens more directly than I will. With few exceptions, these friends despise the Trump/Bibi alliance with white-hot rage, and they desperately want more moderate leadership in both countries.
Q12: Suppose I concede that Kamala is OK on Israel. We both know that she’s not the future of the Democratic Party, any more than Biden is. The future is what we all saw on campuses this spring. “Houthis Houthis make us proud, turn another ship around.” How can you vote for a party whose rising generation seems to want you and your family dead?
A: Let me ask you something. When Trump won in 2016, did that check the power of the campus radicals? Or as Scott Alexander prophesied at the time, did it energize and embolden them like nothing else, by dramatically confirming their theology of a planet held hostage by the bullying, misogynistic rich white males? I fundamentally reject your premise that, if I’m terrified of crazy left-wing extremists, then a good response is to vote for the craziest right-wing extremists I can find, in hopes that the two will somehow cancel each other out. Instead I should support a coherent Enlightenment alternative to radicalism, or the closest thing to that available.
Q13: Even leaving aside Israel, how can you not be terrified by what the Left has become? Which side denounced you on social media a decade ago, as a misogynist monster who wanted all women to be his sex slaves? Which side tried to ruin your life and career? Did we, the online rightists, do that? No. We did not. We did nothing worse to you than bemusedly tell you to man up, grow a pair, and stop pleading for sympathy from feminists who will hate you no matter what.
A: I’ll answer with a little digression. Back in 2017, when Kamala Harris was in the Senate, her office invited me to DC to meet with them to provide advice about the National Quantum Initiative Act, which Kamala was then spearheading. Kamala herself sent regrets that she couldn’t meet me, because she had to be at the Kavanaugh hearings. I have (nerdy, male) friends who did meet her about tech policy and came away with positive impressions.
And, I dunno, does that sound like someone who wants me dead for the crime of having been born a nerdy heterosexual male? Or having awkwardly and ineptly asked women on dates, including the one who became my wife? OK, maybe Amanda Marcotte wants me dead for those crimes. Maybe Arthur Chu does (is he still around?). Good that they’re not running for president then.
Q14: Let me try one more time to show you how much your own party hates you. Which side has been at constant war against the SAT and other standardized tests, and merit-based college admissions, and gifted programs, and academic tracking and acceleration, and STEM magnet schools, and every single other measure by which future young Scott Aaronsons (and Saket Agrawals) might achieve their dreams in life? Has that been our side, or theirs?
A: To be honest, I haven’t seen the Trump or Harris campaigns take any position on any of these issues. Even if they did, there’s very little that the federal government can do: these battles happen in individual states and cities and counties and universities. So I’ll vote for Harris while continuing to advocate for what I think is right in education policy.
Q15: Can you not see that Kamala Harris is a vapid, power-seeking bureaucratic machine—that she has no fixed principles at all? For godsakes, she all but condemned Biden as a racist in the 2020 primary, then agreed to serve as his running mate!
A: I mean, she surely has more principles than Vance does. As far as I can tell, for example, she’s genuinely for abortion rights (as I am). Even if she believed in nothing, though, better a cardboard cutout on which values I recognize are written, than a flesh-and-blood person shouting values that horrify me.
Q16: What, if anything, could Republicans do to get you to vote for them?
A: Reject all nutty conspiracy theories. Fully, 100% commit to the peaceful transfer of power. Acknowledge the empirical reality of human-caused climate change, and the need for both technological and legislative measures to slow it and mitigate its impacts. Support abortion rights, or at least a European-style compromise on abortion. Republicans can keep the anti-wokeness stuff, which actually seems to have become their defining issue. If they do all that, and also the Democrats are taken over by frothing radicals who want to annihilate the state of Israel and abolish the police … that’s, uh, probably the point when I start voting Republican.
Q17: Aha, so you now admit that there exist conceivable circumstances that would cause you to vote Republican! In that case, why did you style yourself “Never-Trump From Here to Eternity”?
A: Tell you what, the day the Republicans (and Trump himself?) repudiate authoritarianism and start respecting election outcomes, is the day I’ll admit my title was hyperbolic.
Q18: In the meantime, will you at least treat us Trump supporters with civility and respect?
A: Not only does civil disagreement not compromise any of my values, it is a value to which I think we should all aspire. And to whatever extent I’ve fallen short of that ideal—even when baited into it—I’m sorry and I’ll try to do better. Certainly, age and experience have taught me that there’s hardly anyone so far gone that I can’t find something on which I agree with them, while disagreeing with most of the rest of the world.
The news these days feels apocalyptic to me—as if we’re living through, if not the last days of humanity, then surely the last days of liberal democracy on earth.
All the more reason to ignore all of that, then, and blog instead about the notorious Busy Beaver function! Because holy moly, what news have I got today. For lovers of this super-rapidly-growing sequence of integers, I'm honored to announce the biggest Busy Beaver development that there's been since 1983, when I slept in a crib and you booted up your computer using a 5.25-inch floppy. That was the year when Allen Brady determined that BusyBeaver(4) was equal to 107. (Tibor Radó, who invented the Busy Beaver function in the 1960s, quickly proved with his student Shen Lin that the first three values were 1, 6, and 21 respectively. The fourth value was harder.)
Only now, after an additional 41 years, do we know the fifth Busy Beaver value. Today, an international collaboration called bbchallenge is announcing that it's determined, and even formally verified using the Coq proof system, that BB(5) is equal to 47,176,870—the value that's been conjectured since 1990, when Heiner Marxen and Jürgen Buntrock discovered a 5-state Turing machine that runs for exactly 47,176,870 steps before halting, when started on a blank tape. The new bbchallenge achievement is to prove that all 5-state Turing machines that run for more than 47,176,870 steps actually run forever—or in other words, that 47,176,870 is the maximum finite number of steps for which any 5-state Turing machine can run. That's what it means for BB(5) to equal 47,176,870.
For more on this story, see Ben Brubaker’s superb article in Quanta magazine, or bbchallenge’s own announcement. For more background on the Busy Beaver function, see my 2020 survey, or my 2017 big numbers lecture, or my 1999 big numbers essay, or the Googology Wiki page, or Pascal Michel’s survey.
The difficulty in pinning down BB(5) was not just that there are a lot of 5-state Turing machines (16,679,880,978,201 of them to be precise, although symmetries reduce the effective number). The real difficulty is, how do you prove that some given machine runs forever? If a Turing machine halts, you can prove that by simply running it on your laptop until halting (at least if it halts after a "mere" ~47 million steps, which is child's play). If, on the other hand, the machine runs forever, via some never-repeating infinite pattern rather than a simple infinite loop, then how do you prove that? You need to find a mathematical reason why it can't halt, and there's no systematic method for finding such reasons—that was the great discovery of Gödel and Turing nearly a century ago.
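To make the "just run it" half concrete, here is a minimal simulator sketch in Python. The transition string below is the Marxen-Buntrock champion as commonly quoted in bbchallenge's compact notation, and the convention that "Z" marks the halt state is likewise an assumption; both should be checked against bbchallenge's own listing rather than taken on faith.

```python
from collections import defaultdict

def parse(tm: str):
    """Parse bbchallenge-style notation: one underscore-separated chunk
    per state; within each chunk, the (write, move, next-state) triple
    for reading 0, then the triple for reading 1."""
    table = {}
    for i, chunk in enumerate(tm.split("_")):
        state = chr(ord("A") + i)
        for read in (0, 1):
            w, m, n = chunk[3 * read : 3 * read + 3]
            table[(state, read)] = (int(w), 1 if m == "R" else -1, n)
    return table

def run(tm: str, max_steps: int):
    """Simulate from a blank tape; return the halting step count, or
    None if the machine is still running after max_steps."""
    table = parse(tm)
    tape = defaultdict(int)   # sparse tape, default symbol 0
    head, state = 0, "A"
    for step in range(1, max_steps + 1):
        write, move, nxt = table[(state, tape[head])]
        tape[head] = write
        head += move
        if nxt == "Z":        # assumed convention: Z = halt
            return step
        state = nxt
    return None

# Reputed BB(5) champion (an assumption -- verify against bbchallenge):
champion = "1RB1LC_1RC1RB_1RD0LE_1LA1LD_1RZ0LA"
# run(champion, 48_000_000) should return 47_176_870.
```

Pure Python takes minutes to grind through ~47 million steps, but that is exactly the point: halting is certifiable by brute force, while non-halting is the direction with no systematic method.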
More precisely, the Busy Beaver function grows faster than any function that can be computed, and we know that because if a systematic method existed to compute arbitrary BB(n) values, then we could use that method to determine whether a given Turing machine halts (if the machine has n states, just check whether it runs for more than BB(n) steps; if it does, it must run forever). This is the famous halting problem, which Turing proved to be unsolvable by finite means. The Busy Beaver function is Turing-uncomputability made flesh, a finite function that scrapes the edge of infinity.
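The reduction in the previous paragraph fits in a few lines. To be clear, the busy_beaver oracle below is hypothetical, and indeed provably nonexistent; the sketch only shows why having it would decide halting (run is the simulator sketched above):

```python
def halts(tm: str, n_states: int) -> bool:
    """Hypothetical halting decider, assuming a (nonexistent!) oracle
    for Busy Beaver values. An n-state machine that survives past
    BB(n) steps can never halt, by definition of BB."""
    bound = busy_beaver(n_states)       # the impossible ingredient
    return run(tm, bound) is not None   # simulate up to the bound
```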
There’s also a more prosaic issue. Proofs that particular Turing machines run forever tend to be mind-numbingly tedious. Even supposing you’ve found such a “proof,” why should other people trust it, if they don’t want to spend days staring at the outputs of your custom-written software?
And so for decades, a few hobbyists picked away at the BB(5) problem. One, who goes by the handle "Skelet", managed to reduce the problem to 43 holdout machines whose halting status was still undetermined. Or maybe only 25, depending on who you asked? (And were we really sure about the machines outside those 43?)
The bbchallenge collaboration improved on the situation in two ways. First, it demanded that every proof of non-halting be vetted carefully. Going beyond that original mandate, a participant named "mxdys" later raised the standard to fully machine-verifiable certificates in Coq for every non-halting machine, so that there could no longer be any serious question of correctness. (This, in turn, was done via "deciders," programs crafted to recognize a specific type of parameterized behavior.) Second, the collaboration used an online forum and a Discord server to organize the effort, so that everyone knew what had been done and what remained to be done.
Despite this, it was far from obvious a priori that the collaboration would succeed. What if, for example, one of the 43 (or however many) Turing machines in the holdout set turned out to encode the Goldbach Conjecture, or one of the other great unsolved problems of number theory? Then the final determination of BB(5) would need to await the resolution of that problem. (We do know, incidentally, that there’s a 27-state Turing machine that encodes Goldbach.)
But apparently the collaboration got lucky. Coq proofs of non-halting were eventually found for all the 5-state holdout machines.
As a sad sidenote, Allen Brady, who determined the value of BB(4), apparently died just a few days before the BB(5) proof was complete. He was doubtful that BB(5) would ever be known. The reason, he wrote in 1988, was that “Nature has probably embedded among the five-state holdout machines one or more problems as illusive as the Goldbach Conjecture. Or, in other terms, there will likely be nonstopping recursive patterns which are beyond our powers of recognition.”
Maybe I should say a little at this point about what the 5-state Busy Beaver—i.e., the Marxen-Buntrock Turing machine that we now know to be the champion—actually does. Interpreted in English, the machine iterates a certain integer function g, which is defined by
g(x) = (5x+18)/3 if x ≡ 0 (mod 3),
g(x) = (5x+22)/3 if x ≡ 1 (mod 3),
g(x) = HALT if x ≡ 2 (mod 3).
Starting from x=0, the machine computes g(0), g(g(0)), g(g(g(0))), and so forth, halting if and when it reaches … well, HALT. The machine runs for tens of millions of steps because it so happens that this iteration eventually reaches HALT, but only after a while:
0 → 6 → 16 → 34 → 64 → 114 → 196 → 334 → 564 → 946 → 1584 → 2646 → 4416 → 7366 → 12284 → HALT.
(And also, at each iteration, the machine runs for a number of steps that grows like the square of the number x.)
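In runnable form, the iteration is a direct transcription of the definition of g above, and printing the orbit reproduces the trajectory just shown:

```python
def g(x: int):
    """One step of the champion's Collatz-like map; None means HALT."""
    r = x % 3
    if r == 0:
        return (5 * x + 18) // 3   # exact: 5x+18 is divisible by 3
    if r == 1:
        return (5 * x + 22) // 3   # exact: 5x+22 is divisible by 3
    return None                    # r == 2: HALT

x, orbit = 0, ["0"]
while x is not None:
    x = g(x)
    orbit.append("HALT" if x is None else str(x))
print(" -> ".join(orbit))
# 0 -> 6 -> 16 -> 34 -> 64 -> 114 -> 196 -> 334 -> 564 -> 946
#   -> 1584 -> 2646 -> 4416 -> 7366 -> 12284 -> HALT
```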
Some readers might be reminded of the Collatz Conjecture, the famous unsolved problem about whether, if you repeatedly replace a positive integer x by x/2 if x is even or 3x+1 if x is odd, you’ll always eventually reach x=1. As Scott Alexander would say, this is not a coincidence because nothing is ever a coincidence. (Especially not in math!)
It’s a fair question whether humans will ever know the value of BB(6). Pavel Kropitz discovered, a couple years ago, that BB(6) is at least 10^10^10^10^10^10^10^10^10^10^10^10^10^10^10 (i.e., 10 raised to itself 15 times). Obviously Kropitz didn’t actually run a 6-state Turing machine for that number of steps until halting! Instead he understood what the machine did—and it turned out to apply an iterative process similar to the g function above, but this time involving an exponential function. And the process could be proven to halt after ~15 rounds of exponentiation.
Meanwhile Tristan Stérin, who coordinated the bbchallenge effort, tells me that a 6-state machine was recently discovered that “iterates the Collatz-like map {3x/2, (3x-1)/2} from the number 8 and halts if and only if the number of odd terms ever gets bigger than twice the number of even terms.” This shows that, in order to determine the value of BB(6), one would first need to prove or disprove the Collatz-like conjecture that that never happens.
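As a sketch of what that halting criterion means operationally, here is the natural reading in Python. A caveat: this models the conjecture check, not the 6-state machine itself, and the bookkeeping details (the starting 8 is tallied here, with each term's parity counted before the map is applied) are our guesses at Stérin's description rather than anything pinned down by the quote.

```python
def six_state_conjecture(limit: int = 10**4):
    """Iterate x -> 3x/2 (x even) or (3x-1)/2 (x odd) from 8, tallying
    the parity of each term visited (including the start). Return the
    first step at which odd terms outnumber twice the even terms (when
    the machine would halt), or None if that never happens within
    `limit` steps. Note: x roughly multiplies by 3/2 each step, so
    large limits mean very large integers."""
    x, odd, even = 8, 0, 0
    for step in range(1, limit + 1):
        if x % 2 == 0:
            even += 1
            x = 3 * x // 2
        else:
            odd += 1
            x = (3 * x - 1) // 2
        if odd > 2 * even:
            return step
    return None
```

Since the machine's halting status is precisely what's open, the function presumably returns None for any limit anyone has tried.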
Basically, if and when artificial superintelligences take over the world, they can worry about the value of BB(6). And then God can worry about the value of BB(7).
I first learned about the BB function in 1996, when I was 15 years old, from a book called The New Turing Omnibus by A. K. Dewdney. From what I gather, Dewdney would go on to become a nutty 9/11 truther. But that’s irrelevant to the story. What matters was that his book provided my first exposure to many of the key concepts of computer science, and probably played a role in my becoming a theoretical computer scientist at all.
And of all the concepts in Dewdney’s book, the one I liked the most was the Busy Beaver function. What a simple function! You could easily explain its definition to Archimedes, or Gauss, or any of the other great mathematicians of the past. And yet, by using it, you could name definite positive integers (BB(10), for example) incomprehensibly larger than any that they could name.
It was from Dewdney that I learned that the first four Busy Beaver numbers were the unthreatening-looking 1, 6, 21, and 107 … but then that the fifth value was already unknown (!!), and at any rate at least 47,176,870. I clearly remember wondering whether BB(5) would ever be known for certain, and even whether I might be the one to determine it. That was almost two-thirds of my life ago.
As things developed, I played no role whatsoever in the determination of BB(5) … except for this. Tristan Stérin tells me that reading my survey article, The Busy Beaver Frontier, was what inspired him to start and lead the bbchallenge collaboration that finally cracked the problem. It’s hard to express how gratified that makes me.
Why care about determining particular values of the Busy Beaver function? Isn’t this just a recreational programming exercise, analogous to code golf, rather than serious mathematical research?
I like to answer that question with another question: why care about humans landing on the moon, or Mars? Those otherwise somewhat arbitrary goals, you might say, serve as a hard-to-fake gauge of human progress against the vastness of the cosmos. In the same way, the quest to determine the Busy Beaver numbers is one concrete measure of human progress against the vastness of the arithmetical cosmos, a vastness that we learned from Gödel and Turing won’t succumb to any fixed procedure. The Busy Beaver numbers are just … there, Platonically, as surely as 13 was prime long before the first caveman tried to arrange 13 rocks into a nontrivial rectangle and failed. And yet we might never know the sixth of these numbers and only today learned the fifth.
Anyway, huge congratulations to the bbchallenge team on their accomplishment. At a terrifying time for the world, I’m happy that, whatever happens, at least I lived to see this.
Dana, the kids, and I got back to the US last week after a month spent in England and then Israel. We decided to visit Israel because … uhh, we heard there’s never been a better time.
We normally go every year to visit Dana's family and our many friends there, and to give talks. Various well-meaning friends suggested that maybe we should cancel or postpone this year—given, you know, the situation. To me, though, the situation felt like all the more reason to go. To make Israel seem more and more embattled, dangerous, isolated, abnormal, like not an acceptable place to visit (much less live), in order to crater its economy, demoralize its population, and ultimately wipe it from the face of the earth … that is explicitly much of the world's game plan right now, laid out with shocking honesty since October 7 (a day that also showed us what the "decolonization" will, concretely, look like). So, if I oppose this plan, then how could I look myself in the mirror while playing my tiny part in it? Shouldn't I instead raise a middle finger to those who'd murder my family, and go?
Besides supporting our friends and relatives, though, I wanted to see the post-October-7 reality for myself, rather than just spending hours per day reading about it on social media. I wanted to form my own impression of the mood in Israel: fiercely determined? angry? hopeless? just carrying on like normal?
Anyway, in two meeting-packed weeks, mostly in Tel Aviv but also in Jerusalem, Haifa, and Be’er Sheva, I saw stuff that could support any of those narratives. A lot was as I’d expected, but not everything. In the rest of this post, I’ll share eleven observations:
(1) This presumably won’t shock anyone, but in post-October-7 Israel, you indeed can’t escape October 7. Everywhere you look, on every building, in every lobby, hanging from every highway overpass, there are hostage posters and “Bring Them Home Now” signs and yellow ribbons—starting at the airport, where every single passenger is routed through a long corridor of hostage posters, each one signed and decorated by the hostage’s friends and family. It sometimes felt as though Yad Vashem had expanded to encompass the entire country. Virtually everyone we talked to wanted to share their stories and opinions about the war, most of all their depression and anger. While there was also plenty of discussion about quantum error mitigation and watermarking of large language models and local family events, no one even pretended to ignore the war.
(2) Having said that, the morning after we landed, truthfully, the first thing that leapt out at me wasn’t anything to do with October 7, hostages, or Gaza. It was the sheer number of children playing outside, in any direction you looked. Full, noisy playgrounds on block after block. It’s one thing to know intellectually that Israel has by far the highest birthrate of any Western country, another to see it for yourself. The typical secular family probably has three kids; the typical Orthodox family has more. (The Arab population is of course also growing rapidly, both in Israel and in the West Bank and Gaza.) New apartment construction is everywhere you look in Tel Aviv, despite building delays caused by the war. And it all seems perfectly normal … unless you’ve lived your whole life in environments where 0.8 or 1.2 children per couple is the norm.
This, of course, has giant implications for anyone interested in Israel’s future. It’s like, a million Israeli leftists could get fed up and flee to the US or Canada or Switzerland, and Israel would still have a large and growing Jewish population—because having a big family is “just what people do” in a state that was founded to defy the Holocaust. In particular: anyone who dreams of dismantling the illegal, settler-colonial, fascist Zionist ethnostate, and freeing Palestine from river to sea, had better have some plan for what they’re going to do with all these millions of young Jews, who don’t appear to be going anywhere.
(3) The second thing I noticed was the heat—comparable to the Texas summer heat that we try to escape when possible. Because of the roasting sun, our own two pampered offspring mostly refused to go outside during daytime, and we mostly met friends indoors. I more than once had the dark thought that maybe Israel will survive Hamas, Hezbollah, Iran, and its own Jewish extremists … only to be finished off in the end (along with much of the rest of the planet) by global warming. I wonder whether Israel will manage to engineer its way out of the crisis, as it dramatically engineered its way out of its water crisis via desalination. The Arab petrostates have been trying to engineer their way out of the Middle East’s increasingly Mercury-like climate, albeit with decidedly mixed results.
(4) But nu, what did our Israeli friends say about the war? Of course it’s a biased sample, because our friends are mostly left-wing academics and tech workers. But, at risk of overgeneralizing: they’re unhappy. Very, very unhappy. As for Bibi and his far-right yes-men? Our friends’ rage at them was truly a sight to behold. American progressives are, like, mildly irked by Trump in comparison. Yes, our friends blame Bibi for the massive security and intelligence failures that allowed October 7 to happen. They blame him for dragging out the war to stave off elections. They blame him for empowering the contemptible Ben-Gvir and Smotrich. They blame him for his failure to bring back the remaining hostages. Most of all, they blame him for refusing even to meet with the hostage families, and more broadly, for evading responsibility for all that he did wrong, while arrogating credit for any victories (like the rescue of Noa Argamani).
(5) One Israeli friend offered to take me along to the giant anti-Bibi rally that now happens every Saturday night in Azrieli Center in Tel Aviv. (She added that, if I left before 9pm, it would reduce the chances of the police arresting me.) As the intrepid blogger-investigator I am, of course I agreed.
While many of the protesters simply called for new elections to replace Netanyahu (a cause that I 3000% support), others went further, demanding a deal to free the hostages and an immediate end to the war (even if, as they understood, that would leave Hamas in power).
Watching the protesters, smelling their pot smoke that filled the air, I was seized by a thought: these Israeli leftists actually see eye-to-eye with the anti-Israel American leftists on a huge number of issues. In a different world, they could be marching together as allies. Except, of course, for one giant difference: namely, the Tel Aviv protesters are proudly waving Israeli flags (sometimes modified to add anti-Bibi images, or to depict the Star of David “crying”), rather than burning or stomping on those flags. They’re marching to save the Israel that they know and remember, rather than to destroy it.
(6) We did meet one ultra-right-wing (and Orthodox) academic colleague. He was virtually the only person we met on this trip who seemed cheerful and optimistic about Israel’s future. He brought me to his synagogue to celebrate the holiday of Shavuot, while he himself stood guarding the door of the synagogue with a gargantuan rifle (his volunteer duty since October 7). He has six kids.
(7) Again and again, our secular liberal friends told us they’re thinking about moving from Israel, because if the Bibi-ists entrench their power (and of course the demographics are trending in that direction), then they don’t see that the country has any worthwhile future for them or their children. Should this be taken more seriously than the many Americans who promise that this time, for real, they’ll move to Canada if Trump wins? I’m not sure. I can only report what I heard.
(8) At the same time, again and again I got the following question from Israelis (including the leftist ones): how bad is the situation for Jews in the US? Have the universities been taken over by militant anti-Zionists, like it shows in the news? I had to answer: it’s complicated. Because I live my life enbubbled in the STEM field of computer science, surrounded by friends and colleagues of many backgrounds, ethnicities, religions, and political opinions who are thoughtful and decent (otherwise, why would they be my friends and colleagues?), I’m able to live a very nice life even in the midst of loud protesters calling to globalize the intifada against my family.
If, on the other hand, I were in a typical humanities department? Yeah, then I’d be pretty terrified. My basic options would be to (a) shut up about my (ironically) moderate, middle-of-the-road opinions on Israel/Palestine, such as support for the two-state solution; (b) live a miserable and embattled existence; or (c) pack up and move, for example to Israel.
An astounding irony right now is that, just as Israeli leftists are talking about moving from Israel, some of my American Jewish friends have talked to me about moving to Israel, to escape a prejudice that they thought died with their grandparents. I don’t know where the grass is actually greener (or is it brown everywhere?). Nor do I know how many worriers will actually follow through. What’s clear is that, both in Israel and in the diaspora, Jews are feeling an existential fear that they haven’t felt for generations.
(9) Did I fear for my own family’s safety during the trip? Not really. Maybe I should have. When we visited Haifa, we found that GPS was scrambled all across northern Israel, to make targeting harder for Hezbollah missiles. As a result, we couldn’t use Google Maps, got completely lost driving, and had to change plans with our friends. For the first time, now I really feel angry at Hezbollah: they made my life worse and it’s personal!
The funniest part, though, was how the scrambling was implemented: when you opened Google Maps anywhere in the north, it told you that you were in Beirut. It then dutifully gave you walking or driving directions to wherever you were going in Israel, passing through Syria close to Damascus (“warning: this route passes through multiple countries”).
(10) The most darkly comical thing that I heard on the entire trip: “oh, no, I don’t object in the slightest if the anti-Zionists want to kill us all. I only object if they want to kill us because of an incorrect understanding of the relevant history.” Needless to say, this was a professor.
(11) After my two-week investigation, what grand insight can I offer about Israel’s future? Not much, but maybe this: I think we can definitively rule out the scenario where Israel, having been battered by October 7, and bracing itself to be battered worse by Hezbollah, just sort of … withers away and disappears. Yes, Israel might get hotter, more crowded, more dangerous, more right-wing, and more Orthodox. But it will stay right where it is, unless and until its enemies destroy it in a cataclysmic war. You can’t scare people away, break their will, if they believe they have nowhere else on the planet to go. You can only kill them or else live next to them in peace, as the UN proposed in 1947 and as Oslo proposed in the 1990s. May we live to see peace.
Anyway, on that pleasant note, time soon to tune in to the Trump/Biden debate! I wonder who these two gentlemen are, and what they might stand for?
(See here for Boaz Barak’s obituary, and here for Lance Fortnow’s—they cover different aspects of Luca’s legacy from each other and from this post. Also, click here to register for a free online TCS4All talk that Luca was scheduled to give, and that will now be given in his memory, this Monday at 3:30pm Eastern time.)
Luca Trevisan, one of the world’s leading theoretical computer scientists, has succumbed to cancer in Italy, at only 52 years old. I was privileged to know Luca for a quarter-century, first as my complexity theory and cryptography professor at UC Berkeley and as a member of my dissertation committee, and then as a friend and colleague and fellow CS theory blogger.
I regret that I learned of the seriousness of Luca’s condition only a few days ago. So yesterday morning I wrote him a farewell email, under the impression that, while he was now in hospice care, he had at least a few more weeks. Alas, he probably never saw it. So I’m hereby making the email into a memorial post, with small changes mostly to protect people’s privacy.
Dear Luca,
Dana, the kids, and I were traveling in Israel for the past two weeks, when I received the shocking and sad news that this might be my last chance to write to you.
At risk of stating the obvious — you had a very large and positive effect on my life and career. Starting with the complexity theory summer school at the Institute for Advanced Study in 2000, which was the first time we met and also the first time I really experienced the glories of complexity at full blast. And then continuing at Berkeley, TA’ing your algorithms class, which you had to cancel on 9/11 (although students still somehow showed up for office hours lugging their CLRS books…), and dealing with that student who obviously cheated on the midterm although I had stupidly given back to her the evidence that would prove it.
And then your graduate complexity course, where I was very proud to get 100% on your exam, having handwritten it on a train while everyone else used LaTeX (which, embarrassingly, I was still learning). I was a bit less proud to present the Razborov-Rudich paper to the class, and to get questions from you that proved that I understood it less thoroughly than I thought. I emerged from your course far better prepared to do complexity theory than when I entered it.
Later I took your cryptography course, where I came to you afterwards one day to point out that with a quantum computer, you could pull out big Fourier coefficients without all the bother of the Goldreich-Levin theorem. And you said sure, but then you would need a quantum computer. Over 20 years later, Goldreich and Levin (and you?) can say with satisfaction that we still don’t have that scalable quantum computer … but we’re much much closer, I swear!
I still feel bad about the theory lunch talk I gave in 2003, on my complexity-theoretic version of Aumann’s agreement theorem, where I used you and Umesh as characters instead of Alice and Bob, and which then led to unintended references to “Luca’s posterior” (probability distribution, I meant).
I also feel bad about delaying so long the completion of my PhD thesis, until well after I’d started my postdoc in Princeton, so that my former officemate needed to meet you on a street corner in San Francisco to sign the signature page the night before the deadline.
But then a few years later, when Avi and I did the algebrization paper, the fact that you seemed to like it mattered more to me than just about anything else.
Thank you for the excellent dinner when I met you some years ago in Rome. Thank you for the Trevisan-Tulsiani-Vadhan paper, which answered a question we had about BosonSampling (and you probably didn’t even know you were doing quantum computing when you wrote that paper!). Thank you for your blog. Thank you for everything you did for me.
I always enjoyed your dry humor, much of which might sadly be lost to time, unless others wrote it down or it’s on YouTube or something. Two examples spring to my mind across the decades:
“From my previous lecture, you may have gotten the impression that everything in derandomization is due to Nisan and Wigderson, but this is not the case: Avi has been working with other people as well.”
After I’d explained that I’d be spending a semester in Jerusalem to work with Avi, despite (at that time) knowing only the most rudimentary Hebrew, such as how to say “please” and “excuse me”: “you mean there are words in Hebrew for ‘please’ and ‘excuse me’?”
Speaking of which, my current trip to Israel has given me many opportunities to reflect on mortality — for all the obvious war-related reasons of course, but also because while we were here, we unexpectedly had to attend two shivas of people in our social circle who died during our trip, one of them from cancer. And we learned about a close friend whose stepson has a brain tumor and might or might not make it. Cancer is a bitch.
Anyway, there’s much more I could write, but I imagine you’re getting flooded with emails right now from all the people whose lives you’ve touched, so I won’t take up more of your time. You’ve made a real difference to the world, to theoretical computer science, and to your friends and colleagues, one that many people would envy.
Best,
Scott
TCS Awards List (Theory Matters): https://thmatters.wordpress.com/tcs-awards-list/

Below is a list of major awards that are relevant to the theoretical computer science community. Included is a brief description of each, a link to the nomination instructions, and examples of researchers from TCS and related fields that won the award in the past.
Honorary Lectures
Lifetime Awards
Major Paper or Book, Set of Papers, etc.
Major Prizes
Mid-Career Awards
Service
Student Prizes
Thesis Prizes
Young Career Awards
Honorary Lectures
John von Neumann Lecture
For outstanding and distinguished contributions to the field of applied mathematical sciences and for the effective communication of these ideas to the community.
Some recipients in TCS: Jennifer Chayes, Donald Knuth, Ingrid Daubechies, Richard Karp
link
Lifetime Awards
Claude Shannon Award
To honor consistent and profound contributions to the field of information theory.
Some recipients in TCS: Robert Calderbank, Abbas El Gamal
link
Richard Hamming Medal
To honor an individual or team, up to three in number, for exceptional contributions to information sciences, systems, and technology.
Some recipients in TCS: Robert Calderbank, Abbas El Gamal, Michael Luby, Whitfield Diffie, Martin Hellman, Ralph Merkle
link
Kyoto Prize
To honor those who have contributed significantly to the scientific, cultural, and spiritual betterment of mankind in the area of advanced technology.
Some recipients in TCS: Laszlo Lovasz, Donald Knuth
link
Simons Investigator
To support outstanding scientists in their most productive years, when they are establishing creative new research directions, providing leadership to the field and effectively mentoring junior scientists.
Some recipients in TCS: Scott Aaronson, Boaz Barak, James Lee, Madhu Sudan, David Zuckerman, Dan Boneh, Subhash Khot, Chris Umans (2-3/year)
link
Vannevar Bush Faculty Fellows program given by DoD (formerly National Security Science and Engineering Faculty Fellowship)
Seeks outstanding researchers to propose basic research that is potentially transformative in topics areas of interest to the DoD.
Some recipients in TCS: Scott Aaronson, Sean Hallgren, Umesh Vazirani
link
INFORMS Khachiyan Prize
Honors an individual or a team for lifetime achievements in the area of optimization; recognizes a sustained career of scholarship from nominees who are still active in the year of the nomination. The prize serves as an esteemed recognition of innovativeness and impact in the area of optimization, including theory and applications.
Some recipients in TCS: Jean Bernard Lasserre
link
ACM Fellow
ACM’s most prestigious member grade recognizes the top 1% of ACM members for their outstanding accomplishments in computing and information technology and/or outstanding service to ACM and the larger computing community.
Some recipients in TCS: Edith Cohen, Shafi Goldwasser, Martin Grohe, Venkatesan Guruswami, Mor Harchol-Balter, Silvio Micali, Tal Rabin, Michael Sipser, R. Ravi, David Peleg, Erik Demaine, Mike Saks, Noga Alon, Monika Henzinger, Cynthia Dwork, Piotr Indyk
link
SIAM Fellow
To honor SIAM members who are recognized by their peers as distinguished for their contributions to the discipline.
Some recipients in TCS: Ronald DeVore, Pablo Parrilo, Robin Thomas, Joel Spencer, Cynthia Phillips, David Williamson, Fan Chung, Charles Leiserson, Moshe Vardi, Dorit Hochbaum, Michel Goemans, Andrew Goldberg, Alan Frieze
link
IEEE Fellow
A distinction reserved for select IEEE members whose extraordinary accomplishments in any of the IEEE fields of interest are deemed fitting of this prestigious grade elevation.
Some recipients in TCS: Emmanuel Candes, Andrea Montanari, Rebecca Wright
link
AMS Fellow
Recognizes members who have made outstanding contributions to the creation, exposition, advancement, communication, and utilization of mathematics.
Some recipients in TCS: Emmanuel Candes, Peter Sarnak, Noga Alon, Michael Sipser, Victor Pan, Jennifer Chayes, Lenore Blum, Michel Goemans, Tom Leighton, Nick Pippenger, Dana Randall, Madhu Sudan, Joel Spencer, Eva Tardos, Prasad Tetali, Alan Frieze
link
IACR Fellow
To recognize outstanding IACR members for technical and professional contributions that: advance the science, technology, and practice of cryptology and related fields; promote the free exchange of ideas and information about cryptology and related fields; develop and maintain the professional skill and integrity of individuals in the cryptologic community; advance the standing of the cryptologic community in the wider scientific and technical world and promote fruitful relationships between the IACR and other scientific and technical organizations.
Some recipients in TCS: Juan Garay, Yuval Ishai, Paul Kocher, Stafford Tavares, Victor Shoup, Joe Kilian, Tal Rabin, Ran Canetti, Eyal Kushilevitz, Moti Yung
link
INFORMS Fellow
For distinguished individuals who have demonstrated outstanding and exceptional accomplishments and experience in operations research and the management sciences (OR/MS).
Some recipients in TCS: R. Ravi, Dick Karp, Nimrod Megiddo, David Shmoys, Eva Tardos
link
CS-Can/Info-Can Lifetime Achievemant Award
Recognize current or former faculty members in Canadian Computer Science Departments, Schools, Faculties who have made outstanding and sustained contributions to computing over their careers. The awards can be for achievement in research, teaching, service, or any combination of these.
Some recipients in TCS: David Kirkpatrick, Ian Munro, Steve Cook, Derek Corneil
link
EATCS Award
Given to acknowledge extensive and widely recognized contributions to theoretical computer science over a life long scientific career.
Some recipients in TCS: Noam Nisan, Eva Tardos, Dexter Kozen, Christos Papadimitriou, Gordon Plotkin, Martin Dyer, Moshe Vardi, Boris (Boaz) Trakhtenbrot, Kurt Mehlhorn
link
Major Paper or Book, Set of Papers, etc.
Godel Prize
For outstanding papers in the area of theoretical computer science is sponsored jointly by the European Association for Theoretical Computer Science (EATCS) and the Special Interest Group on Algorithms and Computation Theory of the Association for Computing Machinery (ACM SIGACT).
Some recipients in TCS: Cynthia Dwork, Frank McSherry, Kobbi Nissim, Adam Smith, Stephen Brookes, Peter W. O’Hearn, Daniel Spielman, Shang-Hua Teng, Ronald Fagin, Amnon Lotem, Moni Naor, Antoine Joux, Dan Boneh, Matthew K. Franklin, Elias Koutsoupias, Christos H. Papadimitriou, Tim Roughgarden, Eva Tardos, Noam Nisan, Amir Ronen
link
Delbert Ray Fulkerson Prize
For outstanding papers in the area of discrete mathematics. The term “discrete mathematics” is interpreted broadly and is intended to include graph theory, networks, mathematical programming, applied combinatorics, applications of discrete mathematics to computer science, and related subjects.
Some recipients in TCS: Francisco Santos, Sanjeev Arora, Satish Rao, Umesh Vazirani, Anders Johansson, Jeff Kahn, Van H. Vu, Laszlo Lovasz, Balazs Szegedy
link
Prize in Game Theory and Computer Science in Honour of Ehud Kalai
Awarded to the person (or persons) who have published the best paper at the interface of game theory and computer science in the last decade.
Some recipients in TCS: Tim Roughgarden, Christos Papadimitriou, Costis Daskalakis, Paul Goldberg
link
ACM SigEcom Test of Time Award
Recognizes the author or authors of an influential paper or series of papers published between ten and twenty-five years ago that has significantly impacted research or applications exemplifying the interplay of economics and computation.
Some recipients in TCS: Gagan Aggarwal, Ashish Goel, Rajeev Motwani
link
INFORMS Computing Society Prize
For the best English language paper or group of related papers dealing with the Operations Research/Computer Science interface.
Some recipients in TCS: David Johnson, Pablo Parrilo, Peter Shor
link
INFORMS Applied Probability Society Best Publication Award
Recognizes outstanding contributions (a book, paper, or set of papers) to Applied Probability.
Some recipients in TCS: Andrea Montanari, Devavrat Shah, David Gamarnik
link
INFORMS Applied Probability Society Best Student Paper Prize
To identify and honor outstanding papers in the field of applied probability that are written primarily by a student. We define applied probability broadly, as any paper related to the modeling, analysis, and control of stochastic systems.
Some recipients in TCS: Guido Lagos
link
SIAM SIAG/Optimization Prize
Awarded to the author(s) of the most outstanding paper, as determined by the prize committee, on a topic in optimization published in English in a peer-reviewed journal.
Some recipients in TCS: Michel Goemans, David Williamson, Yinyu Ye
link
SIAM George Polya Prize for Mathematical Exposition
Awarded every two years to an outstanding expositor of the mathematical sciences. The prize may be awarded for a specific work or for the cumulative impact of multiple expository works that communicate mathematics effectively.
Some recipients in TCS: Nick Trefethen
link
SIAM George Polya Prize in Mathematics
Awarded every four years for a significant contribution, as evidenced by a refereed publication, in an area of mathematics of interest to George Pólya not covered by the George Pólya Prize in Combinatorics or the George Pólya Prize for Mathematical Exposition.
Some recipients in TCS: Emmanuel Candes, Terence Tao, Adam Marcus, Daniel Spielman, Nikhil Srivastava
link
SIAM George Polya Prize in Combinatorics
Awarded every four years for a notable application of combinatorial theory. The prize is broadly intended to recognize specific recent work.
Some recipients in TCS: Van Vu, Neil Robertson, Paul Seymour, Noga Alon
link
SIAM SIAG/CST Best SICON Paper Prize
Awarded to the author(s) of the two most outstanding papers, as determined by the prize committee, published in the SIAM Journal on Control and Optimization (SICON) in the three calendar years before the award year.
Some recipients in TCS: Bernard Chazelle, Pablo Parrilo
link
SIAM Ralph E. Kleinman Prize
Awarded to one individual for outstanding research, or other contributions, that bridge the gap between mathematics and applications. Work that uses high-level mathematics and/or invents new mathematical tools to solve applied problems from engineering, science, and technology is particularly appropriate.
Some recipients in TCS: Emmanuel Candes, Anna Gilbert
link
AMS Levi L. Conant Prize
To recognize the best expository paper published in either the Notices of the AMS or the Bulletin of the AMS in the preceding five years.
Some recipients in TCS: Henry Cohn, Shlomo Hoory, Nati Linial, Avi Wigderson
link
Michael and Sheila Held Prize from the National Academy of Sciences
Honors outstanding, innovative, creative, and influential research in the areas of combinatorial and discrete optimization, or related parts of computer science, such as the design and analysis of algorithms and complexity theory. This $100,000 prize is intended to recognize recent work (defined as published within the last eight years).
Some recipients in TCS: Prasad Raghavendra and David Steurer
link
Edsger W. Dijkstra Prize in Distributed Computing
The prize is given for outstanding papers on the principles of distributed computing, whose significance and impact on the theory and/or practice of distributed computing has been evident for at least a decade.
Some recipients in TCS: Elizabeth Borowsky, Eli Gafni, Noga Alon, Laszlo Babai, Alon Itai, Michael Luby, Michael Ben-Or, Michael Rabin
link
The Alonzo Church Award for Outstanding Contributions to Logic and Computation
The prize is awarded for an outstanding contribution represented by a paper or small group of papers within the past 25 years. This time span allows the contribution to have established evidence of lasting impact and depth.
Some recipients in TCS: Tomas Feder, Moshe Vardi,
link
Major Prizes
Kanellakis Prize
Honors specific theoretical accomplishments that have had a significant and demonstrable effect on the practice of computing.
Some recipients in TCS: Amos Fiat, Moni Naor, Michael Luby, James Demmel
link
Knuth Prize
For outstanding contributions to the foundations of computer science is awarded for major research accomplishments and contributions to the foundations of computer science over an extended period of time.
Some recipients in TCS: Oded Goldreich, Noam Nisan, Laszlo Babai, Richard Lipton, Gary Miller
link
IMU Nevanlinna Prize
Awarded once every 4 years at the International Congress of Mathematicians, for outstanding contributions in Mathematical Aspects of Information Sciences.
Some recipients in TCS: Subhash Khot, Daniel Spielman, Jon Kleinberg
link
Turing Award
To recognize contributions of lasting and major technical importance to the computing field.
Some recipients in TCS: Whitfield Diffie, Martin Hellman, Shafi Goldwasser, Silvio Micali
link
Mid-Career Awards
Shapley Lecture
Given at each World Congress of the Game Theory Society by a distinguished game theorist aged 40 or under at the time of the Lecture.
Some recipients in TCS: Tim Roughgarden
link
INFORMS Farkas prize
Awarded annually to a mid-career researcher for outstanding contributions to the field of optimization, over the course of their career. Such contributions could include papers (published or submitted and accepted), books, monographs, and software. The awardee will be within 25 years of their terminal degree as of January 1 of the year of the award.
Some recipients in TCS: Pablo Parrilo, Michel Goemans
link
ACM Grace Murray Hopper Award
Awarded to the outstanding young computer professional of the year, selected on the basis of a single recent major technical or service contribution. The candidate must have been 35 years of age or less at the time the qualifying contribution was made.
Some recipients in TCS: Craig Gentry, Omer Reingold, Tim Roughgarden, Brent Waters
link
EATCS Presburger Award
To a young scientist (in exceptional cases to several young scientists) for outstanding contributions in theoretical computer science, documented by a published paper or a series of published papers. Nominated scientists must be at most 35 years old at the time of the deadline of nomination.
Some recipients in TCS: Mark Braverman, Xi Chen, Erik Demaine, Venkatesan Guruswami, Mihai Patrascu, Alexandra Silva, David Woodruff
link
NSF Alan T. Waterman Award
The National Science Foundation’s (NSF) annual Alan T. Waterman Award honors an outstanding young U.S. scientist or engineer. The awardee receives a grant of $1 million over five years for scientific research or advanced study in any field of science, plus a medal and other recognition. Public Law 94-86 of the 94th Congress established the Waterman Award in 1975 to mark the 25th anniversary of the NSF and to honor its first director, Alan T. Waterman. Criteria: a candidate must be a U.S. citizen or permanent resident. He or she must be 40 years of age or younger, OR not more than 10 years beyond receipt of the Ph.D. degree, by December 31st of the year in which they are nominated. The candidate should have demonstrated exceptional individual achievements in scientific or engineering research of sufficient quality to be placed at the forefront of his or her peers. Criteria also include originality, innovation and a significant impact on the individual’s field.
Some recipients in TCS: Scott Aaronson, Subhash Khot, Herbert Edelsbrunner, Emmanuel Candes, Mung Chiang
link
ACM Prize in Computing
The award recognizes an early to mid-career fundamental innovative contribution in computing that, through its depth, impact and broad implications, exemplifies the greatest achievements in the discipline.
Some recipients in TCS: Dina Katabi, Alexei A. Efros, Stefan Savage, Dan Boneh
link
Service
ACM SIGACT Distinguished Service Prize
Awarded to an individual who has made substantial service contributions to the Theoretical Computer Science community.
Some recipients in TCS: Alistair Sinclair, Laszlo Babai, Avi Wigderson, Lance Fortnow
link
INFORMS Computing Society Harvey J. GreenBerg Service Award
Given to an individual in recognition of their impact on the INFORMS Computing Society (ICS).
Some recipients in TCS: Richard S. Barr
link
SIAM Prize for Distinguished Service to the Profession
Awarded to an applied mathematician who has made distinguished contributions to the furtherance of applied mathematics on the national or international level.
Some recipients in TCS: John Hopcroft
link
EATCS Fellow
Awarded to a person having a track record of intellectual and organizational leadership within the EATCS community. Fellows are expected to be “model citizens” of the TCS community, helping to develop the standing of TCS beyond the frontiers of the community.
Some recipients in TCS: Jim Adamek, Susanne Albers, Artur Czumaj, Mike Fellows, Monika Henzinger, Thomas Henzinger, Giuseppe Italiano, Stefano Leonardi, Kurt Mehlhorn, Paul Spirakis, Aravind Srinivasan, Moshe Vardi, Moti Yung
link
Student Prizes
INFORMS George Nicholson Student Paper Competition
To identify and honor outstanding papers in the field of operations research and the management sciences written by a student.
Some recipients in TCS: Andrew Li, Bradley Sturt, Ramesh Johari
link
INFORMS ICS Student Paper Prize
Is given annually to the best paper on computing and operations research by a student author.
Some recipients in TCS: Guido Lagos
link
INFORMS ICS Student Paper Award
Is given annually to the best paper on computing and operations research by a student author.
Some recipients in TCS: Berk Ustun
link
SIAM Frank and Brennie Morgan Award
Awarded each year to an undergraduate student (or students for joint work) for outstanding research in mathematics.
Some recipients in TCS: Jacob Fox, Daniel Kane
link
SIAM Student Paper Prize
Awarded every year to the student author(s) of the most outstanding SIAM paper(s) submitted to the SIAM Student Paper Competition.
Some recipients in TCS: Emanuele Viola
link
SIAM Student Travel Awards
Given to help students gain the experience and exposure that comes from attending and presenting at SIAM conferences. The travel awards are intended to help students defray some of the costs of their attendance.
Awarded for SIAM conferences each year
link
Thesis Prizes
Mathematical Optimization Society Tucker Prize
Awarded at each International Symposium on Mathematical Programming for an outstanding doctoral thesis.
Some recipients in TCS: Daniel Dadush, Oliver Friedmann, Mohit Singh, Uday Shanbhag, Tim Roughgarden, Andrew Goldberg, Fabian Chudak
link
INFORMS Dantzig Prize
Given for the best dissertation in any area of operations research and the management sciences that is innovative and relevant to practice.
Some recipients in TCS: Negin Golrezaei, Ian Osband, Devavrat Shah, Alberto Caprara
link
ACM SIGecom Doctoral Dissertation Award
Recognizes an outstanding dissertation in the field of economics and computation.
Some recipients in TCS: Aviad Rubinstein, Inbal Talgam-Cohen, Matt Weinberg, Balu Sivan
link
ACM Doctoral Dissertation Award
Presented annually to the author(s) of the best doctoral dissertation(s) in computer science and engineering.
Some recipients in TCS: Haitham Hassanieh, Aaron Sidford, Sanjam Garg, John Duchi
link
EATCS Distinguished Dissertation Award
Given to promote and recognize outstanding dissertations in the field of theoretical computer science.
Some recipients in TCS: Bas Ketsman, Ilya Razenshteyn, Aviad Rubinstein, Vincent Cohen-Addad, Mika Goos, Steen Vester, Radu Curticapean, Heng Guo, Georg Zetzsche, Karl Bringmann, Michal Skrzypczak, Mary Wootters
link
Young Career Awards
Presidential Early Career Awards for Scientists and Engineers (PECASE)
To recognize and honor outstanding scientists and engineers at the outset of their independent research careers.
Some TCS recipients: Scott Aaronson, Sean Hallgren, Adam Smith
link
SIAM Richard C. DiPrima Prize
Awarded to an early career researcher who has done outstanding research in applied mathematics (defined as those topics covered by SIAM journals) and who has completed his/her doctoral dissertation and completed all other requirements for his/her doctorate during the period running from three years prior to the award date to one year prior to the award date.
Some recipients in TCS: David Williamson
link
INFORMS Young Researchers Prize
Awarded annually at the fall INFORMS Annual Meeting to one or more young researcher(s) for an outstanding paper in optimization that is published in, or submitted to and accepted by, a refereed professional journal within the four calendar years preceding the year of the award.
Some recipients in TCS: Kamal Jain, Tim Roughgarden
link
INFORMS Applied Probability Society Erlang Prize
A single prize will be awarded once every two years during even calendar years by the Applied Probability Society (APS) of INFORMS to an early career researcher who has made a significant contribution to applied probability.
Some recipients in TCS: Devavrat Shah
link
SIAM Early Career Travel Awards
Support for early career participants affiliated with U.S. institutions for travel to SIAM conferences through a grant from the U.S. National Science Foundation (NSF).
Awarded for SIAM conferences each year
link
SIAM Denes Konig Prize
Awarded biennially to an early career researcher or early career researchers for outstanding research, as determined by the prize committee, in an area of discrete mathematics, based on a publication by the candidate(s) in a peer-reviewed journal published in the three calendar years prior to the year of the award.
Some recipients in TCS: Adam Wade Marcus, Zeev Dvir, Jacob Fox
link
CS-Can/Info-Can Outstanding Young Computer Science Researcher Award
Recognize excellence in research, and are made to top young faculty members in Canadian Computer Science Departments, Schools, Faculties who are within the first ten years of their career beyond the completion of their PhD.
Some recipients in TCS: Lap Chi Lau, Nick Harvey, Kevin Leyton-Brown
link
Maintained by David Woodruff (dwoodruf@cs.cmu.edu) | ||||
Jews in Computer & Information Science (https://www.jinfo.org/Computer_Info_Science.html)

This section concerns contributions to the development of information science and technology at its logical (as opposed to its hardware) level. Specifically, this section deals with areas such as computation theory, artificial intelligence, the statistical theories of information, communication, and systems control, cryptography, operations research, computer and network architectures, and algorithm and software design. The general level of this contribution is reflected in the current ~45% Jewish membership in the Computer and Information Sciences division of the US National Academy of Sciences and in the percentages of Jewish recipients shown below for several of the most prestigious awards in the field. Two of the four individuals generally recognized to have been the principal architects of the Information Age were the Jewish mathematicians Norbert Wiener and John von Neumann.1
Some of the more notable Jewish contributions are listed below. (The names of non-Jewish scientists and engineers mentioned in the accompanying discussion have been denoted with the superscript "+" in order to avoid confusion.)
The interpretation of thermodynamic entropy as an information metric by Leo Szilard. Szilard's 1929 analysis of the Maxwell's demon paradox "is now considered to be the earliest known paper in what became the field of 'information theory' in the 1950s and 1960s." 2 Other important information metrics were formulated by John von Neumann, Alfréd Rényi, Solomon Kullback, and Richard Leibler. The von Neumann entropy, e.g., is the quantum generalization of Szilard's classical information measure and is one of the fundamental concepts in quantum information theory.
The introduction of the diagonal argument proof method by Georg Cantor*. This method is central to the derivation of the incompleteness and noncomputability results of Gödel+, Turing+, Church+, and Post that lie at the foundation of theoretical computer science. In a 1936 paper, Emil Post described a mechanical definition of computation, known as the Post machine, which is equivalent to the Turing machine introduced by Alan Turing+ in a paper that appeared several months later. Post had understood the undecidability implications of such a definition as early as 1921, but had hesitated to publish and lost priority to Gödel+, who approached the problem from a very different perspective in his 1931 paper. Post was also one of the four principal founders of the theory of recursive functions, which is of immense importance in theoretical computer science.3
The logical design of Colossus, the first all-electronic, digital, programmable computer by Max Newman*. Although Colossus was not a general-purpose computer and had only limited programmability, it represented an important milestone. Newman, a Cambridge University professor of mathematics, headed the "Newmanry," a special code-breaking unit at Bletchley Park in England during World War II. In this capacity, he formulated the logical design of a machine to mechanize the cryptanalysis of the German Lorenz Cipher, which was used by the Nazi high command to encrypt its highest priority communications. The first instantiation of this machine, called Heath Robinson, was based on a combination of electromechanical, electromagnetic-relay, and vacuum tube switches. Tommy Flowers+, who had worked on the engineering design of Heath Robinson, subsequently argued, and then brilliantly demonstrated, that a much faster and more reliable version of the machine could be realized all-electronically, using just vacuum tubes. The resulting Colossus machines played a critical role in securing Allied victory in Europe and were influential in the post-war development of computers in England.4 (Contrary to what is sometimes claimed, Alan Turing+, who was Newman's protégé, had relatively little direct involvement with Colossus, although his ideas were extremely influential. Newman later declined an OBE appointment in protest against the treatment accorded Turing+ by the postwar British government.)
The design of the logical architecture employed in virtually all modern computers by John von Neumann. Von Neumann's 1946 paper "Preliminary Discussion of the Logical Design of an Electronic Computing Instrument" has been described as "the most influential paper in the history of computer science ... the ideas it contains, collectively known as the von Neumann machine, have provided the foundation for essentially all computer system developments since that date."5 The machine that von Neumann designed and had constructed at the Institute for Advanced Study (IAS) in the late 1940s was widely replicated in the development of many other early computer systems, including SEAC (US National Bureau of Standards), ILLIAC (University of Illinois), ORDVAC (Aberdeen Proving Ground), JOHNNIAC (RAND Corporation), MANIAC (Los Alamos National Laboratory), AVIDAC (Argonne National Laboratory), ORACLE (Oak Ridge National Laboratory), WEIZAC (Weizmann Institute of Science), and the IBM 701 (IBM's first mass-produced, commercial mainframe computer).6 The IAS computer project was intended by von Neumann primarily to demonstrate the utility of computers in the solution of scientific and engineering problems, an objective at which it indeed succeeded, creating the field of modern numerical weather prediction and solving some of the most difficult problems in radiation hydrodynamics associated with the design of thermonuclear weapons. Von Neumann invented the computerized random number generator and co-invented the Monte Carlo method. He also invented the theory of system fault tolerance and the cellular automata model of computation. The universal von Neumann constructor, a generalization of the universal Turing machine that emerged out of von Neumann's theory of self-reproducing automata, is one of the foundational concepts in the theoretical study of the biomolecular nanotechnology of living systems.
The invention of parallel supercomputing architectures by Stephen Unger, Daniel Slotnick, David Schaefer, and Włodzimierz Holsztyński. Unger, Slotnick, Schaefer, and Holsztyński are four of the "eight men [who] dominate the history of SIMD computer architectures." 7 SIMD (single instruction, multiple data) refers to the basic parallel processing technique employed in the earliest supercomputers.8 Unger was the first to propose and explore such architectures in the late 1950s. Slotnick designed SOLOMON in the early 1960s and built the first parallel processing prototypes. He was later the architect of Illiac IV, the first important parallel supercomputer, which had up to 256 processing elements. Built with 64 processing elements in the early 1970s with ARPA (now DARPA) funding and operated by NASA, Illiac IV remained the world's fastest computer until its shutdown in 1981. In the late 1970s and early 1980s, Schaefer initiated and managed the development of NASA's Massively Parallel Processor (MPP), the first truly massively parallel supercomputer, with 16,384 processing elements. Holsztyński designed the Geometric-Arithmetic Parallel Processor (GAPP) in 1981. GAPPs with hundreds of thousands of processing elements are used today in real-time video image processing applications such as image enhancement and noise reduction, video data compression, and format and frame rate conversion.
The co-discovery of NP-completeness by Leonid Levin. Levin and Stephen Cook+ independently discovered and proved what is now referred to as the Cook-Levin theorem, the central result concerning the P = NP? question, which is the major open problem in theoretical computer science. Richard Karp introduced the terms "P" and "NP" and defined NP-completeness (although not the term itself) in its present form. He also identified the decision problem formulations of many well-known, combinatorially intractable problems as being NP-complete. Levin, Karp, and Manuel Blum are considered to be three of the six founders of the field of computational complexity theory.
The invention of context-free languages by Noam Chomsky. This work was based on Emil Post's theory of production systems in mathematical logic. It is the basis of the BNF notation widely used to specify the syntax rules of programming languages. Chomsky's hierarchical classification of formal languages initiated the field of formal language theory in computer science.
The co-invention of BASIC by John Kemeny. Kemeny and Thomas Kurtz+ developed this popular programming language. At least one-third of the nine-person team that developed FORTRAN under John Backus+ at IBM were Jewish. Also at IBM, Adin Falkoff collaborated with Kenneth Iverson+ on the design and development of the array processing language APL (acronym for "A Programming Language"). Four of the six principal designers of COBOL, Howard Bromberg, Norman Discount, Jean Sammet, and William Selden, were Jewish. COBOL was used to run an estimated 80% of the world's business systems at the turn of the century. Although its usage has been in gradual decline since then, it is still widely employed in business, financial, and government systems. Ada, an advanced programming language adopted by the US Department of Defense as its standard high-level computer programming language in the 1980s and 1990s, was designed by Jean Ichbiah. LISP, the second-oldest high-level programming language still in use (primarily in artificial intelligence research), was invented by John McCarthy* in 1958. Barbara Liskov was awarded the 2008 ACM Turing Award for fundamental advances in programming language design. The ACM press release noted that her innovations "are now the basis of every important programming language since 1975, including Ada, C++, Java, and C#."
The invention of the MINIX operating system by Andrew Tanenbaum. MINIX was the precursor to, and inspiration for, the widely used Linux operating system.
The invention of the computer spreadsheet by Dan Bricklin and Robert Frankston. Bricklin and Frankston's VisiCalc spreadsheet was the first "killer app." The Lotus 1-2-3 spreadsheet program, the most successful software product of its time, was developed by Jonathan Sachs and Mitchell Kapor.
The invention of the computerized word processor by Evelyn Berezin. Berezin, who was inducted into the National Inventors Hall of Fame in 2022, invented the Data Secretary, the first computerized word processor, in 1971. She is also credited with developing the first office computer, the first computerized banking system, and the United Airlines computerized reservation system, the largest data processing system of its time. Other important word processors were designed by Charles "Nick" Corfield (Adobe FrameMaker) and Richard Brodie* (Microsoft Word).
The co-founding of the field of artificial intelligence (AI) by Marvin Minsky, Herbert Simon*, and John McCarthy*. (Allen Newell+ is also considered to have been one of AI's four principal founders.9) Six of the ten inductees into the IEEE Computer Society's Intelligent Systems Magazine AI Hall of Fame are, or were, Jewish or of Jewish descent (Noam Chomsky, Edward Feigenbaum, John McCarthy*, Marvin Minsky, Judea Pearl, and Lotfi Zadeh*). Major approaches to machine learning, the now dominant approach to artificial intelligence, were pioneered by Jews, including Bayesian Networks (Judea Pearl), Support Vector Machines (Vladimir Vapnik and Alexey Chervonenkis), Deep Learning (Frank Rosenblatt and Yoshua Bengio, together with Geoffrey Hinton+ and Yann LeCun+), Evolutionary Computing (Lawrence Fogel), and the Probably Approximately Correct (PAC) Model (Leslie Valiant).10 Minsky, Simon, McCarthy, Feigenbaum, Pearl, Bengio, and Valiant have received seven of the eleven Turing Awards given thus far for work in artificial intelligence.
The creation of ChatGPT by Ilya Sutskever. Sutskever, a co-founder and the chief scientist of OpenAI, is the principal designer of the large language model GPT-4, on which ChatGPT is based. The extraordinary ability of this deep learning AI language model to "understand" textual input and generate textual output that is both syntactically and semantically precise has shocked even many AI researchers.
The development of computer algebra (symbol manipulation) programs by Jean Sammet (FORMAC), Carl Engelman (MATHLAB), Joel Moses (MACSYMA), and Stephen Wolfram (Mathematica).
The invention of reversible computation theory by Rolf Landauer. Reversible computation circumvents the thermodynamic limits on irreversible computation established by John von Neumann, and is one of the foundations of quantum computing. The ballistic architecture, or Fredkin gate, model of reversible computation was introduced by Edward Fredkin.
The invention of quantum computing by Paul Benioff, Richard Feynman, Yuri Manin,* and David Deutsch.
The invention of DNA computing by Leonard Adleman.
The invention of fuzzy logic by Max Black and Lotfi Zadeh* (independently).
The invention of algorithmic complexity by Ray Solomonoff. Also termed Kolmogorov complexity or algorithmic information theory, Solomonoff's 1964 work was later arrived at independently by Andrei Kolmogorov+ (1965) and Gregory Chaitin (1969).
The invention of the Monte Carlo method by Stanislaw Ulam and John von Neumann. This statistical numerical method is one of the cornerstones of computer simulation science. Von Neumann invented the first computer-based random number generator for use in Monte Carlo simulations. The so-called Metropolis Monte Carlo algorithm, widely used in statistics and computational physics, was largely devised by Marshall Rosenbluth, based in part on ideas from Edward Teller and John von Neumann.11
The invention of randomized algorithms by Michael Rabin. Such algorithms employ Monte Carlo methods to provide efficiently computable solutions that are correct with high (but less than one hundred percent) probability to many problems whose exact solution is computationally intractable. Rabin's probabilistic primality testing, e.g., is essential to the practical implementation of RSA public-key cryptography.
The invention of the SIMPLEX linear programming algorithm by George B. Dantzig. Linear programming (LP), invented independently by Dantzig and Leonid Kantorovich, is a powerful optimization technique that is widely used in economics and engineering. It has been estimated that, aside from database operations such as sorting and searching, LP consumes more computer time than any other mathematical procedure.12 The SIMPLEX algorithm remains LP's fundamental numerical solution technique.
The invention of the ellipsoid method of convex optimization by Naum Shor and, independently, by Arkadi Nemirovski and David Yudin. This technique, which was successfully employed by Leonid Khachiyan+ to prove the polynomial-time complexity of linear programming, underlies most modern results concerning the computational complexity of convex optimization programs. The ellipsoid method provided the first effective solver for semidefinite programs (which are encountered in many engineering applications) and has led to significant advances in combinatorial optimization.
The invention or co-invention of five of CiSE's "Top Ten Algorithms of the Century" by Stanislaw Ulam, John von Neumann, Marshall Rosenbluth, Edward Teller, George Dantzig, Leonid Kantorovich, Cornelius Lanczos, I. J. Good, Leslie Greengard, and Vladimir Rokhlin, Jr. The January/February 2000 issue of Computing in Science & Engineering, a joint publication of the American Institute of Physics and the IEEE Computer Society, assembled a list of "the ten algorithms with the greatest influence on the development and practice of science and engineering in the 20th century." In addition to the Monte Carlo method and the SIMPLEX algorithm discussed above, the top ten algorithms included the Krylov subspace iteration method for the solution of large systems of linear equations (Lanczos, together with Magnus Hestenes+ and Eduard Stiefel+), the Fast Fourier Transform (FFT) (Lanczos, together with G. C. Danielson+ in 1942, and independently by I. J. Good in 1958 and by James Cooley+ and John Tukey+ in 1965),13 and the fast multipole algorithm for the solution of many-body problems (Greengard and Rokhlin).
The invention of the Wiener filter by Norbert Wiener. The Wiener filter is an optimal filter for extracting signals from noise in stationary stochastic systems and is one of the central results in statistical communication theory, a field pioneered by Wiener. (A version of the Wiener filter was also formulated independently by Andrei Kolmogorov+.) The Kalman filter, the nonlinear, recursive extension of the Wiener filter to nonstationary systems for use in tracking and guidance, was first formulated by Peter Swerling in 1959.14 Wiener and Alexander Khinchine independently derived the Wiener-Khinchine theorem, another central result in statistical communication theory.
The invention of statistical decision theory by Abraham Wald. Among other applications, statistical decision theory plays an important role in radar, control, and communication. Its minimax decision rules derive from John von Neumann's theory of optimal strategies (theory of games).
The invention of dynamic programming by Richard Bellman. This procedure solves sequential, or multi-stage, decision problems and is one of the foundations of modern control theory. It also constitutes the basis for many powerful algorithms, including the backpropagation algorithm used to train neural networks in machine learning and the Viterbi algorithm, invented by Andrew Viterbi, that is used to decode convolutional codes employed in error correction and in CDMA and GSM digital cellular telephony.
The co-invention of public-key cryptography by Martin Hellman. Hellman and Whitfield Diffie+ devised the Diffie-Hellman algorithm for secure key distribution over nonsecure channels.
The co-invention of RSA by Adi Shamir and Leonard Adleman. RSA (which is named for its three co-inventors, Shamir, Adleman, and Ronald Rivest+) is the most widely used public-key algorithm.
The invention of elliptic curve cryptography (ECC) by Neal Koblitz and Victor S. Miller (independently). Based on concepts rooted in algebraic geometry, ECC is widely deployed as the leading successor to RSA. Relative to RSA, it provides greater cryptographic strength with much smaller cryptovariables. (It does, however, share the same potential vulnerability to quantum computational attack.)
The formulation of the learning with errors (LWE) problem and encryption system by Oded Regev. The application of LWE, a computationally "hard" problem, has led to revolutionary developments in cryptography. CRYSTALS-Kyber, the recent finalist in the US National Institute of Standards and Technology's six-year international competition to find a quantum-resistant, or "post-quantum," algorithm standard for general encryption, is based on LWE. CRYSTALS-Dilithium, one of the three algorithms selected to become quantum-resistant digital signature standards, is also based on LWE. (FALCON, one of the other two digital signature finalists, is based on the NTRU cryptographic system, which was invented by Joseph Silverman, Jeffrey Hoffstein, and Jill Pipher+.) LWE is also the basis for the recent breakthrough in the construction of efficient, fully homomorphic encryption (FHE) schemes by Zvika Brakerski, Craig Gentry,+ and Vinod Vaikuntanathan+. FHE techniques permit the processing of encrypted data without the need to first decrypt it, thus permitting the secure processing of sensitive data on non-secure computing platforms.
The invention of quantum cryptography by Stephen Wiesner. Although quantum key distribution was invented in the mid-1980s by others, it was specifically acknowledged to have been inspired by Wiesner's circa 1970 work that established the basic principles underlying the use of quantum mechanics to achieve information security.
The development of mathematical and statistical cryptanalysis by William Friedman. Friedman's innovations are ranked amongst the greatest in the history of cryptology; he supervised the breaking of the Japanese diplomatic code PURPLE in 1940 and directed US cryptanalysis during World War II. Other important World War II cryptologists included Solomon Kullback, Leo Rosen, and Abraham Sinkov in the US and Max Newman*, I.J. Good, and Leo Marks in England. Newman and Good were instrumental in the design of Colossus, which was used to break the Lorenz cipher employed by the German high command. Marks, the chief cryptologist of the Special Operations Executive (SOE) of MI6, revolutionized the one-time pad.
The invention of cryptocurrency by David Chaum. In his seminal 1982 paper, Chaum established the concept of secure digital cash, the first cryptocurrency. Chaum's invention anticipated by several decades the now widespread recognition of the issue of electronic privacy in financial and other online transactions.
The co-invention of blockchain by David Chaum and, independently, by Stuart Haber (together with W. Scott Stornetta+). Chaum gave the first full technical description of a blockchain in his 1982 doctoral dissertation, but never published the concept in a journal or conference proceeding. It, therefore, went largely unnoticed. In a 1991 paper, Haber and Stornetta+ introduced another embodiment of this concept for a shared, tamper-proof, decentralized transaction ledger. Their company, Surety Technologies, founded in 1994, created the first and longest running commercial blockchain. Blockchain technology is widely claimed to have the potential to revolutionize e-commerce (and more) by greatly increasing the efficiency, confidentiality, and trust with which online transactions are conducted and recorded.
The invention of convolutional codes by Peter Elias. Important decoding algorithms for these error correction codes were invented by Barney Reiffen, Robert Fano, and Andrew Viterbi.
The co-invention of the Reed-Solomon error correction code by Gustave Solomon. Reed-Solomon and Viterbi- or Fano-decoded convolutional codes, or hybrid concatenations of the two, are probably the most widely used error correction techniques at present.
The invention of the LZ data compression algorithm by Jacob Ziv and Abraham Lempel. Although LZ coding was not the first data compression technique (the first such technique having been invented, independently, by Robert Fano and Claude Shannon+), it is today the most widely used in commercial systems. It underpins PDF, GIF, TIFF, ZIP, and other widely used file formats.
The development of automated, electronically switched telephone networks by Amos Joel. Joel received both the 1989 Kyoto Prize ("Japan's Nobel Prize") and the 1993 US National Medal of Technology for work that revolutionized telephone switching systems worldwide. Joel's 1972 US Patent No. 3,663,762, "Mobile Communication System," is the basis of the switching technology that made cellular telephone networks possible.
The co-invention of spread spectrum communications by Hedy Lamarr. Lamarr (the Hollywood actress) and George Antheil+ (a Hollywood composer) received US Patent No. 2,292,387, "Secret Communication System," in 1942 for the invention of frequency-hopped spread spectrum. The digital form of spread spectrum that is widely used in cellular communications (CDMA) was developed by Qualcomm, a company founded by the information theorists Irwin Jacobs and Andrew Viterbi. Jacobs received the US National Medal of Technology in 1994 and Viterbi received the US National Medal of Science in 2007. Both were recognized for their pioneering innovations in digital wireless communications. Joel Engel also received the Medal of Technology in 1994 as one of the two "fathers of the cellular phone" for his work on the development of the basic network architecture used worldwide in cellular telephony. (The cell phone itself, as opposed to cellular switching fabrics, etc., was invented by Martin Cooper. Precursor technologies, such as the "walkie-talkie," the pager, the cordless telephone, and CB radio, were largely pioneered by Irving "Al" Gross.)
The co-invention of the Internet by Leonard Kleinrock, Paul Baran, Vinton Cerf,* and Robert Kahn. Together with Kleinrock, Baran, Cerf, and Kahn, Donald Davies+ and Lawrence Roberts+ are the six individuals most frequently cited as principal inventors of the Internet. Kleinrock, Cerf, Kahn, and Roberts+ were awarded the US National Academy of Engineering's half-million dollar Draper Prize in 2001 "for the development of the Internet." Baran, Kleinrock, Davies+, and Roberts+ received the first IEEE Internet Award in 2000 for "their early, preeminent contributions in conceiving, analyzing and demonstrating packet-switching networks, the foundation technology of the Internet." Cerf, Kahn, and Baran received US National Medals of Technology, the former two in 1997 and the latter in 2007. Kleinrock was awarded the US National Medal of Science in 2007. Cerf and Kahn co-invented the TCP/IP protocol for integration of heterogeneous networks, which is the basis of the Internet's "inter-networking" architecture. They shared the 2004 ACM Turing Award for this work, and in 2005 each received the US Presidential Medal of Freedom.
The invention of Alohanet (precursor to Ethernet) by Norman Abramson. Alohanet was a packet-switched research network that solved the major problem of packet interference, or "packet collision." Alohanet was further developed by Robert Metcalfe,+ working at the Xerox Palo Alto Research Center, into Ethernet (which Metcalfe+ originally called the Alto Aloha network), the standard method used in local area computer networking. Radia Perlman's spanning tree protocol, which solved the problem of broadcast storms due to network switching loops, was the critical enabler that allowed Ethernet to realize high levels of robust network complexity.
The invention of Google by Sergey Brin and Larry Page*. The algorithm behind Google, the most powerful and widely used search engine on the Internet, employs an adaptation of the citation frequency "impact factor" metric originally invented in the 1950s by Eugene Garfield to rank the relative influence of scientific researchers, articles, and journals. A search algorithm very similar to Google PageRank, called HITS (Hypertext Induced Topic Search), was devised almost simultaneously by Jon Kleinberg at IBM. In his papers, Kleinberg credited the 1976 mathematical work of Gabriel Pinski and Francis Narin, as does the PageRank patent. Pinski and Narin had shown how to formulate and compute Garfield's relative influence in terms of a graph theoretic matrix eigenvalue problem. Similar mathematical techniques for calculating the relative influence of individuals in social networks or of production sectors in national economies can be found, respectively, in the 1953 work of statistician Leo Katz and in the 1941 work on input-output analysis by the Nobel Prize winning economist Wassily Leontief.*
NOTES
1. According to the prominent historian of technology George Dyson, "there were four essential prophets whose mathematics brought us into the Information Age: Norbert Wiener, John von Neumann, Alan Turing and Claude Shannon." See "The Elegance of Ones and Zeroes," by George Dyson in The Wall Street Journal, 21 July 2017.
2. See Genius in the Shadows: A Biography of Leo Szilard, by William Lanouette (Scribner's, New York, 1992, p. 63).
3. See "Emil Post and His Anticipation of Gödel and Turing," by John Stillwell in Mathematics Magazine (Mathematical Association of America, Washington, DC, Vol. 77, No. 1, Feb. 2004, pp. 3-14). See also http://www-gap.dcs.st-and.ac.uk/~history/Mathematicians/Post.html.
4. See "Max Newman: Mathematician, Codebreaker and Computer Pioneer," by William Newman in Colossus: The First Electronic Computer, edited by Jack Copeland (Oxford, Oxford and New York, 2004).
5. Encyclopedia of Computer Science (Fourth Edition), edited by Anthony Ralston, Edwin D. Reilly, and David Hemmendinger (Wiley, Chichester, England, 2003, p. 1841).
6. See Turing's Cathedral: The Origins of the Digital Universe, by George Dyson (Pantheon/Random House, New York, 2012, p. 287).
7. Parallel Supercomputing in SIMD Architectures, by R. Michael Hord (CRC Press, Boca Raton, FL, 1990).
8. Although most supercomputers are now based on MIMD (multiple instruction, multiple data) architectures, their individual processing nodes generally embody small-scale SIMD capabilities. The still largely hypothetical quantum computer can be thought of as an SIMD machine with exponentially many virtual processors.
9. See AI: The Tumultuous History of the Search for Artificial Intelligence, by Daniel Crevier (Basic Books, New York, 1993, p. 26), or Encyclopedia of Computer Science (Fourth Edition), edited by Anthony Ralston, Edwin D. Reilly, and David Hemmendinger (Wiley, Chichester, England, 2003, p. 91).
10. The 2019 IEEE Xplore article "Who Is the Father of Deep Learning?" states: "We conclude that Frank Rosenblatt developed and explored all the basic ingredients of the deep learning systems of today, and that he should be recognized as a Father of Deep Learning, perhaps together with Hinton, LeCun and Bengio who have just received the Turing Award as the fathers of the deep learning revolution." Lawrence Fogel initiated the field of evolutionary computing in 1960 and is considered to be the father of evolutionary programming, the first of four basic approaches that were eventually formulated and subsequently merged into the field of evolutionary computing.
11. See "Marshall Rosenbluth and the Metropolis algorithm," by J. E. Gubernatis, in Physics of Plasmas (12, 057303, 2005). According to this article, Nicholas Metropolis'+ only contribution to the algorithm's development was making available the use of MANIAC, the Los Alamos computer he had constructed as a replica of the computer that von Neumann designed and built at the Institute for Advanced Study. Of the other co-authors of the Metropolis algorithm paper, "arguably the most significant publication in the history of computational physics," Mici Teller initiated the assembly language programming work to code the algorithm, Arianna Rosenbluth+ took that over and produced from scratch the actual program used to test the algorithm, Edward Teller made "the crucial suggestion" to employ ensemble, rather than temporal averaging and how to do that, and Marshall Rosenbluth actually designed the algorithm, incorporating insights of his own. According to Gubernatis, the "key," as opposed to the "crucial," idea underpinning the algorithm's power was the principle of detailed balance, which was implicit in the original 1953 paper, but not made explicit until Rosenbluth formulated a general proof of the algorithm's validity in 1956. Rosenbluth went on to become one of the world's leading plasma theorists and a winner of both the Enrico Fermi Award (1985) and the US National Medal of Science (1997).
12. See http://www-gap.dcs.st-and.ac.uk/~history/Mathematicians/Dantzig_George.html.
13. Cooley and Tukey are generally credited with the invention of the "modern" FFT. Their 1965 paper only referenced the prior work of I. J. Good, whose FFT algorithm was both somewhat different and less efficient. In a January 1992 paper in IEEE SP Magazine, entitled "How the FFT Gained Acceptance," Cooley reviewed other prior work and concluded that "it appears that Lanczos had the FFT algorithm" in 1942. He holds out the possibility that Gauss may have had it as early as 1805, however. Gauss's work was not published until after his death, appearing only in 1866 in his collected works, and it was in Latin and employed somewhat archaic notation, which made it difficult to decipher. An analysis by Michael Heideman, Don Johnson, and C. Sidney Burrus has concluded that Gauss did indeed have the basic elements of the modern FFT in 1805, but due to its obscurity, his formulation appears to have had no influence on subsequent work.
14. See the next-to-last paragraphs in https://archive.siam.org/news/news.php?id=526 and in the obituary published in the November 2000 issue of Physics Today (pp. 75-76). See also the discussion in the Appendix to Tracking and Kalman Filtering Made Easy, by Eli Brookner (Wiley, New York, 1998, pp. 383-387).
* Georg Cantor and Herbert Simon had Jewish fathers; Simon's mother was of partial Jewish descent, which was also the case, at a minimum, for the mother of Georg Cantor. Max Newman and Vinton Cerf had Jewish fathers and non-Jewish mothers, while Richard Brodie, Wassily Leontief, Yuri Manin, John McCarthy, Larry Page, and Lotfi Zadeh have, or had, Jewish mothers. For more information, see the footnotes to these and other listings in Jewish Computer and Information Scientists, or in the cases of Leontief and Manin, in Jewish Economists and Jewish Mathematicians, respectively.
+ Non-Jewish.
2453 | dbpedia | Galactic Algorithms | Gödel's Lost Letter and P=NP | 2010-10-23 | https://rjlipton.com/2010/10/23/galactic-algorithms/
Algorithms with huge running times
David Johnson is a renowned computer theorist who was awarded the 2009 Knuth Prize for his seminal contributions to theory. He is famous for: his many great papers on various aspects of algorithmic theory, his classic book “Computers and Intractability: A Guide to the Theory of NP-Completeness” with Mike Garey, and his continuing interest in the practical aspects of computing. He has been the driving force behind many experimental studies of how well algorithms perform on “real” data.
Today I want to discuss the issue of galactic algorithms. A galactic algorithm is an algorithm that is wonderful in its asymptotic behavior, but is never used to actually compute anything.
The power of asymptotic analysis is that we can predict the behavior of an algorithm without ever running it. This ability is one of the great achievements of modern complexity theory. That we can predict, often with great accuracy, how fast an algorithm would run, if executed, is indeed a major triumph. Yet algorithms are often created that cannot be run because their running time is too large, or that would be outperformed on realistic data by less sophisticated methods.
The name “galactic algorithm” is due to Ken Regan—I suggested the concept, but he came up with the name—my original name was terrible. Not actually “terrible,” but not as good as Ken’s. Perhaps there is a better name; for now it is the one I will use.
The thought behind it is simple: some algorithms are wonderful, so wonderful that their discovery is hailed as a major achievement. Yet these algorithms are never used and many never will be used—at least not on terrestrial data sets. They do, however, play several important roles in our field, which I will discuss in the next section.
Galactic Algorithms
The danger in giving examples is that I may make some people upset—this is not my intent. If they are the proud creators of an algorithm they may be offended if I point out that their prize creation is galactic—that it will not get used in practice. I think this misses several key points. There is nothing wrong with galactic algorithms; they abound throughout complexity theory. They are still of immense value for several reasons:
A galactic algorithm may contain new techniques that can be used to create other algorithms that do get used. This happens all the time.
A galactic algorithm may eventually become a real algorithm as computer architectures change.
A galactic algorithm may lead to a series of research advances. The initial algorithm may run in a huge polynomial, but further improvements may yield practical algorithms. This has happened a number of times.
A galactic algorithm may show us that lower bounds that were conjectured are wrong. This alone could be important and often is a great reason for finding such algorithms. For example, if a discovery tomorrow showed that there is a factoring algorithm with a huge but provably polynomial time bound, that would change our beliefs about factoring. The algorithm might never be used, but would certainly shape the future research into factoring.
Some Examples
Here are some examples of galactic algorithms. Again, please keep all of the above in mind; none of this is meant as criticism.
Nash Equilibrium: I thought I would start with an algorithm of my own. Not because it is the most important, but to make it clear that I too have worked on galactic algorithms. The result is the best current bound for non-zero sum games. The result is due to Evangelos Markakis, Aranyak Mehta, and myself: it proves that every such game has a strategy profile that is an $\epsilon$-Nash equilibrium, and it finds this solution in time at most $n^{O(\log n / \epsilon^2)}$.
Note, this is clearly a galactic algorithm: even for modest approximations the exponent is huge, and the algorithm cannot be used. Yet researchers in algorithmic game theory would be thrilled if one could prove a bound of the same form in which the dependence on $\epsilon$ is reduced. Note, this would still be a galactic algorithm.
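To make the shape of this algorithm concrete, here is a minimal brute-force sketch in Python (assuming numpy is available; the function name and interface are my own for illustration, not from our paper). It searches for an $\epsilon$-Nash pair among k-uniform mixed strategies, that is, strategies uniform on a multiset of k pure strategies; taking k on the order of $\log n / \epsilon^2$ guarantees that a witness exists, and the nested loops over all supports of size k are exactly where the galactic blow-up lives.

import itertools
import numpy as np

def k_uniform_eps_nash(A, B, k, eps):
    # A, B: n-by-m payoff matrices for the row and column players.
    # Exhaustively search all pairs of k-uniform mixed strategies;
    # the search space has size roughly n^k -- the galactic part.
    n, m = A.shape
    def uniform(support, size):
        p = np.zeros(size)
        for s in support:
            p[s] += 1.0 / k
        return p
    for S in itertools.combinations_with_replacement(range(n), k):
        x = uniform(S, n)
        for T in itertools.combinations_with_replacement(range(m), k):
            y = uniform(T, m)
            # eps-Nash check: no unilateral pure deviation gains more than eps
            if (A @ y).max() - x @ A @ y <= eps and (x @ B).max() - x @ B @ y <= eps:
                return x, y
    return None

Even for a ten-by-ten game and a modest $\epsilon$, the outer loop alone is hopeless; that is the galactic exponent made tangible.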
Quantum Factoring: The famous quantum factoring algorithm of Peter Shor may or may not be a galactic algorithm. It is of course one of the great results in theory, ever. It has sparked funding for research centers on quantum computation that have promoted many other advances.
If and when practical quantum computers are built Peter’s algorithm will be one of the first algorithms run on them. Right now it is a galactic algorithm. But, perhaps it is the best example of the importance of galactic algorithms. Peter’s great breakthrough in the discovery of his algorithms has led to the creation of thousands of new results and papers.
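The classical scaffolding around the quantum core is simple enough to sketch. In the toy Python below (my own illustrative code), brute-force order finding stands in for period finding, the one step that Shor's algorithm performs in polynomial time on a quantum computer; the rest is the standard classical reduction from factoring to order finding.

from math import gcd
from random import randrange

def multiplicative_order(a, N):
    # Brute force: classically this loop can take exponentially many
    # steps in the number of digits of N; Shor's quantum subroutine
    # replaces exactly this computation.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_factor(N):
    # Assumes N is odd, composite, and not a prime power; then each
    # round succeeds with probability at least 1/2.
    while True:
        a = randrange(2, N)
        g = gcd(a, N)
        if g > 1:
            return g                   # lucky: a already shares a factor
        r = multiplicative_order(a, N)
        if r % 2 == 0:
            y = pow(a, r // 2, N)
            if y != N - 1:             # y*y = 1 (mod N) and y != +-1,
                return gcd(y - 1, N)   # hence a nontrivial factor

print(shor_factor(15))                 # prints 3 or 5

On small numbers this runs instantly; the whole quantum contribution is replacing multiplicative_order with a polynomial-time subroutine.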
Graph Minors:
Neil Robertson and Paul Seymour proved their famous graph minor theorem. This theorem proves that for every minor-closed family $\mathcal{F}$, there is a polynomial time algorithm to check if a graph contains a forbidden minor from $\mathcal{F}$. The algorithms that arise from this general technology unfortunately contain huge constants that make the algorithms galactic. There has been recent work trying to make the results potentially practical. In any event, David Johnson famously once said,
For any instance $G = (V, E)$ that one could fit into the known universe, one would easily prefer $|V|^{70}$ to even constant time, if that constant had to be one of Robertson and Seymour's.
Matrix Product: Two of the most beautiful results are the initial brilliant result of Volker Strassen on fast matrix product, and the subsequent deep work of Don Coppersmith and Shmuel Winograd on an even faster matrix product algorithm. These results are used by hundreds—perhaps thousands of papers. Clearly, these are galactic algorithms. Even the initial one due to Strassen was not used in actual computation for many years. Apparently today, as computers have become able to perform huge matrix products, it is unclear if Strassen can be used. Wikipedia claims that even for quite large matrices the improvement is marginal.
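For the curious, Strassen's seven-product recursion is compact enough to show in full. Here is a minimal Python sketch (assuming numpy; the names and the cutoff value are my own choices). The cutoff, below which it falls back to ordinary multiplication, is precisely the knob on which the practical-versus-galactic question turns for this algorithm.

import numpy as np

def strassen(A, B, cutoff=64):
    # Multiply square matrices whose side is a power of two using
    # Strassen's recursion: seven half-size products instead of eight,
    # giving O(n^{log2 7}) = O(n^{2.81}) arithmetic operations.
    n = A.shape[0]
    if n <= cutoff:
        return A @ B   # below the crossover, plain multiplication wins
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:k, :k] = M1 + M4 - M5 + M7
    C[:k, k:] = M3 + M5
    C[k:, :k] = M2 + M4
    C[k:, k:] = M1 - M2 + M3 + M6
    return C

Saving one multiplication per level of recursion is the whole trick; the extra additions and memory traffic are what keep pushing the real-world crossover point around.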
Sorting: Miklos Ajtai, János Komlós, and Endre Szemerédi proved the existence of a sorting network of $O(\log n)$ depth and $O(n \log n)$ size. This solved a long standing open problem, but due to the hidden constants it is a galactic algorithm.
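The AKS construction is far too intricate to sketch here, but the contrast with practice is easy to demonstrate. Batcher's odd-even merge sort, the network actually used, has depth $O(\log^2 n)$, asymptotically worse than AKS yet with tiny constants. A short Python sketch (my own code, for n a power of two) that generates the comparators and applies them:

def batcher_comparators(n):
    # Yield the comparators (i, j) of Batcher's odd-even merge sort
    # network on n = 2^k wires; depth O(log^2 n), size O(n log^2 n).
    def sort(lo, hi):                  # sorts wires lo..hi inclusive
        if hi - lo >= 1:
            mid = lo + (hi - lo) // 2
            yield from sort(lo, mid)
            yield from sort(mid + 1, hi)
            yield from merge(lo, hi, 1)
    def merge(lo, hi, r):              # merge two sorted halves, stride r
        step = 2 * r
        if step < hi - lo:
            yield from merge(lo, hi, step)
            yield from merge(lo + r, hi, step)
            for i in range(lo + r, hi - r, step):
                yield (i, i + r)
        else:
            yield (lo, lo + r)
    yield from sort(0, n - 1)

def network_sort(data):
    a = list(data)
    for i, j in batcher_comparators(len(a)):
        if a[i] > a[j]:                # each comparator is data-oblivious
            a[i], a[j] = a[j], a[i]
    return a

assert network_sort([5, 3, 8, 1, 9, 2, 7, 0]) == [0, 1, 2, 3, 5, 7, 8, 9]

The comparator sequence is fixed in advance, independent of the data; a network with the same property but $O(\log n)$ depth is exactly what AKS supplies, at the cost of constants nobody can live with.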
Open Problems
Galactic algorithms can be of immense importance. They can lead to new methods, new directions, and sometimes can be made practical. If you found a SAT algorithm that runs in polynomial time, no matter how huge the exponent, or proved that any SAT algorithm requires super-linear time, then you would be famous. The first would prove P=NP, so that would be huge; the latter would prove a non-linear lower bound on SAT. That would also be huge—at least to those of us who have tried to prove such theorems.
Is this classification of algorithms useful? What is your favorite galactic algorithm? What do you all think?
2453 | dbpedia | TCS Awards List | Theory Matters | 2018-05-25 | https://thmatters.wordpress.com/tcs-awards-list/
Below is a list of major awards that are relevant to the theoretical computer science community. Included is a brief description of each, a link to the nomination instructions, and examples of researchers from TCS and related fields that won the award in the past.
Honorary Lectures
Lifetime Awards
Major Paper or Book, Set of Papers, etc.
Major Prizes
Mid-Career Awards
Service
Student Prizes
Thesis Prizes
Young Career Awards
Honorary Lectures
John von Neumann Lecture
For outstanding and distinguished contributions to the field of applied mathematical sciences and for the effective communication of these ideas to the community.
Some recipients in TCS: Jennifer Chayes, Donald Knuth, Ingrid Daubechies, Richard Karp
link
Lifetime Awards
Claude Shannon Award
To honor consistent and profound contributions to the field of information theory.
Some recipients in TCS: Robert Calderbank, Abbas El Gamal
link
Richard Hamming Medal
To honor an individual or team, up to three in number for exceptional contributions to information sciences, systems, and technology.
Some recipients in TCS: Robert Calderbank, Abbas El Gamal, Michael Luby, Whitfield Diffie, Martin Hellman, Ralph Merkle
link
Kyoto Prize
To honor those who have contributed significantly to the scientific, cultural, and spiritual betterment of mankind in the area of advanced technology.
Some recipients in TCS: Laszlo Lovasz, Donald Knuth
link
Simons Investigator
To support outstanding scientists in their most productive years, when they are establishing creative new research directions, providing leadership to the field and effectively mentoring junior scientists.
Some recipients in TCS: Scott Aaronson, Boaz Barak, James Lee, Madhu Sudan, David Zuckerman, Dan Boneh, Subhash Khot, Chris Umans (2-3/year)
link
Vannevar Bush Faculty Fellows program given by DoD (formerly National Security Science and Engineering Faculty Fellowship)
Seeks outstanding researchers to propose basic research that is potentially transformative in topic areas of interest to the DoD.
Some recipients in TCS: Scott Aaronson, Sean Hallgren, Umesh Vazirani
link
INFORMS Khachiyan Prize
Honors an individual or a team for lifetime achievements in the area of optimization; recognizes a sustained career of scholarship from nominees who are still active in the year of the nomination. The prize serves as an esteemed recognition of innovativeness and impact in the area of optimization, including theory and applications.
Some recipients in TCS: Jean Bernard Lasserre
link
ACM Fellow
ACM’s most prestigious member grade recognizes the top 1% of ACM members for their outstanding accomplishments in computing and information technology and/or outstanding service to ACM and the larger computing community.
Some recipients in TCS: Edith Cohen, Shafi Goldwasser, Martin Grohe, Venkatesan Guruswami, Mor Harchol-Balter, Silvio Micali, Tal Rabin, Michael Sipser, R. Ravi, David Peleg, Erik Demaine, Mike Saks, Noga Alon, Monika Henzinger, Cynthia Dwork, Piotr Indyk
link
SIAM Fellow
To honor SIAM members who are recognized by their peers as distinguished for their contributions to the discipline.
Some recipients in TCS: Ronald DeVore, Pablo Parrilo, Robin Thomas, Joel Spencer, Cynthia Phillips, David Williamson, Fan Chung, Charles Leiserson, Moshe Vardi, Dorit Hochbaum, Michel Goemans, Andrew Goldberg, Alan Frieze
link
IEEE Fellow
A distinction reserved for select IEEE members whose extraordinary accomplishments in any of the IEEE fields of interest are deemed fitting of this prestigious grade elevation.
Some recipients in TCS: Emmanuel Candes, Andrea Montanari, Rebecca Wright
link
AMS Fellow
Recognizes members who have made outstanding contributions to the creation, exposition, advancement, communication, and utilization of mathematics.
Some recipients in TCS: Emmanuel Candes, Peter Sarnak, Noga Alon, Michael Sipser, Victor Pan, Jennifer Chayes, Lenore Blum, Michel Goemans, Tom Leighton, Nick Pippenger, Dana Randall, Madhu Sudan, Joel Spencer, Eva Tardos, Prasad Tetali, Alan Frieze
link
IACR Fellow
To recognize outstanding IACR members for technical and professional contributions that: advance the science, technology, and practice of cryptology and related fields; promote the free exchange of ideas and information about cryptology and related fields; develop and maintain the professional skill and integrity of individuals in the cryptologic community; advance the standing of the cryptologic community in the wider scientific and technical world and promote fruitful relationships between the IACR and other scientific and technical organizations.
Some recipients in TCS: Juan Garay, Yuval Ishai, Paul Kocher, Stafford Tavares, Victor Shoup, Joe Kilian, Tal Rabin, Ran Canetti, Eyal Kushilevitz, Moti Yung
link
INFORMS Fellow
For distinguished individuals who have demonstrated outstanding and exceptional accomplishments and experience in operations research and the management sciences (OR/MS).
Some recipients in TCS: R. Ravi, Dick Karp, Nimrod Megiddo, David Shmoys, Eva Tardos
link
CS-Can/Info-Can Lifetime Achievement Award
Recognizes current or former faculty members in Canadian Computer Science Departments, Schools, or Faculties who have made outstanding and sustained contributions to computing over their careers. The awards can be for achievement in research, teaching, service, or any combination of these.
Some recipients in TCS: David Kirkpatrick, Ian Munro, Steve Cook, Derek Corneil
link
EATCS Award
Given to acknowledge extensive and widely recognized contributions to theoretical computer science over a life long scientific career.
Some recipients in TCS: Noam Nisan, Eva Tardos, Dexter Kozen, Christos Papadimitriou, Gordon Plotkin, Martin Dyer, Moshe Vardi, Boris (Boaz) Trakhtenbrot, Kurt Mehlhorn
link
Major Paper or Book, Set of Papers, etc.
Godel Prize
For outstanding papers in the area of theoretical computer science; sponsored jointly by the European Association for Theoretical Computer Science (EATCS) and the Special Interest Group on Algorithms and Computation Theory of the Association for Computing Machinery (ACM SIGACT).
Some recipients in TCS: Cynthia Dwork, Frank McSherry, Kobbi Nissim, Adam Smith, Stephen Brookes, Peter W. O’Hearn, Daniel Spielman, Shang-Hua Teng, Ronald Fagin, Amnon Lotem, Moni Naor, Antoine Joux, Dan Boneh, Matthew K. Franklin, Elias Koutsoupias, Christos H. Papadimitriou, Tim Roughgarden, Eva Tardos, Noam Nisan, Amir Ronen
link
Delbert Ray Fulkerson Prize
For outstanding papers in the area of discrete mathematics. The term “discrete mathematics” is interpreted broadly and is intended to include graph theory, networks, mathematical programming, applied combinatorics, applications of discrete mathematics to computer science, and related subjects.
Some recipients in TCS: Francisco Santos, Sanjeev Arora, Satish Rao, Umesh Vazirani, Anders Johansson, Jeff Kahn, Van H. Vu, Laszlo Lovasz, Balazs Szegedy
link
Prize in Game Theory and Computer Science in Honour of Ehud Kalai
Awarded to the person (or persons) who have published the best paper at the interface of game theory and computer science in the last decade.
Some recipients in TCS: Tim Roughgarden, Christos Papadimitriou, Costis Daskalakis, Paul Goldberg
link
ACM SigEcom Test of Time Award
Recognizes the author or authors of an influential paper or series of papers published between ten and twenty-five years ago that has significantly impacted research or applications exemplifying the interplay of economics and computation.
Some recipients in TCS: Gagan Aggarwal, Ashish Goel, Rajeev Motwani
link
INFORMS Computing Society Prize
For the best English language paper or group of related papers dealing with the Operations Research/Computer Science interface.
Some recipients in TCS: David Johnson, Pablo Parrilo, Peter Shor
link
INFORMS Applied Probability Society Best Publication Award
Recognizes outstanding contributions (a book, paper, or set of papers) to Applied Probability.
Some recipients in TCS: Andrea Montanari, Devavrat Shah, David Gamarnik
link
INFORMS Applied Probability Society Best Student Paper Prize
To identify and honor outstanding papers in the field of applied probability that are written primarily by a student. We define applied probability broadly, as any paper related to the modeling, analysis, and control of stochastic systems.
Some recipients in TCS: Guido Lagos
link
SIAM SIAG/Optimization Prize
Awarded to the author(s) of the most outstanding paper, as determined by the prize committee, on a topic in optimization published in English in a peer-reviewed journal.
Some recipients in TCS: Michel Goemans, David Williamson, Yinyu Ye
link
SIAM George Polya Prize for Mathematical Exposition
Awarded every two years to an outstanding expositor of the mathematical sciences. The prize may be awarded for a specific work or for the cumulative impact of multiple expository works that communicate mathematics effectively.
Some recipients in TCS: Nick Trefethen
link
SIAM George Polya Prize in Mathematics
Awarded every four years for a significant contribution, as evidenced by a refereed publication, in an area of mathematics of interest to George Pólya not covered by the George Pólya Prize in Combinatorics or the George Pólya Prize for Mathematical Exposition.
Some recipients in TCS: Emmanuel Candes, Terence Tao, Adam Marcus, Daniel Spielman, Nikhil Srivastava
link
SIAM George Polya Prize in Combinatorics
Awarded every four years for a notable application of combinatorial theory. The prize is broadly intended to recognize specific recent work.
Some recipients in TCS: Van Vu, Neil Robertson, Paul Seymour, Noga Alon
link
SIAM SIAG/CST Best SICON Paper Prize
Awarded to the author(s) of the two most outstanding papers, as determined by the prize committee, published in the SIAM Journal on Control and Optimization (SICON) in the three calendar years before the award year.
Some recipients in TCS: Bernard Chazelle, Pablo Parrilo
link
SIAM Ralph E. Kleinman Prize
Awarded to one individual for outstanding research, or other contributions, that bridge the gap between mathematics and applications. Work that uses high-level mathematics and/or invents new mathematical tools to solve applied problems from engineering, science, and technology is particularly appropriate.
Some recipients in TCS: Emmanuel Candes, Anna Gilbert
link
AMS Levi L. Conant Prize
To recognize the best expository paper published in either the Notices of the AMS or the Bulletin of the AMS in the preceding five years.
Some recipients in TCS: Henry Cohn, Shlomo Hoory, Nati Linial, Avi Wigderson
link
Michael and Sheila Held Prize from the National Academy of Sciences
Honors outstanding, innovative, creative, and influential research in the areas of combinatorial and discrete optimization, or related parts of computer science, such as the design and analysis of algorithms and complexity theory. This $100,000 prize is intended to recognize recent work (defined as published within the last eight years).
Some recipients in TCS: Prasad Raghavendra and David Steurer
link
Edsger W. Dijkstra Prize in Distributed Computing
The prize is given for outstanding papers on the principles of distributed computing, whose significance and impact on the theory and/or practice of distributed computing has been evident for at least a decade.
Some recipients in TCS: Elizabeth Borowsky, Eli Gafni, Noga Alon, Laszlo Babai, Alon Itai, Michael Luby, Michael Ben-Or, Michael Rabin
link
The Alonzo Church Award for Outstanding Contributions to Logic and Computation
The prize is awarded for an outstanding contribution represented by a paper or small group of papers within the past 25 years. This time span allows the contribution to have established evidence of lasting impact and depth.
Some recipients in TCS: Tomas Feder, Moshe Vardi,
link
Major Prizes
Kanellakis Prize
Honors specific theoretical accomplishments that have had a significant and demonstrable effect on the practice of computing.
Some recipients in TCS: Amos Fiat, Moni Naor, Michael Luby, James Demmel
link
Knuth Prize
For outstanding contributions to the foundations of computer science; awarded for major research accomplishments and contributions to the foundations of computer science over an extended period of time.
Some recipients in TCS: Oded Goldreich, Noam Nisan, Laszlo Babai, Richard Lipton, Gary Miller
link
IMU Nevanlinna Prize
Awarded once every 4 years at the International Congress of Mathematicians, for outstanding contributions in Mathematical Aspects of Information Sciences.
Some recipients in TCS: Subhash Khot, Daniel Spielman, Jon Kleinberg
link
Turing Award
To recognize contributions of lasting and major technical importance to the computing field.
Some recipients in TCS: Whitfield Diffie, Martin Hellman, Shafi Goldwasser, Silvio Micali
link
Mid-Career Awards
Shapley Lecture
Given at each World Congress of the Game Theory Society by a distinguished game theorist aged 40 or under at the time of the Lecture.
Some recipients in TCS: Tim Roughgarden
link
INFORMS Farkas prize
Awarded annually to a mid-career researcher for outstanding contributions to the field of optimization, over the course of their career. Such contributions could include papers (published or submitted and accepted), books, monographs, and software. The awardee will be within 25 years of their terminal degree as of January 1 of the year of the award.
Some recipients in TCS: Pablo Parrilo, Michel Goemans
link
ACM Grace Murray Hopper Award
Awarded to the outstanding young computer professional of the year, selected on the basis of a single recent major technical or service contribution. The candidate must have been 35 years of age or less at the time the qualifying contribution was made.
Some recipients in TCS: Craig Gentry, Omer Reingold, Tim Roughgarden, Brent Waters
link
EATCS Presburger Award
To a young scientist (in exceptional cases to several young scientists) for outstanding contributions in theoretical computer science, documented by a published paper or a series of published papers. Nominated scientists must be at most 35 years old at the time of the deadline of nomination.
Some recipients in TCS: Mark Braverman, Xi Chen, Erik Demaine, Venkatesan Guruswami, Mihai Patrascu, Alexandra Silva, David Woodruff
link
NSF Alan T. Waterman Award
The National Science Foundation’s (NSF) annual Alan T. Waterman Award honors an outstanding young U.S. scientist or engineer. The awardee receives a grant of $1 million over five years for scientific research or advanced study in any field of science, plus a medal and other recognition. Public Law 94-86 of the 94th Congress established the Waterman Award in 1975 to mark the 25th anniversary of the NSF and to honor its first director, Alan T. Waterman. Criteria: a candidate must be a U.S. citizen or permanent resident. He or she must be 40 years of age or younger, OR not more than 10 years beyond receipt of the Ph.D. degree, by December 31st of the year in which they are nominated. The candidate should have demonstrated exceptional individual achievements in scientific or engineering research of sufficient quality to be placed at the forefront of his or her peers. Criteria also include originality, innovation and a significant impact on the individual’s field.
Some recipients in TCS: Scott Aaronson, Subhash Khot, Herbert Edelsbrunner, Emmanuel Candes, Mung Chiang
link
ACM Prize in Computing
The award recognizes an early to mid-career fundamental innovative contribution in computing that, through its depth, impact and broad implications, exemplifies the greatest achievements in the discipline.
Some recipients in TCS: Dina Katabi, Alexei A. Efros, Stefan Savage, Dan Boneh
link
Service
ACM SIGACT Distinguished Service Prize
Awarded to an individual who has made substantial service contributions to the Theoretical Computer Science community.
Some recipients in TCS: Alistair Sinclair, Laszlo Babai, Avi Wigderson, Lance Fortnow
link
INFORMS Computing Society Harvey J. Greenberg Service Award
Given to an individual in recognition of their impact on the INFORMS Computing Society (ICS).
Some recipients in TCS: Richard S. Barr
link
SIAM Prize for Distinguished Service to the Profession
Awarded to an applied mathematician who has made distinguished contributions to the furtherance of applied mathematics on the national or international level.
Some recipients in TCS: John Hopcroft
link
EATCS Fellow
Awarded to a person having a track record of intellectual and organizational leadership within the EATCS community. Fellows are expected to be “model citizens” of the TCS community, helping to develop the standing of TCS beyond the frontiers of the community.
Some recipients in TCS: Jim Adamek, Susanne Albers, Artur Czumaj, Mike Fellows, Monika Henzinger, Thomas Henzinger, Giuseppe Italiano, Stefano Leonardi, Kurt Mehlhorn, Paul Spirakis, Aravind Srinivasan, Moshe Vardi, Moti Yung
link
Student Prizes
INFORMS George Nicholson Student Paper Competition
To identify and honor outstanding papers in the field of operations research and the management sciences written by a student.
Some recipients in TCS: Andrew Li, Bradley Sturt, Ramesh Johari
link
INFORMS ICS Student Paper Prize
Is given annually to the best paper on computing and operations research by a student author.
Some recipients in TCS: Guido Lagos
link
INFORMS ICS Student Paper Award
Is given annually to the best paper on computing and operations research by a student author.
Some recipients in TCS: Berk Ustun
link
AMS-MAA-SIAM Frank and Brennie Morgan Prize
Awarded each year to an undergraduate student (or students for joint work) for outstanding research in mathematics.
Some recipients in TCS: Jacob Fox, Daniel Kane
link
SIAM Student Paper Prize
Awarded every year to the student author(s) of the most outstanding SIAM paper(s) submitted to the SIAM Student Paper Competition.
Some recipients in TCS: Emanuele Viola
link
SIAM Student Travel Awards
Given to help students gain the experience and exposure that comes from attending and presenting at SIAM conferences. The travel awards are intended to help students defray some of the costs of their attendance.
Awarded for SIAM conferences each year
link
Thesis Prizes
Mathematical Optimization Society Tucker Prize
Awarded at each International Symposium on Mathematical Programming for an outstanding doctoral thesis.
Some recipients in TCS: Daniel Dadush, Oliver Friedmann, Mohit Singh, Uday Shanbhag, Tim Roughgarden, Andrew Goldberg, Fabian Chudak
link
INFORMS George B. Dantzig Dissertation Award
Given for the best dissertation in any area of operations research and the management sciences that is innovative and relevant to practice.
Some recipients in TCS: Negin Golrezaei, Ian Osband, Devavrat Shah, Alberto Caprara
link
ACM SIGecom Doctoral Dissertation Award
Recognizes an outstanding dissertation in the field of economics and computation.
Some recipients in TCS: Aviad Rubinstein, Inbal Talgam-Cohen, Matt Weinberg, Balu Sivan
link
ACM Doctoral Dissertation Award
Presented annually to the author(s) of the best doctoral dissertation(s) in computer science and engineering.
Some recipients in TCS: Haitham Hassanieh, Aaron Sidford, Sanjam Garg, John Duchi
link
EATCS Distinguished Dissertation Award
Given to promote and recognize outstanding dissertations in the field of theoretical computer science.
Some recipients in TCS: Bas Ketsman, Ilya Razenshteyn, Aviad Rubinstein, Vincent Cohen-Addad, Mika Göös, Steen Vester, Radu Curticapean, Heng Guo, Georg Zetzsche, Karl Bringmann, Michał Skrzypczak, Mary Wootters
link
Young Career Awards
Presidential Early Career Awards for Scientists and Engineers (PECASE)
To recognize and honor outstanding scientists and engineers at the outset of their independent research careers.
Some TCS recipients: Scott Aaronson, Sean Hallgren, Adam Smith
link
SIAM Richard C. DiPrima Prize
Awarded to an early career researcher who has done outstanding research in applied mathematics (defined as those topics covered by SIAM journals) and who has completed his/her doctoral dissertation and completed all other requirements for his/her doctorate during the period running from three years prior to the award date to one year prior to the award date.
Some recipients in TCS: David Williamson
link
INFORMS Young Researchers Prize
Awarded annually at the fall INFORMS Annual Meeting to one or more young researcher(s) for an outstanding paper in optimization that is published in, or submitted to and accepted by, a refereed professional journal within the four calendar years preceding the year of the award.
Some recipients in TCS: Kamal Jain, Tim Roughgarden
link
INFORMS Applied Probability Society Erlang Prize
A single prize will be awarded once every two years during even calendar years by the Applied Probability Society (APS) of INFORMS to an early career researcher who has made a significant contribution to applied probability.
Some recipients in TCS: Devavrat Shah
link
SIAM Early Career Travel Awards
Support for early career participants affiliated with U.S. institutions for travel to SIAM conferences through a grant from the U.S. National Science Foundation (NSF).
Awarded for SIAM conferences each year
link
SIAM Denes Konig Prize
Awarded biennially to an early career researcher or early career researchers for outstanding research, as determined by the prize committee, in an area of discrete mathematics, based on a publication by the candidate(s) in a peer-reviewed journal published in the three calendar years prior to the year of the award.
Some recipients in TCS: Adam Wade Marcus, Zeev Dvir, Jacob Fox
link
CS-Can/Info-Can Outstanding Young Computer Science Researcher Award
Recognizes excellence in research; awards are made to top young faculty members in Canadian computer science departments, schools, and faculties who are within the first ten years of their career beyond the completion of their PhD.
Some recipients in TCS: Lap Chi Lau, Nick Harvey, Kevin Leyton-Brown
link
Maintained by David Woodruff (dwoodruf@cs.cmu.edu)
2453 | dbpedia | 3 | 25 | https://nanoexplanations.wordpress.com/2011/07/04/a-mathematical-proof-of-the-church-turing-thesis/ | en | A mathematical proof of the Church-Turing Thesis? | 2011-07-04 | Nanoexplanations
The Church-Turing Thesis lies at the junction between computer science, mathematics, physics and philosophy. The Thesis essentially states that everything computable in the "real world" is exactly what is computable within our accepted mathematical abstractions of computation, such as Turing machines. This is a strong statement, and, of course, if one had tried to say the same thing about natural laws and Newtonian physics, one would have a respectable thesis that turned out to be false. (There is even a theoretical research area, hypercomputation, that attempts to show how "super-Turing" computers could be built in real life by taking advantage of non-Newtonian physics.)
When I learned the Church-Turing Thesis in school, I was told that it was a thesis, not a theorem, precisely because it was not formally provable. The notion of “computable” was intuitive, not mathematically precise, so it was impossible to say whether a particular mathematical abstraction was the ULTIMATELY CORRECT one. Nevertheless, in 2008, two respected researchers — Nachum Dershowitz of Tel Aviv University, and Yuri Gurevich of Microsoft Research — did indeed publish a proof of the Church-Turing Thesis in the Bulletin of Symbolic Logic. How is this possible? They constructed an axiomatization of computation based on abstract state machines, a theoretical notion developed by Gurevich that Microsoft has used to perform practical software tests, and then proved that the Church-Turing Thesis held for that axiomatization of computation. In other words, they managed to formalize the notoriously unformalizable “computation in the real world.”
This impressed me quite a bit — so much so that when a user named Avinash asked on the theoretical computer science question and answer site, “What would it mean to disprove Church-Turing Thesis?” I answered that the Thesis had been proved for all practical purposes. Not my finest hour, as we will see. Fortunately, Avinash, in a feat of crowdsourcing genius, accepted my answer as correct, in order to encourage discussion. Since then, some of the top theorists in the world have contributed their opinion of the Dershowitz/Gurevich paper, and their philosophy about the thesis overall. I will cover some of the main points in the rest of this blog entry.
First off, the Wikipedia definition of the Church-Turing Thesis is:
Every effectively calculable function is a computable function.
Here, “effectively calculable” means intuitively computable, by rote, in real life; and “computable” means formally computable according to some mathematically defined notion of computation. The history leading up to the formulation of the Thesis is fascinating, and not without controversy. Dershowitz and Gurevich believe, in fact, that Church and Turing put forth two separate Theses, while the computability theorist Robert Soare believes the Thesis should be named simply, “Turing’s Thesis.” I won’t go into any of this here, but for further information, you can look at a video of a presentation Gurevich gave in 2009, or read Computability and Recursion by Soare.
The informal axiomatization of computation provided by Dershowitz and Gurevich is as follows.
I. An algorithm determines a sequence of “computational” states for each input.
II. The states of a computational sequence are structures. And everything is invariant under isomorphism.
III. The transitions from state to state in computational sequences are governable by some fixed, finite description.
IV. Only undeniably computable operations are available in initial states.
Dershowitz and Gurevich formalize these axioms using abstract state machines, and proceed to derive from those axioms the statements they call Church’s Thesis and Turing’s Thesis. Pretty cool. But… what is wrong with this picture? I will quote from the comments and answers generated by Avinash’s question, and my own answer to it.
As normally understood, the Church-Turing thesis is not a formal proposition that can be proved. It is a scientific hypothesis, so it can be “disproved” in the sense that it is falsifiable. Any “proof” must provide a definition of computability with it, and the proof is only as good as that definition. I’m sure Dershowitz-Gurevich have a fine proof, but the real issue is whether the definition really covers everything computable. Answering “can it be disproved?” by saying “it’s been proved” is misleading. It has been proved under a reasonable (falsifiable!) definition of computability. — Ryan Williams
The Dershowitz-Gurevich paper says nothing about probabilistic or quantum computation. It does write down a set of axioms about computation, and prove the Church-Turing thesis assuming those axioms. However, we’re left with justifying these axioms. Neither probabilistic nor quantum computation is covered by these axioms (they admit this for probabilistic computation, and do not mention quantum computation at all), so it’s quite clear to me these axioms are actually false in the real world, even though the Church-Turing thesis is probably true. — Peter Shor
Peter Shor is, of course, a Gödel Prize winner for designing the “quantum factoring algorithm” among many other impressive accomplishments; Ryan Williams is on the short list for a future Gödel Prize, because of a major breakthrough he recently achieved in the field of circuit complexity.
Other heavy hitters weighed in on the subject as well. Gil Kalai provided several pointers to papers discussing variants of the Church-Turing Thesis, and some thoughts of his own. Andrej Bauer said he thought it was impossible to prove the thesis, but it might be disproved by designing a real-world computational device that was able to compute a function that Turing machines provably could not compute. Timothy Chow responded to that by saying it brought up a philosophical problem: how could we know that the real-world device was actually performing that super-Turing computation? It's a fascinating conversation that is still ongoing. I doubt the Dershowitz/Gurevich paper is the last word.
2453 | dbpedia | 3 | 7 | https://rjlipton.com/2012/01/30/perpetual-motion-of-the-21st-century/ | en | Perpetual Motion of The 21st Century? | 2012-01-30 | Gödel's Lost Letter and P=NP
Are quantum errors incorrigible? Discussion between Gil Kalai and Aram Harrow
Gil Kalai and Aram Harrow are world experts on mathematical frameworks for quantum computation. They hold opposing opinions on whether or not quantum computers are possible.
Today and in at least one succeeding post, Gil and Aram will discuss the possibility of building large-scale quantum computers.
Quantum computers provide a 21st Century field for the kind of debate first led by Albert Einstein about the reach of quantum theory. One thought experiment by which Einstein tried to contravene the Uncertainty Principle can be described as having asserted that quantum theory implies the creation of perpetual motion machines, which are impossible machines. In a later attempt, after initial puzzlement, Niels Bohr pointed out that Einstein himself had neglected to correct for gravity’s effect on time in general relativity.
Perpetual motion machines were the dream of many inventors over the centuries—and why not? Having a machine that could create useful work but consume no fuel would change the world. Alas advances in our understanding of physics have ruled them out: there is indeed no free lunch. The designs at right look like birds-of-a-feather, but the rightmost was designed in 1960 by Hermann Bondi to illustrate Bohr’s correction above.
The guest discussions here between Gil Kalai and Aram Harrow address the fundamental question:
Are quantum computers feasible? Or are their underlying models defeated by some fundamental physical laws?
Those like Royal Society co-founder John Wilkins who in 1670 wrote of perpetual motion machines did not know of the second law of thermodynamics. We, Dick and Ken, would like to think that if blogs like GLL were around centuries ago there might have been a more penetrating discussion than even the Royal Society could foster. We are here now, and we are very honored that Gil and Aram wish to use GLL as a location to discuss this interesting, important, and wonderful question. We believe in the win-win that either we will have wonderful quantum computers, or we will learn some new laws of nature, particularly about information.
For a roadmap, Gil and Aram will alternate thesis-response in these posts, talking about quantum error-correction and fault tolerance. However, we also invite you, the reader, to take part in the debate sparked by Gil's paper, “How quantum computers fail: Quantum codes, correlations in physical systems, and noise accumulation.” Perhaps they and we will react to comments. We thank them greatly, and have worked to make the issues even more accessible.
Guest Post: Gil Kalai
The discovery by Peter Shor of the famous quantum algorithm for fast integer factoring gave a strong reason to be skeptical about quantum computers (QC’s), along with an even stronger reason for wanting to build them. Shor is also the pioneer of quantum error-correction and quantum fault-tolerance, which give good reasons to believe that QC’s can be built. Other researchers have focused on this very issue, and the physics community is filled with work on many approaches to building practical QC’s.
In my (Gil’s) part of the world, Michael Ben-Or is a world leader in theoretical computer science with major contributions in cryptography, complexity, randomization, distributed algorithms, and quantum computation. Among the famous notions associated with Michael’s work before he turned quantum are non-cryptographic fault-tolerance, multi-prover interactive proofs, and algebraic decision trees. Dorit Aharonov is one of the great quantum computation researchers in the world and she has studied, among other things, fault tolerance, adiabatic computation, lattice problems, computation of Jones polynomials, and quantum Hamiltonian complexity.
Aharonov and Ben-Or proved in the mid-1990s (along with other groups) the threshold theorem, which allows fault-tolerant quantum computation (FTQC), at least in theory. The following photo shows them on the road in Jerusalem in 2005 with me at left, and on the right Robert Alicki, a famous quantum physicist from Gdansk, Poland, known for work on quantum dynamical systems.
Alicki is perhaps the only physicist engaged in long-term research skeptical towards quantum computers and error-correction. Over the years he has produced several papers and critiques under this program, coming from several different directions: some based on thermodynamics, others based on various issues in modeling noisy quantum evolutions.
Conjectures on noisy QC’s and error-correction
I suppose readers here are familiar with the basic concepts of quantum computers: qubits, basis states as members of {0,1}^n, superposition, entanglement, interference. My comments in the first round of discussions are based on several (related) papers of mine, mainly the one linked above (alternate link). A more technical paper is “When noise accumulates.” Here are slides from a related lecture at Caltech’s Institute for Quantum Information, and an earlier, more-detailed, survey. The feasibility of building quantum computers that can out-perform digital computers is one of the most fascinating and clear-cut scientific problems of our time. The main concern is that quantum systems are inherently noisy. Roughly what this means for QC’s is that the internal states of quantum registers may vary unpredictably outside the range that allows the algorithm to continue.
First consider a single classical bit with some probability p < 1/2 of being flipped when read. For any ε > 0 we can improve the odds of correct reading above 1 − ε by making and sending enough separate copies x_1, …, x_n. In case of any flips the reader will take the majority value, and this works provided the error events on the different bits are independent. For strings of bits there are error-correcting codes that achieve the same guarantee more efficiently than making copies, and that can also cope with limited kinds of correlated errors such as “burst noise” which affects consecutive bits.
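To make the repetition idea concrete, here is a minimal Monte Carlo sketch (an illustration added here, not from the original post) of majority-vote decoding under independent bit flips; the flip probability and code lengths are arbitrary choices:

    import random

    def logical_error_rate(p, n_copies, trials=100_000):
        """Fraction of trials in which the majority vote returns the wrong bit."""
        errors = 0
        for _ in range(trials):
            flips = sum(random.random() < p for _ in range(n_copies))
            if flips > n_copies // 2:  # a majority of the copies were flipped
                errors += 1
        return errors / trials

    p = 0.1  # illustrative per-copy flip probability
    for n in (1, 3, 7, 15):
        print(f"n={n:2d} copies: logical error rate ~ {logical_error_rate(p, n):.5f}")

With independent flips the logical error rate drops exponentially in the number of copies; much of the debate below is about whether the quantum analogue of this independence assumption holds.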
For quantum systems there are special obstacles, such as the inability to make exact copies of quantum states in general. Nevertheless, much of the theory of error-correction has been carried over, and the famous threshold theorem shows that fault-tolerant quantum computation (FTQC) is possible if certain conditions are met. The most-emphasized condition sets a threshold for the absolute rate of error, one still orders of magnitude more stringent than what current technology achieves but approachable. One issue raised here, however, is whether the errors are sufficiently independent for these schemes to work, or at least have correlations limited to what the schemes can handle. I will now go on to describe my conjectures regarding how noisy quantum computers really behave.
Conjecture 1 (No quantum error-correction): In every implementation of quantum error-correcting codes with one encoded qubit, the probability of not getting the intended qubit is at least some δ > 0, independently of the number of qubits used for encoding.
Conjecture 1 does not obstruct classical error correction as described above. The rationale behind Conjecture 1 is that when you implement the encoding from a single qubit to n qubits, noise in the input amounts to having a mixture of the intended codeword with undesired codewords. The conjecture asserts that, for a realistic implementation of quantum error-correction, there is no way around it. Conjecture 1 reflects a strong conjectural interpretation of the principle that quantum systems are inherently noisy:
Conjecture 2 (The strong principle of noise): Quantum systems are inherently noisy with respect to every Hilbert space used in their description.
The next two conjectures concern noise among entangled qubits—proposed mathematical formulations for them are in the paper.
Conjecture 3: A noisy quantum computer is subject to noise in which error events for two substantially entangled qubits have a substantial positive correlation.
Conjecture 4: In any quantum computer at a highly entangled state there will be a strong effect of error synchronization.
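To see what error synchronization means operationally, here is a toy simulation (an illustration added here, not Kalai's formal model; the mixing weight q and all parameters are assumptions) contrasting independent noise with noise that has a common synchronizing component:

    import random

    def error_pattern(n, p, q):
        """With probability q, all qubits share one flip event; otherwise flips are independent."""
        if random.random() < q:  # synchronized event
            e = random.random() < p
            return [e] * n
        return [random.random() < p for _ in range(n)]

    def tail_probability(n=50, p=0.05, q=0.0, trials=50_000):
        """Probability that more than 20% of the qubits err in one step."""
        bad = sum(sum(error_pattern(n, p, q)) > 0.2 * n for _ in range(trials))
        return bad / trials

    print("independent noise:", tail_probability(q=0.0))  # tail is negligible
    print("correlated noise :", tail_probability(q=0.2))  # tail stays around q*p

Fault-tolerance thresholds are proved in the first regime; Conjecture 4 asserts that highly entangled computations live in something like the second.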
Standard circuit or machine models of QC’s divide the computation into discrete cycles, between which one can identify “fresh noise” apart from the accumulated effect of previous noise. The threshold theorem does entail that (when the noise rate is under the threshold) for FTQC to fail, these conjectures must hold for the fresh noise. A QC model in which fresh noise shows these effects differs sharply from the assumptions underlying standard models. I proved that a strong form of Conjecture 3, where “entanglement” is replaced by a certain notion of “emergent entanglement,” implies Conjecture 4.
Conjectured Limit on Entanglement
The papers put forward a few other conjectures regarding how noisy quantum computers behave. One describes noisy quantum evolutions that do not enact quantum fault tolerance, which we skip here. The most quantitative one is called Conjecture C in the technical paper on noise, C for censorship, because it concerns what types of (highly entangled) quantum states cannot be reached at all by such noisy QC's.
Consider a QC with a set N of n qubits. Given a subset A of the qubits, consider the convex hull SEP(A) of all states that, for some B ⊆ A, factor into a tensor product of a state on the qubits in B and a state on the other qubits. For a state ρ on A, define D_A(ρ) as the trace distance between ρ and SEP(A). For a state ρ of all the qubits, define ENT(ρ) = Σ_{A ⊆ N} D_A(ρ_A), where ρ_A is the state induced on the qubits in A.

Conjecture C: There is a polynomial P (perhaps even a quadratic polynomial) such that for any QC on n qubits, which describes a state ρ (which need not be pure), ENT(ρ) ≤ P(n).

Here QC can be regarded as a quantum circuit given the initial state |0⋯0⟩.
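The basic metric ingredient here is the trace distance D(ρ,σ) = ½·tr|ρ − σ|. Below is a small numpy sketch (added for illustration; computing the distance to the convex hull in Conjecture C would additionally require an optimization over that hull):

    import numpy as np

    def trace_distance(rho, sigma):
        """Half the sum of absolute eigenvalues of the Hermitian matrix rho - sigma."""
        return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))

    # Maximally entangled two-qubit state |phi+> = (|00> + |11>)/sqrt(2)
    phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
    bell = np.outer(phi, phi.conj())
    # The product state |00><00| for comparison
    product = np.zeros((4, 4)); product[0, 0] = 1.0
    print(trace_distance(bell, product))  # = sqrt(1 - 1/2) ~ 0.707 for these two pure states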
Interpreting and Testing the Conjectures
The strong interpretation is that the conjectures hold globally, for any quantum dynamical system on which a QC can be based. The medium interpretation says they hold for processes currently observed in nature, but human artifice can create systems in which they are false, thus allowing computationally superior QC’s to be built via FTQC. The weak interpretation is that they only make a sharp distinction between two kinds of QC models, one supporting FTQC and the other not, and that the former kind can be built artificially and also does represent some quantum processes that occur naturally.
I tend to believe in the strong interpretation, namely, that the conjectures are always true. The weaker interpretations can be used to discuss (as we do below) specific proposals for implementing quantum computation. There are quite a few suggestions on how to build quantum computers based on qubits and gates, and also some suggestions based on computationally equivalent but physically quite different methods.
Nevertheless, I do not expect a common physical reason why my conjectures should apply for each proposed realization of a QC. Hence the conjectures should be examined, either based on detailed modeling, or based on experimentation, on a case-by-case basis. Note that they are not about some mysterious breakdown that occurs when you try to scale quantum computers to a large number of qubits. Conjecture 3 is about the two-qubit behavior of a quantum computer with any number of qubits, and it can be checked (as can the other conjectures) on quantum computers with a rather small number of qubits.
One prominent proposal under which the conjectures can be tested is measurement-based QC employing cluster states. Cluster states can be regarded as code words in a certain quantum error-correcting code. Once you prepare such states, universal quantum-computing can be achieved by a certain measurement of the state. Conjecture 1 asserts that noisy quantum states created in the laboratory will involve a mixture of the intended state with other cluster states.
Question 1: Will such noisy cluster states still support universal quantum-computing?
A second proposal is topological quantum computing. Non-abelian anyons that can support universal quantum-computing can also be regarded as codewords in a certain quantum error-correcting code. Similar to before, the conjecture asserts that when we create such states in the laboratory (in a process that does not apply quantum fault-tolerance) we achieve a mixture of intended codewords with unintended codewords.
Question 2: Will such noisy anyons be useful for universal quantum-computing?
For these two proposals the special physical gadgets are supposed to be constructible by “ordinary” experimental quantum physics that does not involve quantum fault-tolerance, so they are an especially appealing testbed for my conjectures where all three interpretations can apply.
Why I Believe My Conjectures
Let me explain why I think that my conjectures are correct—also mindful of this nice post by Shmuel Weinberger on what “a conjecture” means for a mathematician. I regard it as implausible (see below) that universal quantum computers are realistic, and I think that the issue of noise is indeed the main issue. The strong principle of noise underlying Conjecture 1 strikes me as the right way to approach noise in quantum systems to begin with. The two-qubit conjecture proposes the simplest dividing line that I can think of between noise that allows fault tolerance and noise that does not. The conjecture regarding error-synchronization also captures, in my opinion, a very basic obstacle to quantum fault-tolerance. There is an argument from first principles that since error-correction is possible classically and Nature is really quantum, then error-correction must be possible quantumly. But it strikes me as conflating the settings after-the-fact. In any case, my conjectures allow classical error-correction and fault tolerance. And, finally, as far as I can see, my conjectures on the behavior of noise do not violate any principle of quantum mechanics.
As an aside, let me briefly say why I tend to regard universal quantum computers as unrealistic. An explanation for why universal quantum computers are unrealistic may require some change in physics theory of quantum decoherence. On the other hand, universal quantum computers will be physical devices that are able to simulate arbitrary quantum evolutions, where the word “simulate” is understood in the strong sense that the computer will actually create an identical quantum state to the state created by the evolution it simulates, and the word “arbitrary” is understood in the strong sense that it applies to every quantum evolution we can imagine as long as it obeys the rules of quantum mechanics. As such, quantum computers propose a major change in physical reality.
Aram Harrow: A Short Response
Although Peter Shor has already been featured on this blog for his famous factoring algorithm, I want to mention an arguably deeper contribution of his to quantum information. After demonstrating that n-bit numbers could be factored in poly(n) time, Shor pointed out that this was possible even with noisy gates, as long as each gate's noise was at most 1/poly(n). (This observation is not totally obvious, and rests on the fact that quantum computers, unlike analog computers, cannot magnify small errors in their amplitudes.) Shor made this point to argue that factoring can be achieved with resources that are genuinely only polynomial, even when counting time, number of processors, energy and precision. When proposing new models of computation, it's important not to fall into the trap of analog computing, where seemingly innocuous assumptions dramatically change the power of the model.
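A quick way to see why roughly 1/poly noise is forced without error correction (a back-of-the-envelope aside added here, not Aram's text): a G-gate circuit in which each gate independently faults with probability eps finishes fault-free with probability about (1 − eps)^G.

    G = 10**9  # illustrative gate count
    for eps in (1e-6, 1e-9, 1e-12):
        print(f"eps={eps:.0e}: P(no fault) ~ {(1 - eps) ** G:.3e}")
    # Only eps well below 1/G leaves a decent success probability, which is
    # exactly the overhead that fault tolerance is designed to remove.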
While requiring noise to scale as 1/poly(n) might be theoretically reasonable, it's not very encouraging if we hope to ever build a large-scale quantum computer. In the mid-1990s, many disbelieved that quantum decoherence could ever be significantly reduced. Shor (and others) responded to this by developing the theory of quantum error correcting codes (QECC), which protect data in a manner analogous to classical codes. This requires overcoming several difficulties, such as the no-cloning theorem (which prevents redundant encodings), the fact that measurements cause disturbance, and the continuous range of possible errors.
Later, Shor (and Aharonov and Ben-Or, and others) extended QECCs to protect dynamic computations, so that fault-tolerant quantum computing (FTQC) could be achieved in the presence of a sufficiently low, but constant, rate of errors. To be sure, this makes assumptions such as independence that Gil is questioning.
QECC and FTQC are more than an answer to a technical objection; together they describe a potentially new phase of matter. In my opinion, they represent the deepest discovery in quantum mechanics since Bell’s Theorem. And we have in part the criticism of the quantum computing skeptics to thank for these breakthroughs! I hope the conversation between Gil’s skepticism and the optimism of people like me will also lead to useful results.
In a later post, I’ll respond in detail about why I believe that the emperor is fully dressed, and large-scale FTQC is possible, not only in theory, but realistically in the not-too-distant future. But by way of preview, I’ll outline my arguments briefly here.
Response Road Map
Any argument that FTQC is impossible must also deal with the fact that classical computing is evidently possible. Just as we know that any P vs NP proof must avoid working relative to every oracle, we can argue that any proof of quantum computing's impossibility must somehow distinguish quantum computers from classical computers. This rules out most models of maliciously correlated errors.
The key assumption of FTQC is (approximately) independent errors. Conversely, Gil’s skepticism is based on error models that may have low single-qubit error rates, but are highly correlated even across large distances. While this possibility can’t be definitively ruled out until we build a working large-scale quantum computer, I’ll give both theoretical and experimental evidence that such error models don’t occur in nature.
Current routes to building quantum computers, such as ion traps and superconductors, nevertheless suffer from correlated errors. I think these correlations aren’t too bad, but they definitely exist. However, I’ll propose a thought-experiment implementation of a quantum computer, which is not meant to be practical, but where correlated errors are highly implausible.
Open Problems
What are your thoughts on this matter? Please try to be as clear as possible, and if you refer to specific issues raised here this will be especially good. Also, solve Questions 1 and 2.
[fixed intro’s conflation of two Einstein-Bohr interchanges]
2453 | dbpedia | 2 | 28 | https://www.edge.org/conversation/david_deutsch-the-edge-of-computation-science-prize | en | THE EDGE OF COMPUTATION SCIENCE PRIZE | Edge
THE $100,000 EDGE OF COMPUTATION SCIENCE PRIZE
For individual scientific work, extending the computational idea, performed, published, or newly applied within the past ten years.
David Deutsch
Recipient of the 2005
$100,000 Edge of Computation Science Prize
DAVID DEUTSCH is the founder of the field of quantum computation. Paul Benioff, Richard Feynman, and others had written about the possibility of quantum computation earlier, but Deutsch's 1985 paper on Quantum Turing Machines was the first full treatment of the subject, and the Deutsch-Jozsa algorithm is the first quantum algorithm.
When he first proposed it, quantum computation seemed practically impossible. But the last decade has seen an explosion in the construction of simple quantum computers and quantum communication systems. None of this would have taken place without Deutsch's work.
The nominating essay is reproduced in part below.
Although the general idea of a quantum computer had been proposed earlier by Richard Feynman, in 1985 David Deutsch wrote the key paper which proposed the idea of a quantum computer and initiated the study of how to make one. Since then he has continued to be a pioneer and a leader in a rapidly growing field that is now called quantum information science.
Presently, small quantum computers are operating in laboratories around the world, and the race is on to find a scalable implementation that, if successful, will revolutionize the technologies of computation and communications. It is fair to say that no one deserves recognition for the growing success of this field more than Deutsch, for his ongoing work as well as for his founding paper. Among his key contributions in the last ten years are a paper with Ekert and Jozsa on quantum logic gates, and a proof of universality in quantum computation, with Barenco and Ekert (both in 1995).
One reason to nominate Deutsch for this prize is that he has always aimed to expand our understanding of the notion of computation in the context of the deepest questions in the foundations of mathematics and physics. Thus, his pioneering work in 1985 was motivated by interest in the Church-Turing thesis. Much of his recent work is motivated by his interest in the foundations of quantum mechanics, as we see from his 1997 book.
ABOUT DAVID DEUTSCH
The main papers written by Deutsch that contained "achievement in scientific work that embodies extensions of the computational idea" were in 1985 ("Quantum theory, the Church-Turing principle, and the universal quantum computer") and 1989 ("Quantum computational networks").
His 1995 paper, "Conditional quantum dynamics and logic gates" (with A. Barenco, A. Ekert and R. Jozsa) was an important step in clarifying what sort of physical processes would be needed to implement quantum computation in the laboratory, and what sort of things the experimentalists should be trying to get to work.
"Universality in quantum computation," also written in 1995 (with A. Barenco and A. Ekert) proved the universality of almost all 2-qubit quantum gates, thus verifying his conjecture made in 1989 and showing that quantum computation and quantum gate operations are "built in" to quantum physics far more deeply than classical physics. In 1996, in "Quantum privacy amplification and the security of quantum cryptography over noisy channels" (with A. Ekert, R. Jozsa, C. Macchiavello, S. Popescu and A. Sanpera), he brought quantum cryptography a little bit closer to being practical as opposed to just a laboratory curiosity.
His recent work as seen in the following three papers can be seen as new "applications" of the computational idea, rather than extensions of it.
In 2000, "Information Flow in Entangled Quantum Systems" (with P. Hayden) refutes the long-held belief that quantum systems contain 'non-local' effects, and it does it by appealing to the universality of quantum computational networks, and analysing information flow in those.
Also in 2000, in "Machines, Logic and Quantum Physics" (with A. Ekert and R. Lupacchini), a philosophic paper, not a scientific one, he appealed to the existence of a distinctive quantum theory of computation to argue that our knowledge of mathematics is derived from, and is subordinate to, our knowledge of physics (even though mathematical truth is independent of physics).
In 2002, he answered several long-standing questions about the multiverse interpretation of quantum theory in "The Structure of the Multiverse" — in particular, what sort of structure a "universe" is, within the multiverse. It does this by using the methods of the quantum theory of computation to analyse information flow in the multiverse.
His two main lines of research at the moment, qubit field theory and quantum constructor theory, may well yield important extensions of the computational idea eventually, but at the moment neither of them has yielded any results at all, to speak of, only promising avenues of research.
Born in Haifa, Israel, David Deutsch was educated at Cambridge and Oxford universities. After several years at the University of Texas at Austin, he returned to Oxford, where he now lives and works. Since 1999, he has been a non-stipendiary Visiting Professor of Physics at the University of Oxford, where he is a member of the Centre for Quantum Computation at the Clarendon Laboratory, Oxford University.
In 1998 he was awarded the Institute of Physics' Paul Dirac Prize and Medal. This is the Premier Award for theoretical physics within the gift of the Council of the Institute of Physics. It is made for "outstanding contributions to theoretical (including mathematical and computational) physics." In 2002 he received the Fourth International Award on Quantum Communication for "theoretical work on Quantum Computer Science."
He is the author of The Fabric of Reality [1997].
References:
"Quantum Theory, The Church-Turing Principle, and the Universal Quantum Computer," Proc. Roy. Soc. London A400, 97-117 (1985)
" Quantum computational networks" Proceedings of the Royal Society of London A425:73-90. (1989)
"Conditional quantum dynamics and logic gates" (with A. Barenco, A. Ekert and R. Jozsa) Phys. Rev. Lett. 74 4083-6 (1995)
"Universality in quantum computation" (with A. Barenco and A. Ekert) Proc. R. Soc. Lond. A449 669-77 (1995)
"Quantum privacy amplification and the security of quantum cryptography over noisy channels" (with A. Ekert, R. Jozsa, C. Macchiavello, S. Popescu and A. Sanpera) Phys. Rev. Lett. 77 2818-21 (1996)
"Information Flow in Entangled Quantum Systems" (with P. Hayden) Proc. R. Soc. Lond. A456 1759-1774 (2000)
"Machines, Logic and Quantum Physics" (with A. Ekert and R. Lupacchini) Bulletin of Symbolic Logic 3 3 (2000)
"The Structure of the Multiverse" Proc. R. Soc. Lond.A458 2028 2911-23 (2002) | |||||||
2453 | dbpedia | 0 | 66 | https://www.ranker.com/list/famous-male-computer-scientists/reference%3Fpage%3D2 | en | Famous Male Computer Scientists | 2009-11-24 | Ranker
Alan Mathison Turing (23 June 1912 – 7 June 1954) was an English mathematician, computer scientist, logician, cryptanalyst, philosopher and theoretical biologist. Turing was highly influential in the development of theoretical computer science, providing a formalisation of the concepts of algorithm and computation with the Turing machine, which can be considered a model of a general-purpose computer. Turing is widely considered to be the father of theoretical computer science and artificial intelligence. Despite these accomplishments, he was not fully recognised in his home country during his lifetime, due to his homosexuality, and because much of his work was covered by the Official Secrets Act.

During the Second World War, Turing worked for the Government Code and Cypher School (GC&CS) at Bletchley Park, Britain's codebreaking centre that produced Ultra intelligence. For a time he led Hut 8, the section that was responsible for German naval cryptanalysis. Here, he devised a number of techniques for speeding the breaking of German ciphers, including improvements to the pre-war Polish bombe method, an electromechanical machine that could find settings for the Enigma machine. Turing played a pivotal role in cracking intercepted coded messages that enabled the Allies to defeat the Nazis in many crucial engagements, including the Battle of the Atlantic, and in so doing helped win the war. Due to the problems of counterfactual history, it's hard to estimate what effect Ultra intelligence had on the war, but at the upper end it has been estimated that this work shortened the war in Europe by more than two years and saved over 14 million lives.

After the war, Turing worked at the National Physical Laboratory, where he designed the Automatic Computing Engine, which was one of the first designs for a stored-program computer. In 1948, Turing joined Max Newman's Computing Machine Laboratory at the Victoria University of Manchester, where he helped develop the Manchester computers and became interested in mathematical biology. He wrote a paper on the chemical basis of morphogenesis and predicted oscillating chemical reactions such as the Belousov–Zhabotinsky reaction, first observed in the 1960s.

Turing was prosecuted in 1952 for homosexual acts; the Labouchere Amendment of 1885 had mandated that "gross indecency" was a criminal offence in the UK. He accepted chemical castration treatment, with DES, as an alternative to prison. Turing died in 1954, 16 days before his 42nd birthday, from cyanide poisoning. An inquest determined his death as a suicide, but it has been noted that the known evidence is also consistent with accidental poisoning. In 2009, following an Internet campaign, British Prime Minister Gordon Brown made an official public apology on behalf of the British government for "the appalling way he was treated". Queen Elizabeth II granted Turing a posthumous pardon in 2013. The Alan Turing law is now an informal term for a 2017 law in the United Kingdom that retroactively pardoned men cautioned or convicted under historical legislation that outlawed homosexual acts. On 15 July 2019 the Bank of England announced that Turing would be depicted on the United Kingdom's new £50 note.
Stephen Gary "Woz" Wozniak (born August 11, 1950) is an American inventor, electronics engineer, programmer, philanthropist, and technology entrepreneur. In 1976 he co-founded Apple Inc., which later became the world's largest information technology company by revenue and largest company in the world by market capitalization. He and Apple co-founder Steve Jobs are widely recognized as two prominent pioneers of the personal computer revolution of the 1970s and 1980s.

In 1975, Wozniak started developing the Apple I into the computer that launched Apple when he and Jobs first began marketing it the following year. He primarily designed the Apple II in 1977, known as one of the first highly successful mass-produced microcomputers, while Jobs oversaw among other things the development of its foam-molded plastic case and early Apple employee Rod Holt developed the switching power supply. With computer scientist Jef Raskin, Wozniak had major influence over the initial development of the original Apple Macintosh concepts from 1979 to 1981, when Jobs took over the project following Wozniak's brief departure from the company due to a traumatic airplane accident. After permanently leaving Apple in 1985, Wozniak founded CL 9 and created the first programmable universal remote, released in 1987. He then pursued several other business and philanthropic ventures throughout his career, focusing largely on technology in K–12 schools.

As of January 2018, Wozniak has remained an employee of Apple in a ceremonial capacity since stepping down in 1985.
2453 | dbpedia | 1 | 21 | https://windowsontheory.org/2017/10/30/the-different-forms-of-quantum-computing-skepticism/ | en | The different forms of quantum computing skepticism | Boaz Barak | 2017-10-30 | Windows On Theory
(see also pdf version)
Quantum computing is one of the most exciting developments of computer science in the last decades. But this concept is not without its critics, often known as “quantum computing skeptics” or “skeptics” for short. The debate on quantum computing can sometimes confuse the physical and mathematical aspects of this question, and so in this essay I try to clarify those. Following Impagliazzo’s classic essay, I will give names to scenarios or “potential worlds” in which certain physical or mathematical conditions apply.
Potential worlds
Superiorita is the world where it is feasible to build scalable quantum computers, and these computers have exponential advantage over classical computers. That is, in superiorita there is no fundamental physical roadblock to building large quantum computers, and hence the class BQP is a good model of computation that is physically realizable. More precisely, in superiorita the amount of resources (think dollars) that is required in order to simulate a T-gate quantum circuit grows at most polynomially or maybe even linearly (with not-too-terrible constants) in T.
The other aspect of Superiorita is the mathematical conjecture that quantum computers offer exponential advantage over classical ones. That is, that there are functions computable by the mathematical model of (uniform) quantum circuits that require exponential time to compute by Turing machines. (In complexity jargon, this is the conjecture that BQP ⊈ SUBEXP, where the latter stands for the class TIME(2^{n^{o(1)}}).) Integer factoring is one problem that is conjectured to lie in BQP ∖ SUBEXP (i.e., where quantum computers have an exponential advantage). One can also consider analogous conjectures for sampling problems, and some particular sampling tasks that can be achieved in quantum polynomial time have been conjectured as requiring exponential time for probabilistic Turing machines.
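The naive classical baseline behind such conjectures is statevector simulation: an n-qubit state has 2^n amplitudes. Here is a minimal simulator sketch (added for illustration; the memory cost, not the gate loop, is the point):

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

    def apply_single_qubit_gate(state, gate, qubit, n):
        """Apply a 2x2 gate to one qubit of an n-qubit statevector."""
        psi = state.reshape([2] * n)                       # one tensor axis per qubit
        psi = np.tensordot(gate, psi, axes=([1], [qubit]))
        psi = np.moveaxis(psi, 0, qubit)                   # restore qubit ordering
        return psi.reshape(-1)

    n = 20                                     # already 2^20 ~ 10^6 amplitudes
    state = np.zeros(2**n); state[0] = 1.0     # |00...0>
    for q in range(n):
        state = apply_single_qubit_gate(state, H, q, n)
    print(state.size)                          # 2^n amplitudes: the classical memory cost

Each extra qubit doubles the vector, so the obvious algorithm is exponential; the conjecture above is the much stronger claim that no classical algorithm does fundamentally better for certain problems in BQP.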
Superiorita is the world in which most quantum computing researchers think we live, and, judging by the hundreds of millions of dollars of investments, many commercial companies and funding agencies as well. Note that this is a mix of both a physical assumption (that the model of BQP can be physically realized) and a mathematical assumption (that this model offers exponential speedup over classical machines). Without assuming both the physical and mathematical aspects of superiorita there would be no justification for investing huge efforts in building quantum computers.
In Superiorita quantum computers are not a panacea and in particular they can't solve NP-complete problems. Let me not wade into the (hugely important!) question of whether in Superiorita the Lattice Shortest Vector Problem is in BQP or not (see my essay for more on this topic). Also, even if one believes we live in Superiorita, whether or not the particular problems on which quantum computing offers exponential speedup are interesting is a matter of taste. As far as I know, factoring large integers is not inherently interesting in its own right, and once the world moves to different encryption standards, the applications to breaking encryption will eventually disappear. However, there are other tasks where quantum computers seem to provide exponential speedups and that can be interesting in their own right in areas such as chemistry and machine learning (though one should read the fine print).
Popscitopia is the "hyper superiorita" world where quantum computers can solve NP-complete problems. That is, in popscitopia quantum computers can be built, and NP ⊆ BQP. This is the world that is described by some popular accounts of quantum computers as being able to "run exponentially many parallel computations at once", a belief that is prevalent enough that Scott Aaronson devotes the tagline of his blog to refuting it. Most researchers in the area believe that, regardless of whether quantum computers can physically be built, they cannot solve NP-complete problems (a belief which is essential to so-called "post quantum cryptography"), and indeed so far we have no reason to think quantum computers offer exponential (or even better than quadratic) speedup for such problems. But, we have no proof that this is the case, and indeed, some TCS researchers, such as Richard Lipton, have suggested that even P = NP (which in particular implies NP ⊆ BQP) might be true.
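As an aside (my arithmetic, not the post's): Grover's algorithm gives only a quadratic speedup for unstructured search, which is why quadratic gains do not rescue NP-complete problems.

    import math

    N = 2**80                       # illustrative unstructured search-space size
    print("classical queries ~", N)
    print("Grover queries    ~", math.isqrt(N))  # 2^40: smaller, but still exponential in the input size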
Skepticland is the world where it is not possible to build scalable quantum computers, though mathematically they do offer an exponential advantage. That is, in Skepticland, for every function f (and more generally a promise problem or a sampling problem) that can be computed using T amount of physical resources, there is a probabilistic Boolean circuit of size polynomial in T that computes f as well. However, mathematically, like in Superiorita, it is still the case in Skepticland that BQP contains functions (such as integer factoring) that require exponential time to be computed classically.
Skepticland is the world that "quantum computing skeptics" such as Gil Kalai, Leonid Levin and Oded Goldreich think we live in. In this world the extended Church-Turing hypothesis holds sway and there exists some (yet unaccounted for) cost that blows up exponentially in T when trying to physically realize size-T quantum circuits.
These skeptics still accept the mathematical conjecture underlying superiorita that BQP contains functions that require exponential time for deterministic or probabilistic Turing machines. Indeed, as far as I can tell, their belief in the inherent difficulty of problems such as factoring is a large part of the intuition for why quantum computers would not be physically realizable.
Finally, Classicatopia is the world where BQP = BPP, and more generally any function, promise problem, or sampling problem that can be solved by (uniform) quantum circuits can be solved by probabilistic Turing machines with a polynomial overhead. In this world quantum computers can be physically realized, but only because they are no more powerful than classical computers. Hence the Extended Church-Turing holds but for a completely different reason than in Skepticland. In Classicatopia we can simulate the entire physical world using a classical computer. One advocate of this world is Ed Fredkin (who interestingly was the person who motivated Richard Feynman to propose the possibility of quantum computers in the first place). Also, several researchers (such as Peter Sarnak) have suggested that the marquee problem of integer factoring can be solved by polynomial-time Turing machines.
Truth and beauty
At this point I should probably talk about the evidence for the probability of truth of each of these scenarios, and discuss the latest advances in experimental works building quantum computers. But frankly I’d be just parroting stuff I Googled, since I don’t really know much about these works beyond second or third hand reports.
Rather, I’d like to talk about which of these worlds is more beautiful. Beauty is in some ways as important for science as truth. Science is not just a collection of random facts but rather a coherent framework where these facts fit together. If a conjecture is “ugly” in the sense that it does not fit with our framework then this can be evidence that it is false. When such “ugly ducklings” turn out to be true then this means we need to change our standards of beauty and come up with a new framework in which they fit. This is often how progress in science is made.
While I am not a physicist, I believe that quantum mechanics itself followed exactly such a trajectory. (I am probably making some historical, physical, and maybe even mathematical mistakes below, but I hope the bigger-picture description is still accurate; however please do correct me in the comments!)
The ancient Greek philosopher Democritus is often quoted as saying "Nothing exists except atoms and empty space, everything else is opinion." This saying is usually interpreted as an empirical hypothesis about the world, or to use mathematical jargon, a conjecture. But I think this is really more of a definition. That is, one can interpret Democritus as not really making a concrete physical theory but defining the allowed space for all physical theories: any theory of the world should involve particles that mechanically and deterministically evolve following some specific and local rules.
Over the coming years, scientists such as Newton, Leibniz and Einstein took this prescription to heart and viewed the role of physics as coming up with ever more general and predictive theories within the Democritus model of deterministic particles with no randomness, intent, or magic such as "action at a distance". In the late 1910s, Emmy Noether proved some remarkable theorems that derived conservation laws from physical theories based only on the fact that they satisfy certain symmetries (see also my recent post). While the mechanical clockwork theories satisfied such symmetries, they are not the only theories that do so. Thus Noether's theorems showed that even non-clockwork theories could still satisfy a more general notion of "mathematical beauty".
At the time Noether's theorems were just a very useful mathematical tool, but soon nature gave some indications that she prefers Noether's notion of beauty to Democritus'. That is, a series of experiments led to the introduction of the distinctly "non clockwork" theory of quantum mechanics. Giving up on the classical notion of beauty was not easy for physicists, and many (most famously Einstein) initially thought of quantum mechanics as a temporary explanation that eventually would be replaced by a more beautiful "Democritus-approved" theory. But Noether's results made it possible to see quantum mechanics as not just predictive but beautiful. As Nima Arkani-Hamed says:
Newton’s laws, even though they were the first way we leaned how to think about classical physics, were not the right way to make the jump to quantum mechanics. … [Rather] because the underlying ideas of the action– and everything just really ports beautifully through, from classical to quantum physics, only the interpretation changes in a fundamental way– all of Noether’s arguments, all of Emmy Noether’s arguments about conservation laws go through completely unscathed. It’s absolutely amazing. All these arguments about conservation laws, many other things change, tons of other things changed when we went from classical to quantum. But our understanding of the conservation laws, even though they’re come up with by this classical physicist a hundred years ago, are equally true in quantum mechanics today.
Moreover, my outsider impression is that with time physicists have learned to accept and even grow to love quantum mechanics, to the degree that today many would not want to live in a purely classical world. If you wonder how anyone could ever love such a monstrosity, note that, as Scott Aaronson likes to say, there is a sense in which the relation between quantum and classical physics is analogous to the relation between the ℓ_2 and ℓ_1 norms. I think most mathematicians would agree that the former norm is "more beautiful" than the latter.
My personal opinion
So, which is the most beautiful world, Superiorita or Skepticland?
If you’ve asked me that question a decade ago, I would have answered “Skepticland” without hesitation. Part of the reason I got into computer science is that I was never good at physics and didn’t particularly like it. I also thought I could avoid caring about it. I believed that ultimately the world is a Turing machine or cellular automata and whether it has 5 or 12 particles is about as interesting as whether the computer I’m typing this on uses big endian or little endian representation for integers. When I first heard about quantum computing I was hoping very much that there is some inherent reason it can never work so I can avoid dealing with the ugliness of quantum mechanics and its bracket notation.
But as I’ve learned more about quantum mechanics, I’ve grown not just to accept it as a true theory but also beautiful, and with this to also accept quantum information and computation theory as a beautiful generalization of information and computation in its own right. At the moment I don’t see any beautiful alternative theory (to use Aaronson’s terms, a “Sure/Shor separator”) from the skeptics. The closest we have to such a theory comes from Gil Kalai, but as far as I can tell it posits noise as a new fundamental property of nature (the Ka-la-ee constant?). Noise here is not the usual interpretation of quantum probabilities or the uncertainty principle. It seems to be more similar to the engineering form of noise as inaccuracies in measurements or errors in transmissions. These can be serious issues (for example, I believe that friction is a large part why actually building Babbage’s Analytical Engine was so difficult). But as far as I can tell, these engineering difficulties are not fundamental barriers and with sufficient hard work and resources the noise can be driven down to as close to zero as needed.
Moreover some of the predictions involve positing noise that scales with the number of qubits in the computer. It seems to require nature to "know" that some physical system in fact corresponds to a logical qubit, and moreover that two distant physical systems are part of the same quantum computer. (I should say that Gil Kalai disagrees with this interpretation of his conjecture.) While one could argue that this is not more counterintuitive than other notions of quantum mechanics such as destructive interference, entanglement, and collapse under measurements, each one of those notions was only accepted following unequivocal experimental results, and moreover they all follow from our modelling of quantum mechanics via unitary evolutions.
The bottom line is that, as far as I can tell, Superiorita is the most beautiful and evidence-supported world that is currently on offer.
Will we see a mega-qubit quantum computer?
The current experimental efforts are aimed at building a 50-qubit quantum computer. This sounds impressive until I remember that the VIC 20 I played with as a third-grader more than thirty years ago already had 5K (i.e., about 40,000 bits) of memory. So, will we ever see a quantum computer big enough to run Frogger? (not to mention Ultima IV)
The answer to this question depends not just on the science but also on economics and policy as well. Suppose (with no real justification) that eventually we will be able to produce a quantum computer at a cost of 1000 dollars per qubit. Then a million-qubit machine will cost a billion dollars to build. The current applications of quantum computers do not seem to justify this cost. As I mentioned, once we transition to different cryptosystems, the motivation for factoring integers will be significantly lessened, and while simulating quantum systems can be important, it's hard to see it as forming the basis for a billion-dollar business. Of course, this can all change with a single theory paper, just as Peter Shor revolutionized the field of quantum computing with a single algorithm.
Moreover I hope that at some point, policy makers and the public at large will stop viewing computer science just through the lens of applications, and start seeing it also as a fundamental science in its own right. The Large Hadron Collider apparently cost about 13 billion dollars to build and operate, and yet the same analysis calls it a "bargain" in terms of the benefit from both technologies invented and scientific discovery. The case can be made that building a large-scale quantum computer would be no less important to science, and would offer no less benefit to society. Indeed, a quantum computer offers literally an exponential number of potential experiments one can run on it. Moreover, there is absolutely no reason to think that Shor gave the final word on breakthrough algorithms that could use such a computer for tasks that a priori seem to have nothing to do with physics. In that vein, I hope that whatever bodies fund experimental quantum computing research realize that at least part of their investment should go into theoretical work in quantum (and also classical, as the two are intertwined) algorithm design.
[Source: "Computational Complexity," Lance Fortnow's weblog, 2003 archive: https://blog.computationalcomplexity.org/2003/]
The von Neumann min-max theorem showed that every finite zero-sum two-person game has optimal mixed strategies. More formally, let A be the payoff matrix of a game, then
maxx miny xTAy = miny maxx xTAy where x and y are probability vectors.
Andrew Yao used the min-max theorem to prove what we now call the Yao Principle: The worst case expected runtime of a randomized algorithm for any input equals best case running time of a deterministic algorithm for a worst-case distribution of inputs. The Yao principle has proven invaluable for proving upper and lower bounds for deterministic and probabilistic algorithms.
How can you get a fair coin by flipping a coin of unknown bias? You use the von Neumann coin-flipping trick: Flip the biased coin twice. If you get heads then tails output HEADS. If you get tails then heads output TAILS. Otherwise repeat. This procedure will output HEADS or TAILS with equal probability and if the bias is not too close to zero or one the expected number of repetitions is relatively small.
The von Neumann coin flipping trick is the first in a long line of research in complexity extracting random bits from weak random sources.
John von Neumann passed away February 8, 1957 in Washington, DC.
On Saturday I visited the Einstein Exhibit at Chicago's Field Museum. Some manuscripts and letters and a nice exhibit explaining why time must vary if the speed of light remains a constant made this an interesting but not a must-see exhibit. The biggest surprise for me came from seeing how Einstein's fame happened overnight instead of the more gradual fame I would have expected. In 1919 a solar eclipse showed that light from stars does bend from gravitational forces. Einstein's fame grew immediately and his name became synonymous with genius.
This superstardom for a scientist doesn't seem to happen today. When Andrew Wiles proved Fermat's last theorem he did get some deserved attention but he never became a true household name. Once you realize Wiles hit the upper limit of fame a mathematician can achieve (ruling out people like Ted Kaczynski and John Nash), you can see that the Einstein effect in science may never return.
On the other hand, University of Chicago paleontologist Paul Sereno headlines the social page of the Chicago Tribune at the "Party With Giants." Perhaps scientists can still achieve more than fifteen minutes of fame after all.
Time for another of my favorite open problems.
Does NP=UP imply the polynomial-time hierarchy collapses?
UP is the class of languages accepted by nondeterministic polynomial-time Turing machines that have at most one accepting computation for all inputs.
This problem has loose connections to Valiant-Vazirani but Hemaspaandra, Naik, Ogiwara and Selman have the most closely related result. Consider the following proposition.
(*) There is a set A in NP such that for all satisfiable formula φ there is a unique satisfying assignment a of φ such that (φ,a) is in A.
Hemaspaandra et. al. show that (*) implies the polynomial-time hierarchy collapses to the second level.
For all we know, (*) and NP=UP are incomparable. If (*) holds for some A in P then NP=UP by just guessing a. We don't know whether NP=UP implies (*) since the accepting computations of a UP machine accepting SAT need not reveal a satisfying assignment of a formula.
There exist an oracle relative to which UP=NP≠co-NP. A relativized world with UP=NP and Σ2p≠Π2p remains open.
Every scientific field has their own rules for the order of authors in a paper. In theoretical computer science, tradition dictates that we list the authors alphabetically by last name. I don't agree with this tradition; rarely do all the co-authors of a paper play an equal role. The decision whether to add someone as a co-author, and thus an equal, often becomes difficult.
But breaking with tradition can have its own problems. I have three papers that break the alphabetical rule though two were in biology which has its own rules. In the other back in 1990, Carsten Lund, a graduate student at the time, made the key step in developing an interactive proof system for the permanent. For that we made him first author in the Lund-Fortnow-Karloff-Nisan paper. In retrospect I regret this decision. It only added confusion to those who cited the paper. Also did Lund not play as important a role in other papers where we kept alphabetical order? Breaking with tradition, even with the best of intentions, can often cause more harm than good.
Want an easy rule to greatly improve your writing? Just avoid the following words, particularly in the abstract and introduction of your papers. am is are was were be been
Avoiding these seven forms of "to be" will force you to write in the active tense instead of the passive making your sentences less boring. For example, instead of "It is known that all functions can be computed securely in the information theoretic setting" use "We can compute all functions securely in the information theoretic setting."
Taking this rule to the extreme can lead to some very convoluted sentences but, I promise, forcing yourself to think actively about every statement you write will make a great difference in your prose. In almost all cases the right answer is "not to be."
Over the last 15 years, two very active research areas seem at odds. Derandomization results have shown us that we can often remove randomness from computation but interactive proof systems and PCPs exhibit incredible power from randomness. There is no contradiction here, just two very different ways we use randomness in complexity: for searching and for hiding.
Typically we think of randomness for searching, for example finding disagreements with Fermat's little theorem to show a number is composite or taking random walks on graphs to show they are connected. Derandomization results have given us reasons to believe we can replace the randomness in these computations with pseudorandom number generators.
Randomness can also play the role of hiding, since no one can predict future coin tosses. In interactive proofs we make the jump from NP to PSPACE because of randomness. For PCPs with O(log n) queries the jump goes from P to NP and with poly queries from NP to NEXP, in the latter case classes that are provably different. In all these cases the prover cannot cheat because it cannot predict coin tosses not yet made by the verifier. A verifier using a pseudorandom generator will fail here, since the prover could then predict the verifier's actions.
AM protocols have the verifier flip coins first, so there is no hiding going on, only a search for a statement Merlin can prove, and we expect some derandomization for AM. The result that MA is in AM says that sometimes we can replace hiding randomness with searching randomness.
Paz Carmi, Yinnon Haviv and Enav Weinreb from Ben-Gurion University have solved the regular language problem I posted last month.
The problem came from Janos Simon based on a homework question in Kozen's book. Let L(A)={x | x^m is in A for some m}. The homework question asked to show that L(A) is regular if A is regular. The question Janos asked was how many states we need for a DFA for L(A) if a DFA for A has n states. Carmi, Haviv and Weinreb show that an exponential number of states are required.
Not only did they solve the problem but they also sent me this nice write-up of the proof. I believe this is the first time someone has directly solved a problem I've given on this weblog. I owe each of them a beer (or a non-alcoholic beverage of their choice).
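To make the problem concrete, here is a minimal sketch (my own, not from their write-up) that decides whether a given x is in L(A): read x once to get the map it induces on the DFA's states, then iterate powers of that map, which must cycle within s^s steps.

    def in_root(delta, start, accepting, states, x):
        # delta: dict (state, symbol) -> state for a DFA accepting A.
        def read(s):
            for ch in x:
                s = delta[(s, ch)]
            return s
        t = {s: read(s) for s in states}      # state map induced by x
        cur, seen = dict(t), set()
        while True:
            if cur[start] in accepting:       # some power x^m is in A
                return True
            key = tuple(cur[s] for s in states)
            if key in seen:                   # powers of x have cycled
                return False
            seen.add(key)
            cur = {s: t[cur[s]] for s in states}   # map induced by x^(m+1)

A DFA for L(A) essentially has to track these state maps, and the s^s count of possible maps is, roughly speaking, where the exponential state blow-up lives.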
Update 12/9: I received the following today from Markus Holzer.
It seems that I have missed your posting about the problem last month. The problem you have stated was already solved in June by Jeff Shallit and co-authors. They have given a lower bound on the DFA accepting root, by considering the (largest) syntactic monoid induced by two generators. The latter problem on syntactic monoid size is of its own interest and I was working on that for a while, therefore I know the result of Shallit et al on the root descriptional complexity. Maybe you also owe the beers to Shallit et al.
The Cornell University Library has announced it will drop a substantial number of its Elsevier subscriptions, part of a general problem Cornell and other libraries are facing with higher costs and different pricing models from commercial academic publishers.
I have wanted to write a post on this topic for a while but I find it difficult to truly understand the problems or the potential solutions. Elsevier does a nice job with their portal and their publishing, but because of consolidation and cheap distribution via the internet, they have changed their pricing model in ways that make it difficult for many libraries to afford all of the journals that they need.
This poses some moral questions: Should we avoid submitting our papers to Elsevier journals? Is it wrong for me to serve on the editorial board of the Elsevier-published Information and Computation? I just don't know.
First an update on NSF program solicitations: The Formal and Mathematical Foundations cluster has posted its solicitation, which includes computational complexity. The deadline is March 4. The program announcement for the Emerging Models and Technologies for Computation Cluster, which includes quantum and biological computing, is still under development. The ITR solicitation has also been posted, with some major changes from previous years.
Donald Knuth's tribute to Robert Floyd highlights the December SIGACT News. Also reviews of a bunch of crypto books, a column on sublinear-time algorithms and the complexity theory column on "Post's Lattice with Applications to Complexity Theory."
In my mailbox yesterday was not one but five copies of SIGACT News shrink-wrapped together. Once I unwrapped them and looked at the labels, only the outer one belonged to me. There were two for other professors in my department, one for our library and one for the library of Loyola University Chicago, which is on the other side of the city. I'm not sure if it was a mistake or some attempt by the ACM to reduce mailing costs, but I hope this is a one-time occurrence.
XML (eXtensible Markup Language) has become quite a popular data format in recent years. XML roughly corresponds to a tree. For example,
<person><name>Harry</name><age>29</age></person>
<person><name>Jane</name><major>Computer Science</major></person>
represents a tree. The root has two children, each labeled "person". The first of these children has two children named "name" and "age". The first of those children has a leaf node labeled with the phrase "Harry". For a larger example, see the RSS feed for my weblog.
XML was designed as a flexible way to present documents for later display. Since the XML format can be easily produced and parsed, XML also serves as a standardized method for transferring data between databases, far better than the old CSV (Comma-Separated Values) format.
Recently there has been some work on directly manipulating and querying the XML data. To this theorist, that seems like a bad idea, particularly for larger databases. While XML completely represents the underlying tree, it is not a good implementation of that tree. Basic tree operations like parent and sibling are very expensive when working directly on the XML text. About the only thing one can do quickly with XML is depth-first search. Far better to "shred" the data into a better tree implementation like a DOM (Document Object Model) or a full-fledged database and do the work there, writing out new XML if needed.
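For example, in Python one can shred record-by-record while streaming, never holding the whole tree in memory (a sketch; "people.xml" is a hypothetical file of <person> records like those above, wrapped in a single root element):

    import xml.etree.ElementTree as ET

    # Process each <person> subtree as soon as its end tag arrives,
    # then free it, so memory stays bounded however large the file is.
    for event, elem in ET.iterparse("people.xml", events=("end",)):
        if elem.tag == "person":
            print(elem.findtext("name"))   # work on one shredded record
            elem.clear()                   # then discard it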
One issue though is when the XML file is on the order of 5-10 GB, a bit larger than what can be stored in the memory of a typical desktop machine. One can stream through the file rather quickly but cannot recreate the tree. This opens up some interesting theoretical questions:
Given a stream of data in XML format, how can one do efficient analysis and manipulations of the underlying tree? I suspect one would want to sometimes shred subtrees, but you cannot determine the size of a subtree until after it has been streamed. Perhaps some randomness or streaming the file multiple times might be helpful.
XML might not be the right model of a tree for this purpose. What is the best way to stream a tree or other data structure to allow an efficient implementation of the basic operations of the data structure? Perhaps some redundancy might be useful.
I thought I should mention some of my favorite and most frustrating open questions over the years. Here's one of them:
Let f:{0,1}^n→{0,1}. Let h and g be n-variable degree-d polynomials over the reals. Suppose for all x in {0,1}^n, g(x)≠0 and f(x)=h(x)/g(x). Is there a constant k such that the decision-tree complexity of f is bounded by d^k?
The decision-tree (or query) complexity is the number of bits of x that need to be viewed to determine f(x). The queries to the bits of x can be adaptive. I'm particularly interested in the case where d is poly-logarithmic in n.
Nisan and Szegedy answer the question in the affirmative if g(x)=1. Their result holds even if f(x) is only approximated by h(x). However if we allow arbitrary g(x), h(x)/g(x) can closely approximate the OR function, which requires looking at all of the bits. The case where we require exact equality of f(x) and h(x)/g(x) is the open question at hand.
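To get a feel for the power of division here, a quick numeric check of a degree-1 rational approximation to OR (the particular h and g are my own illustration, not from the papers):

    from itertools import product

    n, N = 8, 1000                        # N controls the error
    def h(x): return N * sum(x)
    def g(x): return N * sum(x) + 1       # never 0 on {0,1}^n

    # Max error of h/g against OR over all of {0,1}^n.
    err = max(abs(h(x) / g(x) - (1 if any(x) else 0))
              for x in product((0, 1), repeat=n))
    print(err)   # at most 1/(N+1), yet computing OR needs all n queries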
Back in my science-fiction reading days, I particularly remember one editorial written in one of those anthology magazines about 1980: In the near future, you will be able to access, via your personal computer, any science fiction story right after it has been written. If you like a certain author, you can read other stories from that author, even if we didn't decide to put it in this magazine. In this future world, will you still need me, the editor? The answer is yes, for there will be way too much dreck out there for you to find the good stories within, and you will need people like me to point them out to you.
The future is now and though I haven't kept up with science fiction, the same issue applies to academic publications. Recent posts by Michael Nielsen and on Slashdot have asked: With nearly all new papers in physics and computer science easily accessible on the web, how do you find the ones worth reading?
Conferences have traditionally played this role in computer science. But, by definition, paper choices are decisions by committee and with the massive growth in the field, many good papers do not appear in the major conferences.
What we need are "editors"! You can help. Write a survey paper, or spend a page in your research papers discussing the important earlier results in a field. Maintain a web page pointing to papers you find interesting. Start a weblog saying what you find interesting--you don't have to post long or often, just to say, hey, this paper is worth looking at. This way people with similar interests to you can find out what at least you think is important. Only by many of us working together can we make the interesting papers stand out.
It happened right after I started high school in suburban New Jersey, the start of the Science Times section in Tuesday's New York Times. The Science Times not only helped get me excited about science but made me feel others could get excited over science as well. I've kept reading it off and on during these past 25 years. The Science Times has reported on a fair amount of research in complexity and theoretical computer science, for a time some joked that a result was not important until it appeared in the New York Times.
Today the New York Times celebrates the 25th Anniversary Issue of the Science Times. It features 25 questions such as Does Science Matter? and What Is the Most Important Problem in Math Today? (Hint: It's not P versus NP).
I'll end this post with a quote from the essay of Alan Lightman:
All of the scientists I've known have at least one quality in common: they do what they do because they love it, and because they cannot imagine doing anything else. In a sense, this is the real reason a scientist does science. Because the scientist must. Such a compulsion is both blessing and burden. A blessing because the creative life, in any endeavor, is a gift filled with beauty and not given to everyone, a burden because the call is unrelenting and can drown out the rest of life.
The Computer and Information Science and Engineering Directorate of the NSF has completed it reorganization. The CISE web site details the new structure.
CISE now has four divisions. Instead of each division having a large number of specific programs, each division contains a smaller number of clusters covering broader research areas. I'm happy to see "Computational Complexity" specifically mentioned in the Formal and Mathematical Foundations Cluster in the Division of Computing & Communication Foundations. However it shares that cluster with such diverse topics as "computational algorithms for high-end scientific and engineering applications" and "analysis of images, video, and multimedia information." Hopefully funding panels will meet in the more specific areas to avoid trying to compare proposals from vastly different areas of computer science.
Quantum and Biological Computing sit in a different CCF cluster, Emerging Models and Technologies for Computation. This shows NSF's hopes for these new technologies but may also give them a way to phase out these areas if the technologies don't show promise.
Program announcements for the CCF clusters are still under development. The ITR solicitation is still not expected until Thanksgiving. So if you plan a grant proposal this year, you'll still need to wait.
The NEC Research Institute (NECI) died just over a year ago. I didn't feel comfortable talking about it then so let me say a few words now.
I joined NECI in 1999 just after its tenth anniversary. When I joined, its mission and focus were basic research in computer science and physics. NECI gave me considerable time and resources to study fundamental questions in computational complexity. It was an exciting place to be.
Soon thereafter some changes were occurring. NEC modified the mission of NECI to focus on producing technologies with basic research secondary. Some researchers (though not us theorists) were encouraged to join "technology groups" to find practical applications of their research. But during this time, the administrators always supported basic research and I never felt uncomfortable doing theory.
But then on November 1, 2002, NECI merged with NEC CCR&L, a more applied research lab in the same building to form NEC Laboratories America. The new mission makes no mention of basic research. The scientists in charge were replaced by engineering/management types. Many of the research scientists, particularly physicists, were let go.
My job was never in immediate danger but NEC was no longer the place for me and so I went on the job market; no one was surprised when I decided to leave.
A corporation like NEC needs to make decisions for the health of the company. I do not fault NEC for the decisions that it made and they gave me a few great years. Still I mourn the NEC Research Institute, quite a special place during its thirteen year run.
Last week I started an experiment using instant messaging. I thank the many of you who sent me IMs, a great way for me to meet you, the readers of this weblog. I plan to keep trying IM for a while but I have learned a few lessons which seem obvious in retrospect.
Instant messaging can be a time sink. I love communicating with people, which is the main reason I keep this weblog going. However, like most academics, I have much going on and can't afford to have many lengthy discussions. I've also learned there is no clean way to end an IM conversation. So please feel free to IM me but don't take it personally if I rudely keep the conversation short.
Just because the nice icon on the home page says I'm online doesn't mean that I am at my computer and available to chat at the moment. Often I am and I will but if not I will eventually see your message and respond. If there really is something important that you want to discuss with me via IM we can set up a scheduled time via email. I often do this with phone calls so why not IM too?
I've also discovered IM conversations can be recorded, posted on the web and could be used in a court of law. I need to be careful about what I say.
I have already had some interesting research conversations and ideas for weblog posts via IM. The last post came in part because of some IM questions about the Feigenbaum-Fortnow paper. Email became a powerful research tool when email use hit a critical mass among computer scientists sometime in the mid-late 80's. I believe IM will also follow that curve and I hope to keep ahead of it and perhaps nudge it a little bit.
Let f be a function that maps Σ^n to Σ^n. Let U represent the uniform distribution on Σ^n and D be the distribution that one gets by applying f to a string drawn from U.
We wish to find an f that changes x but keeps the underlying distribution close to the same; in particular we want the following properties,
(1) Prob(f(x)≠x)≥2/3 when x is drawn from U.
(2) U and D should be statistically close, informally no algorithm making a polynomial number of samples will be able to distinguish, with high confidence, whether those samples came from D or U.
Achieving such an f is easy: consider the f that just flips the first bit of x. (1) holds all the time and U=D.
Suppose we add a restriction to f:
(3) In the bits where x and f(x) differ, those bits are 1 in x and 0 in f(x). For example, f(011)=010 is allowed, but f(011)=111 is not.
An f fulfilling (1), (2) and (3) is impossible. (1) and (3) mean that f will reduce the number of ones on most of the strings, and taking say n^3 samples we will notice a statistical difference in the number of bits which are 1 depending on whether the samples were drawn from U or D.
Suppose we replaced (3) with a weaker restriction:
(3') In the first bit where x and f(x) differ, that bit is 1 in x and 0 in f(x). So f(110)=011 is allowed but f(001)=010 is not allowed.
Can an f fulfilling (1), (2) and (3') exist? Not so clear, but Peter Shor found a simple example: f(0^n)=0^n, and for the other x, f(x)=x-1 where x is viewed as a nonnegative integer written in binary. D is indistinguishable from U yet f changes nearly every string.
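A small sanity check of Shor's example, computing the exact statistical distance between U and D for a modest n (a sketch):

    n = 10
    M = 2 ** n

    def f(x):                     # f(0^n) = 0^n, otherwise f(x) = x - 1
        return x if x == 0 else x - 1

    D = [0.0] * M
    for x in range(M):            # D(y) = (#preimages of y) / 2^n
        D[f(x)] += 1 / M

    dist = sum(abs(p - 1 / M) for p in D) / 2
    print(dist)                   # exactly 1/2^n, yet f(x) != x off 0^n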
These questions are related to an old paper I had with Joan Feigenbaum which has gotten some recent attention because of a nice new FOCS paper by Bogdanov and Trevisan that builds on our paper. The proofs in these papers work partly because (1), (2) and (3) cannot all happen even for arbitrary distributions U. Both papers give a negative result for a nonadaptive case; the adaptive case corresponds to (1), (2) and (3') and Shor's example shows that the proof techniques will not directly lead to a solution in the adaptive case, which remains open.
A quantum computing graduate student sent me email over the weekend. He had thought he had proven some surprising results about the class PP and was wondering if he was making some mistake. After some discussion here was his reply:
Ok I get it. Somehow I jumped to the conclusion that P^PP was PP.
There is one more for your blog: A ⊆ B implies B ⊆ A^B but not A^B ⊆ B (duh!)
He goes on to say he made his quantum leap to conclusions since for the quantum class BQP, P^BQP=BQP, he thought the same property must hold for all classes.
I present this because he suggested it for my weblog and as a public service for those who might make a similar mistake. Yes, in case you were wondering, for reasonable classes A (like A=P), B ⊆ A^B without needing to assume A ⊆ B.
Here is one of my favorite examples of a bad proof for what turns out to be a correct theorem.
Theorem: If NP is in BPP then the whole polynomial-time hierarchy is in BPP.
Let's focus on simply showing Σ_2 is in BPP if NP is in BPP. The rest is straightforward induction. Here is our first proof:
Σ_2 = NP^NP ⊆ NP^BPP ⊆ BPP^BPP = BPP. Do you see the problem with this proof?
To get a correct proof (first due to Zachos) we need to use Arthur-Merlin games. Consider a Σ_2 language L as an ∃∀ expression. Since NP is in BPP, we can replace the ∀ with a probabilistic test. This gives us what is known as MA or a Merlin-Arthur game where the powerful Merlin sends a message that Arthur can probabilistically verify. A now classic result shows that MA is contained in AM, where Arthur provides a random string to Merlin who must then provide a proof based on that string. Once again we apply the NP in BPP assumption to allow Arthur to simulate Merlin probabilistically and now we have a BPP algorithm for L.
The problem in the first proof is in the second "⊆". The assumption NP in BPP does not imply NP^A ⊆ BPP^A for all A.
How has the internet most affected the study of science? In one word: communication, the ability for scientists to discuss and share their research with each other quickly and cheaply. So I strive to find new ways to use the internet to improve communication. Starting this weblog is one such example. I thought I'd try another: Instant Messaging.
Now many of you are thinking I am crazy, but for different reasons. Some of you out there have been using instant messaging for years and wondering how I could consider it a "new" technology. But many of you out there have barely figured out how to read your email attachments and have hardly even heard of IM.
On a trial basis, for my weblog readers, I will welcome your instant messages. Talk to me about this weblog, about complexity and computer science in general or about whatever you want. Maybe I'll start a trend and all computer scientists will IM each other. Maybe not but it's worth trying out.
I'm using Yahoo Instant Messaging; my Yahoo id is the imaginative "fortnow" (note: I do not read email sent to fortnow@yahoo.com). I put a button on the left column of the weblog home page that tells you when I am online and you can click to connect. I look forward to hearing from you.
There is a big reorganization in the CISE directorate of NSF. To understand what's happening, let's review the previous structure.
The National Science Foundation, like most government bureaucracies, has a tree-like structure. At top is the office of the director (Rita Colwell). Below that are several directorates including the Directorate for Computer and Information Science and Engineering (CISE) headed by Peter Freeman. By law every organization in NSF cannot be just "science" but "science and engineering" except for the Foundation itself.
Below CISE were several divisions, including Computer-Communications Research (C-CR) headed by Kamal Abdali. C-CR had several programs including the Theory program headed by Ding-Zhu Du.
Peter Freeman, who recently became head of CISE, has decided to reorganize the whole directorate. Exactly what it will become should be announced next week but there are some hints in this presentation. Change is always scary but I'm hopeful theory will survive. I'll give more details when I know them.
To overcome the tree structure of NSF, there are a number of cross-disciplinary programs. One such program, Information and Technology Research (ITR), has produced several large, medium and small grants to a variety of projects, including many applications of theory. This is the last year of ITR solicitations and the calls have been well behind schedule, probably not unrelated to the CISE reorganization. This year's topic will be "ITR for National Priorities" with more details promised by Thanksgiving. Unconfirmed rumors have it that the program will be more focused and will make only medium-sized grants.
There are two computer science departments on the University of Chicago campus. The one I belong to is a department in the physical sciences division of the University; the other is the Toyota Technological Institute at Chicago (TTI-C). What is TTI-C?
The Toyota Technological Institute, a university covering various engineering disciplines located in Nagoya City, Japan, was founded in 1981 from funds from the Toyota Motor Corporation as directed by the Toyoda family. They decided to start a computer science department and locate it in the states to have a broader access to computer science faculty and students. For various reasons they settled on Chicago and set up an agreement with the University of Chicago, using space in the University of Chicago Press building. TTI-C has just officially started up and have already signed up a few strong faculty members including theorist Adam Kalai and Fields medalist Stephen Smale. TTI-C plans to increase its faculty size and start up a graduate program in the near future.
Although there will be some sharing of courses and a few of our faculty (including myself) sit on a Local Academic Advisory Council for TTI-C, TTI-C will formally maintain itself as a separate institution from the University. Nevertheless close collaborations between our department and TTI-C have already established an exciting research environment for our combined faculty and students.
This is for my friends in Boston who suggested I do a sports post.
One of the great parts of my job is working with people from around the world. I was working with a graduate student, Luis Antunes, from Portugal when we found out that Portugal would play the US in the 2002 World Cup. We had various rounds of taunting back and forth with me fully knowing the US didn't stand a chance in that match. When the US did win, Luis tells me the whole country went into a deep depression. By contrast, for the most part people in the US didn't care.
I can now understand Portugal's pain as the city of Chicago has gone into a similar kind of quiet depression over the Cubs failure to advance to the world series. Impressive what sports can do to the psyche of a city or a country.
Memo to my friends in Boston: Hope things go well for the Sox so your city doesn't end up feeling tomorrow like Chicago does today.
I have tried to keep politics out of this weblog with the exception of issues related to science, in particular science funding and immigration. To celebrate America's fiscal new year, let's talk about immigration.
Congress has declined to renew the higher annual cap on H1-B visas, rolling them back to 65,000 for the fiscal year starting today from 195,000 in 2000. H1-B's allow "employers to hire foreign workers with special skills they can't find among American job applicants," typically for high-tech jobs. But H1-B's are also used for visiting researchers at industrial research labs and some university positions. When the limit is reached, the government will no longer issue more visas until the start of the next fiscal year.
At NEC, we had postdocs who had to delay their start date until October for this reason, including in some cases those who wanted to start at the beginning of summer. With the limit dramatically decreased, if the job market starts perking up, we could hit the limit much earlier. This could make a real dent in international cooperation in science.
What is a one-way function? Intuitively, a function that is easy to compute and hard to invert. Taking this intuitive idea to a formal definition has yielded two quite different meanings, sometimes causing confusion.
The first directly tries to translate the intuition. A function f is one-way if
f is 1-1 (so an inverse is unique),
f is length increasing (so the output of the inverse function is not too large),
f is computable in polynomial time, and
there is no polynomial-time computable g such that for all x, g(f(x))=x.
This is a nice clean definition that fulfills the intuition but is not that useful for cryptography, since f could be easily invertible on all but a small number of inputs, or invertible once we allow stronger (say randomized) adversaries. To handle these issues we have a different looking definition.
A function f is r(n)-secure one-way if
There is a function l(n)≥n such that f maps strings of length n to strings of length l(n),
f is computable in polynomial time, and
for all probabilistic polynomial-time algorithms A, the probability that f(A(f(x)))=f(x) is at most r(n) where the probability is taken over x chosen uniformly from the strings of length n and the random coins used by A.
There are many variations on both definitions and a considerable amount of theory devoted to each. Grollmann and Selman show that one-way functions of the first kind exist if and only if P ≠ UP. On the other hand Håstad, Impagliazzo, Levin and Luby show that from any one-way function of the second kind, one can create a pseudorandom generator.
At one point I tried using complexity-theoretic one-way functions and cryptographic one-way functions to distinguish the two, but this only caused confusion. So we have to live with the fact that we have these two definitions with the same name and we'll have to just use context to figure out which definition is appropriate. If you give a talk or write a paper about one-way functions, it never hurts to distinguish which version you are talking about.
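The security experiment in the second definition is easy to phrase as code. Here is a sketch with a toy (completely insecure) f and a brute-force adversary, both of my own invention for illustration:

    import random

    def f(x, n):                       # toy candidate: squaring mod 2^n
        return (x * x) % (1 << n)

    def brute_force(y, n):
        # Exponential-time inverter; it may succeed, since the definition
        # only requires *polynomial-time* adversaries to fail.
        return next(x for x in range(1 << n) if f(x, n) == y)

    def success_rate(adv, n, trials=200):
        # Estimates Pr[f(A(f(x))) = f(x)] over uniform n-bit x,
        # the quantity the definition bounds by r(n).
        wins = 0
        for _ in range(trials):
            y = f(random.getrandbits(n), n)
            wins += f(adv(y, n), n) == y
        return wins / trials

    print(success_rate(brute_force, n=12))   # 1.0 for this toy example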
Last week I posed the following question:
(1) Exhibit an NP-complete language L, such that for all lengths n≥1, L contains exactly half (2^(n-1)) of the strings of length n.
This question was posed by Ryan O'Donnell and solved by Boaz Barak. Here is a proof sketch.
By using standard padding and encoding tools, (1) is equivalent to
(2) There is an NP-complete language L and a polynomial-time computable function f such that for every n, there are exactly f(n) strings in L of length n.
First we show how to achieve (2) if we replace "strings" with "total witnesses." Consider a pair of formulas φ and ¬φ. The satisfying assignments between them total 2^n if they have n variables. We just create an encoding that puts φ and ¬φ at the same length. The total number of witnesses at that length is equal to 2^n times the number of formula pairs encoded at that length.
We now prove (2) by creating a language L that encodes the following at the same length for each φ
φ, where φ is satisfiable.
(φ,w) where w is a satisfying assignment for φ and there is another satisfying assignment u<w for φ.
You can check that the language is NP-complete and the total number of strings in L for each φ is just the number of satisfying assignments of φ.
A colleague of mine, who shall remain nameless, likes to schedule time for research, a set block of time during the day where he puts off all his todo's and concentrates on science. Sounds good, but often his chair will stop by for some discussion or an impromptu meeting. The colleague would say, "Sorry, but I reserved this time for research," but that argument didn't fly; the chair said he could do research anytime. One day he said instead, "Sorry, I have a squash game" and the chair replied that they would talk at a future time. Welcome to the academic world, where research gets trumped by a meeting that itself can be trumped by a squash game.
Is scheduling time for research a good idea? It depends on your personality and your research style. If you find yourself with no time to think about an interesting problem because too much else is happening then yes, best to schedule a few hours where you promise yourself you will do nothing else but research during those times. This means more than not preparing for class but also ignoring your computer. Checking email and surfing the web are themselves great time sinks.
In my case, I find it difficult to just start thinking about research at a given time. So I use the rule that research trumps all and when inspiration hits me, or someone comes to my office with a research question, I drop everything I can to work on the problem. Okay, I can't skip a class for research but email, weblog posts, referee reports, etc., should never stand in the way of science.
The call for papers for the 2004 ACM Conference on Electronic Commerce is now available. I'm posting this note as my duty as a program committee member to spread the word of the conference.
Why would an electronic commerce conference want me, a complexity theorist, as a PC member? Electronic commerce has many surprising connections to computational complexity. Consider complex auction situations where different buyers wish to purchase varying combinations of different items. One needs to design such auctions so that the decisions made by the buyers, as well as determining the winners, are computationally efficient. This is in addition to the usual needs of auctions to be revenue generating, robust against players trying to cheat the system, and satisfying other notions of "fairness."
In a more philosophical sense, what is a large financial market but some sort of massive parallel computation device that takes pieces of information and produces prices for securities. How can we model and analyze this process? Computational complexity should play a major role in understanding this model of computing and allow us to develop more efficient financial markets.
I have a gap in my knowledge of work in theory done between 1979 (the publication of Hopcroft and Ullman) and 1985 (when I started graduate school). So every now and then I see a new result from this time that I should have known years ago. Here is an example from the Winter 1982 SIGACT News, a variation of the regular language pumping lemma due to Donald Stanat and Stephen Weiss.
Theorem: If L is regular then there is a positive integer n such that for every string x of length at least n, there are strings u, v and w with v nonempty such that x=uvw and for all strings r and t and integers k≥0, rut is in L if and only if ruv^k t is in L.
What surprises me about this result is that w does not appear in the conclusion and that the initial r could put the finite automaton in any state before it gets to u. Here is a sketch of the proof.
Let s be the number of states of a finite automaton accepting L. Let y_i be the first i bits of x. For any initial state a, y_i will map it to some state b. So one can consider y_i as a function mapping states to states. There are at most s^s such functions so if |x|≥s^s there is an i and a j, i<j, such that y_i and y_j represent the same function. We let u=x_1...x_{i-1} and v=x_i...x_{j-1}. The rest follows like the usual pumping lemma.
Using a result of Jaffe, Stanat and Weiss show that this condition is not only necessary but also sufficient to characterize the regular languages.
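The proof is easy to animate. This sketch views each prefix of x as a map on states and returns the first repeated pair, yielding u and v (the tiny DFA is my own example):

    def find_uv(delta, states, x):
        def run(s, w):
            for ch in w:
                s = delta[(s, ch)]
            return s
        seen = {}
        for i in range(len(x) + 1):
            key = tuple(run(s, x[:i]) for s in states)   # prefix as state map
            if key in seen:
                j = seen[key]
                return x[:j], x[j:i]                     # u and nonempty v
            seen[key] = i       # pigeonhole: at most s^s distinct keys

    # 2-state DFA: '0' sends both states to 0, '1' flips the state.
    delta = {(0, '0'): 0, (1, '0'): 0, (0, '1'): 1, (1, '1'): 0}
    print(find_uv(delta, [0, 1], "1010"))                # ('10', '10')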
Cleaning out my office I came across some old SIGACT News that Bill Gear had given me when he cleaned out his office after his retirement. The Winter 1982 edition is quite interesting. I was a freshman in college that year, well before I was part of the theory community.
There are some interesting technical articles that I will get to in future posts. But the first two pages were letters to the editor that are chilling reminders of the Cold War during that time.
On page two was the following short note from Witold Lipski, Jr. and Antoni Mazurkiewicz from the Polish Academy of Sciences.
We are very sorry to inform you that due to the situation in Poland we do not see any chance to organize our 1982 Conference on Mathematical Foundations of Computer Science.
MFCS started in 1972 as an annual conference rotating between Poland and Czechoslovakia, and now between Poland, Slovakia and the Czech Republic. There were no conferences in 1982 or 1983 and the conference did not return to Poland until 1989.
Speaking of the Czechs, there was a much longer letter on page one from James Thatcher of IBM. Here are some excerpts.
On a recent trip to Europe, I visited Prague and had the pleasure of talking with Dr. Ivan M. Havel who is a friend and colleague of many years. Ivan Havel received his Ph.D. in CS from Berkeley in 1971. He joined the Institute for Applied Computer Technology in Prague in 1972 and then in 1974 became a member of the Czechoslovakian Academy of Sciences, in the Institute of Information Theory and Automation.
Ivan's brother, Vaclav Havel, an internationally known playwright, was imprisoned in 1979 for four and a half years for his activities in connection with the Charter 1977 movement.
In 1980, possibly related to his refusal to denounce his brother, Ivan Havel was removed from his position in the Academy of Sciences and was unemployed for several months. Last May, he and Vaclav's wife were arrested and charged with "subversion" for allegedly "collecting and distributing written material oriented against the socialist state and social establishment, with hostile intentions." After four days detention, they were released.
He is employed as a programmer-analyst by META, a home-worker program for the handicapped.
Ivan Havel remained a programmer until after the Velvet Revolution in 1989. After some political work in 1990, he became a docent (associate professor) at Charles University and director of the Center for Theoretical Study where he remains today.
His brother Vaclav went on to become president of the Czech Republic.
When I move back to Chicago, I will go back to my old email address . I got to thinking about how my career can be described by my email addresses.
As an undergrad at Cornell, I spent several years working for computer services writing an email system in assembly language for the IBM 370. The system was scrapped shortly after I left for grad school at Berkeley. After a year at Berkeley, I followed my advisor, Michael Sipser, to MIT.
I had email addresses at Cornell and Berkeley but I have long since forgotten them. At MIT I wanted the userid "lance", but the name was taken by Lance Glasser, then an MIT professor. So my email became .
When I graduated and went to Chicago, I decided to stick with the userid "fortnow" for an email of . This bucked the trend at the time of having first names for email at Chicago so I had to have aliased to . When the university started system wide email I got though also works.
When I did a sabbatical in Amsterdam my email became or simply . When I moved to the NEC Research Institute my email because aliased to and when the NEC Research Institute became NEC Laboratories America I got my current email .
In addition to this, the ACM has created permanent email addresses, permanent as long as you are an ACM member and I did create an address though I never did give it out (until now). My brother and I now own the domain fortnow.com and I have what I do call my permanent address, . I also am the default receiver for fortnow.com mail, which means that addresses like , or even will all go to me.
All of the email addresses in this post still work and forward to me. But I will stick to using two main email addresses, for work related email and for non-work emails.
I used javascript to generate the emails in this post to avoid adding even more to my heavy spam load. We'll see if it works or whether I start getting spam sent to .
A few months ago I had a post describing information markets, a system of buying and selling securities that pay off if a given future event happens. Based on the price of a security, one can get an estimate of the probability that that event will occur. Studies have shown that information markets are better predictors than polls or experts.
Information markets have taken a blow in the past few days. The US Department of Defense has cancelled a program that would have set up limited futures markets on securities based on terrorist activities. They bowed to pressure from senators who consider it morally wrong to bet on future terrorist attacks. I understand their concerns but computer scientists and economists have produced what could have been a powerful tool in controlling terrorism and it is quite a shame to see it discarded so easily.
David Pennock sent me some links on a more positive point of view from CNN, Fortune and Wired and a fun CNN piece on the Tradesports Poindexter future.
Update (8/1): A well-written New York Times column A Good Idea with Bad Press and a nicely argued opinion piece by David Pennock.
Way back when I was a graduate student, I moved from Berkeley to MIT. I put what few belongings I had into boxes and shipped them via UPS. My brother flew out and we drove across the country together. Those were the days.
Now making the move back to Chicago is not nearly so simple. We have houses to sell and buy. Getting our kids ready for a new school. Real estate agents, lawyers, mortgage and insurance people to deal with. Meanwhile there is academic work that needs to get done before the real move. Conference and grant deadlines don't move to accommodate my move.
So this weblog might get a little spotty until I get settled into Chicago, sometime in mid-September. I'll try to find some time for some posts during that time but don't expect too much. If you are having complexity weblog withdrawal check out the archives. Nice thing about complexity--old stuff doesn't (usually) get stale.
Another rump session talk by Scott Aaronson showed that BQP/qpoly is contained in EXP/poly. In other words, everything efficiently quantumly computable with a polynomial amount of arbitrarily entangled quantum advice can be simulated in exponential time with a polynomial amount of classical advice.
Let me try to put this in context while avoiding quantum mechanics. Advice is a method for encoding a different program for each input length. We define the class P/poly as those languages computable in polynomial time with access to a polynomially-long advice string a_n where the string a_n depends only on the length n of the input. P/poly is equivalent to those problems having nonuniform polynomial-size circuits.
Quantum advice is a bit more tricky, since it can be in a superposition of regular advice strings. Formally, quantum advice is an exponentially long vector of numbers β_a where β_a is the amplitude of advice string a. For simplicity let us assume those numbers are real and we'll also have the restriction that the sum of the squares of the amplitudes is one.
You can see there are far more ways to give quantum advice than classical advice. But the quantum machines are limited in how they can use the advice. Harry Buhrman asked whether one can give any limit at all to what one can do with quantum advice. Scott Aaronson gives an answer: No better than classical advice as long as you are allowed (classical) exponential time.
Ideally one would like that efficient quantum algorithms with quantum advice can be simulated with efficient quantum algorithms with classical advice. Still Aaronson's result shows that even with fully entangled advice one cannot get all the information out of it.
During the rump session of complexity, Nikolai Vereshchagin presented a combinatorial theorem that he proved using Kolmogorov complexity. Let A be a finite subset of N×N where N is the set of natural numbers. Let m be the size of A, r be the number of nonempty rows of A and c the number of nonempty columns.
We say A is good if every nonempty row has m/r elements and every nonempty column has m/c elements of A. A rectangle has this property, as does a diagonal. We say A is k-good if every nonempty row has at most km/r elements and every nonempty column has at most km/c elements. Equivalently, A is good if it is 1-good.
Vereshchagin's Theorem: There is a constant c such that for all finite subsets B of N×N with n = log |B| there is a partition of B into at most n^c sets each of which is n^c-good.
Vereshchagin asks whether there is a purely combinatorial proof of this theorem. If you know of one let me know.
For those who know some Kolmogorov complexity, let me sketch the proof: We label each point (x,y) of B with the following five values: K_B(x,y), K_B(x), K_B(y), K_B(x|y) and K_B(y|x). We partition the points into sets with the same labels. Standard counting arguments from Kolmogorov complexity show that each set of the partition is n^c-good for some c.
A doctor, lawyer and mathematician were discussing whether it was better to have a wife or a girlfriend. The doctor said it was better to have a wife because it is medically safer to have a single partner. The lawyer said it was better to have a girlfriend to avoid the legal hassles of marriage. The mathematician said it was better to have both.
"Both?" said the doctor and the lawyer. "Yes," said the mathematician, "That way the wife thinks I'm with the girlfriend, the girlfriend thinks I'm with the wife and I can do some math."
I was reminded of that joke by the recent New York Times article Pure Math, Pure Joy and the accompanying slideshow. Those pictures look all too familiar.
The greatest lovers of math though are not the famous mathematicians at places like Berkeley and Harvard. Rather, they are the mathematicians who take low-paying jobs with high teaching loads at less-strong colleges or move from visiting position to visiting position just to have some occasional time to do math. They have a dedication (or perhaps an addiction) I can never fully appreciate.
Expander graphs, informally, are graphs in which for any subset S of the vertices that is not too large, the set of vertices connected to S contains a large number of vertices outside of S. There are many constructions of and applications for expander graphs, leading to entire courses on the subject.
The adjacency matrix A of a graph G on n vertices is the n×n matrix such that a_{i,j} is 1 if there is an edge between vertices i and j and 0 otherwise. Noga Alon noticed that a graph with a large gap between the first and second eigenvalues of its adjacency matrix will be a good expander.
We can use ε-biased sets to get expanders. Let S be an ε-biased set for F^m, with F the field of 2 elements. Consider the graph G consisting of 2^m vertices labelled with the elements of F^m, with an edge between x and y if y=x+s or x=y+s for some s in S. This kind of graph is known as a Cayley graph.
By looking at the eigenvalues of the adjacency matrix A of G we can show G is an expander. The eigenvectors are just the vectors corresponding to the functions g in L described earlier: for any vertex a we have
(Ag)(a) = Σ_{s in S} g(a+s) = g(a) Σ_{s in S} g(s), since g(a+s) = g(a)g(s). Let g(S) = Σ_{s in S} g(s). We now have Ag = g(S)·g, so each g is an eigenvector with eigenvalue g(S). If g is the constant one function then g(S) = |S|. Since S is an ε-biased set, |g(S)| ≤ ε|S| for every other g, so the second-largest eigenvalue (in absolute value) is much smaller than the largest one and G must be an expander.
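For the curious, here is a small numerical sanity check in Python (a sketch of my own, assuming numpy; the set S below is an ad-hoc example, not a proper small-bias construction):

    import itertools
    import numpy as np

    m = 4
    vertices = list(itertools.product((0, 1), repeat=m))   # elements of F^m
    index = {v: i for i, v in enumerate(vertices)}

    # An ad-hoc generating set; a real construction would use Naor-Naor.
    S = [(1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1), (1,1,1,1)]

    def add(x, s):
        # coordinate-wise addition modulo 2
        return tuple((a + b) % 2 for a, b in zip(x, s))

    # Adjacency matrix of the Cayley graph: an edge between x and x+s.
    n = len(vertices)
    A = np.zeros((n, n))
    for x in vertices:
        for s in S:
            A[index[x], index[add(x, s)]] = 1

    # The largest eigenvalue is |S| (from the constant function g);
    # every other eigenvalue is g(S) for some nontrivial g in L.
    eigs = sorted(abs(np.linalg.eigvalsh(A)), reverse=True)
    print("largest:", eigs[0], "second largest:", eigs[1])

On this toy example the largest eigenvalue is 5 and the second largest in absolute value is 3; the smaller that second number relative to |S|, the better the expansion.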
The June 2003 SIGACT News is out. Aduri Pavan wrote this month's Complexity Theory Column on "Comparison of Reductions and Completeness Notions".
As I have mentioned before in this weblog, I heartily encourage joining SIGACT, the ACM Special Interest Group on Algorithms and Computation Theory. You get the SIGACT News, discounts on conferences and as I discovered last night from home, you apparently get online access to the STOC proceedings. Not to mention supporting the theory community. All this for the low price of $18 ($9 for students).
What about the ACM itself? I have been an ACM member since graduate school because I feel it is important to support the main computer science organization. But for the additional $96 ($42 for students) there are no real significant benefits over joining SIGACT alone.
ε-biased sets are an interesting concept that I have seen recently in a few papers but for which I never found a clear description. At FCRC Eli Ben-Sasson gave me a good explanation and I will try to recreate it here.
Let F be the field of 2 elements, 0 and 1, with addition and multiplication done modulo 2. Fix a dimension m. Let L be the set of functions g mapping elements of F^m to {-1,1} with the property that g(x+y)=g(x)g(y), where x+y represents coordinate-wise addition modulo 2. One example of a g in L is g(x_1,x_2,x_3) = (-1)^{x_1}(-1)^{x_3}.
There is the trivial function g in L that always maps to 1. For every nontrivial g in L exactly half of the elements of F^m map to 1 and the other half to -1. If one picks a reasonably large subset S of F^m at random then with high probability g will map about half the elements of S to 1 and the rest to -1. In other words, the expected value of g(x) for x uniformly chosen in S is, in absolute value, smaller than some small ε. If this is true we say S is ε-biased for g.
An ε-biased set is a set S such that for all nontrivial g in L, S is ε-biased for g. Formally this means that
|Σ_{x in S} g(x)| ≤ ε|S|. Not only do reasonable size ε-biased sets exist but they can be found efficiently. Naor and Naor found the first efficiently constructible ε-biased sets, of size polynomial in m and 1/ε.
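As a toy illustration (my own sketch, not the Naor-Naor construction), here is how one can compute the bias of a given set S by brute force, using the fact that every g in L has the form g(x) = (-1)^{a·x} for some a in F^m:

    import itertools
    import random

    def bias(S, m):
        # The bias of S: the largest |Σ_{x in S} g(x)| / |S|
        # over all nontrivial g in L, by brute force over a.
        worst = 0
        for a in itertools.product((0, 1), repeat=m):
            if not any(a):
                continue   # skip the trivial g
            total = sum((-1) ** sum(ai * xi for ai, xi in zip(a, x))
                        for x in S)
            worst = max(worst, abs(total))
        return worst / len(S)

    # A random S of modest size has small bias with high probability.
    m = 8
    S = [tuple(random.randint(0, 1) for _ in range(m)) for _ in range(200)]
    print(bias(S, m))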
One can extend the notion of ε-biased sets to fields F of p elements for an arbitrary prime p. L would now be the set of functions g mapping elements of F^m to the complex pth roots of unity, e^{2πij/p} for 0 ≤ j ≤ p-1, again with the property that g(x+y)=g(x)g(y). Various constructions have created generalized ε-biased sets of size polynomial in m, 1/ε and log p.
For applications let me quote from the recent STOC paper by Ben-Sasson, Sudan, Vadhan and Wigderson that used ε-biased sets to get efficient low-degree tests and smaller probabilistically checkable proofs. You can get more information and references from that paper.
Since the introduction of explicit ε-biased sets, the set and diversity of applications of these objects grew quickly, establishing their fundamental role in theoretical computer science. The settings where ε-biased sets are used include: the direct derandomization of algorithms such as fast verification of matrix multiplication and communication protocols for equality; the construction of almost k-wise independent random variables, which in turn have many applications; inapproximability results for quadratic equation over GF(2); learning theory; explicit constructions of Ramsey graphs; and elementary constructions of Cayley expanders.
After the FCRC meetings I attended were concluded, I headed up to UCSD for the celebration of Walter Savitch for his sixtieth birthday and upcoming retirement. He gained his fame in complexity for Savitch's Theorem, which shows "P=NP" for space: nondeterministic space s(n) can be simulated in deterministic space s(n)^2.
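The idea behind the theorem fits in a few lines: to tell whether u reaches v in at most k steps, guess a midpoint and recurse on both halves. Here is a sketch in Python (an illustration of the recursion only; the actual space bound of course refers to Turing machines, not Python's call stack):

    def reach(G, u, v, k):
        # Is there a path from u to v of length at most k?
        # Recursion depth is O(log k) and each level remembers only
        # (u, v, k), which is where the O(log^2 n) space bound comes from.
        if k == 0:
            return u == v
        if k == 1:
            return u == v or v in G[u]
        return any(reach(G, u, w, k // 2) and reach(G, w, v, k - k // 2)
                   for w in G)

    G = {0: {1}, 1: {2}, 2: set()}   # a toy directed graph
    print(reach(G, 0, 2, len(G)))    # True: 0 -> 1 -> 2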
I learned quite a bit at the meeting. Walt Savitch was Steve Cook's first student, his only student while Cook was at Berkeley in his pre-Toronto, pre-"SAT is NP-complete" days. Also, as Cook said, Savitch is the only student he has had with a theorem named after him. That theorem made up a good part of Savitch's Ph.D. thesis. At the celebration Cook gave an overview of propositional proof systems.
After coming to UCSD, Savitch did some work on computational linguistics and one of the leaders of the field, Aravind Joshi, gave a talk on combining trees to keep the structure when parsing sentences.
Savitch is probably best known now in computer science for his textbooks on introductory programming, which likely many of you have used.
Congrats Walt on a fine career and here's hoping retirement doesn't slow you down.
As promised I added links to the papers in the post on the STOC business meeting. Let me say some more words about the winner of the Gödel prize.
Valiant developed the concept of PAC (Probably Approximately Correct) learning, where, roughly, a learner sees a small number of labelled examples drawn from a distribution and with high confidence generates a hypothesis that with high probability will correctly label instances drawn from the same distribution.
A strong learner produces a hypothesis whose accuracy is close to 100%; a weak learner produces one only slightly better than a 50% coin flip. Schapire, using a technique called boosting, showed how to convert a weak learner into a strong learner. This is a wonderful theoretical result but the algorithm had problems that made it difficult to implement.
In their Gödel prize winning paper, "A decision-theoretic generalization of on-line learning and an application to boosting", Freund and Schapire develop the AdaBoost algorithm, which solves many of these issues and has become a staple of the theoretical and practical machine learning communities.
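For a flavor of the algorithm, here is a minimal AdaBoost sketch in Python (my own simplification, assuming numpy, labels in {-1,+1}, and weak hypotheses with weighted error strictly between 0 and 1/2; the names adaboost and stumps are mine):

    import numpy as np

    def adaboost(X, y, weak_learn, rounds):
        # X: n x d array of examples, y: labels in {-1, +1}.
        n = len(y)
        w = np.ones(n) / n                 # start with uniform weights
        ensemble = []
        for _ in range(rounds):
            h = weak_learn(X, y, w)        # weak hypothesis for these weights
            pred = np.array([h(x) for x in X])
            err = w[pred != y].sum()       # weighted error, assumed in (0, 1/2)
            alpha = 0.5 * np.log((1 - err) / err)
            w = w * np.exp(-alpha * y * pred)   # reweight: boost the mistakes
            w = w / w.sum()
            ensemble.append((alpha, h))
        return lambda x: np.sign(sum(a * h(x) for a, h in ensemble))

    def stumps(X, y, w):
        # The simplest weak learner: the best one-coordinate threshold.
        best, best_err = None, 2.0
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = s * np.where(X[:, j] <= t, 1, -1)
                    err = w[pred != y].sum()
                    if err < best_err:
                        best, best_err = (j, t, s), err
        j, t, s = best
        return lambda x: s * (1 if x[j] <= t else -1)

The reweighting step is the heart of it: examples the current hypothesis gets wrong have their weights multiplied up, forcing the next weak hypothesis to pay attention to them.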
Boosting has its own web site where you can find much more information about the algorithms and applications.
Alonzo Church was born a hundred years ago today in Washington, DC. Church is best known for the λ-calculus, a simple method for expressing and applying functions that has the same computational power as Turing machines.
With Rosser in 1936, he showed that λ-expressions that reduce to an irreducible normal form have a unique normal form. In that same year he showed the impossibility of deciding whether such a normal form exists.
He also gave us Church's thesis, which he stated as a definition: "An effectively calculable function of the positive integers is a λ-definable function of the positive integers."
Again in 1936, Kleene and Church showed that computing normal forms has the same power as the recursive functions and as Turing machines. And thus the Church-Turing thesis was born: everything computable is computable by a Turing machine.
The λ-calculus also set the stage for many of the functional programming languages like Lisp and Scheme.
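You can even play with the idea directly in a modern language. Here is a small Python rendering of Church numerals (illustrative only):

    # Church numerals: the number n is the function applying f n times.
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    plus = lambda a: lambda b: lambda f: lambda x: a(f)(b(f)(x))

    def to_int(n):
        # Decode a Church numeral by applying "add one" to 0.
        return n(lambda k: k + 1)(0)

    two = succ(succ(zero))
    three = succ(two)
    print(to_int(plus(two)(three)))   # 5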
Alonzo Church passed away on August 11, 1995 in Ohio.
I have mixed feelings about the Federated Computing Research Conference. It is a good idea to get many different areas of computer science together. I do get to see many people I haven't seen in years who went into non-theoretical areas of CS.
On the other hand, 2200 participants made the place quite crowded and it seemed to take away from the informal atmosphere of most theory conferences. Since STOC and Electronic Commerce had nearly a complete overlap, I jumped back and forth between talks, never really feeling fully part of either conference.
For the first time the Complexity conference was not part of FCRC, because 2003 is a Europe year for Complexity. In an informal poll I took of STOC people interested in complexity, most liked having both conferences at the same place but would prefer that to happen in isolation, like last year in Montreal, rather than as part of the much larger FCRC meeting.
In what seems to be a trend in CS conferences, wireless internet was made available at the conference site. As you walked around you would pass many people sitting on chairs and on the ground, hunched over their laptops, disconnected from the conference and connected to another world. It seemed a bit depressing, but I too found the net hard to resist--it is always tempting to simply open my laptop and connect, checking email and posting to this weblog.