Part Two: Selected Perspectives on Computer Science
Chapter 2: Exponential Growth, Computability, and Complexity
Pages 25-56

The Chapter Skim interface presents what we've algorithmically identified as the most significant single chunk of text within every page in the chapter.


From page 25...
... The diversity of the topics addressed in these essays reflects the diversity of the field itself. The essays in Part Two are organized by chapter into several clusters: · Exponential growth, computability, and complexity.
From page 26...
... How computer scientists' efforts in the service of human effectiveness have led to such advances as spreadsheets, text-formatting programs, and information retrieval from the Internet, and how these and other innovations have had a very broad impact (Chapter 8)
From page 27...
... They provide glimpses not only into the foundational models that define computation but also into theoreticians' thinking about these models -- what's convincing, what's surprising, what's not. Finally, Bennett describes how quantum computing research seeks to dodge a fundamental physical limit of current computing technology and stretch our conception of computation.
From page 28...
... Today, it is most common to implement a processor on a single silicon chip called a microprocessor. Most computers today employ a single microprocessor, but larger computers employ multiple microprocessors.
From page 29...
... Moore's Law and Exponential Growth
The key enabler of "better and cheaper" computers has been rapid technological progress. Arguably, the most important enabler has been the progress in the number of transistors (switches)
From page 30...
... How We Have Harnessed Exponential Growth
While it is obvious that rapid technological improvements provide great opportunities for implementing computers, it is also the case that rapid and differential rates pose great challenges. Rapid change means that ideas that were too wasteful (e.g., in number of transistors)
From page 31...
... To exploit instruction-level parallelism past a few instructions, however, it is necessary to predict the outcomes of program decisions (often embodied in branch instructions) and speculatively execute subsequent instructions.
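
The essay does not spell out a particular prediction scheme; as one hedged illustration, the Python sketch below simulates a 2-bit saturating-counter branch predictor, a common textbook design. The counter states and the example branch history are assumptions made for this sketch, not details from the text.

# Illustrative sketch (not from the essay): a 2-bit saturating-counter
# branch predictor. Counter values 0-1 predict "not taken", 2-3 predict "taken".
def simulate_predictor(outcomes):
    counter = 2            # start weakly "taken" (an arbitrary assumption)
    correct = 0
    for taken in outcomes:
        prediction = counter >= 2
        if prediction == taken:
            correct += 1
        # Move the counter toward the actual outcome, saturating at 0 and 3.
        counter = min(counter + 1, 3) if taken else max(counter - 1, 0)
    return correct / len(outcomes)

# A loop branch that is taken nine times and then falls through:
print(simulate_predictor([True] * 9 + [False]))   # 9 of 10 predictions right (0.9)
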
From page 32...
... This popular Moore's law has provided incredible opportunities for the rest of computer science. Everyone from researchers to product designers can dream up many ideas and ask not whether something will be practical, but when (assuming, of course, one is not trying to solve problems that cannot be solved or require execution times that defeat exponential growth with exponential complexity -- see Kleinberg and Papadimitriou)
From page 33...
... There is already evidence that the doubling time for memory chips is now closer to every 2 years instead of every 18 months. A contributing factor to this slowdown is the exponentially increasing cost of building factories for fabricating chips.
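
To make the arithmetic of the slowdown concrete, here is a small Python sketch comparing cumulative growth under an 18-month doubling time and a 2-year doubling time; the 10-year horizon is an illustrative choice, not a figure from the text.

# Illustrative arithmetic: capacity growth factor after `years`
# if capacity doubles every `doubling_time` years.
def growth_factor(years, doubling_time):
    return 2 ** (years / doubling_time)

for doubling_time in (1.5, 2.0):
    print(f"doubling every {doubling_time} yrs: "
          f"{growth_factor(10, doubling_time):.0f}x after 10 years")
# Doubling every 1.5 years compounds to roughly 100x in a decade;
# doubling every 2 years compounds to only 32x.
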
From page 34...
... This vision is now known as grid computing, since it strives to deliver customers access to computing resources much as the power grid delivers customers power. We encourage future efforts to exploit parallel threads in all their forms, because doing so represents
From page 35...
... Nevertheless, the history of computing has shown that new smaller systems are great catalysts for change: mainframes to minicomputers to personal computers to personal digital assistants to cellular phones.
From page 36...
... The last half-century has seen substantial computing advances and impacts on society. We expect the synergies just discussed to provide plenty of non-technical and technical challenges and opportunities.
From page 37...
... By a "formula" in this story, we meant a particular thing: a finite sequence of steps that begins with the given values of the coefficients and ends with a root x; each step consists of one of the basic arithmetic operations applied to certain of the quantities already computed, or else it consists of the extraction of a root of one of the quantities already computed. Now we can assert more precisely, thanks to the work of the 19th-century mathematicians Abel and Galois: there is no quintic formula.
From page 38...
... Computation as a Universal Technology
So, guided by this intuition, let us move beyond stylized forms of computation and seek to understand general-purpose computation in all its richness -- for if we could do this, then we might find similarly surprising consequences that apply much more broadly. Such was the goal of Alan Turing in the 1930s, and such was also the goal of a host of other mathematical logicians at that time.
From page 39...
... Today, we would think of U simply as an interpreter -- it executes a step-by-step simulation of any Turing machine M presented to it. Indeed, our style of writing programs and then executing them is so ingrained in our view of computation that it takes a moment to appreciate the consequences that flow from a universal machine.
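
As a hedged illustration of the interpreter view (a Python sketch, not the construction used in the essay): a machine M can be handed to a simulator as ordinary data, here a transition table mapping (state, symbol) to (new state, symbol to write, head move). The toy machine at the end, which appends a 1 to a block of 1s, is invented for this example.

# Illustrative sketch: a step-by-step simulator ("interpreter") for a Turing
# machine described as data. delta maps (state, symbol) -> (state, write, move).
def simulate(delta, tape, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(cells[i] for i in sorted(cells)).strip(blank)
        symbol = cells.get(head, blank)
        state, write, move = delta[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    raise RuntimeError("no halt within step budget")

# A toy machine (invented for this example): scan right past the 1s,
# then write one more 1 and halt.
delta = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt", "1", "R"),
}
print(simulate(delta, "111"))   # prints 1111
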
From page 40...
... Do you stop it and see what's wrong, or wait to see if it's just taking longer than expected to come up with an answer? We might well want something stronger than the blind simulation that U provides; we might want a "Universal Termination Detector" -- a Turing machine D that behaves as
From page 41...
... In the case at hand, we can exploit the self-reference inherent in universality to prove that there is no Universal Termination Detector by showing that there is no Turing machine that correctly implements a Universal Termination Detector (Box 2.1)
From page 42...
... This is a contradiction, so there cannot be such a machine X, and hence there cannot be a Universal Termination Detector D. This style of proof is often referred to as diagonalization, and it was introduced by Cantor to show that one cannot put the real numbers in one-to-one correspondence with the integers.
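
The self-referential construction can be paraphrased in modern programming terms; the sketch below follows the standard form of the argument and is not claimed to reproduce Box 2.1 verbatim. The function halts stands for the hypothetical Universal Termination Detector.

# Paraphrase of the diagonal argument in ordinary code (illustrative).
# Suppose `halts(p, x)` were a correct Universal Termination Detector.
def halts(program, data):
    """Hypothetical detector: True iff `program` halts when run on `data`."""
    raise NotImplementedError("the argument shows no such detector exists")

def contrary(p):
    """Do the opposite of whatever the detector predicts about p run on p."""
    if halts(p, p):
        while True:          # detector says "halts", so loop forever instead
            pass
    return "halted"          # detector says "runs forever", so halt at once

# Asking what contrary does on its own text forces the contradiction: if the
# detector answers True, contrary diverges; if it answers False, contrary
# halts. Either way the detector is wrong about (contrary, contrary).
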
From page 43...
... . Such techniques, however, are developed in a constant struggle against the negative results discussed above: over the years researchers have carved out broader and broader tractable special cases of the problem, but to solve these verification and transformation tasks in their full generality -- to perform them correctly on all possible programs -- is provably impossible.
From page 44...
... And so, following this line of attack, we seek algorithms that require only "polynomial time" -- on an input of size N, a "good" algorithm should use a number of steps that is bounded by a polynomial function of N such as N^2 or N^5. Clearly polynomial functions grow much more slowly than exponential ones, and they have a very desirable scaling property -- if you increase your input size by a constant multiplicative factor, the running time goes up by a predictable constant factor as well.
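
To make the scaling property concrete (a standard calculation, not quoted from the text): if an algorithm takes about c·N^2 steps, then doubling the input to 2N costs c·(2N)^2 = 4c·N^2 steps, a fixed factor of 4 no matter how large N already is; for c·N^5 the factor is 2^5 = 32, still a constant. An algorithm taking 2^N steps, by contrast, slows down by a factor of 2^N itself when N doubles, a penalty that grows without bound.
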
From page 45...
... The full search space for a 17-city instance of the Traveling Salesman Problem is 16 times larger than the search space for a 16-city instance. So if exhaustively searching the space of solutions for a 16-city problem is at the limit of your current computer's abilities, and if computing power doubles every year and a half, then you'll need to wait 6 years before your new computer can tackle a 17-city problem by brute force -- 6 years to be able to solve a problem that is only
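
The arithmetic behind those 6 years: a search space 16 times larger calls for log2(16) = 4 doublings of computing power, and at one doubling every 18 months that is 4 x 1.5 = 6 years. The short Python sketch below (which assumes tours are counted as (n-1)!, fixing a starting city, consistent with the factor of 16 in the text) repeats the calculation for a few sizes.

import math

# Illustrative arithmetic behind the "6 years": with tours counted as (n-1)!,
# going from n-1 to n cities multiplies the search space by n-1, and catching
# up by brute force takes log2(n-1) doublings of computing power.
def years_to_catch_up(extra_factor, doubling_years=1.5):
    return math.log2(extra_factor) * doubling_years

for n in (17, 18, 20):
    factor = n - 1                      # (n-1)! / (n-2)!
    print(f"{n} cities: search space grows x{factor}, "
          f"another {years_to_catch_up(factor):.1f} years of doubling needed")
# 17 cities: x16, 6.0 years -- matching the figure in the text.
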
From page 46...
... So the Traveling Salesman Problem may not be efficiently solvable, but it is at least efficiently verifiable: if there is a short tour among the cities we are given, then there exists a "certificate" -- the tour itself -- that enables us to verify this fact in polynomial time. This is the crux of the Traveling Salesman Problem's complexity, coiled like the "trick" that helps you solve a difficult puzzle: it's hard to find a short tour on your own, but it's easy to be convinced when the short tour is actually revealed to you.
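
A hedged sketch of what "efficiently verifiable" means in code: given a table of pairwise distances, a claimed tour, and a length bound, the Python check below runs in time polynomial in the number of cities. The data layout and the 4-city example are assumptions made for this sketch, not details from the essay.

# Illustrative polynomial-time verifier for a Traveling Salesman certificate:
# accept iff `tour` visits every city exactly once and the closed tour's
# total length is at most `bound`.
def verify_tour(distances, cities, tour, bound):
    if sorted(tour) != sorted(cities):                 # each city exactly once
        return False
    total = sum(distances[(tour[i], tour[(i + 1) % len(tour)])]
                for i in range(len(tour)))
    return total <= bound

cities = ["A", "B", "C", "D"]
ring = {frozenset(p) for p in [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]}
distances = {(a, b): (1 if frozenset((a, b)) in ring else 2)
             for a in cities for b in cities if a != b}
print(verify_tour(distances, cities, ["A", "B", "C", "D"], 4))   # True
print(verify_tour(distances, cities, ["A", "C", "B", "D"], 4))   # False (length 6 > 4)
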
From page 47...
... Finding the folding of minimum energy for discrete models of proteins is NP-complete. The Traveling Salesman Problem -- the tough nut that started us on this road -- is NP-complete.
From page 48...
... One boundary divides the computable from the uncomputable -- it is feasible to build a step-by-step interpreter for computer programs, but one cannot design an algorithm that decides whether arbitrary programs will terminate. Another boundary divides polynomial-time solvability from the exponential growth of brute-force search.
From page 49...
... What are the ultimate limits of quantum computers? And are there theoretical obstacles to building them (in addition to the practical ones currently braved in labs all over the world)
From page 50...
... Computation as a technology that follows its own laws; computation as the quintessence of universality; computation as a powerful perspective on the world and on science -- these are issues that still drive our study of the phenomenon today. And the more we grapple with the underlying principles of computation, the more we see their reflections and imprints on all disciplines -- in the way structured tasks can be cast as stylized computational activities; in the surprising complexity of simple systems; and in the rich and organic interplay between information and code.
From page 51...
... Already they have been used to create unconditionally secure cryptographic key agreement protocols, and in the future, if a quantum computer can be built, it could easily perform some computations (most notably the factorization of large numbers) that, by the best known algorithms, would take longer than the age of the universe not only on today's supercomputers but also on the supercomputers of 2050 (by which time we predict Moore's law will have ended)
From page 52...
... This theory, which is analogous to the classical theory of fault tolerance developed by von Neumann and others in the days when computers were made of vacuum tubes and relays, allows arbitrarily large reliable quantum computations to be done on imperfect hardware, provided the hardware exceeds some threshold of reliability. The technical problem is that the quantum threshold is higher than the classical threshold discovered by von Neumann, while today's quantum computing hardware processes quantum information far less reliably than vacuum tubes process classical information, so there remains a gap of several orders of magnitude that
From page 53...
... Just how do quantum information and the hardware used to process it differ from the ordinary classical type of information processing formalized by Turing? States of a Turing machine tape, or any other digital storage medium, are in principle reliably distinguishable, and the tape can be copied accurately without disturbing it.
From page 54...
... An n-qubit register has 2^n reliably distinguishable states corresponding to the n-bit strings |000...0> through |111...1>; more generally the Hilbert space of a compound quantum system has a dimensionality equal to the product of the Hilbert space dimensions of its parts.
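
In symbols (standard quantum-information facts rather than text from this page): a single qubit has a 2-dimensional state space, so a register of n qubits has dimension 2 x 2 x ... x 2 = 2^n. A 3-qubit register, for example, has the 2^3 = 8 basis states |000>, |001>, ..., |111>, and a 50-qubit register already has 2^50, roughly 10^15, reliably distinguishable basis states.
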
From page 55...
... Aside from computation per se, quantum information science includes the disciplines of quantum communication and quantum cryptography. The latter field is at a far more mature stage of development than is quantum computation per se, with successful laboratory experiments and even a few startup companies.
From page 56...
... It is already providing new ways of thinking about a wide variety of scientific and technical questions, and has begun to affect how science is taught, in a way that will bring a deeper understanding of the fruitful abstraction that is information not only to computer scientists but also to a broad segment of the lay public. For further information: · Jozef Gruska, Quantum Computing (McGraw-Hill, 1999, ISBN 0-07-709503-0)

