G
Innovation’s Quickening Pace: Summary and Extrapolation of Frontiers of Science/ Frontiers of Engineering Papers and Presentations, 1997-2000

James Schultz

INTRODUCTION

Scientific endeavor continues to accelerate rapidly on many fronts. Such a conclusion can be reasonably drawn from consideration of a series of symposia underwritten by the National Academy of Sciences (NAS) and the National Academy of Engineering (NAE). The symposia yearly bring together 100 of the nation’s outstanding young scientists and engineers, ages 30 to 45, from industry, academia, and government to discuss pioneering research in various scientific, technological, and engineering fields and in industry sectors. Meetings are held not only in the United States, but also in China, Germany, and Japan, underscoring the now-ordinary nature of international scientific collaboration.

Following a competitive nomination and selection process, participation in the Frontiers of Science (FOS) and Frontiers of Engineering (FOE) symposia is by invitation. Attendees are selected from a pool of researchers at or under age 45 who have made significant contributions to science; they include recipients of the Sloan, Packard, and MacArthur fellowships; winners of the Waterman award; Beckman Young Investigators; and NSF Presidential Faculty Fellows. The symposia make public the thinking and work of some of the country’s best and brightest, those now rising to positions of leadership within their institutions and industries.

Symposia presenters generally credit interdisciplinary approaches to fields of study as a means of providing new insights and techniques while significantly advancing specific disciplines. A survey of the symposia presentations indicates this cross-fertilization appears likely to intensify. For example, the use and integration of computation within all fields of research is now routine. The application of next-generation computation—from embedded microsensors and actuators within semi-intelligent machines and structures to the reverse engineering of biological systems (known as biomimetics) to enhanced modeling of cosmological, geological, and climatological phenomena—appears likely to facilitate additional advances, accelerating both basic understanding of fundamental processes and the efficacy of targeting applications and commercial spinoffs.

Constraints and Limitations

Given the availability of published papers and the timeliness of the research, the purview of this report is of necessity constrained to a span of not more than 4 years (none of the year 2000 FOS papers had become available as of this writing, nor had some of the 1999 FOS studies). The discussion herein is thus limited to FOS presentations from 1997 through 1999 and FOE presentations from 1997 through 2000. Presenters often confine themselves to the minutiae of their respective fields—an inclination to be expected, but one not conducive to an aggregate synthesis of progress across multiple disciplines.

Dated reports of innovation are inevitably eclipsed by other, newer advances that tend to play out rapidly in either the scientific or popular press. Thus, this report is also dated in the scope and freshness of announced results. Nevertheless, these presentations do offer a glimpse into certain trends that underscore the acceleration of scientific enterprise. In general, the pace of innovation appears to be quickening, with multiple advances in multiple fields leading to yet greater technological acceleration. Nothing is ever certain, but if the rate of applied innovation continues to increase, even the most optimistic forecasts may prove in retrospect hesitant and timid. Although rearranged thematically and edited for clarity, the sections that follow contain material from the original papers published by the cited authors.

TREND 1: COMPUTATION CONTINUES TO ADVANCE

As computational power continues a seemingly inexorable advance, interest in and exploitation of new microprocessor architectures and software techniques remain strong. Computing is increasingly Internet-centric, with computer nodes distributed among desktops rather than in climate-controlled repositories. As Gharachorloo notes, administrators are finding it easy and inexpensive to scale up computational power by installing multiple independent systems. Although management issues inevitably arise, making more systems available simultaneously in a loosely clustered environment allows for incremental capacity expansion. The accelerating growth of the World Wide Web should continue to encourage the development and deployment of high-performance computers with high availability and incremental scalability. Hardware improvements are, however, proving easier to implement than is the software to support those improvements.1

Outside the purview of the FOS/FOE symposia are advances in biological and optical computing, both still in the earliest stages of development. Nevertheless, their potential, either as stand-alone systems or integrated in some as-yet-unanticipated fashion, could have a substantial impact on future computer design and deployment. Even though robust real-world architectures for either have yet to be perfected, their promise, in terms of sheer computational power, is orders of magnitude beyond current serial-processing applications, and they should be considered as future contributors to advanced computing initiatives.

More promising still is quantum computation, which employs individual atoms, molecules, or photons in exploitation of quantum interference to solve otherwise intractable problems. In principle, computers could be built to take advantage of genuine quantum phenomena such as entanglement and interference that have no classical analogue and that offer otherwise impossible capabilities and speeds. Computers that thrive on entangled quantum information could thus run exponentially faster than classical computers, say Brassard et al.2

1. Frontiers of Engineering/1999. “Evolution of Large Multiprocessor Servers,” Kourosh Gharachorloo, pp. 11-19. Many of the FOE papers cited in this report can be found at <http://www.nae.edu/nae/NAEFOE.nsf/weblinks/NAEW-4NLSEK?OpenDocument>.

Brassard et al. explain that quantum parallelism arises because a quantum operation acting on a superposition of inputs produces a superposition of outputs. The unit of quantum information is the quantum bit, or qubit. Classical bits can take a value of either 0 or 1, but qubits can be in a linear superposition of the two classical states.
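
To make the idea of a superposition of inputs yielding a superposition of outputs concrete, the following minimal sketch (an illustration assuming Python with NumPy; it is not drawn from the Brassard et al. paper) simulates a single qubit as a two-component state vector, applies a Hadamard gate to place it in an equal superposition, and recovers the Born-rule measurement probabilities.

```python
# Minimal state-vector sketch of a qubit in superposition (illustrative only).
import numpy as np

ket0 = np.array([1.0, 0.0], dtype=complex)  # classical basis state |0>

# The Hadamard gate maps |0> to (|0> + |1>)/sqrt(2), an equal superposition.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

psi = H @ ket0  # linear superposition of the two classical states

# Born rule: measurement probabilities are the squared amplitude magnitudes.
probs = np.abs(psi) ** 2
print(probs)  # [0.5 0.5] -- equal chance of reading out 0 or 1
```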

The quartet assert that implementation of quantum computers presents a profound experimental challenge. Quantum computer hardware must satisfy fundamental constraints. First, qubits must interact very weakly with the environment to preserve their superpositions. Second, the qubits must interact very strongly with one another to make logic gates and transfer information. Lastly, the states of the qubits must be able to be initialized and read out with high efficiency. Although few physical systems can satisfy these seemingly conflicting requirements, a notable exception is a collection of charged atoms (ions) held in an electromagnetic trap. Here, each atom stores a qubit of information in a pair of internal electronic levels. Each atom’s levels are well protected from environmental influences, which is why such energy levels also are used for atomic clocks.

For the moment, however, no large-scale quantum computation has been achieved in the laboratory. Nevertheless, several teams around the globe are working on small-scale prototypes, and quantum computing may be possible within the decade.

TREND 2: A QUICKENING MEDICAL-GENETICS REVOLUTION

Genomic Medicine

New kinds of diagnostic and therapeutic treatments will likely be derived from an enhanced understanding of the human genome. As Fields et al. see it, the emerging field of functional genomics—the term refers to a gene’s inner workings and interplay with other genes—seeks to contribute to the elucidation of some fundamental questions. The three ask: How does the exact sequence of human DNA differ between individuals? What are the differences that result in disease or predisposition to disease? What is the specific role of each protein synthesized by a bacterial pathogen, by a model organism (e.g., Escherichia coli, yeast, the fruit fly, and the nematode), or by a human? How do proteins collaborate to perform the tasks required for life? Because not all genes are active in a given cell at a given time, which genes are used under which circumstances? How does this differential gene expression result in different types of cells and tissues in a multicellular organism?3

2. Frontiers of Science/1997. Gilles Brassard, Isaac Chuang, Seth Lloyd, and Christopher Monroe, at <http://www.pnas.org/cgi/content/full/95/19/11032>.

The trio point out that DNA arrays are being used to characterize human genetic variation. Extensive stretches of DNA sequence can be screened at once, and more than 4,000 variations have been found across the human genome. These small differences provide markers that can be used in subsequent studies to identify the genes responsible for particular traits and to uncover associations between specific genetic variations and either predisposition to common diseases or the efficacy and safety of therapies. Sophisticated tests should eventually allow the collection and analysis of cellular and genetic information that will significantly augment biologic understanding, changing the way drugs are developed and the way diseases are diagnosed, prevented, and treated.
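
As a schematic illustration of this kind of screening (a toy example with invented sequences, not the authors’ methodology), the sketch below compares sampled sequences against a reference and reports each single-base difference as a candidate marker.

```python
# Toy single-nucleotide variant screen against a reference (illustrative only).
REFERENCE = "ATGGCTTACCGA"

samples = {
    "subject_1": "ATGGCTTACCGA",  # matches the reference exactly
    "subject_2": "ATGACTTACCGA",  # variant at position 3 (G -> A)
    "subject_3": "ATGGCTTATCGA",  # variant at position 8 (C -> T)
}

def find_variants(reference: str, sequence: str) -> list[tuple[int, str, str]]:
    """Return (position, reference base, observed base) for each mismatch."""
    return [
        (i, ref, obs)
        for i, (ref, obs) in enumerate(zip(reference, sequence))
        if ref != obs
    ]

for name, seq in samples.items():
    print(name, find_variants(REFERENCE, seq))
# subject_1 []
# subject_2 [(3, 'G', 'A')]
# subject_3 [(8, 'C', 'T')]
```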

A number of studies are beginning to pinpoint both genetic predisposition and genetic function. For example, Uhl et al. say that precise identification of behavior-influencing candidate genes in humans and animals should lead to a more comprehensive understanding of the molecular/neurobiological underpinnings of complex behavioral disorders, as in animal studies that examine vulnerability to drug abuse. Classical human genetic studies also indicate significant genetic contributions to drug abuse, while genetic influences on drug-abuse behaviors can be found in strain-comparison, selective-breeding, quantitative-trait-locus, and transgenic mouse studies. Testing genetic markers found within the human brain is also now possible.4

Identification of poorly understood, if complex, disease mechanisms may enable physicians to short-circuit the ways in which illness takes hold. The role of prions—proteinaceous infectious agents, molecules that cause transmissible neurodegenerative diseases in mammals—is now being evaluated, report Westaway et al. One class of prion protein was discovered during studies of experimental scrapie disease in rodents, while other varieties are associated with heritable cytoplasmic traits in yeast and fungi. All prion proteins are host-encoded and come in at least two varieties. The benign “cellular” form is referred to as PrPC, a molecule that is most probably present in all mammals and expressed on the surface of neurons.5

3. Frontiers of Science/1998. Stanley Fields, Yuji Kohara, and David J. Lockhart, at <http://www.pnas.org/cgi/content/full/96/16/8825>.

4. Frontiers of Science/1996. George R. Uhl, Lisa H. Gold, and Neil Risch, at <http://www.pnas.org/cgi/content/full/94/7/2785>.

5. Frontiers of Science/1997. David Westaway, Glenn Telling, and Suzette Priola, at <http://www.pnas.org/cgi/content/full/95/19/11030>.

The three scientists contend that, while the prion hypothesis is not universally accepted, prions nevertheless may be gatekeepers controlling disease susceptibility. At the other extreme, prion proteins serve as the prototype for a new class of infectious pathogen and establish protein misfolding as a novel mechanism of disease pathogenesis, prompting the suggestion that simple organisms use prionlike mechanisms to switch physiological states and thereby adapt to new environments.

Genetic Cures?

Correcting defects at the genetic level should offer the most potent means of ensuring health and well-being. As defined by authors Kay et al., gene therapy is the introduction of nucleic acids into cells for the purpose of altering the course of a medical condition or disease. In general, with some exceptions, the nucleic acids are DNA molecules encoding gene products or proteins. Thus, the gene essentially can be considered a new pharmaceutical agent for treating many types of diseases. But because cells and organisms have developed powerful mechanisms to avoid the accumulation of extraneous genetic material, routine gene therapy is quite difficult, involving insertion of the appropriate gene into a target non-germ-cell tissue, such that an appropriate amount of gene product (usually a protein) is produced to correct a given malady.6

The three researchers assert that there are two primary candidates for gene transfer: viral and nonviral vectors. According to them, some believe that viruses will be most successful because they have evolved for millions of years to become efficient vehicles for transferring genetic material into cells, whereas others believe that the side effects of viruses and previous exposures will render the host resistant to transduction (gene transfer into the cell) and therefore preclude their long-term use in gene therapy. A number of additional viral vectors based on Epstein-Barr virus, herpes, simian virus 40, papilloma, nonhuman lentiviruses, and hepatitis viruses are currently being evaluated in the laboratory.

Kay et al. remark that once a vector is designed, two general approaches are used for gene transfer: ex vivo, wherein cells are removed, genetically modified, and transplanted back into the same recipient, and in vivo, which is accomplished by transfer of genetic materials directly into the patient. The latter is preferable in most situations, because the complexity of the former method makes it less amenable to wide-scale application.

6. Frontiers of Science/1997. Mark A. Kay, Dexi Liu, and Peter M. Hoogerbrugge, at <http://www.pnas.org/cgi/content/full/94/24/12744>.

TREND 3: THE NANOTECHNOLOGY POTENTIAL

Tools of the Trade

Building structures atom by atom, first to the molecular scale and then to the macro scale, is the defining characteristic of the emerging field of nanotechnology. Wiesendanger contends that a key nanotechnology tool is the scanning tunneling microscope (STM). In STM and related scanning-probe methods, he writes, a probe tip of atomic sharpness is brought within close proximity to the object under investigation until some physical signal can be measured that might originate from electronic, electrical, magnetic, optical, thermal, or other kinds of interactions between tip and sample. Point probing by a sharp tip allows one to receive local information about the physical, chemical, or biological state of a sample, which facilitates the investigation of site-specific properties.7

As Wiesendanger explains, to achieve high spatial resolution the distance between the probe tip and the sample is chosen to be smaller than the characteristic wavelength of the particular type of interaction acting between tip and sample. In the case of STM, that distance would be the electron wavelength, whereas for a scanning optical microscope it would be the optical wavelength. STM and related scanning-probe methods are therefore exceptional types of microscopes because they work without lenses, in contrast to optical and electron microscopes, and thus achieve superresolution.

He writes that, for strongly distance-dependent interactions, the dominant tip-sample interaction region can be as small as a few angstroms, thereby allowing the imaging of individual atoms and molecules on surfaces. Increasing the interaction strength between probe tip and sample in a controllable manner has become important for the fabrication of well-defined nanometer-scale structures. It has even become possible to synthesize artificial structures by sliding individual atoms and molecules on surfaces by means of the probe tip.
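
The strength of this distance dependence can be sketched with the textbook one-dimensional tunneling estimate, in which the current falls off roughly as exp(-2κd) with gap distance d. The short calculation below (an illustration using an assumed work function, not a figure from Wiesendanger’s paper) shows why a single extra angstrom of gap costs roughly an order of magnitude in current.

```python
# Back-of-the-envelope STM tunneling attenuation, I ~ exp(-2*kappa*d).
import math

HBAR = 1.054_571_8e-34  # reduced Planck constant, J*s
M_E = 9.109_383_7e-31   # electron mass, kg
EV = 1.602_176_6e-19    # joules per electronvolt

phi_eV = 4.5  # assumed tunneling barrier (typical metal work function)
kappa = math.sqrt(2 * M_E * phi_eV * EV) / HBAR  # decay constant, ~1e10 /m

for gap_angstrom in (4.0, 5.0, 6.0):
    d = gap_angstrom * 1e-10
    print(f"gap = {gap_angstrom:.0f} A  relative current ~ {math.exp(-2 * kappa * d):.2e}")
# Each additional angstrom of gap cuts the current by roughly a factor of ten,
# which is what gives the STM its atomic-scale sensitivity.
```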

According to Wiesendanger, the controlled manipulation of matter at the scale of individual atoms and molecules may lead to new generations of nanoelectronic and mass storage devices. Scanning-probe instruments have also become powerful metrological devices that allow measurement of the distances between objects with extremely high accuracy. The precision of distance measurements offered by scanning-probe methods is also of great importance for a variety of sensor applications.

Scientists using scanning-probe microscopes and advanced optical methods are able to closely study single molecules. As Bai et al. note in their writings, the findings of these studies not only confirm the results expected from studies of bulk matter, but also provide substantially new information on the complexity of biomolecules or molecules in a structured environment.8 The technique lays the groundwork for achieving the control of an individual molecule’s motion. Ultimately, this work may lead to such practical applications as miniaturized sensors.

7. Frontiers of Science/1997. Roland Wiesendanger, at <http://www.pnas.org/cgi/content/full/94/24/12749>.

The four assert that studying single molecules is important because molecular individuality plays a significant role even when molecular structure is complex. An intricate internal structure such as that found in a biomolecule, for example, results in a complex energy landscape. Alternatively, the molecule may be influenced by environmental factors that substantially change its behavior. Thus, the ability to distinguish different molecules under differing conditions and structures becomes crucial for understanding the system as a whole.

Biomolecules in living cells are one such example, the quartet explain. Even simple inorganic molecules on structured surfaces or in disordered systems, such as viscous liquids or glasses, provide situations in which molecular individuality matters. In all of these cases, the ability to study an individual molecule over time can give new insights unavailable by straightforward experiments on macroscopic populations of molecules. The new questions that single-molecule experiments pose move chemistry and physics into a realm more familiar to astronomers, who have direct observational knowledge of a single complex object such as the universe, and who must infer underlying rules and patterns. A single molecule under active control may well resemble elegant engineered machinery more than the “wild” molecules commonly found in the natural world.

Tiny Building Blocks

As Wooley et al. point out, over the past decade, polymer chemistry has attained the sophistication necessary to produce macromolecules with accurate control of structure, composition, and properties over several length scales, from typical small-molecule, angstrom-scale resolution to nanometer dimensions and beyond.9 Most recently, methods that allow for the preparation of polymeric materials with elaborate structures and functions—“bioinspired” materials—have been modeled from biological systems.

The four explain that, in biological systems, complexity is constructed by the ordering of polymer components (that is, polymers of amino acids, saccharides, and nucleic acids) through a combination of covalent bonds and weak interactions (such as hydrophobic, hydrogen bonding, and electrostatic interactions), with the process being controlled by the specific sequence compositions. An extension of these assembly concepts to synthetic macromolecules is primarily driven by the desire to create materials that resemble the structure and morphology of biological systems but that nevertheless possess the versatile compositions and properties of synthetic materials.

8. Frontiers of Science/1998. Chunli Bai, Chen Wang, X. Sunney Xie, and Peter G. Wolynes, at <http://www.pnas.org/cgi/content/full/96/20/11075>.

9. Frontiers of Science/1999. Karen L. Wooley, Jeffrey S. Moore, Chi Wu, and Yulian Yang, at <http://www.pnas.org/cgi/content/full/97/21/11147>.

According to Zhou, nanotechnology’s most effective building blocks may be extremely strong carbon nanotubes. He writes that few, if any, materials demonstrate as perfect a structure at the molecular level as does a single carbon nanotube, which exhibits excellent mechanical and thermal properties. A nanotube’s aspect ratio and small diameter permit excellent imaging applications, and theoretical calculations and measurements performed on individual carbon nanotubes have demonstrated that their elastic modulus is as high as that of diamond. Indeed, Zhou asserts, if engineers could make a defect-free carbon nanotube cable, a cable connecting Earth and the Moon would be within the realm of possibility. Carbon materials could also be used in spacecraft, both for vehicle structures and for providing power from fuel cells or lithium-ion batteries based on nanomaterials.10

Architectures built from pure carbon units can result in new symmetries and structures with unique physical properties, according to Ajayan et al. The three write that a carbon nanotube can be considered the ultimate fiber—a highly organized, near-ideally bonded carbon structure. The organization of the hexagonal honeycomb carbon lattice into cylinders with helical arrangement of hexagonal arrays creates an unusual macromolecular structure that is by far the best carbon fiber ever made.11

Because of their intrinsic structural robustness and high electrical conductivity, nanotubes show great promise for applications, the trio maintain. Studies of field emissions have demonstrated transmission of large currents at low operating voltages. Nanotubes emit radiation coherently, indicating possible use in electron holography. Electron emissions from arrays of tubes have also been used to construct an actual device: a cathode ray tube lighting element. Now nanotube arrays can also be grown on glass substrates for field-emitting flat panel displays.

According to Ajayan et al., techniques for the robust attachment of multiwall nanotubes to the ends of atomic force microscope (AFM) cantilevers have been developed for use as scanning probe tips. Such tips have a number of advantages, including crashproof operation and the ability to image deep structures inaccessible to conventional tips. Such tips have been used as high-resolution probes in electrostatic force microscopy and in nanolithography for writing 10-nanometer-wide lines at relatively high speeds. Chemistry can be restricted to the ends of the nanotubes, where topological defects are present, and the nanotube tip can be functionalized with distinct chemical groups and used to demonstrate nanoscale imaging with the ability to discriminate the local chemistry of the surface being probed.

10. Frontiers of Engineering/2000. “Science and Technology of Nanotube-Based Materials,” Otto Z. Zhou, pp. 89-92.

11. Frontiers of Science/1999. P.M. Ajayan, J.C. Charlier, and A.G. Rinzler, at <http://www.pnas.org/cgi/content/full/96/25/14199>.

In addition, note the three researchers, nanotubes can be considered as the ultimate carbon fiber and may one day replace existing micron-size carbon fibers in composites. Freestanding films of purified nanotubes have also shown promise for application as the anode intercalation host in lithium-ion rocking-chair batteries with high charging capacities. Nanotube films embedded in electrolytes have been shown to be efficient electromechanical actuators, with possible application as artificial muscles. Nanotubes are also being considered as energy-storage and -delivery systems because of possibilities of hydrogen storage and excellent electron-transfer characteristics.

As Tromp writes, the nanometer world has properties and promises of its own that go beyond scaling and that may create entirely new technologies based on new physics, materials, and paradigms. The worldwide drive to invest in and develop nanotechnology, he believes, reflects the optimism that such a technology will soon become reality and that it will spawn new capabilities, opportunities, and wealth. Novel nanoscale materials—quantum dots, for instance—have applications that more conventional materials do not offer.12

Tromp cites nanocrystal semiconductor memory as an application based on the small capacitance of quantum dots (or nanocrystals, as they are often called in this context). In a conventional field-effect transistor, he explains, inversion is obtained by applying a suitable bias voltage to the gate. The incorporation of nanocrystals in the gate insulator would provide a means to apply a field offset by charging the quantum dots. Working devices have been successfully manufactured.

Tromp thinks that patterned media will be the next step in magnetic storage. As he explains, rather than use continuous magnetic thin films, researchers divide the film into spatially separated magnetic dots, where a single dot stores a single bit. This has certain advantages in terms of the ultimate storage density but is itself limited by the superparamagnetic effect at a density of about 100 gigabits per square inch. When the bits become too small, the magnetic moment is subject to thermal fluctuations, and the stored information is subject to thermal decay. Recent attempts at fabricating magnetic nanocrystals have made much progress, and studies of their magnetic properties are under way. Hard magnetic thin films have also been fabricated using magnetic nanocrystals, even though the use of such nanocrystals in patterned magnetic media is not imminent.
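
The thermal-decay argument can be put in rough numbers: a stored bit is stable only while a grain’s magnetic anisotropy energy, KuV, stays well above the thermal energy kBT (ratios of a few tens are commonly quoted as the retention threshold). The sketch below uses illustrative assumed values, not figures from Tromp’s paper.

```python
# Thermal stability ratio K_u*V / k_B*T for shrinking magnetic grains.
import math

K_B = 1.380_649e-23  # Boltzmann constant, J/K
T = 300.0            # room temperature, K
K_U = 4.5e5          # anisotropy energy density, J/m^3 (assumed value)

for diameter_nm in (12.0, 8.0, 5.0):
    r = diameter_nm * 1e-9 / 2
    volume = (4.0 / 3.0) * math.pi * r**3  # treat the grain as a sphere
    ratio = K_U * volume / (K_B * T)
    print(f"grain {diameter_nm:4.1f} nm: K_u*V / k_B*T = {ratio:6.1f}")
# Shrinking the grain from 12 nm to 5 nm drops the ratio from ~98 to ~7,
# at which point the moment flips randomly and the stored bit decays.
```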

12. Frontiers of Engineering/2000. “Nanoscale Materials: Synthesis, Analysis and Applications,” Rudolf M. Tromp, pp. 93-102.

Tromp also points out the benefits of nanostructural inks, which represent a generalization of the previous two applications. Manufactured in bulk, nanostructural inks can be applied to a wide variety of substrates using a wide variety of application methods, including inkjet printing, screen printing, spray application, spinning, and immersion. Inks can be mixed, layered, and patterned as applications require. Nanostructural inks might present the largest opportunity for the practical application of quantum dots and nanocrystals in the next two decades.

TREND 4: NATURE AS ENGINEERING TEMPLATE

Dickinson writes that organisms experimented with form and function for at least 3 billion years before the first human manipulations of stone, bone, and antler.13 When pressed with an engineering problem, humans often draw guidance and inspiration from the natural world, using natural architectures as inspiration for so-called biomimetic, or bionic, design. Just as biologists are discovering the structural and physiological mechanisms that underlie the functional properties of plants and animals, engineers are beginning to develop means of fabrication that rely on nature for inspiration. As the performance gap between biological structures and mechanical analogs narrows, engineers may feel increasingly encouraged to seek and adopt such design concepts.

Dickinson points out that birds and bats, for instance, played a central role in one of the more triumphant feats of human engineering, that of airplane construction. In the 16th century, Leonardo da Vinci sketched designs for gliding and flapping machines based on anatomical study of birds. More than 300 years later, Otto Lilienthal built and flew gliding machines that were also patterned after birds. The wing-warping mechanism that enabled Orville and Wilbur Wright to steer their airplane past the cameras and into the history books is said to have been inspired by watching buzzards soar near their Ohio home.

Innovations in materials science, electrical engineering, chemistry, and molecular genetics are enabling designers, Dickinson explains, to plan and construct complicated structures at the molecular or near-molecular level. Examples include buckyballs, nanotubes, and the myriad microelectromechanical systems (MEMS) constructed with technology derived from the silicon chip industry. Integrated circuits themselves play a role in bionics projects aimed at constructing smart materials or mimicking the movement, behavior, and cognition of animals. Biological structures are complicated; only recently have engineers developed a sophisticated enough toolkit to mimic the salient features of that complexity. For their part, biologists are also beginning to understand how basilisk lizards walk on water, how penguins minimize drag, and how insects manage to remain airborne.

13. Frontiers of Science/1999. Michael H. Dickinson, at <http://www.pnas.org/cgi/content/full/96/25/14208>.

Dickinson writes that the fields of biology that use principles of structural engineering and fluid mechanics to draw structure/function relationships are known as functional morphology or biomechanics. These disciplines are of particular use to bionics engineers, he maintains, because the behavior and performance of natural structures can be characterized with methods and units that are directly applicable to mechanical analogues. In recent years, however, biomechanics has become increasingly sophisticated, aided by a battery of techniques, including x-ray cinematography, atomic-force microscopy, high-speed video, sonomicrometry, particle-image velocimetry, and finite-element analysis.

Dickinson cites a number of successful biomimetic designs that are based on the morphology of biological materials. A simple and well-known example is Velcro, invented by George de Mestral, who was inspired by the hours wasted pulling burrs off his dog’s fur after walks in the Swiss countryside. In the same category is the lotus leaf: Although living above muddy water and without active grooming capability, lotus leaves remain pristine and dirt free. This self-cleaning ability, Dickinson notes, results from tiny, wax-coated protuberances on the lotus-leaf surface. When water falls on a leaf, it does not spread out and wet the surface, as it would on the smooth leaves of most plants, but rather forms tiny beads atop the knobby surface, collecting dust and dirt as they roll off. A brand of paints is now available that makes use of a patented “LotusEffekt” to form a similar protective barrier.

Another example is that of the shark. As do many fast-swimming organisms, sharks exhibit skin scales that possess tiny ridges running parallel to the longitudinal body axis. Dickinson points out that the grooved body surface reduces drag through its influence on the boundary layer between turbulence and smooth water flow. Placed over the wings and fuselage of an Airbus 320, riblet sheets modeled on sharkskin reduced the aircraft’s fuel consumption.

Designers and engineers can mimic and utilize biological structures, Dickinson believes, provided that it is possible to fabricate the artificial material with the precision required to produce the desired effect. In the case of synthetic sharkskin, once engineers determined the correct groove geometry, it was relatively easy to mold plastic sheets that reproduce the pattern. House paints replicating lotus leaves are presumably laced with a material to mimic the rough surface of the leaves.

According to Dickinson, an example that well illustrates the crudeness of current microfabrication techniques is spider silk. Silks are proteins secreted by specialized glands found in many groups of arthropods. More than 4000 years ago, the Chinese domesticated the moth Bombyx mori, the primary source of textile silk. Although the quality of moth silk was great enough to have fueled the oldest intercontinental trade route in world history, its properties pale compared with those of spider silk. Spiders make a variety of different silks to serve different functions, but most research focuses on the dragline silk that individual spiders use to hoist and lower their bodies. This silk can extend and stretch by 30 percent without snapping; it is stronger than the best metal alloys or synthetic polymers. The prospect of ropes, parachutes, and bulletproof vests spun of spider silk has motivated the search for genes that encode silk proteins.

Dickinson cites the exoskeleton of insects as a good example of biological structural sophistication. He writes that the cuticle surrounding an insect is one topologically continuous sheet composed of proteins, lipids, and the polysaccharide chitin. Before each molt, the cuticle is secreted by an underlying layer of epithelial cells. Complex interactions of genes and signaling molecules spatially regulate the exact composition, density, and orientation of proteins and chitin molecules during cuticle formation. Temporal regulation of protein synthesis and deposition permits construction of elaborate layered cuticles that display the toughness of composite materials.

The result of such precise spatial and temporal regulation is a complex exoskeleton that is tagmatized into functional zones. Limbs consist of tough, rigid tubes made of molecular plywood, connected by complex joints made of hard junctures separated by a rubbery membrane. The most elaborate example of an arthropod joint is the wing hinge, the morphological centerpiece of flight behavior. The hinge consists of a complex interconnected tangle of five hard elements embedded within thinner, more elastic cuticle and bordered by the thick side walls of the thorax. In most insects, the muscles that power the wings are not attached to the hinge. Instead, flight muscles cause small strains within the walls of the thorax, which the hinge then amplifies into large oscillations of the wing. Small control muscles attached directly to the hinge enable the insect to alter wing motion during steering maneuvers.

Although the material properties of the elements within the hinge are indeed remarkable, Dickinson asserts, it is the structural complexity as much as the material properties that endow the wing hinge with its unique characteristics. Several research groups are actively attempting to construct miniature flying devices patterned after insects. Their challenge is not simply to replicate an insect wing, Dickinson notes, but to create a mechanism that flaps it just as effectively.

Microscale Materials

Chakraborty believes that certain classes of plastics—polymers—could be designed from the molecular scale up in order to perform microscale functions.14 Nature uses proteins and nucleic acids in much the same way, he writes: Evolution has devised schemes that allow the design of macromolecular building blocks that can self-assemble into functionally interesting structures. This does not mean that researchers should precisely copy the detailed chemistries of nature. Rather, they are exploring underlying universality in the schemes that natural systems employ to carry out a class of functions and are determining whether these adaptations affect biomimetic behavior.

14. Frontiers of Engineering/1999. “Design of Biomimetic Polymeric Materials,” Arup K. Chakraborty, pp. 37-43.

According to Chakraborty, many biological processes, such as transmembrane signaling and pathogen-host interactions, are initiated by a protein when it recognizes a specific pattern of binding sites on part of a membrane or cell surface. Recognition means that the polymer quickly finds and then adsorbs strongly on the pattern-matched region and not on others. He writes that synthetic systems that can mimic such recognition between polymers and surfaces could have a significant impact on advanced applications such as sensors, molecular-scale separation processes, and synthetic viral-inhibition agents. Chakraborty believes the ability of certain classes of molecules to form organized assemblies in solution has important commercial and biological consequences. A variety of products, such as detergents, emulsifiers, catalysts, and vehicles for drug delivery, already rely on this ability.

TREND 5: THE MATURATION OF AUTONOMOUS MACHINES

The field of smart materials and structures combines knowledge from physics, mathematics, chemistry, computer science, and materials, electrical, and mechanical engineering to accomplish such tasks as making a safer car, a more comfortable airplane, and structures capable of self-repair, assert Cao et al.15 The trio write that, in the future, with the help of miniaturized electromechanical devices, structures may be “intelligent” enough to communicate directly with the human brain. The development of supersensitive noses, ears, and eyes would enable humans to smell more scents, hear beyond a certain frequency range, and see what normally cannot be seen naturally, such as the infrared spectrum.

A smart structure, explain the three scientists, is a system containing multifunctional parts that can perform sensing, control, and actuation; it is a primitive analogue of a biological body. Smart materials are used to construct these smart structures, which can perform both sensing and actuation functions. The “I.Q.” of smart materials is measured in terms of their responsiveness to environmental stimuli and their agility. The first criterion requires a large amplitude change, whereas the second assigns faster response materials a higher I.Q.

15. Frontiers of Science/1998. Wenwu Cao, Harley H. Cudney, and Rainer Waser, at <http://www.pnas.org/cgi/content/full/96/15/8330>.

Commonly encountered smart materials and structures can be categorized into three different levels, according to Cao et al.: single-phase materials, composite materials, and smart structures. Many ferroic materials, and those with one or more large anomalies associated with phase-transition phenomena, belong to the first category. Functional composites are generally designed to use nonfunctional materials to enhance functional materials or to combine several functional materials to make a multifunctional composite. The third category is an integration of sensors, actuators, and a control system that mimics the biological body in performing many desirable functions, such as synchronization with environmental changes and self-repair of damage. A smarter structure would develop an optimized control algorithm that could guide the actuators to perform required functions after sensing changes.

Active damping is one of the most studied areas using smart structures, assert Cao et al. By using collocated actuators and sensors (i.e., physically located at the same place and energetically conjugated, such as force and displacement), a number of active damping schemes with guaranteed stability have been developed. These schemes are categorized on the basis of the feedback type used in the control procedure, i.e., velocity, displacement, or acceleration.
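
A minimal sketch of one such scheme, direct velocity feedback, appears below (an illustration with assumed parameter values, not material from the Cao et al. paper): feeding back a force proportional to the negative of the velocity measured at a collocated sensor simply adds damping to an otherwise undamped mode.

```python
# One-degree-of-freedom structure with active velocity feedback (illustrative).
m, k = 1.0, 100.0  # mass (kg) and stiffness (N/m); no passive damping
g = 4.0            # velocity feedback gain (N*s/m), assumed value
dt, steps = 1e-3, 5001

x, v = 0.01, 0.0   # released from a 1 cm initial displacement
for i in range(steps):
    force = -k * x - g * v  # structural restoring force + actuator force
    v += (force / m) * dt   # semi-implicit Euler integration
    x += v * dt
    if i % 1000 == 0:
        print(f"t = {i * dt:3.1f} s  x = {x * 1000:+7.3f} mm")
# With g > 0 the oscillation decays (effective damping ratio here is
# g / (2*sqrt(k*m)) = 0.2); with g = 0 it would ring indefinitely.
```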

Goldberg writes that economic pressures and researcher interest continue to press the evolution of simple control structures in more complex, semi-intelligent directions. Although anticipated for perhaps two decades, the widespread distribution of robots and products with some robotic capability—most notably, systems that monitor or direct the manufacture of goods—is becoming routine. While household robots remain the stuff of science fiction, microprocessor capability continues to advance sufficiently to raise the prospect of nontoy robotic devices in households within the decade.16

Goldberg estimates that by 1998 there were 700,000 robots at work in industry. Almost half of those robots were installed in Japan, 10,000 each in the United States and Germany, and the remaining 20,000 in Korea, Italy, France, and other countries in the developed world. By far the largest application areas are welding and painting, followed by machining and assembly. The largest customer is the automotive industry, followed by electronics, food, and pharmaceuticals. Robotics also continues to be an active and thriving area of research, with ongoing studies of kinematics (positions and velocities), dynamics (forces), and motion planning (to avoid obstacles).

16. Frontiers of Engineering/1998. “A Brief History of Robotics,” Kenneth Y. Goldberg, pp. 87-89.

Because motion planning is designed to allow robots to reach their destination without being stymied by difficult or unanticipated obstacles, robotics research is finding an unexpected benefit, according to Kavraki. She writes that pharmaceutical firms are using the results to plan the routes of therapeutic molecules to their docking sites on a variety of bodily proteins, in order to treat or cure disease.17
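
In spirit, such planners search a space of configurations for an obstacle-free route. The toy sketch below (a deliberately simplified grid search, far cruder than the probabilistic roadmap methods associated with Kavraki’s work) illustrates the underlying idea.

```python
# Shortest obstacle-free route on a grid via breadth-first search (toy example).
from collections import deque

GRID = [
    "S..#....",
    ".#.#.##.",
    ".#...#..",
    ".####.#.",
    "......#G",
]

def plan(grid: list[str]) -> int | None:
    """Return the number of steps from S to G, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    start = next((r, c) for r in range(rows) for c in range(cols) if grid[r][c] == "S")
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        (r, c), dist = queue.popleft()
        if grid[r][c] == "G":
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#" and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None  # goal blocked off by obstacles

print(plan(GRID))  # 15 steps for this grid
```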

Collaborative robots, or “cobots,” are a new type of robotic device, writes Michael Peshkin, intended for direct interaction with a human operator within a shared workspace. Cobots allow a true sharing of control between human and computer. The human operator supplies motive power and exerts forces directly on the payload, while the mechanism of the cobot serves to redirect or “steer” the motion of the payload under computer control. The computer monitors the force (direction as well as magnitude) applied by the operator to the payload.

In real time, Peshkin notes, these operator forces can be compared with programmed guiding surfaces, and motion in the direction that the operator pushes can be allowed, disallowed, or redirected. The human operator may be allowed complete freedom of motion of the payload, or in the opposite extreme, the payload may be restricted to a single curve through space. Thus, the full versatility of software is available for the production of virtual surfaces and other sensate effects.18
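
A simplified sketch of that comparison step appears below (an illustration assuming NumPy, not Peshkin’s implementation): the measured operator force is projected onto the direction of motion permitted by the programmed guiding surface, and the sideways component is simply not passed through.

```python
# Redirecting an operator's force along a programmed guide path (illustrative).
import numpy as np

def steer(operator_force: np.ndarray, path_tangent: np.ndarray) -> np.ndarray:
    """Keep only the force component along the allowed direction of motion."""
    t = path_tangent / np.linalg.norm(path_tangent)
    return np.dot(operator_force, t) * t

# The operator pushes diagonally; the guide path runs along the x axis.
f_op = np.array([3.0, 4.0])
print(steer(f_op, np.array([1.0, 0.0])))  # [3. 0.] -- sideways push absorbed
```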

Cobots rely on the worker to provide motive power, or they can give some powered assistance using only small motors. A much greater force is required for changes of direction, sometimes called “inertia management.” In cobots this is accomplished by the physical mechanism of the cobot rather than by motors, with a consequent improvement in both safety and smoothness of operation.

Peshkin believes there are several cobotic applications beyond manufacturing. In image-guided surgery, for example, safety is essential; a cobot’s ability to guide motion without possessing a corresponding ability to move on its own can alleviate concern about inadvertent movement due to software or hardware malfunction. Because the quality of a virtual surface enforced by a cobot originates in its physical mechanism rather than in servocontrolled actuators, harder and smoother surfaces are possible than can be achieved by a conventional robot. Preserving the critical sense of touch in surgery requires high-quality shared control between surgeon and robot, for which smoothness of motion is essential.

Popular weight training equipment, originally designed for rehabilitation, uses shaped cams and other mechanical components to confine a user’s limb or body motion to a particular trajectory. Peshkin observes that while these trajectories are somewhat adjustable, far greater versatility could be achieved if the motion trajectories were encoded in software rather than frozen into the mechanical design of the equipment. Cobots can enforce virtual trajectories with appropriate levels of smoothness, hardness, and safety.

17. Frontiers of Engineering/1998. “Algorithms in Robotics: The Motion-Planning Perspective,” Lydia E. Kavraki, pp. 90-94.

18. Frontiers of Engineering/1998. “Cobots,” Michael A. Peshkin, pp. 105-109.

At present, conventional robots’ interface to computers and information systems is their primary benefit. Collaborative robotics allows for commingling computer power with the innate intelligence of humans and the full responsiveness of their senses and dexterity. Thus far, such capabilities cannot be matched or replaced by robots alone.

ADDITIONAL TOPICS OF NOTE

Certain fields, while not trends in and of themselves, represent studies into disciplines that are intriguing and potentially relevant. A limited discussion of six such areas follows. The sections were taken from the original papers and edited for clarity. The language is primarily, and the assertions completely, those of the cited authors.

Climate Change

Studies of past climate changes show that the Earth system has experienced greater and more rapid change over larger areas than was generally believed possible, jumping between fundamentally different modes of operation in as little as a few years. Ongoing research cannot exclude the possibility that natural or human-caused changes will trigger another oscillation in the near future.19

Global climate change is of interest because of the likelihood that it will affect the ease with which humans make a living and, perhaps, the carrying capacity of the planet for humans and other species. Attention is focused on the possibility that human activities will cause global climate change, because choices affect outcomes.

Long climate-change records show alternations between warm and cold conditions over hundreds of millions of years associated with continental drift. Large glaciers and ice sheets require polar landmasses, continental rearrangement, and associated changes in topography that affect oceanic and atmospheric circulation; in turn, these affect and are affected by global biogeochemical cycles, with high levels of atmospheric carbon dioxide associated with warm times.

According to Alley et al., the last few million years have been generally cold and icy compared with the previous hundred million years but have alternated between warmer and colder conditions. These alternations have been linked to changes over tens of thousands of years in the seasonal and latitudinal distribution of sunlight on Earth caused by features of the Earth’s orbit. Globally synchronous climate change—despite some hemispheric asynchrony—is explained at least in part by the lowering of carbon dioxide during colder times in response to changes in ocean chemistry. We currently live in one of the warmer, or “high,” times of these orbital cycles. Previously, the coolest “low” times brought glaciation to nearly one-third of the modern land area.

19. Frontiers of Science/1998. Richard B. Alley, Jean Lynch-Stieglitz, and Jeffrey P. Severinghaus, at <http://www.pnas.org/cgi/content/full/96/18/9987>.

Recent examination of high-time-resolution records has shown that much of the climate variability occurred with spacings of one to a few thousand years. Changes within high times have been large, widespread (hemispheric to global, with cold, dry, and windy conditions typically observed together), and rapid (over periods as short as a single year to decades).

The changes have been especially large in the North Atlantic basin. In the modern climate, the warm and salty surface waters of the Gulf Stream heat the overlying atmosphere during winter, becoming dense enough to sink into and flow through the deep ocean before upwelling and returning in a millennium or so. Numerical models and paleoclimatic data agree that changes in this “conveyor belt” circulation can explain at least much of the observed millennial variability, although the reconstructed changes may be more dramatic than those modeled. Sinking can be stopped by North Atlantic freshening associated with increased precipitation or with melting of ice sheets on land and a resulting surge into the ocean. North Atlantic sinking also might be stopped by changes in the tropical ocean or elsewhere.

Of concern to Alley et al. is that some global warming models project North Atlantic freshening and possible collapse of this conveyor circulation, perhaps with attendant large, rapid climate changes. At least one model indicates that slowing the rate of greenhouse gas emissions might stabilize the modern circulation.

Fluorescence Sensing

After a long induction period, say de Silva et al., fluorescent molecular sensors are showing several signs of wide-ranging development.20 The clarification of the underlying photophysics, the discovery of several biocompatible systems, and the demonstration of their usefulness in cellular environments are key indicators. Another sign is that the beneficiaries of the field are multiplying and have come to include medical diagnostics through physiological imaging, biochemical investigations, environmental monitoring, chemical analysis, and aeronautical engineering.

The design of fluorescent molecular sensors for chemical species combines a receptor and a fluorophore for a “catch-and-tell” operation. The receptor module engages exclusively in transactions of chemical species, while the fluorophore is concerned solely with photon emission and absorption. Molecular light emission is particularly appealing for sensing purposes owing to its near-ultimate detectability, off/on switchability, and very high spatiotemporal resolution, including video imaging.

20. Frontiers of Science/1998. A. Prasanna de Silva, Jens Eilers, and Gregor Zlokarnik, at <http://www.pnas.org/cgi/content/full/96/15/8336>.

Neurobiology, say the authors, has been a special beneficiary of fluorescence-sensing techniques combined with high-resolution microscopy. This is so because the central nervous system is a highly complex network of billions of cells, each with an elaborate morphology. Enhanced understanding of the central nervous system requires more intense study of neuronal activity at the network level as well as increased subcellular resolution.

Neotectonics

Recent advances in Global Positioning System (GPS) technology have made it possible to detect millimeter-scale changes in Earth’s surface. According to Clement et al., these advances allow direct detection of relative motion between the large plates of the outermost rigid layer of Earth.21 Such motions previously could only be inferred from indirect evidence. A remarkable finding from these studies is that plate motions are nearly continuous, not episodic, processes, even on human time scales. Analyses of these motions indicate that much of the motion between plates occurs without producing earthquakes. In addition to monitoring interplate motions, GPS arrays are making it possible to study present-day deformation within mountain belts. The strain-rate models obtained from the GPS arrays can then be used to test specific theories of crustal deformation in mountain belts.
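The way millimeter-scale motion emerges from years of noisy daily GPS solutions can be illustrated with a synthetic example; the velocity and noise level below are invented.

    import numpy as np

    # Synthetic example: recover a steady plate velocity from daily GPS
    # positions with centimeter-level scatter.  Averaging years of daily
    # solutions is what makes millimeter-scale motion detectable.
    rng = np.random.default_rng(0)
    true_velocity = 35.0                          # invented plate rate, mm/yr
    t = np.arange(0, 5, 1 / 365.25)               # five years of daily epochs
    positions = true_velocity * t + rng.normal(0, 5.0, t.size)   # mm, 5 mm noise

    # Least-squares line fit: the slope is the velocity estimate.
    coeffs, cov = np.polyfit(t, positions, 1, cov=True)
    vel, vel_sigma = coeffs[0], np.sqrt(cov[0, 0])
    print(f"estimated velocity: {vel:.2f} +/- {vel_sigma:.2f} mm/yr "
          f"(true value: {true_velocity} mm/yr)")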

The theory of plate tectonics that revolutionized the earth sciences during the 1960s was based primarily on indirect evidence of past crustal movements. Motions of the seafloor crust were inferred from the magnetization of the crust that recorded polarity reversals (of known age) of Earth’s magnetic field. The symmetric magnetic patterns suggested that new crust was being created at the mid-ocean ridges and then was spreading away from the ridges. The seafloor-spreading hypothesis was successful in explaining many longstanding problems in earth sciences and became the basis of a new paradigm of crustal mobility.

Most plate boundary zones accommodate motion along relatively narrow regions of deformation. However, plate boundary zones involving continental lithosphere absorb relative motion by deforming over broad zones that are hundreds to even thousands of kilometers wide. An understanding of the forces at work in these zones is important because many of the most damaging earthquakes occur within these zones, say Clement et al. At present, little is known of how much of the relative motion between two tectonic plates is accommodated by earthquakes and how much is taken up by slow creep, either steady or episodic. Understanding the ratio of fast, seismic (earthquake-producing) slip to slow, aseismic slip is fundamentally important in the quest to assess the danger of active geologic faults.

21. Frontiers of Science/1999. Bradford Clement, Rob McCaffrey, and William Holt, at <http://www.pnas.org/cgi/content/full/96/25/14205>.

The use of GPS arrays capable of continuously monitoring a large region provides the resolution needed to monitor short- and long-term displacements that occur during and after earthquakes. In addition to estimating the component of aseismic creep between major earthquakes, we can now estimate the relative amounts of seismic and aseismic slip associated with a particular earthquake. Cases have been documented in which the aseismic slip after an earthquake accommodated as much slip as, or more than, the quake itself. If this is a common occurrence, more than half of plate-boundary slip may be aseismic.
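A slip budget of the kind described can be sketched as follows, using the standard seismic-moment relation M0 = mu x area x slip; every number in the sketch is invented for illustration.

    # Slip budget sketch: slip implied by catalogued earthquakes, via the
    # seismic moment M0 = mu * area * slip, versus the total slip a GPS
    # array says the fault accumulated; the shortfall is aseismic creep.
    MU = 3.0e10                       # crustal shear modulus, Pa (typical)
    fault_area = 50e3 * 15e3          # hypothetical 50 km x 15 km patch, m^2

    catalog_moments = [1.0e19, 3.0e18]            # invented event moments, N*m
    seismic_slip = sum(m0 / (MU * fault_area) for m0 in catalog_moments)

    geodetic_slip = 0.05 * 20         # 5 cm/yr of relative motion over 20 years, m

    aseismic_fraction = 1.0 - seismic_slip / geodetic_slip
    print(f"seismic slip:   {seismic_slip:.3f} m")
    print(f"geodetic slip:  {geodetic_slip:.3f} m")
    print(f"aseismic share: {aseismic_fraction:.0%}")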

Several GPS arrays are being deployed across plate boundaries in an effort to monitor slip events. For example, an array being installed across the Cascadia subduction zone offshore of Oregon and Washington State is capable of detecting pure creep events. These events result from slip on a fault that does not radiate seismic energy detectable by seismometers and hence produce no traditional earthquakes. Such creep events may explain the paradox that many fault zones currently show high strain rates even though their histories are largely devoid of earthquakes, or the quakes that did occur were too small to account for the long-term rate of slip.

Extrasolar Planets

The discovery of extrasolar planets has brought with it a number of surprises. To put matters in context, say Najita et al., the planet Jupiter has been a benchmark in planet searches because it is the most massive planet in the solar system and thus the planet we are most likely to detect in other systems.22 Even so, this is a challenging task. All the known extrasolar planets have been discovered through high-resolution stellar spectroscopy, which measures the line-of-sight reflex motion of the star in response to the gravitational pull of the planet. In our solar system, Jupiter induces in the Sun a reflex motion of only about 12 meters per second, which is challenging to measure given that the typical spectral resolution employed is several kilometers per second. Fully aware of this difficulty, planet-searching groups have worked hard to achieve this velocity resolution by reducing the systematic effects in their experimental method.
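The 12-meter-per-second figure can be checked with a simple momentum-balance estimate for a circular orbit, in which the star’s reflex speed is the planet’s orbital speed scaled by the planet-to-star mass ratio.

    import math

    # Momentum balance for a circular orbit: the star's reflex speed equals
    # the planet's orbital speed scaled by the planet-to-star mass ratio.
    G     = 6.674e-11    # gravitational constant, SI units
    M_SUN = 1.989e30     # kg
    M_JUP = 1.898e27     # kg
    A_JUP = 7.785e11     # Jupiter's orbital radius, m

    v_planet = math.sqrt(G * M_SUN / A_JUP)   # Jupiter's orbital speed
    v_star   = v_planet * (M_JUP / M_SUN)     # the Sun's reflex speed

    print(f"Jupiter's orbital speed: {v_planet / 1e3:.1f} km/s")
    print(f"Sun's reflex speed:      {v_star:.1f} m/s")   # ~12 m/s, as quoted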

22. Frontiers of Science/1999. Joan Najita, Willy Benz, and Artie Hatzes, at <http://www.pnas.org/cgi/content/full/96/25/14197>.

As one example, prior to detection, the stellar light is passed through an iodine gas-filled absorption cell to imprint a velocity reference on the stellar spectrum. However, after search techniques had been honed in this way for years to detect Jupiter-like worlds in other solar systems, a surprising result emerged: a much greater diversity of planetary systems than was expected. According to Najita et al., searches have revealed planets with a wide range of masses, including planets much more massive than Jupiter; planets with a wide range of orbital distances, including planets much closer to their suns than Jupiter is to our Sun; and planets with a wide range of eccentricities, including some with much more eccentric orbits than those of the planets in our solar system.

These results were essentially unanticipated by theory and reveal the diversity of possible outcomes of the planet-formation process, an important fact that was not apparent from the single example of our own solar system. This diversity is believed to result from the intricate interplay among the many physical processes that govern the formation and evolution of planetary systems, processes such as grain sticking and planetesimal accumulation, runaway gas accretion, gap formation, disk-driven eccentricity changes, orbital migration, and dynamical scattering with other planets, companion stars, or passing stars. Thus far, what has changed is not so much our understanding of the relevant physical processes but, rather, how these processes fit together, i.e., our understanding of their relative importance and role in the eventual outcome of the planet formation process.

Femtochemistry

The essence of the chemical industry, and indeed of life, is the making and breaking of molecular bonds. The elementary steps in bond making and breaking occur on the time scale of molecular vibrations and rotations, the fastest of which have periods of about 10 femtoseconds. Chemical reactions are, therefore, ultrafast processes, and the study of these elementary chemical steps has been termed “femtochemistry.” According to Tanimura et al., a primary aim of this field is to develop an understanding of chemical reaction pathways at the molecular level.23 With such information, one can better conceive of new methods to control the outcome of a chemical reaction. Because chemical reaction pathways for all but the simplest of reactions are complex, this field poses both theoretical and experimental challenges. Nevertheless, much progress is being made, and systems as complex as biomolecules can now be investigated in great detail.

Ultrafast dynamics of molecules have long been studied theoretically by integrating a relevant equation of motion. The time-dependent wave packet approach has proven to be particularly promising for following femtosecond chemical reactions in real time. Briefly, a molecular system can be characterized by the electronic potential energy surfaces on which wave packets propagate.

23. Frontiers of Science/1998. Yoshitaka Tanimura, Koichi Yamashita, and Philip A. Anfinrud, at <http://www.pnas.org/cgi/content/full/96/16/8823>.
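A compact illustration of the wave packet picture, not drawn from the cited paper, is the standard split-operator method: a Gaussian packet propagated on a one-dimensional harmonic potential energy surface (atomic units, illustrative parameters).

    import numpy as np

    # Split-operator propagation of a Gaussian wave packet on a 1-D harmonic
    # potential energy surface (atomic units, hbar = 1).  Each step applies
    # half a potential phase, the full kinetic phase in momentum space, and
    # the second half of the potential phase.
    n, L, m, omega, dt = 1024, 40.0, 1.0, 1.0, 0.01
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    V = 0.5 * m * omega**2 * x**2

    psi = np.exp(-0.5 * (x - 3.0) ** 2).astype(complex)  # packet displaced from the minimum
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / n))   # normalize

    half_V = np.exp(-0.5j * V * dt)
    full_K = np.exp(-0.5j * (k**2 / m) * dt)

    for _ in range(314):          # ~ half a vibrational period: pi / (omega*dt)
        psi = half_V * psi
        psi = np.fft.ifft(full_K * np.fft.fft(psi))
        psi = half_V * psi

    mean_x = np.sum(x * np.abs(psi) ** 2) * (L / n)
    print(f"<x> after half a period: {mean_x:+.2f} (started at +3.00)")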

Experimental efforts in the field of femtochemistry have exploited the pump-probe technique, wherein a pump laser pulse initiates a chemical reaction and a probe laser pulse records a “snapshot” of the chemical reaction at a time controlled by the temporal delay between the pump and probe pulses. By recording snapshots as a function of the temporal delay, one can follow the time evolution of a chemical reaction with time resolution limited only by the duration of the laser pulses. Beyond monitoring the outcome of a normal photoreaction, the phase and frequency of a femtosecond pump pulse can be tailored, as prescribed by theory, to drive a molecular state to a target location on its potential energy surface and then steer it toward a channel that favors a particular photochemical outcome.
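The effect of finite pulse duration on a pump-probe trace can be sketched by convolving an assumed exponential excited-state decay with a Gaussian instrument response; the lifetime and pulse width below are invented.

    import numpy as np

    # Toy pump-probe trace: an assumed exponential excited-state decay is
    # smeared by the instrument response, a Gaussian whose width reflects
    # the combined pump and probe pulse durations.
    dt_fs = 1.0
    t = np.arange(-200.0, 1000.0, dt_fs)       # delay axis; pump arrives at t = 0
    tau = 150.0                                # invented excited-state lifetime, fs
    response = np.where(t >= 0, np.exp(-t / tau), 0.0)

    fwhm = 50.0                                # combined pulse width, fs
    sigma = fwhm / 2.355
    half = int(4 * sigma)
    t_irf = np.arange(-half, half + 1) * dt_fs # symmetric kernel grid
    irf = np.exp(-t_irf**2 / (2 * sigma**2))
    irf /= irf.sum()                           # unit-area instrument response

    signal = np.convolve(response, irf, mode="same")   # the recorded trace

    for delay in (-100, 0, 100, 300, 600):
        i = int(np.argmin(np.abs(t - delay)))
        print(f"delay {delay:+5d} fs  ->  signal {signal[i]:.3f}")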

For example, say the authors, the excitation pulse might be a femtosecond, linear-chirped laser pulse, which can interact with the wave packet through a so-called intrapulse, pump-dump process. A negatively chirped pulse (frequency components shift in time from blue to red) might be tailored to maintain resonance with the wave packet as it evolves along the excited state surface. In contrast, a positively chirped pulse might quickly go off resonance with the wave packet, and the photoexcitation would be nonselective.
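What a linear chirp means in practice can be seen from the pulse’s instantaneous frequency, the time derivative of its phase; all numbers below are arbitrary.

    import numpy as np

    # Linearly chirped Gaussian pulse: E(t) = exp(-t**2/(2*tau**2)) * cos(w0*t + 0.5*b*t**2).
    # The instantaneous frequency is the time derivative of the phase,
    # w(t) = w0 + b*t, so a negative chirp rate b sweeps the pulse from
    # high frequency (blue) early to low frequency (red) late.
    tau = 20.0     # pulse duration (arbitrary units)
    w0  = 2.0      # carrier frequency
    b   = -0.02    # negative chirp rate

    for t in np.linspace(-40, 40, 5):
        envelope = np.exp(-t**2 / (2 * tau**2))
        print(f"t = {t:+5.1f}   envelope {envelope:.2f}   instantaneous frequency {w0 + b * t:.2f}")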

Understanding molecular motions and how they couple to the reaction coordinate is crucial for a comprehensive description of the underlying microscopic processes. This problem is particularly challenging because molecules exhibit strong mutual interactions, and these interactions evolve on the femtosecond time scale because of random thermal motion of the molecules. In essence, understanding the dynamics of a molecular system in the condensed phase boils down to a problem of nonequilibrium statistical physics. Combined with an impressive increase in computational capacity, recent developments in theoretical methodology such as molecular dynamics, path-integral approaches, and kinetic-equation approaches for dissipative systems have enlarged dramatically the scope of what is now theoretically tractable.

Biometric Identification

Personal identification, regardless of method, is ubiquitous in our daily lives. For example, we often have to prove our identity to gain access to a bank account, to enter a protected site, to draw cash from an ATM, to log in to a computer, to claim welfare benefits, to cross national borders, and so on. Conventionally, we identify ourselves and gain access by physically carrying passports, keys, badges, tokens, and access cards, or by remembering passwords, secret codes, and personal identification numbers (PINs). Unfortunately, passports, keys, badges, tokens, and access cards can be lost, duplicated, stolen, or forgotten, and passwords, secret codes, and PINs can easily be forgotten, compromised, shared, or observed.

Such loopholes and deficiencies in conventional personal identification techniques have caused major problems for all concerned, say Shen and Tan.24 For example, hackers often disrupt computer networks; credit card fraud is estimated at $2 billion per year worldwide; and in the United States, welfare fraud is believed to exceed $4 billion a year. Robust, reliable, and foolproof personal identification solutions must be sought to address the deficiencies of conventional techniques.

At the frontier of such solutions is biometrics-based personal identification, which relies on physical or biological measurements unique to an individual. Some frequently used measurements are height, weight, hair color, eye color, and skin color. As one may easily observe, although such measurements can accurately describe an individual, more than one individual could fit the description. To identify an individual based on biometric data, the data should be unique to that individual, easily obtainable, time-invariant (no significant changes over a period of time), easily transmittable, able to be acquired as nonintrusively as possible, and distinguishable by humans without much special training. The last characteristic is helpful for manual intervention, when deemed necessary, after an automated, biometrics-based identification/verification system has made an initial determination.

Automated biometrics-based personal identification systems can be classified into two main categories: identification and verification. In a process of verification (one-to-one comparison), the biometrics information of an individual who claims a certain identity is compared with the biometrics on the record for the person whose identity is being claimed. The result of the comparison determines whether the identity claim is accepted or rejected. On the other hand, it is often desirable to be able to discover the origin of certain biometrics information to prove or disprove the association of that information with a certain individual. This process is commonly known as identification (one-to-many comparison).
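The two query modes can be expressed in miniature as follows; the cosine-similarity matcher and the acceptance threshold are placeholders, not the matcher of any particular biometric system.

    import numpy as np

    # Verification (1:1) versus identification (1:N), with a placeholder
    # matcher: plain cosine similarity between feature vectors.  A real
    # system would substitute a biometric-specific matcher and threshold.
    THRESHOLD = 0.95   # hypothetical acceptance threshold

    def similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def verify(probe, claimed_template):
        """1:1 comparison: does the probe match the claimed identity's record?"""
        return similarity(probe, claimed_template) >= THRESHOLD

    def identify(probe, database):
        """1:N comparison: whose record does the probe most resemble, if anyone's?"""
        scores = {name: similarity(probe, tmpl) for name, tmpl in database.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] >= THRESHOLD else None

    rng = np.random.default_rng(1)
    db = {name: rng.normal(size=16) for name in ("alice", "bob", "carol")}
    probe = db["bob"] + rng.normal(scale=0.05, size=16)   # noisy recapture of bob

    print("verify claim 'bob':  ", verify(probe, db["bob"]))
    print("verify claim 'alice':", verify(probe, db["alice"]))
    print("identified as:       ", identify(probe, db))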

According to Shen and Tan, a typical automated biometrics-based identification/verification system consists of six major components. The first is a data-acquisition component that acquires the biometric data in digital format by using a sensor. For fingerprints, the sensor is typically a scanner; for voice data, the sensor is a microphone; for face pictures and iris images, the sensor is typically a camera. The quality of the sensor has a significant impact on the accuracy of the comparison results. The second and third components of the system are optional: the data compression and decompression mechanisms, which are designed to meet the data transmission and storage requirements of the system. The fourth component is of great importance: the feature-extraction algorithm, which produces a feature vector whose components are numerical characterizations of the underlying biometrics. Feature vectors are designed so that biometric data collected from one individual at different times are similar, while those collected from different individuals are dissimilar. In general, the larger a feature vector (without much redundancy), the higher its discrimination power, that is, the degree to which feature vectors representing two different individuals can be told apart. The fifth component is the “matcher,” which compares feature vectors obtained from the feature-extraction algorithm to produce a similarity score indicating the degree of similarity between the pair of biometric samples under consideration. The sixth component is a decision maker, which accepts or rejects the identity claim (or reports the most likely identity) on the basis of the similarity score.

24. Frontiers of Science/1998. Weicheng Shen and Tieniu Tan, at <http://www.pnas.org/cgi/content/full/96/20/11065>.
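The design criterion for feature vectors, that genuine (same-person) comparisons score systematically higher than impostor (different-person) comparisons, can be illustrated with synthetic vectors; the dimensions and noise level are arbitrary.

    import numpy as np

    # Design criterion in miniature: scores for genuine (same-person) pairs
    # should sit well above scores for impostor (different-person) pairs,
    # leaving a gap where the decision maker can place its threshold.
    rng = np.random.default_rng(2)

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def recapture(v):
        """A repeat measurement of the same person: the vector plus noise."""
        return v + rng.normal(scale=0.2, size=v.size)

    people = [rng.normal(size=32) for _ in range(20)]   # one "true" vector each

    genuine  = [cosine(p, recapture(p)) for p in people]
    impostor = [cosine(people[i], people[j])
                for i in range(20) for j in range(i + 1, 20)]

    print(f"genuine scores:  mean {np.mean(genuine):.3f}, min {np.min(genuine):.3f}")
    print(f"impostor scores: mean {np.mean(impostor):.3f}, max {np.max(impostor):.3f}")
    # A usable feature extractor keeps min(genuine) well above max(impostor).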
