
The National Information Infrastructure: A High-Performance Computing and Communications Perspective

Randy H. Katz, University of California at Berkeley
William L. Scherlis, Carnegie Mellon University
Stephen L. Squires, Advanced Research Projects Agency

ABSTRACT

Information infrastructure involves more than the building of faster and larger computers and the creation of faster and better communications networks. The essence of information infrastructure is a diverse array of high-level information services, provided in an environment of pervasive computing and computer communications. These services enable users to locate, manage, and share information of all kinds, conduct commerce, and automate a wide range of business and governmental processes. Key to this is a broad array of rapidly evolving commonalities, such as protocols, architectural interfaces, and benchmark suites. These commonalities may be codified as standards or, more likely, manifest as generally accepted convention in the marketplace.

Information technology has become essential in sectors such as health care, education, design and manufacturing, financial services, and government service, but there are barriers to further exploitation of information technology. Pervasive adoption of specific service capabilities, which elevates those capabilities from mere value-added services to infrastructural elements, is possible only when value can be delivered with acceptable technological and commercial risk, and with an evolutionary path rapidly responsive to technological innovation and changing needs. Private- and public-sector investment in national information infrastructure (NII) is enabling increased sectorwide exploitation of information technologies in these national applications areas. Although the private sector must lead in the building and operation of the information infrastructure, government must remain a principal catalyst of its creation, adoption, and evolution.

This paper explores the barriers to achieving NII and suggests appropriate roles for government to play in fostering an NII that can be pervasively adopted. The main locus of government activity in research and early-stage technology development is the federal High Performance Computing and Communications (HPCC) program. This program is evolving under the leadership of the National Science and Technology Council's Committee on Information and Communications.

INTRODUCTION

Information technologies are broadly employed in nearly all sectors of the economy, and with remarkable impact. Nonetheless, there are still enormous unrealized benefits to be obtained from effective application of information technology, particularly the intertwining of multiple distributed computing applications into national-scale infrastructural systems. In many sectors, including health care, education and training, crisis management, environmental monitoring, government information delivery, and design and manufacturing, the benefits would have profound significance to all citizens (as suggested in IIS, 1992, and Kahin, 1993). These sectors of information technology application have been called national challenge (NC) applications.

The pervasiveness and national role of the NC applications prevent them from developing dependencies on new technologies, even when those technologies offer important new capabilities, unless the risks and costs are manageable and there is a clear trajectory for growth in capability and scale that is responsive to new technologies and emerging needs. For this reason, while computing and communications technologies have taken hold in numerous specific application areas within these sectors, in most cases advanced information technology has yet to take on a significant sectorwide infrastructural role.

The NII, in the simplest terms, consists of available, usable, and interoperable computing and communications systems, built on underlying communications channels (the bitways) and providing a broad range of advanced information technology capabilities (the services). These services provide the basis for a wide range of use (the applications), ranging in scale up to national challenge applications. A key point is that NII is far more than communications connectivity; indeed it is generally independent of how communications connectivity is supplied.

Generally speaking, infrastructural systems consist of ubiquitous shared resources that industry, government, and individuals can depend on to enable more productive and efficient activity, with broadly distributed benefit (Box 1). The resources can include physical assets, such as the national air-traffic control system or the nationwide highway system. The resources can also include key national standards, such as the electric power standards, trucking safety standards, railroad track structure, and water purity standards. The resources are ubiquitous and reliable to an extent that all participants can commit to long-term investment dependent on these resources. This also implies a capacity for growth in scale and capability, to enable exploitation of new technologies and to assure continued value and dependability for users. The value can be in the form of increased quality and efficiency, as well as new opportunities for services.

It is therefore clear that a critical element of NII development is the fostering of appropriate commonalities, with the goal of achieving broad adoptability while promoting efficient competition and technological evolution. Commonalities include standard or conventional interfaces, protocols, reference architectures, and common building blocks from which applications can be constructed to deliver information services to end users. A fundamental issue is management and evolution, and in this regard other examples of national infrastructure reveal a wide range of approaches, ranging from full government ownership and control to private-sector management, with government participation limited to assurance of standard setting.

The Clinton Administration, under the leadership of Vice President Gore, has made national information infrastructure a priority (Gore, 1991; Clinton and Gore, 1993), as have other nations (examples: NCBS, 1992, and Motiwalla et al., 1993). The NII vision embraces computing and communications, obviously areas of considerable private investment and rapid technological change. The definition of the government role in this context has been ongoing, but several elements are clear. At the national policy level, the NII agenda embraces information and telecommunications policy, issues of privacy and rights of access, stimulation of new technologies and standards, and early involvement as a user (IITF, 1993). The federal High Performance Computing and Communications (HPCC) program, and the advanced information technologies being developed within it, play a key role in addressing the research and technical challenges of the NII.

In this paper we examine several aspects of the conceptual and technological challenge of creating information infrastructure technologies and bringing them to fruition in the form of an NII built by industry and ubiquitously adopted in the NC applications sectors. Topics related to telecommunications policy, intellectual property policy, and other aspects of information policy are beyond our scope.

In the first section below, we examine the federal government's role in fostering NII technologies and architecture. We then analyze the relationship between high-performance technologies and the NII and describe our three-layer NII vision of applications, services, and bitways. This vision is expanded in the next two sections, in which the NC applications and the technologies and architectural elements of the services layer are discussed. We present the research agenda of the Federal High Performance Computing and Communications Program in the area of information infrastructure technologies and applications, followed by our conclusions.

ROLE OF THE FEDERAL GOVERNMENT IN INFORMATION INFRASTRUCTURE

New technologies are required to achieve the NII vision of ubiquitous and reliable high-level information services. Many of the envisioned NII services place huge demands on underlying computing and communications capabilities, and considerable energy is being applied in industry, government, and research to creating these new capabilities. But there is more to the NII than making computers faster, smarter, and more widely connected.

Creation of national infrastructure entails delivery of services with sufficient reliability, ubiquity, and freedom from risk that they can be adopted sectorwide in national applications. The challenge to achieve this is considerable in any infrastructural domain and particularly difficult in information infrastructure. These goals usually involve rigorous standards and stability of technology, which appear all but precluded by the extremely rapid evolution in every dimension of information technology.

In the development of other kinds of national infrastructure, government has had a crucial catalytic role in fostering the broad collaboration and consensus-building needed to achieve these goals, even when industry has held the primary investment role in creating needed technologies and standards.

In the case of national information infrastructure, it is manifestly clear that it should not and indeed cannot be created and owned by the government. But the catalyzing role of government is nonetheless essential to bring the NII to realization. The government has an enormous stake in the NII as a consequence of its stake in the national challenge applications. Information infrastructure technologies play a critical role in the federal government's own plan to reengineer its work processes (Gore, 1993). Vice President Gore draws an analogy between the NII and the first use of telegraphy:

Basically, Morse's telegraph was a federal demonstration project. Congress funded the first telegraph link between Washington and Baltimore. Afterwards, though—after the first amazing transmission—most nations treated the telegraph and eventually telephone service as a government enterprise. That's actually what Morse wanted, too. He suggested that Congress build a national system. Congress said no. They argued that he should find private investors. This Morse and other companies did. And in the view of most historians, that was a source of competitive advantage for the United States.

Government fostered the technology through initial demonstrations and encouragement of private investment. But the U.S. telecommunications infrastructure has been built with private funds. And analogously, the NII implementation must be a cooperative effort among private- and public-sector organizations.

What are the specific roles for government? Addressing this question requires understanding how the NII differs from other major federal research and development efforts.

These considerations yield a four-pronged strategy for government investment in research and development related to the NII:

- Research to create new concepts and technologies;
- Stimulation of commonalities such as interfaces, protocols, and reference architectures;
- Demonstration of new capabilities through testbeds and pilot projects; and
- Proactive government consumption of NII technologies in support of government applications.

The first of these elements, research, is clear. Government has a traditional role as farsighted investor in long-term, high-risk research to create new concepts and technologies whose benefits may be broadly distributed. In the case of the NII, the government needs to invest both in problem-solving research, to fulfill the promise of today's vision, and also in exploratory research to create new visions for tomorrow. Government investment in research and development can support the rapid and continual transition of new NII capabilities into commercialization and adoption. Basic research can yield paradigmatic improvements with marketwide benefits. Intensive discussions among leaders from academia, industry, and government have been under way to develop a view of the technical research and development challenges of the NII (Vernon et al., 1994).

The second element involves stimulating commonalities within the NII that can achieve economies of scale while simultaneously creating a foundation for a competitive supply of services. Interface and protocol commonalities foster conditions where the risks of entry for both users and creators of technology are reduced. We use the term commonality because it is more inclusive than the conventional notion of standards. It covers routine development of benchmarks, criteria, and measures to facilitate making choices among competing offerings. It also encompasses the definition of common systems architectures and interfaces to better define areas for diversity and differentiation among competing offerings. Common architectural elements help both developers and users decouple design decisions. Of course, inappropriate standards can inhibit innovation or predispose the market to particular technological approaches. A critical issue for the NII is the speed of convergence to new conventions and standards. In addition, conventions and standards must themselves enable rapid evolution and effective response to new technology opportunities. These are familiar issues in the realm of conventionalization and standards generally; but they are also among the most fundamental considerations in achieving new high-level NII services, and are in need of specific attention.

Demonstration, the third element, involves government sponsorship of testbeds to explore scalability and give early validation to new technology concepts. Testbeds can span the range from basic technologies coupled together using ad hoc mechanisms, to large-scale integration projects that demonstrate utility of services for applications in a pilot mode. These latter integration experiments can bootstrap full-scale deployments in applications areas.

Finally, acting in the interest of government applications, the government can take a proactive role as consumer of NII technologies to stimulate its suppliers to respond effectively in delivering information infrastructure that supports government applications. Possible government applications include systems for government information, crisis response, and environmental monitoring.

The gigabit testbeds in the HPCC program offer a model for research partnerships among government, industry, and academe and represent a resource on which to build prototype implementations for national applications. Each testbed is cost-shared between government and the private sector and embraces the computer and telecommunications industries, university research groups, national laboratories, and application developers. The key function of the testbeds is to experiment with new networking technology and address interoperability and commonality concerns as early as possible.

RELATIONSHIP BETWEEN HIGH-PERFORMANCE TECHNOLOGIES AND THE NII

The federal HPCC program supports the research, development, pilot demonstration, and early evaluation of high-performance technologies. HPCC's focus in its initial years was on the grand challenges of science and engineering, with a strategy of developing a base of hardware and software technologies that can scale up to large-scale processing systems, out to wide-area distributed systems, and down to capable yet portable systems (FCCSET, 1994; CIC, 1994). These scalable technologies will contribute strongly to the NII, as will the legacy of cooperation between government, industry, and academia. These can greatly accelerate establishment of an evolvable information infrastructure architecture, with testbed development, protocol and architecture design, interoperability experiments, and benchmarking and validation experiments. This legacy has helped facilitate adoption of HPCC-fostered technologies by independent users by significantly cutting costs and risks of adoption.

This twofold HPCC stimulus of research and cooperation combines with a program emphasis on demonstration, validation, and experimental application to create a framework for government technology investment in NII. For this reason, HPCC was expanded in FY1994 to include a new major program component, Information Infrastructure Technology and Applications (IITA), focusing on creation of a universally accessible NII, along with its application to prototype NC applications. (These activities are described in more detail in the section "The Federal HPCC Program and the NII.")

Each of the other HPCC program activities contributes to IITA. For example, emerging large-scale information servers designed to provide information infrastructure services are based on HPCC-developed, high-performance systems architectures, including architectures based on use of advanced systems software to link distributed configurations of smaller systems into scalable server configurations. The microprocessors used in these large-scale systems are the same as those found in relatively inexpensive desktop machines. High-performance networking technologies, such as communications network switches, are increasingly influenced by processor interconnection technologies from HPCC. Networking technologies are also being extended to a broad range of wireless and broadcast modalities, enhancing mobility and the extent of personal access. Included in this effort are protocols and conventions for handling multimedia and other kinds of structured information objects.

The NII can be viewed as built on a distributed computing system of vast scale and unprecedented heterogeneity. HPCC software for operating systems and distributed computing is enhancing the interoperability of computers and networks as well as the range of information services. The software effort in the HPCC program is leading to object management systems, methodologies for software development based on assembly of components, techniques for high-assurance software, and improvements to programming languages. These efforts will contribute to the development and evolution of applications software built on the substrate of NII services.

THREE-LAYER NATIONAL INFORMATION INFRASTRUCTURE ARCHITECTURE

Within the HPCC community, a much-discussed conceptual architecture for the National Information Infrastructure has three major interconnected layers: National Challenge Applications, supported by diverse and interdependent NII communication and computation Services, built on heterogeneous and ubiquitous NII bitways (see Figure 1). Each layer sustains a diverse set of technologies and involves a broad base of researchers and technology suppliers, yielding a continuously improving capability for users over time. By delivering utility to clients in the layers above through common mechanisms or protocols, a rapid rate of evolution of capability can be sustained in a competitive environment involving diverse suppliers. Thus, developments in each of these layers focus both on stimulating the creation of new technologies and on determining the common mechanisms or protocols (the commonality) through which that capability can be delivered.
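This layered discipline can be made concrete with a small sketch. The following Python fragment is illustrative only, with hypothetical names (Bitway, InformationService, DocumentService, FiberBitway); it shows how each layer reaches the layer below solely through a slowly evolving interface, so that implementations beneath can change without disturbing the clients above.

    # Minimal sketch of the three-layer decomposition; all names are
    # hypothetical, not part of any actual NII specification.
    from typing import Protocol

    class Bitway(Protocol):
        """Bitways layer: moves raw bytes between endpoints."""
        def send(self, destination: str, payload: bytes) -> None: ...

    class InformationService(Protocol):
        """Services layer: high-level operations built on some bitway."""
        def store(self, key: str, document: bytes) -> None: ...
        def retrieve(self, key: str) -> bytes: ...

    class FiberBitway:
        """One of many interchangeable bitway implementations."""
        def send(self, destination: str, payload: bytes) -> None:
            print(f"sending {len(payload)} bytes to {destination}")

    class DocumentService:
        """A service implementation; applications see only the interface."""
        def __init__(self, bitway: Bitway) -> None:
            self._bitway = bitway
            self._docs: dict[str, bytes] = {}

        def store(self, key: str, document: bytes) -> None:
            self._docs[key] = document
            self._bitway.send("replica-site", document)  # uses layer below

        def retrieve(self, key: str) -> bytes:
            return self._docs[key]

    def records_application(service: InformationService) -> None:
        # Applications layer: depends only on the service interface, so
        # both the service and the bitway beneath it can be recompeted.
        service.store("record/123", b"immunization history")
        print(service.retrieve("record/123"))

    records_application(DocumentService(FiberBitway()))

Swapping FiberBitway for a wireless implementation, or DocumentService for a competing supplier's, requires no change to the application.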

This architecture addresses directly the challenge of scale-up in capability, size, and complexity within each of the three layers. Ongoing validation of concepts can be achieved, in each layer, through large-scale testbed experimentation and demonstration conducted jointly with industry, users, and suppliers of new technologies and information capabilities. If the evolution of the NII architecture proceeds as envisioned, the result will be the integration of new capabilities and increased affordability in the national challenge applications. Each layer supports a wide range of uses beyond those identified for the specific national challenge applications. For example, generalized NII service and bitway technologies can also support applications on a very small scale, extensions of existing services, ad hoc distributed computing, and so on.

The national challenge applications are described in more detail in the next section, with the issues addressed by the services layer in the succeeding section titled "Services." Bitways technologies are well covered in other sources, such as Realizing the Information Future (CSTB, 1994), and are not discussed here.

NATIONAL CHALLENGE APPLICATIONS

Numerous groups have developed lists of critical applications, characterized by the potential for a pervasive impact on American society and by the exploitation of extensive communications and information processing capabilities. For example, in 1993 the Computer Systems Policy Project identified design and manufacturing, education and training, and health care as the national challenges (CSPP, 1993). A more exhaustive list has been developed by the Information Infrastructure Task Force, representing the union of much of what has been proposed (IITF, 1994). Among these NC applications are health care, education and training, design and manufacturing, crisis management, environmental monitoring, and the delivery of government information and services.

In each of these applications there is an unmet challenge of scale: How can the service be made ubiquitously available with steadily increasing levels of capability and performance? The applications communities depend on information technology for solutions but face scaling barriers; hence the NII goal of crossing the threshold of ubiquity. In the absence of common architectural elements, such as interfaces, methods, and modules, it may be possible to demonstrate prototype solutions to specific applications problems through monolithic stovepipes, but such solutions offer no means of crossing this threshold of pervasiveness and dependability.

SERVICES

Overview

As we have noted, information infrastructure is more than bandwidth, switching, and ubiquitous communications access. It is (1) the common service environment in which NC applications are built. All applications share generic service needs: human interfaces (e.g., graphical user interaction, speech recognition, data visualization), application building blocks (e.g., planning subsystem, imaging subsystem), data and process management (e.g., search and retrieval, hyperlink management, action sequencing), and communications (e.g., IPC, mobile computation). Also, the engineering of applications requires (2) tools in the form of development environments, toolkits, operational protocols, and data exchange and action invocation standards from which service solutions can be combined, integrated, and reused. Finally, the engineering of applications becomes more efficient (as is already occurring for shrink-wrap software running on personal computers) in the presence of (3) a marketplace of reusable subsystems; in this manner, applications systems can be assembled from competitively acquired subsystems rather than built directly from the raw material of lines of code.

We elaborate briefly on some of the elements of the common service environment. One example is the integration framework, which allows separately developed tools to work together. In the electronic design automation community, for instance, the CAD Framework Initiative (CFI) accomplishes this by providing interface specifications for tool-to-tool communications and tool-to-database communications. In addition, the CFI has developed prototype implementations of these capabilities. These can form the basis of value-added and commercially supported packages and software toolsets. Commercial vendors of applications software for desktop computers are developing a variety of frameworks (such as CORBA, OLE, and OpenDoc) for integration of software applications. Users expect that commercial pressures will eventually result in some degree of integration of these various frameworks. This issue of multiple standards is discussed further below.

Considerations in Constructing the National Information Infrastructure

Common architectural elements. The national challenge applications obtain service capabilities delivered through common protocols or interfaces (known commercially as APIs, or application programming interfaces). Though service capabilities may evolve rapidly, to the benefit of users, they are delivered through particular interfaces or protocols that evolve more slowly. This insulates the client architecturally from the rapid pace of change in implementations, while still enabling the client to exploit new capabilities as soon as they appear, as long as they are delivered through the accepted interface. A competitive supply of services hastens both convergence to common protocols and evolution from them.
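To illustrate the insulation argument, consider the sketch below (illustrative Python with hypothetical names throughout): a client is coded against a stable search interface, and a later, faster implementation is adopted with no change to the client.

    # Sketch: a stable API with evolving implementations; illustrative only.
    from typing import Protocol

    class SearchService(Protocol):
        def search(self, query: str) -> list[str]: ...

    class LinearSearch:
        """First-generation implementation: scan every document."""
        def __init__(self, docs: list[str]) -> None:
            self._docs = docs

        def search(self, query: str) -> list[str]:
            return [d for d in self._docs if query in d]

    class IndexedSearch:
        """Later implementation: precomputed index, same interface."""
        def __init__(self, docs: list[str]) -> None:
            self._index: dict[str, list[str]] = {}
            for d in docs:
                for word in d.split():
                    self._index.setdefault(word, []).append(d)

        def search(self, query: str) -> list[str]:
            return self._index.get(query, [])

    def client(service: SearchService) -> None:
        # The client exploits whichever implementation is supplied, the
        # moment it appears, because both honor the accepted interface.
        print(service.search("infrastructure"))

    docs = ["national information infrastructure", "highway infrastructure"]
    client(LinearSearch(docs))
    client(IndexedSearch(docs))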

Industry standards, stovepipes, and risk. We have asserted that commonality among the protocols, interfaces, and data representations used in the services layer of the NII will be critical to its success. To the extent that emerging or evolving industry-standard commonalities are replaced by ad hoc or proprietary stovepipe approaches in the national challenge areas, applications developers place themselves at risk with respect to delivery of capability and future evolution paths. In particular, in return for complete ownership or control of a solution, they may give up the opportunity to ride the growth curves of rapidly advancing underlying technologies, such as multimedia, digital libraries, and data communication. The central question for the national challenge applications is how the applications constituencies can have both control of their applications solutions and participation in the rapid evolution of underlying technologies. Government, supported by research, can invest in accelerating the emergence of new common architectural elements, and in creating technologies that reduce the risk and commitment associated with adoption of rapidly evolving standards.

Evolution of commonalities. Accepted protocols naturally manifest a certain stickiness independent of their merit, because they become a stable element in determining systems structure and develop associated transition costs and risks. The history of TCP/IP and OSI is a good example of this well-known phenomenon, as is the recent introduction of de facto standards relating to the World Wide Web (URLs and HTML). In particular, research and government can take a leading role in establishing new commonalities that foreshadow industry standards.

Rapid evolution and multiple standards. There are numerous standards presently in use for image representation. Most, but not all, are open standards; several are proprietary or otherwise encumbered. Regardless of the degree of acceptance of any one of these standards, the pace of change is such that it would be foolish for a major software application developer to lock itself into accepting or producing images according to just one of them. Indeed, most major software application building blocks accept multiple such standards, increasing the robustness of the client applications with respect to both the technical characteristics and the market acceptance of any particular bitmap standard. In addition, tools are readily available for converting among the various representations for images. Thus, from the standpoint of applications architecture, a robust design can be created that depends not on the fate of any one of the many standards but on the evolution of the entire suite. The multiple commonalities emerge as customers and producers seek frameworks for competition in service niches. Experience suggests, however, that over time multiple related standards may begin to coalesce, as the commercial focus (and margins) moves to higher levels of capability and the differential commercial advantage of any specific standard diminishes or even becomes a liability. Anticipating this process can yield robust, scalable designs for major applications even when the markets for the subsystems they depend on are volatile.
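The sketch below makes the multiple-standards point concrete. It is a hypothetical Python fragment (the decoder bodies are stubs, not real codecs): an application accepts several image formats by converting each on arrival into one internal representation, so its architecture rides the whole suite of standards rather than the fate of any one of them.

    # Sketch: accepting multiple image standards through conversion;
    # format names are real, but the decoders are illustrative stubs.

    def decode_gif(data: bytes) -> bytes:
        return data  # stand-in for real GIF decoding

    def decode_jpeg(data: bytes) -> bytes:
        return data  # stand-in for real JPEG decoding

    # Registry of converters from each accepted external format into the
    # application's single internal form.
    CONVERTERS = {"gif": decode_gif, "jpeg": decode_jpeg}

    def load_image(fmt: str, data: bytes) -> bytes:
        """Dispatch on format; callers see only the internal form."""
        try:
            return CONVERTERS[fmt](data)
        except KeyError:
            raise ValueError(f"unsupported image format: {fmt}") from None

    # Adopting a new standard is a registry addition; clients are untouched.
    CONVERTERS["tiff"] = decode_gif  # a real TIFF decoder would go here

    print(len(load_image("jpeg", b"raw image bytes")))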

Competition and layering. With the right approach to standards and infrastructural subsystems, diverse underlying technologies can evolve into common, shareable, and reusable services that can be leveraged across multiple NC applications. Alternative implementations of a frequently used service, such as display window management, eventually will lead to the identification of best practices that can be embodied in a common services layer, for example, for human interfaces. And robust designs of the applications layers above will enable this rapid evolution to be accepted and indeed exploited. (Consider the rapid rate of release of new versions of World Wide Web browsers, the multiplicity of platforms they run on, and the rapid evolution of the many multimedia and other standards they rely on. The Web itself, however, evolves at a slower rate and is not invalidated by these changes in particular niche services. The standards on which the Web is based evolve more slowly still.) The conclusion we draw is that simultaneous evolution at multiple layers is not only possible but also needs to be an explicit architectural goal if ubiquity is to be attained at the applications level.

Concerning layers. Services depend on other services for their realization. For example, a protocol for microtransactions will likely rely on other protocols for encryption and authentication. This enables a microtransaction system not only to be designed independently of the particular encryption and authentication services, but also to sustain later upgrade of (or recompetition for) those services in a robust manner. In spite of this dependency, services are not organized rigidly into layers as is, for example, the seven-layer OSI model. The term "layering" is instead meant to suggest that services naturally depend on other services. But the exact interdependency can change and evolve over time. The commonalities through which services are delivered thus form a set of multiple bottlenecks in a complex and undulating hourglass (using the analogy of CSTB, 1994).
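A sketch of this kind of layering without rigidity follows, in illustrative Python with hypothetical names: the microtransaction service names its dependencies only as interfaces, so the authentication or encryption service beneath it can later be upgraded or recompeted without redesign.

    # Sketch: a service depending on other services only through interfaces.
    from typing import Protocol

    class Authenticator(Protocol):
        def verify(self, user: str, credential: str) -> bool: ...

    class Cipher(Protocol):
        def encrypt(self, plaintext: bytes) -> bytes: ...

    class PasswordAuthenticator:
        def __init__(self, table: dict[str, str]) -> None:
            self._table = table

        def verify(self, user: str, credential: str) -> bool:
            return self._table.get(user) == credential

    class XorCipher:
        """Toy stand-in; a deployment would substitute a strong cipher."""
        def encrypt(self, plaintext: bytes) -> bytes:
            return bytes(b ^ 0x5A for b in plaintext)

    class MicrotransactionService:
        def __init__(self, auth: Authenticator, cipher: Cipher) -> None:
            self._auth, self._cipher = auth, cipher

        def charge(self, user: str, credential: str, cents: int) -> bytes:
            if not self._auth.verify(user, credential):
                raise PermissionError("authentication failed")
            # The record travels encrypted; this service neither knows nor
            # cares which cipher implementation lies underneath.
            return self._cipher.encrypt(f"{user}:{cents}".encode())

    svc = MicrotransactionService(PasswordAuthenticator({"alice": "pw"}),
                                  XorCipher())
    print(svc.charge("alice", "pw", 5))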

Service classification. A consequence of the above argument is that the success of the overall NII does not depend on achievement of a particular master infrastructural architecture. It must be emphasized, however, that success does depend strongly on the emergence of a broad variety of infrastructural service architectures designed with scale-up, and indeed ubiquity, in mind. Ubiquity (as suggested in the comments above on multiple standards) resides in the appearance of representatives of a set of related commonalities, not in any particular protocol or component. This also suggests that there is no ultimately correct layering lurking in the soup of services, but rather multiple candidates and arrangements. Without commonalities there is no national information infrastructure, but the need for specific all-encompassing commonalities is mitigated to the extent that technologies and tools for interoperability are available. That is, suites of related, evolving commonalities can be supported to the extent that conversion and interoperability tools exist. The issue reduces to finding the right balance in this equation.

The government thus can employ a mixed strategy in fostering national challenge applications through infrastructural commonalities. It can stimulate development of new services, creation and evolution of new architectural commonalities, and development of readily available technologies of interoperability. Direct research and development is the most effective way to stimulate new service capabilities and associated commonalities. The government can also exploit its own market presence (though the leverage is less), taking an activist role in industry forums for conventionalization (informal emergent commonalities) and standards (formalized commonalities).

An illustrative layered model. One possible service taxonomy, elaborated below, classifies generic services into categories: human interfaces, applications building blocks, data and process management, and communications. Human interface services include window managers (e.g., Motif, NextStep), tools for speech handling and integration (generation as well as recognition), handwriting recognition, data visualization packages, toolkits for audio and video integration, and so on. Applications building blocks include planning packages, scheduling packages, data fusion, collaboration support, virtual reality support, and image processing and analysis. Data and process management services consist of capabilities for configuration management, shared data spaces, process flows, data integration, data exchange and translation, and data search and retrieval. Communications services include ubiquitous access through various communications mechanisms (e.g., wireless as well as wired connections into the bitways), mobility services to support users as they move through the points of connection into the network, interprocess communications and remote procedure call mechanisms to support distributed processing, and trust mechanisms such as authentication, authorization, encryption, passwords, and usage metering.
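One way to read this taxonomy is as a registry in which competing providers of a generic service are discovered by category. The Python sketch below is purely illustrative; aside from the window managers named above, every entry is a hypothetical placeholder.

    # Sketch: the service taxonomy as a lookup structure; entries other
    # than the window managers named in the text are hypothetical.
    SERVICE_REGISTRY: dict[str, dict[str, list[str]]] = {
        "human interfaces": {
            "window management": ["Motif", "NextStep"],
            "speech": ["recognizer-A", "synthesizer-B"],
        },
        "applications building blocks": {
            "scheduling": ["scheduler-X"],
        },
        "data and process management": {
            "search and retrieval": ["indexer-Y"],
        },
        "communications": {
            "authentication": ["auth-service-Z"],
        },
    }

    def providers(category: str, service: str) -> list[str]:
        """Return the competing implementations registered for a service."""
        return SERVICE_REGISTRY.get(category, {}).get(service, [])

    print(providers("human interfaces", "window management"))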

The service layers themselves evolve as new underlying technologies appear that provide new functionality or better ways of doing things. A construction kit can support the assembly and evolution of applications based on the service suite. Elements of this kit, also elaborated below, could include software environments for developing applications, evolution of standard operational and data exchange protocols, software toolkits and software generators for building or generating well-defined portions of applications, and frameworks for integrating tools and data into coherent, interoperable ensembles.

The value of a common services layer is conceptually indicated by Figure 2. In Figure 2(a), the lack of a common services infrastructure leads to stovepipe implementations, with little commonality among the service capabilities of the various national challenges. In Figure 2(b), a common set of services is leveraged among the national challenges, aided by a collection of toolkits, integration frameworks, and applications generators.

Information Enterprise Elements

Commonalities usually (but not always) emerge in the presence of a diversity of evolving implementations. A commonality in the form of a protocol is an abstraction away from the details of implementation that allows utility or value to be delivered in an implementation-independent manner to the service client. This suggests a threefold analysis for service capabilities: utility of some kind, delivered through a particular commonality such as a protocol, abstracting away the details of the diversity of implementations. Of course, the commonalities themselves evolve; they just evolve more slowly.

Figure 3 shows examples of elements for each of the three layers of the national information infrastructure architecture. In the figure, the three columns indicate, for each layer, the utility delivered to clients, the commonality (protocol or interface) through which that utility is delivered, and the diversity of implementations beneath it. This organization focuses attention on two critical issues, alluded to in the foregoing, that must be addressed in the design of service commonalities: sustaining a competitive diversity of implementations beneath each commonality, and insulating clients from the pace of change in those implementations.

Opportunities for competition are naturally sought by service clients, and a diversity of implementations indicates success in this regard. At the level of bitways, for example, the pace of change is rapid, and there are wide-ranging approaches for achieving a given capability (e.g., physical media may consist of optical fiber, land mobile wireless radios, or laser communications). The challenge for the application developer is how to exploit the continuing innovation while remaining insulated from continuous change; the client wants to ride the curves of growth while avoiding continual reengineering.

One conclusion to draw from this analysis is that research must focus not only on creation and demonstration of new kinds of service capability, but also on the scientific and technological aspects of architectural design: designing and evaluating candidates for protocol and API definitions, looking at both the supplier and client perspectives.

THE FEDERAL HPCC PROGRAM AND THE NII

Overview

In FY1994, the federal HPCC program was extended with a new responsibility: to develop Information Infrastructure Technology and Applications (IITA) that demonstrate prototype solutions to selected national challenge applications, exploiting the full potential of rapidly evolving high-performance communications and information processing capabilities. The details of the program's evolving goals and research plans are given in its annual reports to Congress (FCCSET, 1994; CIC, 1994).

With the incorporation of IITA within its research agenda, the HPCC program is advancing key NII-enabling technologies, such as intelligent system interfaces, real environments augmented with synthetic environments, image understanding, language and speech understanding, intelligent agents aiding humans in the loop, and next-generation data and object bases for electronic libraries and commerce. This is being coupled with a vigorous program of testbed experimentation that will ensure continued U.S. leadership in information processing technologies.

IITA efforts are designed to strengthen the HPCC technology base, broaden the markets for these technologies, and accelerate industry development of the NII. Federal HPCC agencies are working closely with industry and academia in pursuit of these objectives. These objectives are to be accomplished, in part, by accelerating the development of readily accessible, widely used, large-scale applications with significant economic and social benefit. The HPCC program's original focus of enhancing computing and communications capabilities is thus extended to address a broader set of technologies and applications that have an immediate and direct impact on critical information capabilities affecting every citizen.

As we have described in the previous section, the development of such applications is predicated on (1) creating the underlying scalable computing technologies for advanced communication services over diverse bitways, effective partitioning of applications across elements of the infrastructure, and other applications support services that can adapt to the capabilities of the available infrastructure; and (2) creating and inserting a richly structured and intelligent service layer that will significantly broaden the base of computer information providers, developers, and consumers while reducing the existing barriers to accessing, developing, and using advanced computer services and applications. In parallel with these activities, a more effective software development paradigm and technology base must also be developed, since full-scale implementations in support of the national challenges will be among the largest and most complex applications ever implemented. This will be founded on the principles of composition and assembly rather than construction, solid architectures rather than ad hoc styles, and more direct user involvement in all stages of the software life cycle. The entire technology base developed in this program, including services and software, will be leveraged across the national challenges, leading to significant economies of scale in the development costs.

The intended technical developments of IITA include the following:

- Information infrastructure services;
- System development and support environments;
- Intelligent interfaces; and
- National challenge applications built on these technologies.

Each of the three technology areas (the first three bullets above) is discussed in additional detail in the following subsections, which include a sampling of technical subtopics. The national challenges have already been summarized in a prior section.

Information Infrastructure Services

Services provide the underlying building blocks upon which the national challenge applications can be constructed. They are intended to form the basis of a ubiquitous information web usable by all. A rich array of interdependent services bridges the gap between the communications bitways and the application-specific software components that implement the national challenges.

System Development and Support Environments

These provide the network-based software development tools and environments needed to build the advanced user interfaces and the information-intensive NC applications.

Intelligent Interfaces

Advanced user interfaces will bridge the gap between human users and the emerging national information infrastructure. A wide range of new technologies that adapt to human senses and abilities must be developed to provide more effective human-machine communication. The IITA program must achieve high-level user interfaces that satisfy the many different needs and preferences of the vast numbers of citizens who will interact with the NII.

SUMMARY AND CONCLUSIONS

Much of the discussion of the national information infrastructure has been at the applications level or the level of the bitways. Various groups, including Congress and the Clinton administration, have identified candidate NC applications, while others have dealt with the issues of making the various existing and emerging communications infrastructures interoperable. This discussion suggests a shift in focus to the services layer. The right collection of capabilities at this level of the infrastructure will have an extraordinary impact on a wide range of applications.

We have cataloged many of the key technology areas needed for the service layer of the NII: information infrastructure services, systems development and support environments, and intelligent interfaces. The further development of these technologies and their integration into coherent and robust service architectures, incorporating the principles of utility, diversity, and commonality as described here, will be a major challenge for the information technology research community in coming years.

Cost-shared sponsorship of pilot demonstrations and testbeds is a key role for government in accelerating the development of the NII. In each NC application area, opportunities exist to demonstrate early solutions, including the potential for scaling up. We suggest that in the exploration of commonality and conversion issues, testbeds can also help address the fundamental issue of ubiquity. The scale of the enterprise, and the fundamental opportunities being addressed, necessitate cooperation among industry, government, and academia. We have suggested appropriate roles and approaches to cooperation, with emphasis on the roles of government and research. This is predicated on the assumption that government, in addition to sponsoring key basic research, has a crucial catalytic role in working with all sectors to scale the national applications up to the point of ubiquity and reliance.

ACKNOWLEDGMENTS

The ideas expressed in this paper have been influenced by discussions with colleagues at DARPA, especially Duane Adams, Steve Cross, Howard Frank, Paul Mockapetris, Michael St. Johns, John Toole, Doyle Weishar, and Gio Wiederhold. Our ideas have also benefited from extensive discussions with participants in the HPCC program from a diverse collection of federal agencies: Howard Bloom (NIST), Roger Callahan (NSA), Y.T. Chien (NSF), Mel Ciment (NSF), Sherri de Coronado (NIH), Ernest Daddio (NOAA), Norm Glick (NSA), Steve Griffin (NSF), Dan Hitchcock (DOE), Paul Hunter (NASA), Jerry Linn (NIST), Dan Masys (NIH), Cherie Nichols (NIH), Walter Shackelford (EPA), and Selden Stewart (NIST).

References

Clinton, William J., and Albert Gore, Jr. 1993. Technology for America's Economic Growth: A New Direction to Build Economic Strength, February 22.

Committee on Information and Communication (CIC). 1994. High Performance Computing and Communications: Technology for the National Information Infrastructure, Supplement to the President's Fiscal Year 1995 Budget. National Science and Technology Council, Washington, D.C.

Computer Science and Telecommunications Board (CSTB), National Research Council. 1994. Realizing the Information Future: The Internet and Beyond. National Academy Press, Washington, D.C.

Computer Systems Policy Project (CSPP). 1993. Perspectives on the National Information Infrastructure: CSPP's Vision and Recommendations for Action. Computer Systems Policy Project, Washington, D.C., January 12.

Federal Coordinating Council for Science, Engineering, and Technology (FCCSET), Office of Science and Technology Policy. 1993. FCCSET Initiatives in the FY 1994 Budget. Office of Science and Technology Policy, Washington, D.C., April 8.

Federal Coordinating Council for Science, Engineering, and Technology (FCCSET), Office of Science and Technology Policy. 1994. High Performance Computing and Communications: Toward a National Information Infrastructure. Committee on Physical, Mathematical, and Engineering Sciences, Office of Science and Technology Policy, Washington, D.C.

Gore, Albert, Jr. 1991. "Infrastructure for the Global Village," Scientific American 265(3):150-153.

Gore, Albert, Jr. 1993. From Red Tape to Results, Creating a Government That Works Better & Costs Less: Reengineering Through Information Technology, Accompanying Report of the National Performance Review. U.S. Government Printing Office, Washington, D.C., September.

Information Infrastructure Task Force (IITF). 1993. The National Information Infrastructure: Agenda for Action. Information Infrastructure Task Force, U.S. Department of Commerce, Washington, D.C., September 15.

Information Infrastructure Task Force (IITF), Committee on Applications and Technology. 1994. Putting the Information Infrastructure to Work. NIST Special Document No. 857. Information Infrastructure Task Force, U.S. Department of Commerce, May.

Information Technology Association of America (ITAA). 1993. National Information Infrastructure: Industry and Government Roles. Information Technology Association of America, Washington, D.C., July.

Institute for Information Studies (IIS). 1992. A National Information Network: Changing Our Lives in the 21st Century. Annual Review of the Institute for Information Studies (Northern Telecom Inc. and the Aspen Institute), Queenstown, Md.

Kahin, Brian. 1993. "Information Technology and Information Infrastructure," in Empowering Technology: Implementing a U.S. Strategy, Lewis M. Branscomb (ed.). MIT Press, Cambridge, Mass.

Motiwalla, J., M. Yap, and L.H. Ngoh. 1993. "Building the Intelligent Island," IEEE Communications Magazine 31(10):28-34.

National Computer Board of Singapore (NCBS). 1992. "A Vision of an Intelligent Island: The IT2000 Report," March.

Vernon, Mary K., Edward D. Lazowska, and Stewart D. Personick (eds.). 1994. R&D for the NII: Technical Challenges. Report of a workshop held February 28 and March 1, 1994, in Gaithersburg, Md. EDUCOM, Washington, D.C.