The national information infrastructure (NII) is envisioned as a national public internetwork that encompasses existing networks, such as the Internet, the public telephone network and its extensions, and CATV distribution systems and their extensions, as well as new network technologies yet to be invented. Today, these networks appear to the user to be separate and noninteroperable, in the sense that a user cannot reasonably make a telephone call over the Internet or most CATV systems, cannot reasonably watch video over the Internet or the telephone network (except at unacceptably poor levels of quality by entertainment standards), and cannot send data over the telephone network or most CATV systems (except in the limited sense of using these media for access to data networks or for point-to-point data transmission). It is clear that underlying the NII will be a collection of proprietary networks incorporating a variety of different technologies; indeed, there is general agreement that this is highly desirable. The question addressed in this white paper is what the NII will look like from the user perspective, and how it might differ from today's limited-functionality and noninteroperable networks. We address this question by describing a vision of what the NII could be from a user perspective. In particular, we describe those characteristics of the NII that we believe will be important to users, including connectivity and mobility, quality of service options, security and privacy, openness to new applications across heterogeneous transport and terminal environments, and pricing.
This white paper is an outgrowth of the planning workshop organized by the NII 2000 Steering Committee. Representatives of a number of industries participating in the NII and its underlying technologies were present. Not surprisingly, given the great variety of industries and their respective largely independent histories and markets, the representatives were often "talking past" one another, not sharing a common vision of what the NII should be, and not sharing the common vocabulary necessary for productive discussion.
In the deployment of a massive infrastructure such as the NII, there is great danger that near-term tactical decisions made by the diverse participants in the absence of a long-term strategic vision will result in an infrastructure that precludes the broad deployment of unanticipated but important applications in the future. Such an infrastructure will not meet the needs of the users and the nation, and will offer its builders a lower return on investment than would otherwise be possible. It might even result in widespread abandonment of existing infrastructure in favor of new technologies, in similar fashion to the recent widespread and costly abandonment of partially depreciated analog communications facilities.
In this white paper, we take the perspective of the users of the future NII and ask fundamental questions about how it should appear to them. It is our belief that, near-term corporate strategies aside, an NII that best meets the future needs of the users will be the most successful, not only in its benefits to society and the nation, but also in terms of its return on investment. Thus, the full spectrum of industrial and government participants should have a shared interest in defining a strategic vision for the long term, and using that vision to influence near-term business decisions.
Looking at the NII from a long-term user perspective, we naturally envision a network that has many capabilities beyond those of any of the current networks or distribution systems. Provisioning such a broad range of capabilities would have cost implications and is economically feasible only to the extent that it provides value to the user well in excess of the incremental costs. This is problematic if one accepts one of our fundamental hypotheses, namely, that we cannot possibly anticipate all the big-hitting applications of the NII. However, it should be emphasized that it is not necessary that all near-term deployments provide all the capabilities incorporated into a strategic vision. Indeed, one critical aspect of such a vision is that it should be easy and cost effective to add new technologies and capabilities to the NII as unanticipated applications and user needs emerge. If this is achieved, it is only necessary that near-term investments be compatible with a long-term strategic vision, and hence not preclude future possibilities or force later disinvestment and widespread replacement of infrastructure. This is admittedly not straightforward but is nevertheless a worthwhile goal.
One can anticipate the NII falling somewhere on the spectrum from a collection of proprietary and noninteroperable networks (largely the situation today) to a single, universal network that appears to the user to seamlessly and effortlessly meet all user needs. We argue that from the user perspective the NII, although consisting internally of a diversity of heterogeneous transport and terminal technologies, should offer the seamless deployment of a wide range of applications and openness to new applications. Not all participants in the NII may judge this to be in their best interest, and of course they all encounter serious cost and time-to-market constraints. However, if they take longer-term opportunities into account in the course of their near-term business decisions, we believe that they, the users, and the nation will all benefit greatly in the long term. It is our hope that the NII 2000 technology deployment project will move the collective deliberations in this direction.
First we define some consistent terminology for the remainder of this white paper.
The users of the NII are people. The NII will consist of a network (or more accurately a collection of networks) to which are attached access nodes at its edge. We distinguish between two types of devices connected to access nodes: information and applications servers, and user terminals (for simplicity, we will abbreviate these to servers and terminals). A networked application is a set of functionality that makes use of the transport services of the network and the processing power in the servers and terminals, and provides value to users. Servers make databases or information sources available to the terminals, or provide processing power required to provision applications. Users interact directly with terminals, which provide the user interface and may also provision processing power or intelligence in support of applications. Examples of terminals are desktop computers, wireless handheld PDAs, and CATV set-top boxes.
There are two generic classes of applications: user-to-user or communications applications, and user-to-server or information access applications. These can be mixed, for example, a collaborative application that combines voice telephony with database access.
The business entities involved in the operation of the NII are network service providers, who provision the transmission and switching equipment in the network, and application service providers, who provision the servers and maintain the databases involved in the applications. These may be one and the same, as is the case for the telephone application in the public telephone network. The users may themselves be the application service provider, as when they load software purchased at a computer store onto their terminals. Other entities involved are the equipment vendors, who develop, manufacture, and market the equipment (transmission, switching, terminals, etc.), and the application vendors, who develop and market applications for deployment in the NII.
The most basic property of a network from a user perspective is the logical connectivity it offers. The network is said to provide logical connectivity between two access nodes if it is feasible to transport data between those nodes through the network. When one access node sends data to another access node, we call the former the source and the latter the sink. It may be the case that each logically connected access node is simultaneously a source and a sink (a duplex logical connection) or that one may be exclusively a source and the other exclusively a sink (simplex logical connection).
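The distinction between simplex and duplex logical connections can be made concrete with a small model. This is an illustrative sketch only; the class, field, and node names are hypothetical and not part of any NII specification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LogicalConnection:
    """A provisioned logical connection between two access nodes (illustrative model)."""
    node_a: str
    node_b: str
    duplex: bool  # True: each end is both source and sink; False: node_a -> node_b only

    def can_send(self, src: str, dst: str) -> bool:
        """Is it feasible to transport data from src to dst over this connection?"""
        if self.duplex:
            return src != dst and {src, dst} == {self.node_a, self.node_b}
        return src == self.node_a and dst == self.node_b

# A telephone call is duplex; a CATV broadcast channel is simplex.
phone_call = LogicalConnection("alice", "bob", duplex=True)
tv_channel = LogicalConnection("headend", "set-top", duplex=False)

print(phone_call.can_send("bob", "alice"))        # True: either end may be the source
print(tv_channel.can_send("set-top", "headend"))  # False: no reverse path exists
```

The same model extends naturally to the multicast and multisource connection types discussed next, by allowing one source with many sinks or one sink with many sources.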
Logical connectivity should be distinguished from network topology. The topology refers to the physical layout of the transmission media used in the network (coax, wire pairs, fiber, radio). Examples are the star topology of the public telephone network and the tree topology of a CATV system. The logical connectivity is determined not only by the topology, but also by the internal switching nodes. Generally, the user is not directly concerned with the topology of the network, although some of the important characteristics of the network (like throughput and quality of service; see below) are affected or constrained by the topology. On the other hand, the network service provider is critically concerned with the topology, as it affects costs.
An important distinction is between the possible logical connections in a network (which may be astronomically large), and the actual provisioned logical connections required by a particular application (typically small in number). A similar distinction must be made between the possible applications (i.e., those that have been developed and made available to users) and those that are actually in use at a particular time. An actual application in use is called an instance of that application, and the actual provisioned logical connections in use by that application are called instances of connections.
There are several important types of connections that arise in the context of specific applications:
Multicast or multisource connections are by their nature simplex. If there are only two access nodes, the connection is necessarily point-to-point. If more than two access nodes are involved, and if, for example, every access node can send information to and receive information from all the remaining nodes, then the connectivity can be thought of as a combination of simplex multisource connections (one to each node) and simplex multicast connections (one from each source). Many other combinations are possible.
From a technology standpoint, multisource connectivity merely requires flexibility in the number of simultaneous point-to-point connections to a given sink, which is a natural capability of packet networks. Similarly, simulcast connectivity requires flexibility in the number of simultaneous point-to-point connections to a source. Multicast connectivity, on the other hand, while it makes sparing use of network resources and is the only scalable approach to broadcast, requires fundamental capabilities that must be anticipated in the design and provisioning of the network.
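The scaling argument for multicast can be seen with simple arithmetic: full connectivity among n access nodes built from simplex point-to-point connections requires n(n - 1) of them, whereas n multicast connections (one rooted at each source) suffice. A brief sketch:

```python
def point_to_point_links(n):
    """Simplex point-to-point connections needed so every node reaches every other."""
    return n * (n - 1)

def multicast_links(n):
    """Simplex multicast connections (one rooted at each source) for the same connectivity."""
    return n

for n in (2, 10, 100):
    print(n, point_to_point_links(n), multicast_links(n))
# At n = 100, the point-to-point approach needs 9,900 connections versus
# 100 multicasts, which is why multicast is the only scalable route to broadcast.
```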
From the user perspective, it is desirable to have full logical connectivity in a network. Any limitations on connectivity restrict the functionality and availability of both information access and communications applications. For example:
Similarly, the user would like to see all three types of connections (point-to-point, broadcast, and multisource), since eliminating any one of them will preclude valued applications. For example:
Network or applications service providers may view it as in their best interest to restrict the range of applications, information servers, or application service providers that they make available to their subscribers. However, the experience of the computer industry makes it clear that users will choose options with greater flexibility, given the choice and appropriate pricing. For example, restricted-functionality appliances such as the stand-alone word processor quickly lost market share to the personal computer, which offered access to a broad range of applications.
Conversely, in an environment with greater logical connectivity, it becomes more economically viable for new and innovative applications to reach the market. Application service providers with access to a broad range of users (not restricted to the limited market of subscribers to a particular service provider) quickly exploit their economies of scale. Again, the computer industry offers valuable lessons. The personal computer made available an embedded large market for new applications running on widely deployed terminals. Applications vendors targeting the most widely deployed architectures gained the upper hand because of the larger development investments they were able to make.
In conclusion, greater logical connectivity and more connectivity options offer more value to users and hence make the network service provider more economically viable; in addition, there are natural market forces that favor application service providers that target those high-connectivity networks.
The classification of connections is simplest to apply where users are in fixed locations; users, however, are actually mobile. They may be satisfied with accessing the network from a fixed location, which implies that they can access it only at those times when they are physically in that location. Increasingly, users expect to be able to access the network more flexibly. There are several cases:
The flexible and moving location options require high logical connectivity in the network. Thus, greater logical connectivity provides great value to users who desire to be mobile. As witnessed by the rapid growth of cellular telephony, this is a large proportion of users, at least for telephone, data, and document applications.
Like multicast forms of broadcast connections, the moving location option requires fundamental capabilities in the network that must be anticipated in its design and provisioning, since connection instances must be dynamically reconfigured. This option makes much more sense for some applications than others. For example, it is reasonable to conduct a phone conversation while in motion, but more difficult and perhaps even dangerous to watch a video presentation or conduct a more interactive application. Even the latter becomes feasible, however, for users in vehicles driven or piloted by others.
Aside from the logical connectivity of the network, the second most important characteristic to users is the available range of applications. It is a given that the application possibilities cannot be anticipated in advance, and thus the network should be able to accommodate new applications.
Again, the evolution of the computer industry offers useful insights. Because the desktop computer was a programmable device, a plethora of new applications was invented long after the architecture was established. Equally important was the availability of the market to many application vendors, which led to rapid advancement. A primary driving force for the desktop computer was that it freed the user from the slow-moving bureaucracy of the computer center and gave the user direct access to a wealth of willing application vendors.
The Internet was architected with a similar objective. The network functionality is kept to a minimum, with no capability other than the basic transport of packets from one access node to another embedded within the network. Beyond these minimal capabilities, the intelligence and functionality required to implement particular applications are realized in the servers and terminals. This architecture separates the development and deployment of applications from the design and provisioning of the network itself. New or improved applications can be deployed easily without modifications or added capabilities within the network, as long as they comply with any limitations imposed by the network design (see "Quality of Service," below). This characteristic has been the key to the rapid evolution of Internet applications, and in turn to the success and rapid growth of the Internet itself.
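This architectural principle (minimal functionality in the network, with application intelligence confined to servers and terminals) can be caricatured in a few lines of code; the operation names and data formats below are invented purely for illustration:

```python
import json

def network_transport(packet):
    """The network's only job: move opaque packets between access nodes.
    It neither inspects nor understands application content."""
    return packet  # a real network would route and queue, but adds no semantics

# All application functionality lives in the endpoints: a terminal and a server.
def terminal_request(key):
    return json.dumps({"op": "lookup", "key": key}).encode()

def server_handle(packet):
    database = {"nii": "national information infrastructure"}
    request = json.loads(packet)
    return json.dumps({"result": database.get(request["key"], "unknown")}).encode()

# Deploying this "application" required no change to network_transport at all.
reply = server_handle(network_transport(terminal_request("nii")))
print(json.loads(reply)["result"])
```

The point of the caricature is that a wholly new application was introduced by changing only the endpoint functions, leaving the transport untouched, which is precisely the property credited here for the Internet's rapid evolution.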
To be of maximum benefit to users, we believe the NII should be designed according to a philosophy similar to that for the Internet (although without some of its limitations). One can summarize these characteristics as follows:
Even when the NII is designed according to this philosophy, there is still a major obstacle to the economic deployment of new communications (as opposed to database) applications: the community-of-interest problem. It is inherent in a network environment that before one user is willing to purchase an application, there must be a community of other users able to participate in that application. For example, an isolated user can usefully benefit from a shrink-wrapped personal computer application purchased locally, but in a networked environment may depend on other interested users who have purchased the same application. This can place a daunting obstacle in the way of new applications and limit the economic return to application vendors or service providers. Fortunately, there is a solution. If applications are defined largely in software rather than in hardware primitives, they can be dynamically deployed as needed to terminals participating in the application. We call this dynamic application deployment.
A crucial element of the NII required to support dynamic application deployment is the ability to transfer software application descriptions in the establishment phase of an application instance. Deployment can also occur during an application instance (if it is desired to change or extend the application functionality). This requires a reliable connection to the terminal, even where other aspects of the application (such as audio or video) may not require reliable protocols. Since such application descriptions are likely to be large, the user is also better served if there is a broadband connection for this purpose to limit the duration of the establishment phase.
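The establishment-phase transfer described above might be sketched as follows. All interfaces and payloads here are hypothetical; no actual NII protocol is implied, and compression simply stands in for the goal of shortening the establishment phase.

```python
import zlib

class Terminal:
    """Sketch of a terminal supporting dynamic application deployment."""
    def __init__(self):
        self.installed = {}

    def establish(self, app_name, description):
        """Establishment phase: the application description arrives over a
        reliable channel and is installed before any media flows begin.
        Reliability matters here even when the audio or video that follows
        can tolerate loss, since a corrupted description is useless."""
        self.installed[app_name] = zlib.decompress(description)

# The provider compresses the (potentially large) description to shorten establishment.
description = zlib.compress(b"ui layout + codec choices + control logic")
terminal = Terminal()
terminal.establish("shared-whiteboard", description)
print(terminal.installed["shared-whiteboard"])
```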
Flexibility in deployment of applications also requires a full suite of control primitives as a part of the network control and signaling interface to the user terminal. Anticipating all the capabilities needed here is a key design element of the NII. Such a design also needs to control the complexity inherent in such a heterogeneous environment, for example by defining an independent "universal" signaling layer together with adaptation layers to different network technologies and preexisting signaling systems.
Many applications call for control over aspects of the quality of service (QOS) provided by the network. From the user and application perspective, QOS parameters include the following:
The user is of course also concerned with the pricing of the application, which is likely to be related to the QOS it requires. The QOS parameters of the network itself affect users and applications, and include:
There are two distinct philosophies of network design:
Rarely does a network strictly follow one of these models. For example, the Internet offers as one option guaranteed delivery (zero loss) service, but does not guarantee against delay. Conversely, the public telephone network offers delay guarantees, but does not guarantee against corruption. Even for a single QOS parameter, best-effort and guarantees can be mixed for different connections, by reserving network resources for some connection instances and providing only leftover resources to other connection instances. QOS guarantees have a cost associated with them, principally in reserving resources, making them unavailable to other connection instances even when unused. There is also a substantial increase in the complexity of the network associated with QOS guarantees. The QOS of the network can sometimes be modified more simply in the access nodes, for example by introducing forward error-correction coding to reduce the corruption probability (at the expense of added delay).
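The mixing of guaranteed and best-effort service on a shared resource can be sketched as a simple admission-control model. The model, names, and capacity figures below are invented for illustration only.

```python
class Link:
    """One link mixing guaranteed and best-effort service: guaranteed
    connections reserve capacity at establishment; best-effort traffic
    shares whatever is left over."""
    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps
        self.reserved = 0.0

    def admit_guaranteed(self, rate_mbps):
        """Admission control: refuse a new guarantee rather than degrade old ones."""
        if self.reserved + rate_mbps > self.capacity:
            return False
        self.reserved += rate_mbps
        return True

    def best_effort_capacity(self):
        """Worst case for best-effort traffic: only the unreserved capacity.
        (A work-conserving design would also lend out idle reservations.)"""
        return self.capacity - self.reserved

link = Link(100.0)
print(link.admit_guaranteed(60.0))   # True: reservation accepted
print(link.admit_guaranteed(50.0))   # False: would exceed capacity
print(link.best_effort_capacity())   # 40.0 left over for best-effort traffic
```

The cost noted in the text is visible in the model: the 60 units of reserved capacity are withheld from best-effort traffic even when the guaranteed connection is idle.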
There is considerable controversy over the relative merits of best-effort vs. guaranteed QOS transport. It appears that both models have merit and may reasonably coexist. QOS guarantees will be mandatory for some applications: consider the possible consequences of unanticipated interactive delay in a remote telesurgery application! It has not yet been established or demonstrated that best-effort transport can achieve entertainment-quality video. On the other hand, the simplicity and lower cost of best-effort transport seem desirable for other applications, like interactive graphics. The QOS requirements (or lack thereof) vary widely across different applications. Thus, the NII should be capable of provisioning different types of QOS guarantees to different applications on request, and should also offer a lower-cost, best-effort service to other applications.
For both best-effort and guaranteed QOS, an important issue to the users is any inherent limitations on available QOS. There are many network design choices that can (inadvertently or for reasons of cost) limit the best available QOS. Since the NII is expected to support many applications, it is important that fundamental design choices not be made that unduly restrict the best available QOS, although some portions of the NII may deliberately be provisioned in a fashion that temporarily limits QOS for cost reasons. Among the most important of these design issues are the following:
Because of QOS limitations that are either fundamental (like propagation delay) or expensive to circumvent (like wireless corruption), it is important that applications be scalable and configurable to available QOS (see below).
Delay appears to be a particular problem area for the NII. Of all the QOS parameters, delay is the only one that suffers from a fundamental limit, namely, the physical propagation delay. Propagation delay will be on the order of at least 200 to 300 milliseconds round trip for a connection halfway around the world. The desired delay for some applications is actually less than this. For example, desirable round-trip delays for synchronous continuous media applications like voice telephony and video conferencing, as well as interactive keyboard applications, are on the order of 50 to 100 milliseconds, and delays on the order of a few hundred milliseconds are significantly annoying. Thus, there is little margin for introducing delays in excess of the propagation delay without significant impairment at the greater geographic distances. Unfortunately, there are many design choices that can introduce significant delay that are already observed in present networks:
Achieving a feasible delay QOS in the NII (and especially its global extensions) acceptable to the most critical applications will require major attention in the design phase and coordination among the network service providers. Past and present trends are not encouraging in this regard, as many network technologies developed nominally for a limited geographical area have unwittingly introduced substantial delays.
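The propagation figure quoted above follows directly from the signal speed in optical fiber, roughly two-thirds the speed of light in vacuum, or about 200,000 km/s; the additional margin up to 300 milliseconds accounts for routes longer than the great-circle distance. A worked check:

```python
# Worked check of the round-trip propagation delay for a connection halfway
# around the world, assuming a fiber signal speed of about 200,000 km/s.

half_circumference_km = 20_000      # "halfway around the world"
fiber_speed_km_per_s = 200_000

one_way_ms = half_circumference_km / fiber_speed_km_per_s * 1000
round_trip_ms = 2 * one_way_ms
print(round_trip_ms)  # 200.0 ms, before any switching, queueing, or coding delay
```

Against a desired round-trip delay of 50 to 100 milliseconds for synchronous media, the entire budget is already consumed by physics at global distances, which is why every added source of delay matters.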
Another troublesome observation is that QOS guarantees will require dynamic coordination among network service providers at connection establishment. A typical connection instance will span at least several network service providers, and possibly many more, for example, local-area network and metropolitan-area network providers at both ends and a long-haul provider. QOS parameters like delay, loss, and corruption will be affected by all the providers' networks; however, the user cares only about end-to-end QOS. Achieving end-to-end QOS will require an allocation of impairments among the providers. Such an allocation should be dynamically determined at establishment, since a static allocation would require that all networks provide a QOS appropriate for the worst-case scenario, an expensive proposition. The only practical approach appears to be dynamic allocation mechanisms that relax QOS objectives for individual links to fit the circumstances, such as local congestion or wireless access. No such mechanisms are currently in place, nor is there a credible process to establish them.
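No such allocation mechanism exists today; purely as an illustration of what one might look like, the sketch below gives each provider its unavoidable delay floor plus an equal share of the remaining slack in the end-to-end budget. The allocation policy and all numbers are invented.

```python
def allocate_delay_budget(end_to_end_ms, provider_floors_ms):
    """Divide an end-to-end delay objective among providers: each receives its
    unavoidable floor plus an equal share of the remaining slack."""
    slack = end_to_end_ms - sum(provider_floors_ms)
    if slack < 0:
        raise ValueError("requested end-to-end delay cannot be met on this route")
    share = slack / len(provider_floors_ms)
    return [floor + share for floor in provider_floors_ms]

# A LAN provider at each end (2 ms floors) and a long-haul provider (40 ms floor),
# negotiated against a 100 ms end-to-end objective at establishment:
allocation = allocate_delay_budget(100.0, [2.0, 40.0, 2.0])
print(allocation)  # per-provider objectives; they sum to the 100 ms budget
```

The dynamic character argued for in the text shows up as the inputs: the floors would reflect current circumstances such as congestion or wireless access, so the allocation differs from one establishment to the next.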
A weakness of some current networks, particularly wireless ones, is lack of security and privacy. It is evident, for example, that insufficient effort has been devoted to this in cellular telephony networks in North America, as evidenced by the ease of eavesdropping and the widespread theft of service. This becomes an issue for both users and network service providers. From a user perspective, the following characteristics of the NII are important:
Achieving all these goals requires careful attention in the design phase of the NII. As an example, the transcoders already introduced in cellular telephony preclude achieving privacy through end-to-end encryption.
As previously mentioned, the maximum benefit will accrue to the user if new applications can be freely deployed and made available to all users, regardless of their terminal capabilities and the transport facilities available. In this model, the application will be dynamically configured to fit the environment (terminal and connection instances), attempting to achieve the best quality consistent with the limitations. Examples include:
Dynamic configuration requires scalability and configurability of all aspects of the application. It also requires a rich signaling and control environment that passes to the application all the information needed to scale to the environment. The mechanisms described above for negotiating and configuring QOS parameters of the transport at establishment do not by themselves provide needed information about terminal capabilities. Thus, there need to be standardized signaling capabilities among the terminal instances at establishment.
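One simple form of such configuration at establishment can be sketched as follows. The capability fields, resolution tiers, and rate figures are all hypothetical; the point is only that the application scales itself to the lesser of the terminal's and the connection's capabilities.

```python
def configure_video(terminal_max_resolution, connection_kbps):
    """Pick the best video configuration consistent with both the terminal's
    display capability (signaled at establishment) and the connection's rate."""
    # Hypothetical rate requirement for each resolution tier, best first:
    tiers = [(1080, 4000), (720, 2000), (480, 800), (240, 300)]
    for resolution, kbps in tiers:
        if resolution <= terminal_max_resolution and kbps <= connection_kbps:
            return {"resolution": resolution, "rate_kbps": kbps}
    return {"resolution": 0, "rate_kbps": 0}  # fall back to audio only

# A terminal capable of 720 lines, on a connection offering only 1,000 kb/s,
# settles on the 480-line tier:
print(configure_video(terminal_max_resolution=720, connection_kbps=1000))
```

Note that the connection rate alone is not enough to make this decision; the terminal's capability must also be signaled, which is the gap in current QOS negotiation identified above.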
The pricing model is a key to the desirability and viability of applications in the NII. It is ultimately in the best interest of the users that both network and application service providers derive revenue related to their costs. This is a difficult issue because of the great heterogeneity of networks and applications.
If the NII provides QOS guarantees as described previously, there must be a coupling of pricing and the cost of resources reserved to provide the QOS, since otherwise applications will always request the highest quality available. Since the cost of provisioning a given QOS will also depend on current traffic conditions, it is desirable that pricing be traffic dependent. Many connections will involve two or more network service providers, each provisioning identical rate parameters, but possibly contributing quite different impairments such as loss and delay to the end-to-end QOS (based on their technology, local traffic conditions, etc.). Those network service providers should derive revenue that is related to their contribution to end-to-end QOS, since otherwise they will all have an incentive to fully consume the end-to-end impairment objectives.
Thus, we conclude that the pricing to the user and division of revenue should be established based on the rate parameters, the contributions to the impairments of the individual network service providers, and local traffic conditions. This requires a complex negotiation between the application and a set of network service providers to establish an end-to-end QOS that achieves an appropriate trade-off between price and QOS, and a partitioning of that QOS among the network service providers. One approach is a broker that mediates among the application and all potential network service providers. A desirable feature of a brokerage system from the user perspective is that all available network service providers could be considered, choosing the set of providers that is most economic based on their current traffic conditions and pricing strategies.
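Such a brokerage might operate as in the following sketch, in which each segment of a route attracts quotes of (delay, price) from competing providers and the broker chooses the cheapest end-to-end combination that meets the delay budget. The data model, segment names, and figures are invented for illustration.

```python
from itertools import product

def broker_select(segment_quotes, delay_budget_ms):
    """Choose, for each route segment, one provider quote (delay_ms, price)
    so that total delay meets the budget at the lowest total price."""
    best = None
    for combination in product(*segment_quotes.values()):
        total_delay = sum(delay for delay, _ in combination)
        total_price = sum(price for _, price in combination)
        if total_delay <= delay_budget_ms and (best is None or total_price < best[0]):
            best = (total_price, combination)
    return best  # (price, chosen quotes), or None if the budget cannot be met

quotes = {
    "access-A":  [(5, 2.0), (3, 3.5)],   # each quote: (delay ms, price)
    "long-haul": [(60, 4.0), (45, 7.0)],
    "access-B":  [(5, 2.0)],
}
print(broker_select(quotes, delay_budget_ms=80))
```

In this toy case the broker accepts the slower, cheaper long-haul quote because the budget permits it; tighten the budget and the costlier, faster provider wins, which is exactly the price-versus-QOS trade-off described above.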
Looking at the NII from a user perspective, we can identify some key challenges for the future:
The greatest challenge in the NII is to allow for and encourage a variety of technologies, applications, network service providers, and applications service providers to coexist in a dynamic environment, while satisfying the user's desire for interoperability, openness to new applications, and acceptable levels of performance. This will be possible only with initial planning and coordination and ongoing cooperation among all parties involved.