The Unpredictable Certainty: White Papers (1997)

45
What the NII Could Be: A User Perspective

David G. Messerschmitt
University of California at Berkeley

Abstract

The national information infrastructure (NII) is envisioned as a national public internetwork that encompasses existing networks, such as the Internet, the public telephone network and its extensions, and CATV distribution systems and their extensions, as well as new network technologies yet to be invented. Today, these networks appear to the user to be separate and noninteroperable, in the sense that a user cannot reasonably make a telephone call over the Internet or most CATV systems, cannot reasonably watch video over the Internet or the telephone network (except at unacceptably poor levels of quality by entertainment standards), and cannot send data over the telephone network or most CATV systems (except in the limited sense of using these media for access to data networks or for point-to-point data transmission). It is clear that underlying the NII will be a collection of proprietary networks incorporating a variety of different technologies; indeed, there is general agreement that this is highly desirable. The question addressed in this white paper is what the NII will look like from the user perspective, and how it might differ from today's limited-functionality and noninteroperable networks. We address this question by describing a vision of what the NII could be from a user perspective. In particular, we describe those characteristics of the NII that we believe will be important to users, including connectivity and mobility, quality of service options, security and privacy, openness to new applications across heterogeneous transport and terminal environments, and pricing.

Introduction

This white paper is an outgrowth of the planning workshop organized by the NII 2000 Steering Committee. Representatives of a number of industries participating in the NII and its underlying technologies were present. Not surprisingly, given the great variety of industries and their respective largely independent histories and markets, the representatives were often "talking past" one another, not sharing a common vision of what the NII should be, and not sharing the common vocabulary necessary for productive discussion.

In the deployment of a massive infrastructure such as the NII, there is great danger that near-term tactical decisions made by the diverse participants in the absence of a long-term strategic vision will result in an infrastructure that precludes the broad deployment of unanticipated but important applications in the future. Such an infrastructure will not meet the needs of the users and the nation, and will offer its builders a lower return on investment than would otherwise be possible. It might even result in widespread abandonment of existing infrastructure in favor of new technologies, in similar fashion to the recent widespread and costly abandonment of partially depreciated analog communications facilities.

In this white paper, we take the perspective of the users of the future NII and ask fundamental questions about how it should appear to them. It is our belief that, near-term corporate strategies aside, an NII that best meets the future needs of the users will be the most successful, not only in its benefits to society and the nation, but also in terms of its return on investment. Thus, the full spectrum of industrial and government participants should have a shared interest in defining a strategic vision for the long term, and using that vision to influence near-term business decisions.

Looking at the NII from a long-term user perspective, we naturally envision a network that has many capabilities beyond those of any of the current networks or distribution systems. Provisioning such a broad range of capabilities would have cost implications and is economically feasible only to the extent that it provides value to the user well in excess of the incremental costs. This is problematic if one accepts one of our fundamental hypotheses, namely, that we cannot possibly anticipate all the big-hitting applications of the NII. However, it should be emphasized that it is not necessary that all near-term deployments provide all the capabilities incorporated into a strategic vision. Indeed, one critical aspect of such a vision is that it should be easy and cost effective to add new technologies and capabilities to the NII as unanticipated applications and user needs emerge. If this is achieved, it is only necessary that near-term investments be compatible with a long-term strategic vision, and hence not preclude future possibilities or force later disinvestment and widespread replacement of infrastructure. This is admittedly not straightforward but is nevertheless a worthwhile goal.

One can anticipate the NII falling somewhere on the spectrum from a collection of proprietary and noninteroperable networks (largely the situation today) to a single, universal network that appears to the user to seamlessly and effortlessly meet all user needs. We argue that from the user perspective the NII should, although consisting internally of a diversity of heterogeneous transport and terminal technologies, offer the seamless deployment of a wide range of applications and openness to new applications. Not all participants in the NII may judge this to be in their best interest, and of course they all encounter serious cost and time-to-market constraints. However, if they take into account longer-term opportunities in the course of their near-term business decisions, we believe that they, the users, and the nation will all benefit greatly in the long term. It is our hope that the NII 2000 technology deployment project will move the collective deliberations in this direction.

Terminology

First we define some consistent terminology for the remainder of this white paper.

The users of the NII are people. The NII will consist of a network (or more accurately a collection of networks) to which are attached access nodes at its edge. We distinguish between two types of devices connected to access nodes: information and applications servers, and user terminals (for simplicity, we will abbreviate these to servers and terminals). A networked application is a set of functionality that makes use of the transport services of the network and the processing power in the servers and terminals, and provides value to users. Servers make databases or information sources available to the terminals, or provide processing power required to provision applications. Users interact directly with terminals, which provide the user interface and may also provision processing power or intelligence in support of applications. Examples of terminals are desktop computers, wireless handheld PDAs, and CATV set-top boxes.

There are two generic classes of applications: user-to-user or communications applications, and user-to-server or information access applications. These can be mixed: for example, a collaborative application might combine voice telephony with database access.

The business entities involved in the operation of the NII are network service providers, who provision the transmission and switching equipment in the network, and application service providers, who provision the servers and maintain the databases involved in the applications. These may be one and the same, as is the case for the telephone application in the public telephone network. The users may be the application service provider, as when they load software purchased at a computer store on their terminals. Other entities involved are the equipment vendors, who develop, manufacture, and market the equipment (transmission, switching, terminals, etc.), and the application vendors, who develop and market applications for deployment in the NII.

Connectivity Issues

Logical Connectivity of a Network

The most basic property of a network from a user perspective is the logical connectivity it offers. The network is said to provide logical connectivity between two access nodes if it is feasible to transport data between those nodes through the network. When one access node sends data to another access node, we call the former the source and the latter the sink. It may be the case that each logically connected access node is simultaneously a source and a sink (a duplex logical connection) or that one may be exclusively a source and the other exclusively a sink (simplex logical connection).

Logical connectivity should be distinguished from network topology. The topology refers to the physical layout of the transmission media used in the network (coax, wire pairs, fiber, radio). Examples are the star topology of the public telephone network and the tree topology of a CATV system. The logical connectivity is determined not only by the topology, but also by the internal switching nodes. Generally, the user is not directly concerned with the topology of the network, although some of the important characteristics of the network (like throughput and quality of service; see below) are affected or constrained by the topology. On the other hand, the network service provider is critically concerned with the topology, as it affects costs.

An important distinction is between the possible logical connections in a network (the number of which may be astronomically large), and the actual provisioned logical connections required by a particular application (typically small in number). A similar distinction must be made between the possible applications (i.e., those that have been developed and made available to users) and those that are actually in use at a particular time. An actual application in use is called an instance of that application, and the actual provisioned logical connections in use by that application are called instances of connections.
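To make this distinction concrete, the following minimal sketch (in Python; the class and names are illustrative, not drawn from this white paper) separates the possible logical connections of a small network from the few connection instances actually provisioned for applications:

    from itertools import combinations

    class Network:
        """Toy model: full logical connectivity among access nodes, of which
        only a few connections are actually provisioned at any given time."""

        def __init__(self, access_nodes):
            self.access_nodes = set(access_nodes)
            self.provisioned = []  # connection instances currently in use

        def possible_connections(self):
            # Every unordered pair of access nodes is a possible logical
            # connection; this set grows quadratically with the node count.
            return list(combinations(sorted(self.access_nodes), 2))

        def provision(self, source, sink, duplex=True):
            # Create a connection instance for a particular application instance.
            instance = {"source": source, "sink": sink, "duplex": duplex}
            self.provisioned.append(instance)
            return instance

    net = Network(["terminal-A", "terminal-B", "server-C"])
    print(len(net.possible_connections()))   # 3 possible logical connections
    net.provision("terminal-A", "server-C")  # 1 provisioned connection instance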

Application Connectivity

There are several important types of connections that arise in the context of specific applications:

A logical point-to-point connection, in which two access nodes are connected in either simplex or duplex fashion. One node in a point-to-point connection may be a source or sink or both, the latter in a duplex connection.

A logical broadcast connection, in which a single source is connected to two or more sinks. Within the network, this type of connection can be provisioned in different ways. Simulcast implies separate component connections from source to each sink, and multicast refers to a tree structure (where network resources are shared among the component connections). The distinction between these alternatives is generally not of immediate concern to users, who see only indirect effects (cost, quality of service, etc.).

A logical multisource connection, in which two or more sources are connected to a single sink. A distinction analogous to multicast vs. simulcast does not apply to multisource, since there is generally no advantage to sharing resources among the components of a multisource connection.

Multicast or multisource connections are by their nature simplex. If there are only two access nodes, the connection is necessarily point-to-point. If more than two access nodes are involved, and if for example every access node can send information to and receive information from the remaining nodes, then the connectivity can be thought of as a combination of simplex multisource connections (one to each node) and simplex multicast connections (one from each source). Many other combinations are possible.

From a technology standpoint, multisource connectivity merely requires flexibility in the number of simultaneous point-to-point connections to a given sink, which is a natural capability of packet networks. Similarly, simulcast connectivity requires flexibility in the number of simultaneous point-to-point connections to a source. Multicast connectivity, on the other hand, while beneficial in its sparing use of resources and the only scalable approach to broadcast, requires fundamental capabilities anticipated in the design and provisioning of the network.
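A small sketch may help to illustrate why multicast is the resource-sparing (and scalable) way to provision a broadcast connection. The topology and counts below are invented for illustration; they are not drawn from this white paper.

    # One source, three sinks, on a small tree: source -> hub, hub -> each sink.
    sinks = ["sink-1", "sink-2", "sink-3"]
    path_to = {s: [("source", "hub"), ("hub", s)] for s in sinks}

    # Simulcast: a separate component connection per sink, so the shared
    # source -> hub link carries one copy of the stream per sink.
    simulcast_link_uses = [link for s in sinks for link in path_to[s]]
    trunk_copies_simulcast = simulcast_link_uses.count(("source", "hub"))

    # Multicast: the stream traverses each tree link once and is replicated
    # only where the tree branches (at the hub).
    multicast_link_uses = set(simulcast_link_uses)
    trunk_copies_multicast = 1

    print(len(simulcast_link_uses), len(multicast_link_uses))  # 6 vs. 4 link uses
    print(trunk_copies_simulcast, trunk_copies_multicast)      # 3 vs. 1 trunk copies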

User Perspective

Connectivity

From the user perspective, it is desirable to have full logical connectivity in a network. Any limitations on connectivity restrict the functionality and availability of both information access and communications applications. For example:

A user who purchases a telephony application from one application service provider wants the option to call all other telephones, whether they are connected to the telephone network, a CATV network, the Internet, etc. Any application service provider who restricts destinations, say to only its own subscribers, will be at a disadvantage.

The telephone example extends readily to other communications applications. The user will find much less value if the application supplier limits connectivity to a proper subset of those other users who could participate in that application (i.e., who have appropriate terminals, etc.).

A user with appropriate terminals to access a type of information access application naturally desires connectivity to every available instance of that type of application. For example, a user with the terminal capability to view a video presentation would prefer to maximize the leverage of the investment in terminal equipment by having access to the maximum range of source material.

Similarly, the user would like to see all three types of connections (point-to-point, broadcast, and multisource), since eliminating any one of them will preclude valued applications. For example:

A "conference telephone call" and "video teleconference" are examples of communications applications that require both multisource and broadcast connections. They are multisource because one participant will want to see and/or hear two or more other participants simultaneously. They are broadcast because any one participant will want to be seen by all the other participants.

A remote learning class or seminar requires broadcast connectivity because many participants may want to see the presentation, and may also be multisource if the participants have audio or video feedback to the instructor.

The UNIX X-windows graphical user interface illustrates the value of running applications on two or more servers and displaying the results on a single terminal. This requires multisource connectivity.

Network or applications service providers may view it as in their best interest to restrict the range of applications, information servers, or application service providers that they make available to their subscribers. However, the experience of the computer industry makes it clear that users will choose options with greater flexibility, given the choice and appropriate pricing. For example, restricted-functionality appliances such as the stand-alone word processor quickly lost market share to the personal computer, which offered access to a broad range of applications.

Conversely, in an environment with greater logical connectivity, it becomes more economically viable for new and innovative applications to reach the market. Application service providers with access to a broad range of users (not restricted to the limited market of subscribers to a particular service provider) can quickly exploit economies of scale. Again, the computer industry offers valuable lessons. The personal computer created a large embedded market for new applications running on widely deployed terminals. Applications vendors targeting the most widely deployed architectures gained the upper hand because of the larger development investments they were able to make.

In conclusion, greater logical connectivity and more connectivity options offer more value to users and hence make the network service provider more economically viable; in addition, there are natural market forces that favor application service providers that target those high-connectivity networks.

Mobility

The classification of connections is simplest to apply where users are in fixed locations. In reality, users are mobile. They may be satisfied with accessing the network from a fixed location, which implies that they can access it only at those times they are physically in that location. Increasingly, however, users expect to be able to access the network more flexibly. There are several cases:

Fixed location, where an application is provisioned to be accessed from a specific access node. Wired telephony is an example.

Flexible location but static access node, where the user is allowed to choose the access node from among different geographic locations, but that access node is not allowed to change during a single application instance. An example is wireless access to the network with the assumption that the user remains within range of a single base station.

Moving location and access node, in which a user with wireless access is allowed to move from the coverage of one base station to another for the duration of an application instance. This allows the user to be in motion, such as on foot or in a moving vehicle.

The flexible and moving location options require high logical connectivity in the network. Thus, greater logical connectivity provides great value to users who desire to be mobile. As witnessed by the rapid growth of cellular telephony, this is a large proportion of users, at least for telephone, data, and document applications.

Like multicast forms of broadcast connections, the moving location option requires fundamental capabilities in the network that must be anticipated in its design and provisioning, since connection instances must be dynamically reconfigured. This option makes much more sense for some applications than others. For example, it is reasonable to conduct a phone conversation while in motion, but more difficult and perhaps even dangerous to watch a video presentation or conduct a more interactive application. Even the latter becomes feasible, however, for users in vehicles driven or piloted by others.

Openness to New Applications

Aside from the logical connectivity of the network, the second most important characteristic to users is the available range of applications. It is a given that the application possibilities cannot be anticipated in advance, and thus the network should be able to accommodate new applications.

Again the evolution of the computer industry offers useful insights. Because the desktop computer was a programmable device, a plethora of new applications was invented long after the architecture was established. Equally important was the availability of the market to many application vendors, which led to rapid advancement. A primary driving force for the desktop computer was that it freed the user from the slow-moving bureaucracy of the computer center and made directly available a wealth of willing application vendors.

The Internet was architected with a similar objective. The network functionality is kept to a minimum, with no capability other than the basic transport of packets from one access node to another embedded within the network. Beyond these minimal capabilities, the intelligence and functionality required to implement particular applications are realized in the servers and terminals. This architecture separates the development and deployment of applications from the design and provisioning of the network itself. New or improved applications can be deployed easily without modifications or added capabilities within the network, as long as they comply with any limitations imposed by the network design (see "Quality of Service," below). This characteristic has been the key to the rapid evolution of Internet applications, and in turn to the success and rapid growth of the Internet itself.

To be of maximum benefit to users, we believe the NII should be designed according to a philosophy similar to that for the Internet (although without some of its limitations). One can summarize these characteristics as follows:

Provide full logical connectivity among all access nodes, and do not limit the number of logical connections available to any single access node. To do otherwise limits future applications.

Do not design the NII or portions of the NII around specific applications, thereby limiting its capabilities to support future unanticipated applications. Rather, realize within the network the minimum capabilities required across all present and future applications (to the extent it is possible to anticipate those capabilities).

Realize the primary application functionality in the terminals or servers, or alternatively at access points to the network (but within the domain of the network service provider), rather than internal to the network itself. This way new applications can be deployed by adding functionality at only those access nodes associated with users willing to pay for those applications, without the obstacle of making uneconomic modifications throughout the network infrastructure.

Since standardization presents a potential obstacle to the rapid deployment of innovative applications, consciously limit the role of standardization to the basic network infrastructure. Do not attempt to standardize applications, but rather allow them to be provisioned and configured dynamically as needed.

Even when the NII is designed according to this philosophy, there is still a major obstacle to the economic deployment of new communications (as opposed to database) applications: the community of interest problem. Before one user is willing to purchase an application, it is inherent in a network environment that there must be a community of other users able to participate in that application. For example, an isolated user can usefully benefit from a shrinkwrapped personal computer application purchased locally, but in a networked environment may depend on other interested users who have purchased the same application. This can place a daunting obstacle in the way of new applications and limit the economic return to application vendors or service providers. Fortunately, there is a solution. If applications are largely defined in software rather than hardware primitives, they can be dynamically deployed as needed to terminals participating in the application. We call this dynamic application deployment.

A crucial element of the NII required to support dynamic application deployment is the ability to transfer software application descriptions in the establishment phase of an application instance. Deployment can also occur during an application instance (if it is desired to change or append the application functionality). This requires a reliable connection to the terminal, even where other aspects of the application (such as audio or video) may not require reliable protocols. Since such application descriptions are likely to be large, the user is also better served if there is a broadband connection for this purpose to limit the time duration of the establishment phase.
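As a rough illustration of dynamic application deployment, the sketch below packages an application description and transfers it to a terminal over a reliable channel at establishment. The packaging format, names, and the stand-in transport function are assumptions made for the example, not a proposed protocol.

    import json, zlib

    def package_application(name, version, modules):
        """Build a compressed application description for transfer."""
        description = {"name": name, "version": version, "modules": modules}
        return zlib.compress(json.dumps(description).encode("utf-8"))

    def deploy_to_terminal(terminal, packaged, send_reliably):
        """Transfer the description reliably, then instantiate it on the terminal."""
        send_reliably(terminal, packaged)   # must be lossless, unlike audio or video
        description = json.loads(zlib.decompress(packaged))
        terminal.setdefault("installed", []).append(description["name"])
        return description

    terminal = {"id": "set-top-42"}
    pkg = package_application("shared-whiteboard", "1.0", ["ui", "sync"])
    deploy_to_terminal(terminal, pkg, lambda t, data: None)  # stand-in transport
    print(terminal["installed"])                             # ['shared-whiteboard']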

Flexibility in deployment of applications also requires a full suite of control primitives as a part of the network control and signaling interface to the user terminal. Anticipating all the capabilities needed here is a key design element of the NII. Such a design also needs to control the complexity inherent in such a heterogeneous environment, for example by defining an independent "universal" signaling layer together with adaptation layers to different network technologies and preexisting signaling systems.
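One plausible way to structure such a "universal" signaling layer is an adapter arrangement: applications issue a single set of control primitives, and per-technology adaptation layers translate them to preexisting signaling systems. The classes and message strings below are invented for illustration only.

    class SignalingAdapter:
        """Adaptation layer between universal control primitives and one
        network technology's native signaling."""
        def setup(self, source, sink, qos):
            raise NotImplementedError

    class TelephonySignalingAdapter(SignalingAdapter):
        def setup(self, source, sink, qos):
            return f"circuit setup {source}->{sink}, delay <= {qos['delay_ms']} ms"

    class PacketSignalingAdapter(SignalingAdapter):
        def setup(self, source, sink, qos):
            return f"reservation request {source}->{sink}, {qos['rate_kbps']} kb/s"

    def universal_setup(adapter, source, sink, qos):
        # One application-facing primitive, many network-facing realizations.
        return adapter.setup(source, sink, qos)

    qos = {"delay_ms": 100, "rate_kbps": 64}
    print(universal_setup(TelephonySignalingAdapter(), "A", "B", qos))
    print(universal_setup(PacketSignalingAdapter(), "A", "B", qos))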

Quality of Service

Many applications call for control over aspects of the quality of service (QOS) provided by the network. From the user and application perspective, QOS parameters include the following:

The setup time in establishment, including configuration of the connection instances, transport of the application description to the participating terminals, etc.

The frequency with which an application is refused by the network (due to failures, traffic overload, etc.).

The interactive delay through the network (the time from user action to appropriate application reaction).

The subjective quality of application components like audio and video, which is affected not only by quantization and network loss artifacts, but also by the delay introduced in the transport and synchronization of the audio or video. The subjective quality depends not only on network QOS characteristics, but also on the characteristics of the application implementation in the terminals, such as the algorithms used for audio or video compression.

The user is of course also concerned with the pricing of the application, which is likely to be related to the QOS it requires. The QOS parameters of the network itself affect users and applications, and include:

The throughput of the network, in both directions in the case of a duplex connection;

The delay, variation in delay, and temporal characteristics of delay variation in transport through the network;

The frequency with which losses occur, and the temporal characteristics of those losses (such as whether they are bunched together or spread out); and

The frequency of corruption of data, and the temporal characteristics of that corruption. (For data transport, corrupted data must be discarded, whereas in continuous-media transport such as audio and video, corrupted data can still be used but cause subjective impairments.)
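One way to picture these parameters is as a descriptor an application could hand to the network at establishment. The field names and example numbers below are illustrative, not a proposed standard.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class QosDescriptor:
        throughput_kbps_forward: float
        throughput_kbps_reverse: float           # both directions of a duplex connection
        max_delay_ms: Optional[float] = None     # None means best effort for that parameter
        max_delay_variation_ms: Optional[float] = None
        max_loss_rate: Optional[float] = None    # fraction of data lost
        max_corruption_rate: Optional[float] = None

    # A hypothetical voice-telephony request: modest rate, tight delay bounds.
    voice = QosDescriptor(64, 64, max_delay_ms=100, max_delay_variation_ms=20,
                          max_loss_rate=0.01, max_corruption_rate=0.001)
    print(voice)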

There are two distinct philosophies of network design:

The network provides guarantees on some QOS parameters. The quantitative guarantees are established by negotiation between application and network at establishment, and appropriate resources within the network are reserved for the connection instances to ensure that the guarantees will be satisfied.

The network provides best-effort transport, in which resources are provided to a connection instance on an as-available basis, without guarantee.

Rarely does a network strictly follow one of these models. For example, the Internet offers as one option guaranteed delivery (zero loss) service, but does not guarantee against delay. Conversely, the public telephone network offers delay guarantees, but does not guarantee against corruption. Even for a single QOS parameter, best-effort and guarantees can be mixed for different connections, by reserving network resources for some connection instances and providing only leftover resources to other connection instances. QOS guarantees have a cost associated with them, principally in reserving resources, making them unavailable to other connection instances even when unused. There is also a substantial increase in the complexity of the network associated with QOS guarantees. The QOS of the network can sometimes be modified more simply in the access nodes, for example by introducing forward error-correction coding to reduce the corruption probability (at the expense of added delay).
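The mixing of guaranteed and best-effort service on a single resource can be sketched as a toy admission-control rule: guaranteed connection instances reserve capacity at establishment (and are refused if the reservation does not fit), while best-effort instances use whatever is left over. The capacities and rates below are illustrative.

    class Link:
        def __init__(self, capacity_kbps):
            self.capacity = capacity_kbps
            self.reserved = 0.0

        def admit_guaranteed(self, rate_kbps):
            # Reserve capacity only if the guarantee can be honored; otherwise
            # refuse the connection (one of the QOS parameters users perceive).
            if self.reserved + rate_kbps > self.capacity:
                return False
            self.reserved += rate_kbps
            return True

        def best_effort_share(self):
            # Best-effort traffic sees only the leftover, unreserved capacity.
            return self.capacity - self.reserved

    link = Link(1500)
    print(link.admit_guaranteed(384))    # True: a video connection is reserved
    print(link.admit_guaranteed(1200))   # False: would exceed the link capacity
    print(link.best_effort_share())      # 1116 kb/s left for best-effort traffic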

There is considerable controversy over the relative merits of best-effort vs. guaranteed QOS transport. It appears that both models have merit and may reasonably coexist. QOS guarantees will be mandatory for some applications: consider the possible consequences of unanticipated interactive delay in a remote telesurgery application! It has not yet been established or demonstrated that best-effort transport can achieve entertainment-quality video. On the other hand, the simplicity and lower cost of best-effort transport seem desirable for other applications, like interactive graphics. The QOS requirements (or lack thereof) vary widely across different applications. Thus, the NII should be capable of provisioning different types of QOS guarantees to different applications on request, and should also offer a lower-cost, best-effort service to other applications.

For both best-effort and guaranteed QOS, an important issue to the users is any inherent limitations on available QOS. There are many network design choices that can (inadvertently or for reasons of cost) limit the best available QOS. Since the NII is expected to support many applications, it is important that fundamental design choices not be made that unduly restrict the best available QOS, although some portions of the NII may deliberately be provisioned in a fashion that temporarily limits QOS for cost reasons. Among the most important of these design issues are the following:

The network topology can substantially increase the lowest available delay.

The choice of physical layer technology in conjunction with topology can severely limit QOS. For example, wireless and wire-pair access technologies can limit the highest available rate, as can multiaccess topologies (wireless reverse links or tree-topology CATV distribution system reverse links).

Achieving high reliability on wireless access links can be expensive (in terms of system capacity), especially in the worst case and especially in the context of moving terminals.

Because of QOS limitations that are either fundamental (like propagation delay) or expensive to circumvent (like wireless corruption), it is important that applications be scalable and configurable to available QOS (see below).

Delay appears to be a particular problem area for the NII. Of all the QOS parameters, delay is the only one that suffers from a fundamental limit, namely, the physical propagation delay. Propagation delay will be on the order of at least 200 to 300 milliseconds round trip for a connection halfway around the world. The desired delay for some applications is actually less than this. For example, desirable round-trip delays for synchronous continuous media applications like voice telephony and video conferencing, as well as interactive keyboard applications, are on the order of 50 to 100 milliseconds, and delays on the order of a few hundred milliseconds are significantly annoying. Thus, there is little margin for introducing delays in excess of the propagation delay without significant impairment at the greater geographic distances. Unfortunately, there are many design choices that can introduce significant delay that are already observed in present networks:

Packet networks gain capacity through statistical multiplexing at the cost of queuing delay at switching nodes, and this queuing delay increases substantially during periods of congestion. A given connection instance may traverse many such switches in a network with a "sparse" topology, and thus there is an unfortunate tendency for propagation and queuing delay to increase in tandem.

A high degree of logical connectivity can be achieved in virtually any network topology, including those with sparse physical connectivity, by adding switching. However, as previously noted, this switching can itself introduce queuing delay. Beyond this, the physical path traversed by the data can be considerably lengthened, increasing the propagation delay as well. This is a flaw in any approach involving a collection of "overlay" subnetworks with Internet gateways.

In packet networks, large packet headers encourage long average packet lengths at high network utilization. For low-throughput applications like voice and audio, the packet assembly time for large packets introduces a large delay (independent of network throughput). An example is the Internet Protocol, which has a large packet header (scheduled to get even larger in the future).

It is tempting to insert transcoders from one compression standard to another in the network for audio and video applications. These transcoders force delays to add across network links on a worst-case (as opposed to statistical) basis, and also add significant signal-processing delays. For example, digital cellular base station voice transcoders add a one-way signal-processing delay of about 80 milliseconds.

Achieving a feasible delay QOS in the NII (and especially its global extensions) acceptable to the most critical applications will require major attention in the design phase and coordination among the network service providers. Past and present trends are not encouraging in this regard, as many network technologies developed nominally for a limited geographical area have unwittingly introduced substantial delays.
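A back-of-the-envelope calculation shows how little of the interactive delay budget remains after propagation alone, assuming a signal speed of roughly two-thirds of c in optical fiber and a path of about half the Earth's circumference each way (the numbers are approximations consistent with the figures cited above):

    c = 3.0e8                    # speed of light in vacuum, m/s
    v_fiber = (2.0 / 3.0) * c    # approximate propagation speed in fiber, m/s
    one_way_m = 20_000e3         # roughly half the Earth's circumference, in meters

    round_trip_ms = 2 * one_way_m / v_fiber * 1000
    print(f"{round_trip_ms:.0f} ms round trip")   # ~200 ms from propagation alone,
    # before any queuing, packet assembly, transcoding, or routing detours, against
    # a 50- to 100-millisecond target for interactive applications.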

Another troublesome observation is that QOS guarantees will require dynamic coordination among network service providers at connection establishment. A typical connection instance will span at least several network service providers, and possibly many more, for example, local-area network and metropolitan-area network providers at both ends and a long-haul provider. QOS parameters like delay, loss, and corruption will be affected by all the providers' networks; however, the user cares only about end-to-end QOS. Achieving end-to-end QOS will require an allocation of impairments among the providers. Such an allocation should be dynamically determined at establishment, since a static allocation will require that all networks provide a QOS appropriate for the worst-case scenario, an expensive proposition. The only practical approach appears to be dynamic allocation mechanisms that relax QOS objectives for individual links to fit the circumstances, such as local congestion or wireless access. There are no such mechanisms in place, nor a credible process to establish such mechanisms.
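The kind of dynamic allocation mechanism the preceding paragraph calls for might look, in caricature, like the sketch below: each provider on the path reports the delay floor it cannot avoid (propagation, current congestion, wireless access), and the remaining end-to-end slack is divided among them. The allocation rule and the numbers are purely illustrative.

    def allocate_delay_budget(end_to_end_budget_ms, provider_floors_ms):
        """Divide an end-to-end delay budget among providers at establishment."""
        floor_total = sum(provider_floors_ms.values())
        if floor_total > end_to_end_budget_ms:
            return None                        # the connection must be refused
        slack = (end_to_end_budget_ms - floor_total) / len(provider_floors_ms)
        return {p: floor + slack for p, floor in provider_floors_ms.items()}

    # A LAN provider, a congested metropolitan provider, and a long-haul provider.
    floors = {"lan": 2.0, "metro": 15.0, "long-haul": 60.0}
    print(allocate_delay_budget(100.0, floors))
    # {'lan': 9.67, 'metro': 22.67, 'long-haul': 67.67}, approximately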

Security and Privacy

A weakness of some current networks, particularly wireless ones, is lack of security and privacy. Insufficient effort has evidently been devoted to this in cellular telephony networks in North America, as shown by the ease of eavesdropping and the widespread theft of service. This becomes an issue for both users and network service providers. From a user perspective, the following characteristics of the NII are important:

Freedom from casual eavesdropping;

The capability to make eavesdropping infeasible for sensitive applications, even if at extra cost;

Freedom from theft of services (obviously of interest to service providers as well); and

Inability to surreptitiously track the identity or movements of users.

Achieving all these goals requires careful attention in the design phase of the NII. As an example, transcoders already introduced in cellular telephony preclude end-to-end encryption and the privacy it would provide.

Application Scalability and Configurability

As previously mentioned, the maximum benefit will accrue to the user if new applications can be freely deployed and made available to all users, regardless of their terminal capabilities and the transport facilities available. In this model, the application will be dynamically configured to fit the environment (terminal and connection instances), attempting to achieve the best quality consistent with the limitations. Examples include:

Scalability to the connection QOS. For example, a video application may be configured to lower resolution or subjective quality in the case of wireless access, as opposed to a backbone-only connection. It is not desirable for the user that an application be precluded by, for example, wireless access; rather, the user would prefer that some QOS parameters (and thereby subjective quality) be compromised.

Scalability to the terminal capabilities. For example, a video application will be configured to a compression algorithm requiring less processing (trading that off against lower quality, resolution, or greater transport bandwidth) should the originating or receiving terminal instances have limited processing. It is not desirable for the user that applications be limited to terminals provided by particular manufacturers or with particular capabilities.

Dynamic configuration requires scalability and configurability of all aspects of the application. It also requires a rich signaling and control environment that passes to the application all the information needed to scale to the environment. The mechanisms described above for negotiating and configuring QOS parameters of the transport at establishment do not by themselves provide needed information about terminal capabilities. Thus, there need to be standardized signaling capabilities among the terminal instances at establishment.
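As a small sketch of this kind of configuration, the function below chooses a video configuration from the terminal capability and connection QOS signaled at establishment, rather than refusing the application outright. The thresholds, capability measure, and profile names are invented for illustration.

    def configure_video(terminal_mips, link_kbps):
        """Pick the best video configuration the terminal and connection allow."""
        if terminal_mips < 50 or link_kbps < 128:
            # Limited terminal processing or wireless-grade access: trade
            # resolution and subjective quality for feasibility.
            return {"codec": "low-complexity", "resolution": "176x144"}
        if link_kbps < 1000:
            return {"codec": "standard", "resolution": "352x288"}
        return {"codec": "standard", "resolution": "704x576"}

    print(configure_video(terminal_mips=40, link_kbps=384))    # scaled to the terminal
    print(configure_video(terminal_mips=200, link_kbps=2000))  # full configuration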

Pricing

The pricing model is a key to the desirability and viability of applications in the NII. It is ultimately in the best interest of the users that both network and application service providers derive revenue related to their costs. This is a difficult issue because of the great heterogeneity of networks and applications.

If the NII provides QOS guarantees as described previously, there must be a coupling of pricing and the cost of resources reserved to provide the QOS, since otherwise applications will always request the highest quality available. Since the cost of provisioning a given QOS will also depend on current traffic conditions, it is desirable that pricing be traffic dependent. Many connections will involve two or more network service providers, each provisioning identical rate parameters, but possibly contributing quite different impairments such as loss and delay to the end-to-end QOS (based on their technology, local traffic conditions, etc.). Those network service providers should derive revenue that is related to their contribution to end-to-end QOS, since otherwise they will all have an incentive to fully consume the end-to-end impairment objectives.

Thus, we conclude that the pricing to the user and division of revenue should be established based on the rate parameters, the contributions to the impairments of the individual network service providers, and local traffic conditions. This requires a complex negotiation between the application and a set of network service providers to establish an end-to-end QOS that achieves an appropriate trade-off between price and QOS, and a partitioning of that QOS among the network service providers. One approach is a broker that mediates among the application and all potential network service providers. A desirable feature of a brokerage system from the user perspective is that all available network service providers could be considered, choosing the set of providers that is most economic based on their current traffic conditions and pricing strategies.
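In caricature, such a broker could enumerate the candidate providers for each segment of the path and pick the cheapest combination whose summed impairments still meet the end-to-end requirement. The provider names, delays, and prices below are invented, and the brute-force search is only a sketch.

    from itertools import product

    def broker(segments, end_to_end_delay_ms):
        """segments: {segment_name: [(provider, delay_ms, price), ...]}"""
        best = None
        for choice in product(*segments.values()):
            delay = sum(offer[1] for offer in choice)
            price = sum(offer[2] for offer in choice)
            if delay <= end_to_end_delay_ms and (best is None or price < best[1]):
                best = (choice, price)
        return best

    segments = {
        "access":    [("metro-A", 20, 3.0), ("metro-B", 35, 1.5)],
        "long-haul": [("carrier-X", 70, 4.0), ("carrier-Y", 55, 6.0)],
    }
    print(broker(segments, end_to_end_delay_ms=100))
    # (('metro-A', 20, 3.0), ('carrier-X', 70, 4.0)) at total price 7.0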

Conclusions

Looking at the NII from a user perspective, we can identify some key challenges for the future:

To meet a wide range of application needs and provide flexibility for the future, individual network service providers and their equipment vendors need to take a general perspective, as opposed to developing and deploying technologies defined around narrow, currently defined applications.

Major cooperation is needed among network service providers to coordinate their design and deployment strategies in areas like end-to-end transport protocols and signaling capabilities that allow dynamic allocation of end-to-end QOS impairments, support scalability and configurability of applications, and provide desired levels of privacy and security.

Overall planning is needed, with specific action on the part of individual network service providers, to be sure that near-term decisions do not compromise end-to-end QOS objectives in the NII and especially its global extensions.

The greatest challenge in the NII is to allow for and encourage a variety of technologies, applications, network service providers, and applications service providers to coexist in a dynamic environment, while satisfying the user's desire for interoperability, openness to new applications, and acceptable levels of performance. This will be possible only with initial planning and coordination and ongoing cooperation among all parties involved.
