
The Internet's Coming of Age (2001) / Chapter Skim

2 Scaling Up the Internet and Making It More Reliable and Robust
Pages 53-106

The Chapter Skim interface presents what we've algorithmically identified as the most significant single chunk of text within every page in the chapter.


From page 53...
... Reflecting demand for its capabilities, the Internet is expected to grow substantially worldwide in terms of users, devices, and applications. A dramatic increase in the number of users and networked devices gives rise to questions of whether the Internet's present addressing scheme can accommodate the demand and whether the Internet community's proposed solution, IPv6, could, in fact, be deployed to remedy the situation.
From page 54...
... Over 100 million people report that they are Internet users in the United States.2 Overseas, while the current level of Internet penetration differs widely from country to country, many countries show rates of growth comparable to or exceeding the rapid growth seen in the United States,3 so it is reasonable to anticipate that similar growth curves will be seen in other less-penetrated countries, shifted in time, reflecting when the early adoption phase began.
From page 55...
... by deploying servers throughout the Internet. Cache servers keep local copies of frequently used content, and locally placed streaming servers compensate for the lack of guarantees against delay.
From page 56...
... , not to the core Internet protocols. Early versions of HTTP relied on a large number of short TCP sessions, adding considerable overhead to the retrieval of a page containing many elements and preventing TCP's congestion control mechanisms from working.5 An update to the protocol, HTTP 1.1, adopted as an Internet standard by the IETF in 1999,6 finally fixed enough of the problem to reduce the pressure on the network infrastructure, but the protocol still lacks many of the right properties for use at massive 5Though it took some time to launch an update, the shortcomings of HTTP 1.0 were recognized early on.
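As a rough illustration of the difference (not an example from the report), the sketch below contrasts opening a fresh TCP connection for every page element, as early HTTP did, with reusing one persistent connection in the HTTP 1.1 style. It uses Python's standard http.client module; the host name and paths are placeholders.

```python
import http.client

paths = ["/", "/style.css", "/logo.png"]   # hypothetical elements of one page

# HTTP 1.0 style: a new TCP connection (and a fresh congestion-control
# ramp-up) for every element retrieved.
for path in paths:
    conn = http.client.HTTPConnection("example.com", 80, timeout=5)
    conn.request("GET", path)
    conn.getresponse().read()
    conn.close()

# HTTP 1.1 style: a single persistent connection carries all the requests,
# so TCP's congestion control operates across the whole page retrieval.
conn = http.client.HTTPConnection("example.com", 80, timeout=5)
for path in paths:
    conn.request("GET", path)
    conn.getresponse().read()
conn.close()
```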
From page 57...
... Resolving this situation requires not merely defining an appropriate protocol but also researching a hard routing question: how to coalesce routing information of multiple groups into manageable aggregates without generating too much inefficiency. 7For example, Internet traffic statistics for the vBNS, a research backbone, show that about two-thirds of TCP flows were HTTP.
From page 58...
... , indirection provides users with portability if they wish to switch Internet providers. While most users receive IP address allocations from their ISP and thus have to change address if they change ISP, DNS names are controlled by the user; a change of provider requires only that the address pointed to by the DNS entry be changed.
From page 59...
... To access named objects, Internet sessions start with a transaction with a name server, known as name resolution, in which a domain name such as www.example.com is translated into the numerical IP address at which the resource is located, such as 128.9.176.32. Assuming that the local name server has not previously stored the requisite information locally (see the discussion of caching, below)
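A minimal sketch of that resolution step, using Python's standard socket module; whether the answer comes from a cache or from a walk of the DNS hierarchy depends entirely on the local resolver's state.

```python
import socket

name = "www.example.com"
# getaddrinfo hands the name to the local resolver, which returns one or
# more numerical addresses the application can then connect to.
for family, _, _, _, sockaddr in socket.getaddrinfo(name, 80, proto=socket.IPPROTO_TCP):
    print(name, "->", sockaddr[0])
```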
From page 60...
... These intercept and divert data packets going to a particular address to one of a number of servers that contain the same content. Because it interposes information processing outside the control of either the user's computer or the server he is
From page 61...
... level domains currently limited to one national domain per country (e.g., .fr for France), plus a limited number of global domains (e.g., .com and .org)
From page 62...
... The multistage process required to find the address of a target, repeated for many Web page accesses by millions of Internet users, can result in a heavy load on the servers one level down from the top of the tree. If the name servers were to be overwhelmed on a persistent basis, all Internet transactions that make use of domain names (i.e., virtually all Internet transactions)
From page 63...
... (Even after efforts were made to shorten host names, the number of root servers remains limited to 13.) Once the maximum number of servers that will fit within the single-packet constraint has been deployed, increased load in that domain can only be dealt with by increasing the capacity and processing power of each of the individual .com name servers.
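The single-packet constraint can be sketched with back-of-the-envelope arithmetic; the per-record sizes below are rough assumptions for illustration, not figures from the report.

```python
# Approximate sizes, in bytes, for a traditional DNS-over-UDP response.
UDP_PAYLOAD_LIMIT = 512   # classic limit on a DNS reply carried in one UDP packet
HEADER = 12               # fixed DNS message header
QUESTION = 5              # a query for the root zone "."
PER_SERVER = 31           # ~15 bytes for an NS record (with name compression)
                          # plus ~16 bytes for its A "glue" record (assumed sizes)

servers = 13
total = HEADER + QUESTION + servers * PER_SERVER
print(total, "bytes for", servers, "servers; limit is", UDP_PAYLOAD_LIMIT, "bytes")
# With these rough figures, 13 servers (~420 bytes) fit; many more would not.
```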
From page 64...
... and the Internet community's traditional resistance to changing something that is working, although only poorly, is probably responsible for impeding deployment of any one of these proposals. There is reason to hope that rising pressure for new capabilities that the DNS cannot easily accommodate, such as the ability to support non-Roman alphabet characters in domain names, could unlock the problem and speed deployment of a directory-based solution that would alleviate scaling pressures.
From page 65...
... . They, in turn, distribute smaller blocks of addresses to Internet service providers.
From page 66...
... Today this requires tables that hold on the order of 75,000 entries.16 As the Internet grows, the routing tables are sure to grow, but the limited capabilities of today's routers dictate that this growth must be constrained. The first and most obvious consideration is that the size of the table cannot exceed the memory available in the routers.
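A back-of-the-envelope sketch of that memory consideration; the bytes-per-entry figure is an assumption chosen only to make the arithmetic concrete.

```python
entries = 75_000          # order of magnitude cited above
bytes_per_entry = 64      # assumed: prefix, mask, next hop, and associated attributes
table_bytes = entries * bytes_per_entry
print(f"about {table_bytes / 2**20:.1f} MiB for {entries:,} routes")
# Growth in the number of routes translates directly into growth in this figure,
# which must stay within the memory available in deployed routers.
```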
From page 67...
... This involves, for example, Internet service providers consolidating address space allocations into fewer, larger routing prefixes, which are then allocated to customers (these, in turn, may include smaller Internet service providers) out of the service provider's block of addresses.
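The effect of such aggregation can be illustrated with Python's ipaddress module; the prefixes below are documentation examples, not real allocations.

```python
import ipaddress

# Four adjacent /24 customer allocations drawn from one provider block...
customer_blocks = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("198.51.101.0/24"),
    ipaddress.ip_network("198.51.102.0/24"),
    ipaddress.ip_network("198.51.103.0/24"),
]

# ...collapse into a single /22 announcement, i.e., one routing table entry
# instead of four.
print(list(ipaddress.collapse_addresses(customer_blocks)))
# [IPv4Network('198.51.100.0/22')]
```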
From page 68...
... Deployment of CIDR, together with the adoption of a restrictive address allocation policy by the registries and the use of network address translation, has contained the growth of the routing tables, and the growth in the global routing table has by and large been slow and linear (Figure 2.2). Note, however, that the most recent data displayed in this figure suggest that CIDR and restrictive allocation policies have not entirely alleviated pressures on the routing table size and that table size has recently grown faster than linearly.
From page 69...
... As a result, there have been calls for users to be provided again with portable addresses so as to minimize switching costs. However, because addresses would no longer be aggregated within the blocks assigned to an ISP network, allocating portable addresses in small blocks to small networks would trigger a dramatic increase in the size of the routing tables.
From page 70...
... But NAT had an unintended side effect: the explosion of private addressing. This widespread use had the effect of letting the wind out of IPv6's sails, as the perception of a crisis requiring a wholesale replacement of the Internet Protocol faded.
From page 71...
... The present Internet protocol, IPv4, provides 32-bit-long addresses, which translates into an address "space" of about 4.3 billion unique addresses. For historical reasons, about seven-eighths of this address space is available for use as addresses for connecting devices to the Internet; the remainder is reserved for multicast and experimental addresses.
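The figures in this paragraph follow from simple arithmetic, sketched below.

```python
total_addresses = 2 ** 32          # 32-bit address field
usable_fraction = 7 / 8            # portion available for connecting devices

print(f"{total_addresses:,} total addresses")                       # 4,294,967,296
print(f"{int(total_addresses * usable_fraction):,} available for hosts")
# Roughly 4.3 billion in total, of which about 3.8 billion are usable as
# device addresses; the rest is reserved for multicast and experimentation.
```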
From page 72...
... The NLANR data also indicate relatively rapid consumption: the advertised address space increased by roughly 6 percent from November 1997 to May 1999.20 Interpretation of these data is further complicated because not all the addresses that are advertised are actually assigned to an active host. A provider using an address block for a set of devices will generally advertise the whole block (since there is no cost to it in doing so, and doing otherwise would result in many more routing table entries)
From page 73...
... 26Various forecasts project that the number of such networked devices will vastly exceed the number of individual Internet users within the next decade. See, for example, Frank Gens.
From page 74...
... Data transfer into and out of a system or facility can be mediated by a special computer that acts as a gateway or external interface between a group of computers and the outside network. The number of computers currently assigned private addresses could be a significant factor in estimating future demand for global addresses.
From page 75...
... and the Strategis Group projects that total Internet users in China could exceed 33 million by the end of 2003. (See PRC Information Technology Review.
From page 76...
... NAT is also not a satisfactory solution for some very large networks because the size of the address blocks designated for use in private networks (i.e., blocks of IPv4 addresses that are not allocated for global addresses) is finite.
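The finiteness of the private address space is easy to quantify. The sketch below, using Python's ipaddress module, sums the three blocks conventionally reserved for private use (RFC 1918).

```python
import ipaddress

private_blocks = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]
for block in private_blocks:
    print(block, f"{block.num_addresses:,} addresses")
print(f"total: {sum(b.num_addresses for b in private_blocks):,} private addresses")
# About 17.9 million addresses in all, which a single very large private
# network could in principle exhaust.
```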
From page 77...
... NATs have the advantage that they provide some degree of security by hiding private addresses behind the address translator, but the protection afforded is limited. Another difficulty with the model is that it presumes that the Internet is limited in design to a finite set of edge domains surrounding a single core network, each of which contains a number of machines sitting behind a NAT.
From page 78...
... The request for proposals for a next-generation Internet Protocol was released in July 1992, and seven responses, which ranged from making minor patches to IP to replacing it completely with a different protocol, had been received by year's end. Eventually, a combination of two of the proposals was selected and given the designation IPv6.
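The scale of the change can be shown in a few lines; the addresses used below are documentation examples, not allocations.

```python
import ipaddress

v4 = ipaddress.ip_address("192.0.2.1")      # example IPv4 address
v6 = ipaddress.ip_address("2001:db8::1")    # example IPv6 address

print(f"IPv{v4.version}: {2 ** 32:,} possible addresses")
print(f"IPv{v6.version}: {2 ** 128:,} possible addresses")
```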
From page 79...
... Reflecting their perception that the gain for switching to IPv6 is not sufficient to justify the pain of the switch, customers have not expressed much willingness to pay for it, and equipment vendors and service providers are for the most part not yet providing it. An important exception is the planned use of IPv6 for the so-called third-generation wireless devices now being developed as successors to present mobile telephone systems.35 For many, the devil they know is better than the one they don't know.
From page 80...
... With NAT, this is not an issue, because subscriber machines are assigned private addresses, 36Another example of this is multicast, which, although it is supported in major operating systems such as Windows, is not widely used.
From page 81...
... It is reasonable to assume that such expectations, including, for instance, the capability for automatically notifying authorities of the geographical location of a 911 caller, will transfer to Internet telephony services. As Internet use becomes more widespread, it is conceivable, or even likely, that other, new life-critical applications will emerge.
From page 82...
... The choice to base the Internet on richly interconnected 38See, for example, National Security Telecommunications Advisory Committee (NSTAC)
From page 83...
... Typically these failures are minor (e.g., a few phone calls get prematurely disconnected), but some can be spectacular (e.g., the multiday loss of phone service as a result of a fire at a central office in Hinsdale, Illinois, in 1990).
From page 84...
... Indeed, there are a number of points of potential massive vulnerability in today's Internet, including the integrity of the addresses being routed, the integrity of the routing system, the integrity of the domain name system, and the integrity of the end-to-end application communication (Box 2.3). Considerable attention has been devoted to the Internet as part of the late-1990s examination of the nation's critical infrastructure.
From page 85...
... The types of attacks discussed here are varied; while those who conduct the attacks may be unsophisticated, those who develop the capabilities are resourceful and creative. While a number of measures can be implemented within the network 43For example, the routing infrastructure itself might be attacked by hijacking a TCP connection or inserting incorrect routing information into the system.
From page 86...
... to enhance robustness, the Internet's architecture, in which service definition is pushed to the end systems connected to the network, requires many security issues to be addressed at the edges of the network. Both the performance of individual applications and services and the overall integrity of the network are therefore placed in the hands of these end systems.
From page 87...
... of mutual trust becomes. Thus, absent the adoption of new measures both within the network and at its edges, security violations of varying degrees of severity can be expected to continue.
From page 88...
... Enhanced technologies such as single-use passwords, cryptographic authentication, and message digest authentication technology have been developed and are starting to see deployment. Nonetheless, many network operators do not necessarily use these capabilities to defend their systems; plain text passwords, which are readily guessed or captured, remain the authentication technique in most common use.
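A minimal sketch of the message-digest idea, in which a shared secret authenticates a message without a plain-text password ever crossing the network; the key and message below are placeholders, and this illustrates the general technique rather than any particular operator's configuration.

```python
import hashlib
import hmac

shared_key = b"example-shared-secret"              # assumed to be configured out of band
message = b"config change: disable interface 3"    # hypothetical operator command

# The sender transmits the message along with this digest.
digest = hmac.new(shared_key, message, hashlib.sha256).hexdigest()

# The receiver recomputes the digest with its copy of the key and compares.
expected = hmac.new(shared_key, message, hashlib.sha256).hexdigest()
print("accepted" if hmac.compare_digest(digest, expected) else "rejected")
```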
From page 89...
... Why do the routing protocols have such long time constants? The basic reason is that a local routing change can, in principle, have global consequences.
From page 90...
... Putting It Together The heterogeneous, multiprovider nature of the public Internet poses additional robustness challenges. The Internet is not a monolithic system, and the several thousand Internet service providers in North America range in size from large international providers to one-room companies serving a single town with dial-up service.
From page 91...
... and through loose coordination mechanisms (e.g., the North American Network Operators Group) in order to minimize network service outages.
From page 92...
... In this case, although a service provider can be identified, it need not have established any relationship with the Internet service providers responsible for actually serving the customer and carrying the data associated with a telephone call. In the absence of specific service arrangements made by customers or an Internet telephony provider, an Internet provider that happens to be carrying voice traffic will not in general even know that its facilities are being used for telephony.
From page 93...
... Wrong decisions, poor reliability of auxiliary servers, or implementation mistakes only affect the products of a specific publisher, without compromising the service experienced by other users. In contrast, a network provider that decides to intercept Web requests and to route them to its own content servers may run a greater risk, as there is no way for the content provider or end user to correct the
From page 94...
... provide additional information. Thus we have some indication of the sources of failure, which include the following:
· Communications links that are severed by construction workers digging near fiber-optic cables;
· Network operators that issue incorrect configuration commands; and
· ...vices.
From page 95...
... must be publicly documented.52 For example, service outages that significantly degrade the ability of more than 30,000 customers to place a call for more than 30 minutes must be reported.53 See FCC Common Carrier Docket No. 91-273, paragraphs 4 and 32 (February 27, 1992), as cited in Network Reliability and Interoperability Council (NRIC).
From page 96...
... 55See Network Reliability and Interoperability Council (NRIC).
From page 97...
... For example, it is not hard to imagine that at some point there would be calls from high-end users for a more reliable service that spans the networks of multiple ISPs and that some of the ISPs would decide to work together to define an "industrial-strength" Internet service to meet this customer demand. When they interconnect their networks, how would they define the service that they offer?
From page 98...
... Some performance issues, of course, are due to overloaded servers and the like, but others are due to congestion within the Internet. Interest in adding new QOS mechanisms to the Internet that would tailor network performance for different classes of application as well as interest in deploying mechanisms that would allow ISPs to serve different groups of customers in different ways for different prices have led to the continued development of a range of quality-of-service technologies.
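One common building block for such mechanisms is marking a flow's packets with a differentiated-services code point so that routers configured for class-based treatment can tell flows apart. The sketch below sets the TOS/DSCP byte on a UDP socket; the code point, destination, and port are illustrative assumptions, IP_TOS support is platform-dependent, and the marking has no effect unless routers along the path are configured to honor it.

```python
import socket

EF_DSCP = 46                 # "expedited forwarding" code point, often used for voice
tos_value = EF_DSCP << 2     # DSCP occupies the upper six bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_value)   # platform-dependent
sock.sendto(b"voice sample", ("198.51.100.10", 5004))          # hypothetical endpoint
sock.close()
```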
From page 99...
... Because the adaptation mechanisms are based on reactions to packet loss, the congestion level of a given link translates into a sufficiently large packet loss rate to signal the presence of congestion to the applications that share the link. Congestion in many cases only lasts for the transient period during which applications adapt to the available capacity, and it reaches drastic levels only when the capacity available to each application is less than the minimum provided by the adaptation mechanism.
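In the spirit of that loss-driven adaptation, the toy model below follows an additive-increase/multiplicative-decrease rule: the sending window grows steadily until it exceeds an assumed link capacity (a stand-in for packet loss), then halves. The numbers are arbitrary and the model omits most of what real TCP does.

```python
def aimd(rounds, capacity, window=1.0):
    """Return the window size after each round of a crude AIMD loop."""
    history = []
    for _ in range(rounds):
        if window > capacity:     # "loss": demand exceeded the available capacity
            window /= 2           # multiplicative decrease
        else:
            window += 1           # additive increase per round trip
        history.append(window)
    return history

print(aimd(rounds=20, capacity=10))   # oscillates around the assumed capacity
```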
From page 100...
... First, the TCP rate-adaptation mechanisms described above may mask pent-up demand for transmission, which will manifest itself as soon as new capacity is added. Second, on a slightly longer timescale, both content providers and users will adjust their usage habits if things go faster, adding more images to Web pages or being more casual about following links to see what is there and so on.
From page 101...
... If these conditions were obtainable on the public Internet (e.g., if the packet loss rate or jitter requirements for telephony were met 99 percent of the time), business incentives to deploy QOS for multimedia applications would disappear and QOS mechanisms might never be deployed.
From page 104...
... There are also several technical obstacles to deployment of end-to-end QOS across the Internet. One challenge is associated with the routing protocols used between network providers (e.g., Border Gateway Protocol, or BGP)
From page 105...
... One consequence of the development of mechanisms that enable disparate treatment of customer Internet traffic has been concern that they could be used to provide preferential support for both particular customers and certain content providers (e.g., those with business relationships with the ISP) .59 What, for instance, would better service in delivery of content from preferred providers imply for access to content from providers without such status?
From page 106...
... Nor can it be concluded at this time whether QOS will see significant deployment in the Internet, either over local links, within the networks of individual ISPs, or more widely, including across ISPs. Research aimed at better understanding network performance, the limits to the performance that can be obtained using best-effort service, and the potential benefits that different QOS approaches could provide in particular circumstances is one avenue for obtaining a better indication of the prospects for QOS in the Internet.

