
The Internet's Coming of Age (2001) / Chapter Skim

3 Keeping the Internet the Internet: Interconnection, Openness, and Transparency
Pages 107-150

The Chapter Skim interface presents what we've algorithmically identified as the most significant single chunk of text within every page in the chapter.


From page 107...
... In the Internet, this design translates into a minimum requirement that there be a public address space to label all of the devices attached to all of the constituent networks and that data packets originating at devices located at each point throughout the networks can be transmitted to a device located at any other point. Indeed, as viewed by the Internet's technical community in a document that articulates the basic architectural principles of the Internet, the basic ...
From page 108...
... In 1995, interconnection relied on public network access points where multiple providers could exchange traffic.2 Today, there is a much larger set of players and a much greater reliance on private interconnects, that is, direct point-to-point links between major network providers. Indeed, there are multiple arrangements for interconnecting Internet service providers, encompassing both public and private (bilateral)
From page 109...
... , trends toward consolidation through mergers and acquisitions, and moves to vertically integrate a full range of services, from Internet access to entertainment, news, and e-commerce. The interlinked networks that are the Internet form a complex web with many layers and levels; the discussion that follows should not be taken to suggest simplicity.4 [Footnote 3: One source of information on Internet service providers is Boardwatch magazine's Directory of Internet Service Providers.]
From page 110...
... defined as those providers that have full peering with at least the other tier 1 backbone providers. Tier 1 backbones by definition must keep track of global routing information that allows them to route data to all possible destinations on the Internet, that is, which packets go to which peers.
From page 111...
... [Footnote 7: Boardwatch magazine's directory of Internet service providers in North America showed continual growth in the number of ISPs from February 1996 to July 1999.]
From page 112...
... , the Internet service providers, and the content providers, with which both facilities and service providers may have business arrangements. Another recent trend has been the establishment of a new form of ISP, the hosting provider.
From page 113...
... The very large volume of traffic that would be associated with a major public access point can be disaggregated into smaller, more easily implemented connections (e.g., a provider manages ...) [Footnote 9: If they provide direct connections to multiple provider networks, public exchanges can also turn out to be very efficient places to locate other services such as caches, DNS servers, and Web hosting services. And because public exchanges bring together connections to various providers, they are also useful places to conduct private bilateral connections through separate facilities.]
From page 114...
... In the transit model, the transit provider agrees to accept and deliver all traffic destined for any part of the Internet from another provider that is the transit customer. It is possible that two providers in a transit arrangement will exchange explicit routing information, but more typically the transit provider provides the transit customer with a default route to the transit network while the transit customer provides the transit provider with an explicit set of routes to the customer's network.
From page 115...
... Based on that routing information, each peer only receives traffic destined for itself and its transit clients. This exchange of routing information takes the form of automated exchanges among routers.
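To make the contrast in the two excerpts above concrete, here is a minimal Python sketch (not from the report; the prefixes, dictionary layout, and function name are invented for illustration) of which routes a provider announces under each arrangement: a transit provider typically hands its customer a default route covering the whole Internet, while a peer announces only its own prefixes and those of its transit customers.

    # Illustrative sketch, not from the report: route announcements under
    # transit versus peering. Prefixes and structure are invented.

    def routes_advertised(provider, relationship):
        """Return the routes this provider announces to a neighbor."""
        if relationship == "transit":
            # A transit provider commonly gives its customer a default route
            # covering all destinations on the Internet.
            return ["0.0.0.0/0"]
        if relationship == "peering":
            # A peer announces only its own prefixes and those of its transit
            # customers, so the neighbor receives only traffic bound for them.
            return provider["own_prefixes"] + provider["customer_prefixes"]
        raise ValueError("unknown relationship")

    provider_a = {"own_prefixes": ["198.51.100.0/24"],
                  "customer_prefixes": ["203.0.113.0/24"]}

    print(routes_advertised(provider_a, "transit"))   # ['0.0.0.0/0']
    print(routes_advertised(provider_a, "peering"))   # own plus customer prefixes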
From page 116...
... In the transit model, a transit customer buys transit service from a transit provider and pays for an access line to that larger provider's network. These arrangements take the form of bilateral agreements that specify compensation (if any)
From page 117...
... They often result in companies connecting all their sites through a single provider's network rather than through a variety of providers and depending on this interprovider connectivity. They also result in large content-hosting providers almost always attaching to each of the major backbone networks (usually as a transit customer rather than a peer)
From page 118...
... What content or service providers do today is enter into an agreement with a company that delivers specialized content services from servers located throughout the Internet so as to improve the quality of the connection seen by end users. For example, RealAudio or Akamai will load streaming media content onto their servers.
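A minimal sketch, not drawn from the report, of the mechanism this describes: content is replicated on servers spread across the Internet, and each user is directed to whichever replica offers the best connection. The server names and latency figures below are invented.

    # Illustrative sketch only: pick the replica server with the lowest
    # measured latency to the user. Names and numbers are hypothetical.

    replica_latency_ms = {
        "cache-nyc.example.net": 12,
        "cache-sfo.example.net": 48,
        "cache-lon.example.net": 95,
    }

    def best_replica(latencies):
        """Return the replica server with the lowest latency."""
        return min(latencies, key=latencies.get)

    print(best_replica(replica_latency_ms))   # cache-nyc.example.net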
From page 119...
... There are also concerns on the part of tier 1 providers about the potential for free-riding in peering. For this reason, many large tier 1 backbone providers are reluctant to peer with smaller networks because doing so would open them up to this vulnerability.
From page 120...
... Indeed, many would-be ISP customers rely on the type of peering being used as an indicator of quality. Because private interconnects can provide a better service quality owing to their greater capacity, dedicated nature, and ability to more carefully manage the traffic across them, the existence of such interconnects is often seen by customers as a sign that a provider offers generally higher quality Internet service.13 Peer status is used at least in part because there are no agreed-on quantitative metrics and processes for evaluating the quality of Internet interconnections, particularly public metrics that detail the status of connectivity.
From page 121...
... "On 23 Apr[il] 1997 at 11:14 am EDT, Internet service providers lost contact with nearly all of the U.S.
From page 122...
... InterNAP is able to pay a reasonable price for its connections because it is mostly delivering traffic into each ISP that would eventually get there anyway by some other route. With these interconnection relationships established, it then sells Internet access to smaller ISPs, which it claims receive a better quality of service than if they had purchased transit service from just one ISP or made use of a public exchange, and experience less hassle than if they had tried to negotiate a number of peering relationships on their own.
From page 123...
... Additionally, providers offering transit service frequently incorporate into their interconnection agreements restrictions on transit customers becoming peers. Thus, where a provider starts with some or all of its relationships being of the transit sort, it may be unable to
From page 124...
... The standards development process, which came to be formalized through the Internet Engineering Task Force (made up of technical experts from academia and industry), emphasized standardization because it grew out of a highly diffuse but collaborative development environment.
From page 125...
... 3. An open specification published by a neutral institution, such as the World Wide Web Consortium (W3C)
From page 126...
... Imposing a narrow point in the protocol stack removes from the application builder the need to worry about details and evolution of the underlying network facilities and removes from the network provider the ... [Footnote: Computer Science and Telecommunications Board (CSTB) ...]
From page 127...
... FIGURE 3.1 The hourglass model of Internet architecture. [Figure: applications, transport services, and representation standards (fax, video, audio, text, and so on) sit above the open bearer service interface at the narrow waist; below it are the underlying network technologies, such as LANs and point-to-point circuits.]
From page 128...
... This separation of IP from the higher-level conventions is one of the tools that ensure an open network; it hinders, for example, a network provider from insisting that only a controlled set of higher-level standards should be used on the network, a requirement that would in
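The value of that separation can be illustrated with a minimal sketch, not taken from the report: an application written to the standard sockets interface above TCP/IP runs unchanged whether the packets travel over Ethernet, dial-up, or any other underlying technology. The host and request below are placeholders.

    # Illustrative sketch: an application that knows only the narrow
    # TCP/IP (sockets) interface, not the network technology beneath it.
    import socket

    def fetch_banner(host="example.com", port=80):
        """Open a TCP connection over IP and send a minimal HTTP request."""
        with socket.create_connection((host, port), timeout=5) as s:
            s.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
            return s.recv(1024)

    if __name__ == "__main__":
        print(fetch_banner().decode(errors="replace"))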
From page 129...
... · Routing protocols. Providers must typically exchange routing information at interconnect points.
From page 130...
... In each, the affected parties (application developers, service providers, and consumers) must decide when and where one or the other should be emphasized. While most of this discussion has examined the upper half of the hourglass, the innovation that the Internet's architecture enables at the "transmission" level is another crucial element of the Internet's success.
From page 131...
... This ease of aggregation also permits secondary opportunities to build services that Internet applications can reuse, such as news feeds; advertising; middleware services such as authentication and name registration; and infrastructure services such as online data storage and application hosting. The rosy expectations for electronic commerce rest on the standardized, open Internet protocols and the ease with which applications can be developed and aggregated.
From page 132...
... This climate set the stage for tension between, on the one hand, the potential for seemingly unbounded innovation in applications and services and, on the other, the potential for Internet-based businesses to foster market consolidation, to raise barriers to open access, and to drive other outcomes in their effort to make and maximize profits.

Evolution of Internet Standards Setting

Several trends have emerged that run counter to the openness paradigm that has characterized the Internet's development.
From page 133...
... When developing Internet standards, companies and industry groups are likely to select whichever standards body they believe will be the most effective avenue for their business plan, and they may pursue simultaneous standardization efforts in multiple forums. In addition to the IETF, several more traditional standards bodies, including the ITU, International Organization for Standardization (ISO)
From page 136...
... These groups, such as the World Wide Web Consortium or the Wireless Application Protocol (WAP) Forum, tend to be narrower in scope, less open, and more industry-centered. Internet standards are being developed in an active, diverse, and dynamic market space, a model that parallels the freewheeling creativity of the Internet.
From page 137...
... Several factors contribute. Today, industrial development is so rapid that pressures to focus on products limit the amount of time technical staff in industry can spend on efforts aimed at the broader Internet community.
From page 138...
... Crucially, this communication takes place as a result of actions by users at the edges of the network; new applications can be brought to the Internet without the need for any changes to the underlying network or any action whatsoever by Internet service providers. Indeed, over the life of this report, many new applications and associated communication protocols have emerged.
From page 139...
... It is common practice today to assign Internet addresses in a dynamic rather than static fashion. Dynamic assignment provides an address on request from a networked computer, generally via the Dynamic Host Configuration Protocol (DHCP)
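A minimal sketch, not from the report, of dynamic assignment in the spirit of DHCP (the subnet and host identifiers are invented): addresses come from a shared pool when a machine asks for one and go back into the pool when the lease ends, instead of being permanently bound to a particular machine.

    # Illustrative sketch of dynamic address assignment; not a DHCP
    # implementation. Subnet and host identifiers are hypothetical.
    from ipaddress import ip_network

    class LeasePool:
        def __init__(self, subnet="192.168.1.0/29"):
            self.free = [str(h) for h in ip_network(subnet).hosts()]
            self.leases = {}                 # host id -> assigned address

        def request(self, host_id):
            """Hand out an address on request, reusing an existing lease."""
            if host_id not in self.leases:
                self.leases[host_id] = self.free.pop(0)
            return self.leases[host_id]

        def release(self, host_id):
            """Return the address to the pool when the lease ends."""
            self.free.append(self.leases.pop(host_id))

    pool = LeasePool()
    print(pool.request("laptop-a"))   # first free address in the pool
    pool.release("laptop-a")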
From page 140...
... These include providing a larger number of computers with Internet access using a limited pool of Internet addresses, providing local control over the addresses assigned to individual computers, and providing the limited degree of security that is obtained by hiding internal addresses from the Internet. Network address translation involves the mapping of a set of local addresses, which are not visible to the outside world (i.e., not visible on the Internet)
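The mapping just described can be sketched as follows (a simplification, not the report's own example; all addresses, ports, and class names are invented): outgoing connections from private addresses are rewritten to a single public address and recorded in a table so that replies can be delivered back inside, while unsolicited inbound traffic finds no entry.

    # Illustrative sketch of network address translation; addresses,
    # ports, and class structure are invented for the example.

    PUBLIC_ADDR = "203.0.113.5"

    class SimpleNAT:
        def __init__(self):
            self.table = {}          # public port -> (private addr, port)
            self.next_port = 40000

        def outbound(self, private_addr, private_port):
            """Rewrite an outgoing connection's source to the public address."""
            public_port = self.next_port
            self.next_port += 1
            self.table[public_port] = (private_addr, private_port)
            return PUBLIC_ADDR, public_port

        def inbound(self, public_port):
            """Map a reply arriving at the public address back inside."""
            return self.table.get(public_port)   # None if no mapping exists

    nat = SimpleNAT()
    print(nat.outbound("192.168.0.10", 51515))   # ('203.0.113.5', 40000)
    print(nat.inbound(40000))                    # ('192.168.0.10', 51515)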
From page 141...
... There are also costs associated with deploying computers with sufficient computing power to carry out the application-level translations. The other option would be for the application to discover that the network is making use of NAT and then make the necessary translations itself; requiring an application to learn about the details of the network is an undesirable violation of the basic Internet architecture.22 Significant problems arise if one wishes to initiate communications between two computers, each of which is sitting behind a NAT, since neither has a way of knowing the internal address of the other.
From page 142...
... The basic problem is that if the packet payload is encrypted, addresses within it cannot be translated by a NAT. Because IPSec is a more broadly applicable protocol, used notably for standard Internet-layer virtual private networks, the incompatibility is a significant concern for some users.
From page 143...
... . Port numbers are perhaps the easiest method of filtering, but filtering can also be performed using other information contained in packet headers or the contents of the data packets themselves.
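As a minimal illustration, not taken from the report, of port-based filtering (the blocked ports and packet fields are invented for the example), a filter simply checks each packet's header against a rule list:

    # Illustrative sketch of filtering on destination port numbers.
    BLOCKED_PORTS = {25, 135, 445}        # hypothetical policy

    def allow(packet):
        """Return True if the packet passes the port-based filter."""
        return packet["dst_port"] not in BLOCKED_PORTS

    packets = [
        {"src": "198.51.100.7", "dst_port": 80},   # web traffic: allowed
        {"src": "198.51.100.7", "dst_port": 25},   # SMTP: blocked by policy
    ]
    print([allow(p) for p in packets])             # [True, False]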
From page 144...
... Local caching or overlay distribution networks do not provide end-to-end connectivity between the original content or service provider and the end user. Also, depending on the particular technical and business model, such networks may only be available to those providers who are willing and able to pay for specialized services.
From page 145...
... Such vertical integration, where a network provider attempts to provide a partial or complete solution, from the transmission cable to the applications and contents, could, if successful, cause a change in the Internet market, with innovation and creativity becoming more the province of vertically integrated corporations. Microsoft's "everyday Internet" MSN offering further supports the notion that businesses see a market for controlled, preferred content offerings as a complement to the free-for-all of the public Internet.
From page 146...
... Today Hotmail is at sufficient scale that special code to support its proprietary protocol is written into e-mail clients; the result is that a frequently used Internet service is no longer running the standard Internet protocols. Equipment suppliers are similarly willing to accommodate such customer demands; their routers are more programmable than ever to support custom protocols.
From page 147...
... One scenario would be that some dozen or half-dozen tier 1 service providers would operate somewhat separately, although still using IP and other standard Internet protocols to enable some degree of interoperability among them. If a situation develops where several large providers start using proprietary protocols inside their networks, the incentives for new content and application development could shift.
From page 148...
... To reach customers within closed networks, they would need to make their protocol work over each of the closed network's proprietary protocols and might also need the closed networks to configure their networks to enable the applications to work. From the perspective of the would-be application provider, the ISPs become a roadblock to innovation.
From page 149...
... Keeping the Internet Open Provision of open IP service ensures that whichever service provider a consumer or business picks, the consumer or business can reach all the parties it wishes to communicate with. Much as the PSTN dial tone offers customers the ability to connect to everyone else connected to the global PSTN network, open IP service offers access to all the services and content available on the public Internet.
From page 150...
... applications such as telephony and audio and video streaming that may depend on QOS mechanisms. Because IP connectivity affords users the potential to misbehave or pose unacceptable demands on the network, this definition of open IP service is not intended to preclude service providers from restricting how their network is used, whether to ensure safe, effective operation or to meet the desire of customers to block certain types of IP traffic.

