
4
Technology Options and Economic Factors

Although great technical and business strides have been made in improving the data transmission speeds of communications networks, the local access technologies that make up the last (or first) mile connections in the network have mostly lagged far behind. Enhancing the local access infrastructure to bring high-speed services to residences and small businesses requires upgrading or building infrastructure to each premises served. There are a variety of technology options, each with different characteristics and cost structures, and potential customers vary in their willingness to pay. This chapter explores the characteristics of the various local access technologies and the interplay among the relevant economic considerations.

LOCAL ACCESS TECHNOLOGIES IN CONTEXT

While this chapter focuses on local access, the other network elements through which content, applications, and services are provided also contribute to the total cost and performance characteristics of broadband service. Local access links carry communications to and from points at which communications from multiple premises are aggregated and funneled onto higher-capacity links that ultimately connect to the Internet or other broadband services. The first point of aggregation, also known as the point of presence, is most commonly located at a telephone company central office, cable system head end, or radio tower (which may be at a considerable distance from the premises) but may also be in a piece of


equipment in a vault, pedestal, wireless antenna site, or pole-top device located near the premises. Circuits installed or leased by the provider in turn run from the point of presence to one or more public or private access points for interconnection with the Internet. The so-called second mile connects local access facilities with upstream points of aggregation. In connecting to the Internet, broadband providers either pay for transit service or establish peering agreements with other ISPs to exchange traffic on a settlement-free (barter) basis. Caches, e-mail and content servers, and servers supporting specialized services such as video-on-demand or voice telephony are located at points of presence and/or data centers. Routers located in points of presence and data centers direct data packets to the next point in the cross-network trip toward their eventual destination.

ESSENTIAL FEATURES OF THE LOCAL ACCESS TECHNOLOGY OPTIONS

The future of broadband is sometimes described as a shootout among competing technologies that will result in a single technology dominating nationwide. This view, however, is simplistic and unrealistic; there is no single superior technology option. Broadband is going to be characterized by diverse technologies for the foreseeable future. There are a number of reasons for this:

  • Incremental investment in existing infrastructure. While some firms may have access to large amounts of venture capital, the expectations of investors in existing firms are for short-term payoffs. As a result, the technological approach chosen by an incumbent is likely to make use of existing equipment and plant, and the deployment strategy must be amenable to incremental upgrades. The various incumbents in the broadband marketplace—telephone local exchange carriers with copper loops, cable television companies with coaxial cable, cellular companies with towers for mobile wireless telephony—will continue to make incremental improvements unique to their respective technologies to provide and enhance broadband services.

  • Continued exploitation of skills. Technologies require distinctive skills and knowledge—those needed, for example, to design, launch, and operate a satellite. Similarly, cable and telephone companies understand the technological challenges associated with their respective systems. Companies that know how to do one or another thing well will attempt to find market opportunities where these skills give them an advantage.

  • Different demographics and density. The United States (and the world) is highly diverse in topography, population density, wealth, and demand


for communications services. The particular economic and technical characteristics of each broadband technology will provide specific advantages in serving certain geographical areas or demographic groups. Some may have an economic advantage in particular locales owing to the nature of the infrastructure already in place or to inherent physical attributes of the environment. Planning should reflect the existence of a diverse set of solutions that depend on particular circumstances rather than a technology monoculture.

This section discusses the salient characteristics of each technology option and provides a brief road map of how existing technology and anticipated research and development will play out in coming years.

Wireline Options

In rough terms, access technologies are either wireline or wireless. Wireline includes telephone network copper pairs and the coaxial cable used for cable television service. Incumbent telephone companies and cable operators are both in the process of upgrading their infrastructures to provide broadband services. Wireline infrastructure is also being built in some areas by so-called overbuilders, who deploy new facilities in competition with the incumbent wireline providers. In the United States, this has largely been through deployment of hybrid fiber coax to provide some mix of television, data, and voice services. There are also a few overbuilders that are using or plan to use fiber.

The wireline technologies all share the feature that labor and access to a right-of-way are significant components of the cost. These costs are more significant where infrastructure must be buried than where it can be installed on existing poles.1 The other major component is the electronics at each end of the line, where costs are subject to rapid decreases over time as a result of Moore’s law improvements in the performance-to-cost ratio and increasing production volumes. Labor, on the other hand, is not subject to Moore’s law, so there is no obvious way within the wireline context for dramatic declines in cost for new installation (though one cannot rule out very clever solutions that significantly reduce the labor required for some elements of the installation).

1  

One estimate provided to the committee is that burying the infrastructure costs almost twice as much as installing it aerially on existing poles.
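To illustrate the cost structure just described, the sketch below treats the per-home cost of a new wireline installation as a flat labor and right-of-way component plus an electronics component that declines along a Moore's-law-style curve. All of the dollar figures and the 18-month halving period are hypothetical placeholders, not estimates provided to the committee.

    # Illustrative only: hypothetical split between labor/right-of-way costs
    # (assumed flat) and end-point electronics (assumed to halve every 18 months,
    # a Moore's-law-style assumption). Figures are placeholders, not estimates.

    LABOR_PER_HOME = 1_000.0       # hypothetical dollars: trenching, poles, splicing
    ELECTRONICS_YEAR0 = 400.0      # hypothetical dollars: modem plus line-card share
    HALVING_PERIOD_YEARS = 1.5     # assumed halving period for electronics cost

    def per_home_cost(years_from_now):
        electronics = ELECTRONICS_YEAR0 * 0.5 ** (years_from_now / HALVING_PERIOD_YEARS)
        return LABOR_PER_HOME + electronics

    for year in (0, 3, 6):
        print(f"year {year}: about ${per_home_cost(year):,.0f} per home")

    # The electronics term shrinks toward zero, but the total stays near the
    # labor floor, which is why labor dominates new-build wireline economics.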

Hybrid Fiber Coax

Cable systems pass 97 percent of the homes in the United States.2 The older generation of cable technology uses a branching structure of coaxial cables fanning out from a central point or head end to the buildings in a community (see Figure 4.1a). The older systems rely on long chains of coaxial cables and amplifiers, with each segment feeding into a smaller coaxial segment.

Hybrid fiber coax (HFC) is the current generation of cable system technology. HFC systems carry analog signals that feed conventional television sets as well as digital signals encoded onto analog carriers that deliver digital video programming and up- and downstream data. In the new architecture, the system is divided into a number of small coaxial segments, with a fiber optic cable used to feed each segment or cluster. By using fiber instead of coax to feed into neighborhoods, the system’s performance and reliability are significantly improved.

Another benefit of an HFC upgrade is that the resulting system can carry two-way data communications, such as Internet access. Additional equipment is installed to permit information to flow both to and from the home (see Figure 4.1b). Internet service is provided using a device called a cable modem in the home and a device known as a cable modem termination system in the head end. The ability to offer competitive video, voice, and high-speed data services using the present generation of technology has attracted several nonincumbent companies to enter a few markets as overbuilders using the HFC technology.

Over 70 percent of the homes in the United States are now passed by this upgraded form of cable infrastructure. The fraction of homes served by HFC continues to increase as cable companies upgrade connections to all homes in their franchise areas and, with continued investment in upgrades, can approach the 97 percent of households that currently have cable service available at their property lines.

A technology standard for cable modems known as DOCSIS has been adopted industrywide. Developed by an industry consortium seeking a quicker alternative to the more traditional standards development process then underway under the auspices of the IEEE, the DOCSIS standard is stable, and more than 70 modems have been certified as compliant. Standardization has helped modems become a mass-market product. The standard provides consumers the assurance that if they purchase certified modems at retail, or have them built into PCs or other appliances, cable operators will support them across the country.

2  

Paul Kagan Associates. 2001. The Kagan Media Index, Jan. 31, 2001.


FIGURE 4.1 Evolution of cable systems to support two-way data. SOURCE: James Chiddix. 1999. “The Evolution of the U.S. Telecommunications Infrastructure Over the Next Decade. TTG2: Hybrid-Fiber-Coax Technology” (IEEE workshop paper).


Further helping push down costs, several competing suppliers have developed highly integrated silicon, and single-chip DOCSIS solutions are available to modem manufacturers. With increasing volumes, a single standard, and single-chip solutions, the cost of a cable modem at wholesale has already dropped to $150 or less and can be expected to continue to drop as volumes increase.

Digital Subscriber Line

Digital subscriber line (DSL) is the current method by which twisted copper pairs (also known as loops), the decades-old technology used by the telephone companies to reach the residence, can be upgraded to support high-speed data access. In some newer builds, analog transmission over copper wire is used only between the premises and a remote terminal (which may be at curbside or, more commonly, in a pedestal or underground vault within a neighborhood), while a digital loop carrier (DLC), generally using fiber optic cable, connects the remote terminal with the central office. In a traditional, all-copper plant, the first segment of the loop plant is referred to as the “feeder plant,” in which hundreds of phone lines are bundled in a cable that runs from the central office to a smaller distribution point. From the distribution point, smaller cables containing fewer phone lines run to pedestals or cabinets within a neighborhood, where they in turn connect to the twisted pairs that run to the customer premises (see Figure 4.2).

All transmission of data over wire involves coding these data in some way consistent with the carrying capacity and noise conditions of the wire. The familiar dial-up modems code (and decode) data in such a way that the data can pass through the traditional switches and transmission links that were designed to carry voice, which more or less limits speeds to today’s 56 kbps. DSL uses an advanced coding scheme that is not compatible with existing switches. Consequently, new electronics known as a DSL access multiplexer (DSLAM) has to be installed in any central office where DSL is to be offered. The DSLAM must in turn be connected to a switched data network that ultimately connects the central office to the Internet (see Figure 4.3). DSL service enables the transmission of packet-switched traffic over the twisted copper pairs at much higher speeds than a dial-up Internet access service can offer. DSL can operate at megabits per second, depending on the quality and length of the particular cable. It is thus the upgrade of choice to bring copper pairs into the broadband market.

DSL standards have existed since 1998, and new versions of these standards, which add enhancements to asynchronous transfer mode (ATM), IP, and voice services over DSL, are expected in 2001 or 2002 from the International Telecommunication Union (ITU).


FIGURE 4.2 Telephone company copper loop plant. SOURCE: Adapted from a figure supplied by John Cioffi, Stanford University.

Large interoperability programs with dozens of qualified suppliers have been implemented by the DSL Forum, which has about 400 member companies. The forum develops implementation agreements in support of interoperable DSL equipment and services and acts as an industry marketing organization for DSL services, applications, and technology generally. Also, to help reduce the cost of asymmetric DSL (ADSL) deployment by specifying a common product and increasing volumes, several companies formed a procurement consortium.

Depending on line length and conditions, the present generation of DSL products can reach 1.5 to 8 Mbps downstream and 150 to 600 kbps upstream. The present generation of DSL technology aimed at residential customers, ADSL, currently supports a typical maximum of 8 Mbps downstream and 800 kbps upstream (the flavors deployed by various providers vary somewhat). A related flavor, G.lite, which makes compromises in order to permit customer self-installation on the same line being used for analog voice service, supports up to 1.5 Mbps downstream. (Another variant, symmetric DSL [SDSL], supports


higher, symmetric speeds.) All of these speeds are maximums—the actual speed obtainable over the DSL link depends on line length, noise, and other aspects of the line condition as well as on the maximum speed supported by the particular service that a customer has subscribed to. Higher-speed versions of DSL, known as very high data rate DSL (VDSL), are in development. These depend on investment in new fiber in the loop plant that shortens the copper loop length to enable higher speeds—tens of megabits per second in both directions. Figure 4.4 summarizes the rate and distance trade-offs for the various flavors of DSL.

DSL is available to a large fraction of homes and businesses in the United States over normal phone lines (the exact fraction is hard to determine because of the factors discussed below). However, not all of the homes that are passed by telephone cables easily support DSL, and some homes cannot be offered DSL service at all without major upgrades to the infrastructure. Certain pairs are unsuited for such upgrades because of how they were engineered—for example, using bridge taps or loading coils. Also, where the loop between central office and premises includes a digital loop carrier, the remote terminal equipment must be upgraded to support DSL.

FIGURE 4.3 DSL connections at the central office.


FIGURE 4.4 Rate and maximum distances for various flavors of DSL. SOURCE: Adapted from a figure provided to the committee by Ted Darcie, AT&T Research.

More significantly, DSL does not work over wires longer than a certain distance (18,000 feet for the primary flavor used for residential service today, ADSL). It should be noted that wire lengths are substantially shortened by the deployment of remote terminals.
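To make the serviceability factors above concrete, the short sketch below checks a hypothetical loop against them. Only the 18,000-foot ADSL figure comes from the text; the function name, parameters, and pass/fail logic are an illustrative assumption, not an actual carrier prequalification procedure.

    # Illustrative only: a hypothetical ADSL prequalification check based on the
    # factors named in the text. The 18,000-foot limit is from the text; the
    # structure of the check is an assumption, not actual carrier logic.

    ADSL_MAX_LOOP_FEET = 18_000

    def adsl_serviceable(loop_feet, has_load_coils, has_bridge_taps, dlc_remote_supports_dsl):
        if loop_feet > ADSL_MAX_LOOP_FEET:
            return False          # loop too long for today's residential ADSL
        if has_load_coils or has_bridge_taps:
            return False          # legacy loop engineering blocks DSL without rework
        return dlc_remote_supports_dsl  # pass True when there is no DLC in the path

    print(adsl_serviceable(12_000, False, False, True))   # True
    print(adsl_serviceable(21_000, False, False, True))   # False: loop too long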

Crosstalk—the coupling of electrical signals between nearby wires—gives rise to interference that degrades the carrying capacity of each copper pair. The level of crosstalk depends on the number of pairs within the bundle carrying DSL, their proximity, and the power and bandwidths they use. It is even possible for DSL signals from adjacent lines to create signals larger than the intended DSL signal on the line. The interference has the effect of reducing the maximum data rate at a particular loop


length (or the maximum loop length for a given data rate). In essence, an issue of spectrum sharing within the cable bundles arises. The term “spectrum” is appropriate because the crosstalk and interference effects depend on how the signals on the different pairs make use of the different frequencies used for transmission over the lines. Today, incumbents and competitive providers using unbundled loops are free to choose among a number of flavors of DSL, without regard to how the spectrum used by one service affects services running over other copper pairs.

At the request of the FCC, a working group of carriers and vendors worked to develop a spectrum management standard for DSL. The present standard, released in 2001, places forward-looking limits on signal power, bandwidth, and loop length.3 By establishing thresholds with which the current DSL technology is generally compliant, the standard seeks to prevent future escalation (where each DSL product or service would try to “out-shout” the others) and thus place a bound on the level of crosstalk that will be faced in the future. While the standard is currently voluntary, it is generally expected that it will provide the technical basis for future FCC rulemaking. Issues that the standard does not address—which are being explored by a Network Reliability and Interoperability Council subgroup under American National Standards Institute (ANSI) T1 auspices that is developing guidance to the FCC on crosstalk—include how many DSL lines are permitted per binder group, what standards apply to lines fed from digital loop carriers, how products should be certified or self-certified, and how rule compliance should be enforced.

Advanced Wireline Offerings—Fiber Optics in the Loop

Optical fiber has a theoretical capacity of about 25,000 GHz, compared with the roughly 155 megahertz (MHz) possible over short copper pairs and the roughly 10 GHz capacity of coaxial cable.4 (The relationship between hertz and bits per second depends on the modulation scheme; the number of bits per hertz typically ranges from 1 to more than 7.)

3  

Working Group on Digital Subscriber Line Access (T1E1.4). 2001. American National Standard for Telecommunications—Spectrum Management for Loop Transmission Systems (T1.417-2001). Standards Committee T1, Alliance for Telecommunications Industry Solutions, Washington, D.C.

4  

The practical upper limit for data transmission over coaxial cable has not been well explored. The upper cutoff frequency for a coaxial cable is determined by the diameter of the outer copper conductor. Smaller cables (1/4-inch- to 1/2-inch-diameter) probably have a cutoff frequency well in excess of 10 GHz. It is unclear what the upper limit is on modulation efficiency. The 256 quadrature amplitude modulation (QAM) currently in wide use allows 7 bits per hertz, but in short, passive runs in neighborhoods, much more efficient modulation schemes are possible, suggesting that HFC could evolve to speeds exceeding 100 Gbps to small clusters of customers.


This very high capacity and consequent low cost per unit of bandwidth are the primary reasons why fiber is preferred wherever individual demand is very high or demand from multiple users can be aggregated. Other considerations in favor of fiber include high reliability, long service lifetime,5 protocol transparency, and consequent future-proof upgradability.6 Thus, fiber predominates in all of the telecommunications links (voice and data) except the link to the premises, where cost considerations weigh most heavily, and links to untethered devices. Because of their large demand for bandwidth, an increasing fraction of large businesses is being served directly by fiber links. There is also increasing attention to fiber technologies for local area and local access networks, as evidenced by recent development of new technologies such as gigabit Ethernet over fiber.
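The relationship just described can be illustrated with simple arithmetic. In the sketch below, the bandwidth figures come from the text, while the bits-per-hertz values are arbitrary points chosen from the 1-to-7 range mentioned; the resulting rates are illustrative, not capacity claims for any particular product.

    # Data rate (bits per second) ~= usable bandwidth (Hz) x spectral efficiency
    # (bits per hertz). Bandwidths are from the text; efficiencies are assumed.

    media = {
        "short copper pair": (155e6, 2),    # ~155 MHz, assumed 2 bits/Hz
        "coaxial cable":     (10e9, 5),     # ~10 GHz, assumed 5 bits/Hz
        "optical fiber":     (25e12, 1),    # ~25,000 GHz; even 1 bit/Hz is enormous
    }

    for name, (bandwidth_hz, bits_per_hz) in media.items():
        rate_gbps = bandwidth_hz * bits_per_hz / 1e9
        print(f"{name}: roughly {rate_gbps:,.1f} Gbps at {bits_per_hz} bit(s)/Hz")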

One important use of fiber for broadband is that of increasing the performance of other wireline technologies through incremental upgrades. Both HFC systems and DSL systems benefit from pushing fiber further into the system. To increase the performance of DSL, the copper links must get shorter. As penetration and the demand for higher speed increase, the upgrade strategy is to push fiber deeper, with each fiber feeding smaller service areas in which shorter copper connections run to the individual premises. So a natural upgrade path for copper infrastructure is to install electronics ever closer to the residence, to a remote terminal located in a pedestal or underground vault or on a telephone pole; to run fibers from the central office to this point; and to use copper only between the remote terminal and the home.

5  

In the 1970s, researchers worried about the possibility of fiber degradation over time. A number of experiments were conducted and no degradation effects were found. Thus—barring an accidental cut—the only reason fiber is replaced is when some new transmission scheme reveals the old fiber to have too much eccentricity of the core or too much material dispersion. These factors have only come into play in very particular situations. For example, when OC-192 (10 Gbps) transmission was introduced, there were concerns that old fiber with an out-of-round cross-section would cause problems. But in the end, only a limited amount of fiber required replacement to support the new, higher-speed transmissions.

6  

“Protocol transparency” refers to the ability to run any communications protocol over the fiber by changing the end equipment and/or software. Other communications media display some degree of protocol transparency, but with fiber, the large RF spectrum on an individual fiber is entirely independent of other fibers (in contrast to DSL, which has crosstalk issues; wireless, which has obvious spectrum-sharing; and HFC, which also has shared spectrum). This transparency property only holds true over the fiber segments that are unshared—where passive splitting is done, all must agree on at least the time division multiplexing (TDM) or wavelength division multiplexing (WDM) scheme, and where active switching is used, all must agree on the packet protocol. True protocol transparency—and true future-proofing—is thus greatest in a home-run architecture.


Similarly, to deliver higher performance over HFC, the number of subscribers in each cluster must shrink, so that the total capacity of a single coaxial segment is shared by a smaller number of subscribers and the per-subscriber performance goes up. This also requires that fiber be installed farther out into the distribution tree.7

A basic upgraded architecture is apparent: fiber optic cables radiate out from a central office or head end to local distribution points that serve small clusters of buildings. At each cluster, a relatively compact set of electronics couples the fiber to a local distribution plant. For HFC, a short segment of coax runs from this distribution point to feed a cluster of homes, while for DSL this is a short copper twisted pair. As the cluster size continues to decrease, from the hundreds of homes commonplace in much of the industry today down to tens of homes, HFC and copper pair systems will come to resemble each other. The networks will not, to be sure, be the same in all details. For example, different networks will have different “active elements” in different parts; some networks will have active switching deep into the network, while cable networks will likely place less emphasis on remote switching in favor of carrying traffic back to the head end before aggregation or routing. Where active components are located has implications for where power must be delivered, and thus implications for cost, ease of installation, and so forth. But the essential feature is a continuing trend toward pushing fiber deeper into networks.

Fiber-to-the-curb (FTTC) is a general term for this class of system. FTTC is also a label for a specific class of technology that makes extensive use of fiber for local distribution and that local exchange carriers are using to build or rebuild their telecommunications plant. A technology being used in new construction today, it will in turn be a basis for incremental upgrades of the telephone infrastructure in the future.

Whether as the final upgrade step in the incremental path described above or for installation by another player, another alternative is to run fiber to the premises themselves, dubbed fiber-to-the-home. The term FTTH encompasses multiple architectures. The factors that control what speeds are actually provided are the technology components that are installed at the end points of the fiber—the residence and the service provider’s point of presence, which may be located at a head end, central office, or remote terminal.

7  

Deployment of fiber deeper into incumbent telephone networks also raises interesting questions about how one would implement unbundling, which was originally premised on unbundling a copper loop running from the central office to the subscriber. Issues such as colocation become more complicated when the loop terminates at a curbside pedestal or controlled environment vault. Colocation is even more complicated if fiber is pushed deep enough that it reaches the pole top or even into the home. Aesthetic and practical concerns limit the size and number of these remote terminal units, which in turn complicates the provision of colocation space.


The three principal forms of FTTH are these:

  • “Home run” systems, where there is a separate fiber or fiber pair that runs all the way from each residence to the central office or other point of presence. Because there is no sharing of fibers, this scheme has a higher cost of installation, but offers the highest ultimate performance with the appropriate system design and terminal equipment and the most flexibility. Providers can deploy the technology of their choice independent of other providers (there are no spectrum-sharing issues, as is the case, for example, with wireless, and no crosstalk problems, as is the case with DSL). Also, the end-point equipment attached to each fiber (at the central office and home) can be upgraded independently.

  • The Passive Optical Network (PON) architecture, in which a single fiber runs from a central office to a simple optical divider, called a passive splitter (hence the “passive” in PON), which may be quite compact, from which individual fibers in turn run to each of a group of homes. The absence of active electronics in the field and the overall simplicity yield lower life-cycle costs.8 The PON architecture also avoids the complications and expense associated with providing robust power at the remote switching point. Unlike switched fiber or home runs, the format of the information on the different paths in a PON system is not totally independent. This implies that there may be some upgrade strategies that are not backward-compatible and would require simultaneous upgrades of head-end and terminal equipment. Just how much flexibility for upgrade and change is available in a PON system depends on the details of the design. As part of an effort to reduce costs, an ATM-specific realization of the PON architecture has been standardized in the ITU (the Full Service Access Network or ATM PON standard).

  • FTTH systems with fully active (electronic) elements in the path from the central office to the residence, in which fiber runs from the central office to one or more stages of remote terminals at which the signals are switched among fibers that go on to feed individual premises. Two examples of this approach are switched Ethernet and HFC using active switching. Switched Ethernet systems are beginning to be used by companies providing fiber to the home and businesses, extending what is normally a local area network technology over a metropolitan area.

8  

Paul Shumate provided estimates to the committee of 20 percent lower capital expenses and a $500 life-cycle cost savings.


HFC systems of the future, instead of using a passive splitter, might have a fiber connecting to some electronics that serves a small cluster of homes, using fiber instead of coaxial cable to connect to individual homes. Unlike the other two architectures, this approach requires special attention to how to power the remote switching points, especially where reliability requirements (and associated regulatory requirements) demand robustness in the face of power grid failures.

These various forms of FTTH have different cost structures and present different opportunities for incremental upgrade.

FTTH is seen by some as the “holy grail” of residential access. From a technology perspective, it is a high-performance end point, with enormous headroom for future upgrades. As a result, the sentiment is often expressed that the nation should strive to deploy that solution directly, without spending time on, and diverting investment dollars to, intermediate technologies of an incremental nature that might eventually become obsolete.

From a business perspective, a direct move to FTTH raises several issues. There is a significant investment in telephone and cable infrastructure that can meet many of today’s broadband Internet access needs with modest incremental expense.

Business choices among the alternatives thus hinge on such factors as the investment horizon and forecasts for bandwidth demand. While both DSL and HFC can evolve toward higher performance, it is still unclear whether the pace of improvement in these technologies will continue to meet customer needs. A second issue is whether the performance benefits of FTTH over those of other alternatives would be of sufficient value to consumers to support the prices needed to cover the at least somewhat higher costs. The familiar case of the recent slowdown in new PC sales may offer a useful illustration of this point. New PCs are faster and have a variety of capabilities that older models do not, but it seems, at least at present, that many buyers find the older models more than adequate for what they want to do. If this is the case, then some new, compelling set of applications that requires those capabilities will have to emerge to really boost PC sales.

The total cost of deploying FTTH is, of course, substantial, involving both the basic costs associated with wireline infrastructure deployment and the premium associated with fiber. Areas being newly developed (so-called green-field areas) offer an especially attractive market for fiber, to the extent that the additional costs are modest compared with the basic installation costs of any local access technology. Indeed, the total life-cycle costs for fiber are believed to be lower than the costs of alternatives for new installations. When new wireline infrastructure is installed (e.g., in a new housing development), FTTH at present costs more to install, by at least several hundred dollars a home, than alternatives. The total cost


includes the costs of installing the wireline itself (digging trenches, hanging cable on poles, and so on), which are similar in magnitude for any wireline overbuilder, whatever the specific technology.

Unlike copper wires or coaxial cable, however, non-home-run fiber architectures require that fiber be spliced together. Splices are more time-consuming than electrical connections and require specialized expertise and/or complicated equipment to produce. Not surprisingly, this has been an area of considerable attention, and increasingly sophisticated techniques and equipment have been entering the market, but costs remain higher. The significant improvements made in splicing technology and techniques over the past few years mean that this is likely to become less of an issue; moreover, home-run FTTH systems do not require splicing in the access network. In addition, there are increased costs associated with the terminal equipment (the lasers and other electronics that transmit and receive light signals over the fiber). Costs here exceed those of terminal equipment for DSL or HFC, in part because of the higher costs associated with the optoelectronic components and in part simply because of lower product volumes typical of any new product. There have been significant improvements in the cost and performance of fiber distribution technology over the last few years as a result of technical advances and increased deployments in gigabit Ethernet, wavelength division multiplexing (WDM), passive optical networks, and optical switching, but there is still a good bit of room for cost improvements in terminal equipment, splicing, and trenching.

A small number of new private sector entrants are planning or starting to deploy FTTH as an overbuild. For them, becoming a facilities-based provider would require installing infrastructure in any event. In addition, the higher performance potential of fiber—and, in light of its longevity and future-proof quality, a total life-cycle cost not dissimilar to that of alternatives—is viewed as giving these entrants a competitive advantage in the market.

Other deployments are taking place in a different economic context; these include, for example, municipal deployment; deployment as a part of new residential construction; or deployment as an offshoot of fiber installations for government or business customers. These scenarios alter the economic calculus and hence the set of technology choices that can be justified. Once the high up-front costs of laying fiber are paid, the incremental costs for upgrades are predominantly per-subscriber and not per-passing. In return for the high initial investment comes a measure of future-proofing, as the same fiber can provide decades of useful service. This sort of economic model will make sense for an investor with a long investment horizon. For instance, it may be attractive to a municipality that has to float a bond issue for a one-time investment, and then live with


the resulting investment for the life of the bond. The technology also allows the municipality to place responsibility on the individual consumer to make any future incremental investments. It might also make sense for an individual to finance the fiber installation, much as houses are financed through decades-long mortgages. This economic model makes much less sense to a corporation seeking to make continuous incremental investments with a goal of showing short-term returns each quarter.

In the long run, all the wireline alternatives have the option of converging on FTTH, if the market demands it. For those with existing infrastructure, the issues are the incremental costs of getting there and the question of whether the intermediate steps are sustainable. For those contemplating installing new infrastructure, the issue is the cost-effectiveness of fiber compared with other technology alternatives available to them, and whether fiber offers them sufficient advantage in the marketplace.

Powerline

The pervasiveness of powerlines has led to consideration of using them to provide broadband connectivity to the home and within the home, with speeds of 20 Mbps and 1 Mbps, respectively, typically envisioned. Several experiments have been conducted,9 and proposals have also been made to develop both national and international standards for powerline communications technology. There has been less of a push to use powerline connectivity in the United States, in part because the U.S. power distribution system, in which each secondary transformer serves only a few households (on the order of 5), makes the per-subscriber capital costs much higher. In contrast, this ratio is on the order of 50 in Europe, reflecting the higher voltages and lower currents in the European distribution systems; this difference has tempered the continuing interest in this technology on the part of U.S. companies such as Nortel and Intel. From an economic viewpoint, powerline communications for the last mile competes against the well-established multimegabit-per-second wired and wireless options described in this chapter.

9  

One example of recent explorations is a 1999 pilot test by the German company VEBA (now part of e.on), which demonstrated a 2-Mbps per customer result in a trial involving eight households. Results were found to be good enough to suggest more extensive testing and plans for commercialization (involving AVACON A.G., a regional utility). This service uses a device attached at the meter that in turn provides connectivity at each power outlet in the household, providing Internet data and telephone and other value-added services.


In addition to questions about the cost-effectiveness of powerline data transmissions, there is an overarching and long-standing concern about interference from powerline communications to wireless applications, including amateur radio, home stereo, and emergency broadcast services. The United Kingdom, for example, has discouraged powerline communications for this specific reason. Powerline communications will not experience widespread deployment until questions about acceptable operating frequencies and interference thresholds are resolved. For in-home networking, powerline technology has to compete with more mature alternatives—802.11b wireless (11 Mbps today, with aggregate speeds up to 100 Mbps possible); Ethernet (commonly 10 or 100 Mbps, but capable of speeds up to 1 Gbps); and phone line networking (10 Mbps). These are difficult figures for the powerline medium to match, even before considering the cost of deploying it. Intel backed out of the HomePlug system for home distribution, partly because of an underwhelming nominal aggregate speed of 14 Mbps but mainly because of the potential interference issues mentioned earlier. In short, powerline communications may yet play some role (last mile or in-home), but it is too immature compared with alternatives to characterize its importance or impact, absolute or relative, as a broadband technology.10
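A rough division illustrates why the homes-per-transformer ratio noted above matters so much. In the sketch below, the household counts (about 5 in the United States, about 50 in Europe) come from the text, but the dollar figure for transformer-side coupling equipment is a hypothetical placeholder used only to show the ratio effect.

    # Per-subscriber share of powerline access equipment installed at the
    # secondary transformer. Household counts are from the text; the equipment
    # cost is a hypothetical placeholder, not an estimate from the committee.

    EQUIPMENT_COST_PER_TRANSFORMER = 2_000   # hypothetical, dollars

    for region, homes_per_transformer in (("United States", 5), ("Europe", 50)):
        share = EQUIPMENT_COST_PER_TRANSFORMER / homes_per_transformer
        print(f"{region}: about ${share:,.0f} of transformer equipment per subscriber")

    # With roughly 10 times fewer homes per transformer, the U.S. per-subscriber
    # capital burden is roughly 10 times higher for the same equipment cost.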

Wireline Roadmap

How can wireline providers offer greater bandwidth in the future? Both DSL and cable modem technologies have demonstrated that they can work in mass deployment and as a business proposition for providers. The existence of standards and interoperation among equipment from different vendors is a signal of technology that is mature in the marketplace. The cable industry has a roadmap for performance improvement that does not depend on substantial technical innovation, but only on the business decisions to deploy upgrades that have already been tested in the field. Similarly, the DSL industry has a roadmap for performance improvements that depends on redesign of the access network to install remote electronics in order to shorten the length of the copper pairs. In both cases, the technologies are relatively mature, so the rate of actual—as opposed to potential—performance improvement will depend mainly on the costs of upgrade, the depreciation cycle of investment, and competition from other providers.

10  

For more on powerline communications technology, see David Essex, 2000, “Are Powerline Nets Finally Ready?” MIT Technology Review, June 21, available online at <http://www.technologyreview.com/web/essex/essex062101.asp> and John Borland, 2001, “Power Lines Stumble to Market,” CNET News.com, March 28, available online at <http://news.cnet.com/news/0-1004-200-5337770.html?tag=tp_pr>.



Improvements to DSL performance will run up against the crosstalk interference problem. The current ANSI standard for DSL spectrum management falls short of addressing the long-term challenge. The problems will become much more significant as the penetration of DSL grows and as the higher data rates contemplated in the DSL upgrade path—which are more sensitive to interference—begin to be widely implemented. The concern, looking forward, is that spectrum management problems will complicate and curtail the installation and progress of DSL if the current line-level unbundling regime is maintained. On the horizon are methods for controlling power levels and bandwidth in ways that mitigate the effects of crosstalk. These include coordination of spectrum use within a carrier’s DSLAM (this does not, of course, address intercarrier crosstalk) and advanced signal processing technology that partially compensates for crosstalk.

While there are many unresolved questions about how one would actually implement such a process—especially given the contentious relationships among incumbent and competitive carriers—further aggregate performance improvements could be gained through some sort of systemwide coordination of spectrum use. Indications are that with appropriate coordination, symmetric data rates at least 3 times faster than the fastest asymmetric DSL data rates available today would be possible as fiber moves closer to the home. There are other possible advantages. With coordination, the DSLAM and modem equipment could be less complex (and thus less costly), and coordination would permit dynamic partitioning of bandwidth to users on demand that exceeds the factor of 3 indicated above. All of this presumes, of course, some change in the rules of the game. Making improvements in this area will require new regulatory approaches (e.g., how and whether to unbundle), new management strategies, and new technology.

While both HFC and DSL share the same general feature—an intrinsic limit to the data rate of the nonfiber portion of their networks—the limit is much higher for coax, which offers the cable industry more options for incremental investment to obtain incremental performance improvements. Companies providing data over cable have upgrade roadmaps that illustrate the cost and performance benefits of various options. In rough terms, the HFC infrastructure is capable of offering the consumer a factor-of-10 improvement over the next 5 years—by decreasing the number of homes in each cluster and/or increasing the capacity allocated to data services—at relatively low incremental cost. The total capacity of a coaxial cable segment, including both the entertainment TV and data segments, is several gigabits per second.


Beyond this point, the potential of HFC to scale is not clear. The incremental deployment of fiber in the HFC infrastructure suggests that the long-term trend would be to replace the coaxial link to the home with fiber if performance gains in the range of 100 times current broadband were required. From today’s vantage point, such a step would be accompanied by the costs and complications associated with deploying fiber to the home.
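The factor-of-10 arithmetic above can be sketched as follows. The “several gigabits per second” segment capacity comes from the text (3 Gbps is assumed here); the homes-per-cluster counts and the share of capacity devoted to data are illustrative assumptions, not industry figures.

    # Illustrative per-subscriber HFC data capacity as clusters shrink and more
    # of the coax segment is allocated to data. Segment capacity of "several
    # Gbps" is from the text (3 Gbps assumed); the scenarios are assumptions.

    SEGMENT_CAPACITY_MBPS = 3_000   # assumed 3 Gbps coaxial segment capacity

    scenarios = [
        ("large cluster",    500, 0.05),   # homes per cluster, share of capacity for data
        ("after node split", 125, 0.10),
        ("deeper fiber",      50, 0.20),
    ]

    for label, homes, data_share in scenarios:
        per_home_mbps = SEGMENT_CAPACITY_MBPS * data_share / homes
        print(f"{label}: about {per_home_mbps:.1f} Mbps per home if all share equally")

    # Shrinking the cluster and reallocating capacity together yield roughly a
    # factor-of-10 (or more) improvement without replacing the coax itself.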

While one can confidently predict that fiber will increasingly be found deeper and deeper within access networks, and can foresee that fiber will reach an increasing number of households, it is difficult to predict how fast this will happen. Fiber-to-the-home has labor costs that are not likely to yield fully to technical innovation, but the option of technical relief of these costs is very appealing and should justify research in support of creative proposals.11 The other major cost component is in optical components—for example, lasers and modulators. Right now, there are trends toward both higher performance (seen in wide area fiber optic networks) and lower cost. The industry speculation is that the costs of lasers for consumer premises devices can come down markedly when the volume demand is demonstrated for the specific elements. The presence of very cheap lasers in CD players and the falling cost of lasers in local area networks (e.g., gigabit Ethernet) illustrate at least the potential for inexpensive components.

A significant shift in the costs of fiber would probably require significant architectural innovation, not just improving the individual technology components of present systems. This sort of systems research works to find new ways of combining components into more cost-effective and flexible access systems. PON is an example of an architectural idea introduced in the past that had the effect of significantly reducing costs while offering other deployment advantages (it requires no active electronics and no power supply between the central office or head end and the customer premises). Further innovation is possible, and there are several fiber metropolitan area network companies claiming that they have a better architecture overall based on shared media access, optical switching, IP over SONET, or other innovations. The possibility remains that a sufficiently low cost solution will emerge from this sort of work to make fiber viable to the residence in the short-to-medium term.

It seems quite likely that within the next 5 to 10 years there will be significant FTTH deployment beyond initial field trials. Fiber is also likely to become an important technology for new installation and major upgrade deployments. Whether the amount of fiber deployed will represent a significant fraction of the installed base during this period is unclear.

11  

Efforts in this direction include systems that install fiber in existing sewer pipes.


It will depend on many factors, both technical and economic. Fiber will come to each part of the network when a combination of economics, demand, and capabilities versus alternatives justifies it. The market will continue to test whether or not that time has come, and will continue to push the capabilities of other technologies as far as economically practical as well. Finally, it is worth noting that some caution is in order when making predictions on this subject. Some 15 years ago, there were claims by both infrastructure operators and fiber vendors that FTTH was coming soon, but high costs, uncertain demand, and other factors meant that these forecasts did not pan out (though green-field situations are especially attractive on a total life-cycle cost basis).

Wireless Options

There are actually many different systems that make use of wireless communication; they are divided here into fixed terrestrial wireless, mobile wireless, fixed satellite service, and wireless local area networking. Fixed wireless service is being readied for direct competition with DSL and cable in major markets, while third-generation mobile and wireless local area networking alternatives aim to deliver services to mobile professionals. Over time, these seemingly disparate market segments are likely to overlap and converge, as portable computing devices and hybrid personal digital assistant (PDA) cell-phone-type devices proliferate further. In the long run, broadband wireless access may be expected to migrate toward the distinct task of supporting connectivity to the growing proportion of portable end-user devices. The relative roles of these wireless options will differ depending on market-demand factors, availability of capital, competitive strategies, and regulatory issues. The focus in this discussion, however, is on shorter-term prospects for broadband residential access, which is generally construed to be a fixed service.

Fixed Terrestrial Wireless

In contrast to mobile services, fixed wireless services provide connectivity from a base station to a stationary point, such as a home.12 Per-passing costs are more favorable, especially because the cell size can be made large initially and then decreased as subscription rates increase. As a result, fixed wireless will be an attractive option for providers that do not own last mile infrastructure in a desired service area but want to become facilities-based competitors.

12  

Connectivity may be either to a single gateway within the home (which in turn is connected through a home network to computers within the home) or directly to individual computers within the home. (As home networks become more commonplace, some of which themselves use short-range, low-cost wireless links, the former will likely dominate.)


First-generation, proprietary systems providing data rates of about 1 to 10 Mbps are commercially available. These make use of spectrum set aside for and thus referred to as local multipoint distribution service (LMDS) and multichannel multipoint distribution service (MMDS). The LMDS spectrum, located above 20 GHz, is allocated for point-to-point voice, data, or video transmission. MMDS, which uses spectrum in the 2.1- and 2.5- to 2.7-GHz bands, was traditionally used to provide so-called wireless cable video services, especially educational/instructional programming; but a rule change by the FCC in 1998 opened the door to two-way data service delivery over MMDS frequencies, and the channels have been made available to wireless providers for broadband services.13 LMDS, which offers very high data rates but has more limited range and requires more expensive equipment, is used primarily for high-speed business services. The longer range and lower frequencies of MMDS reduce both infrastructure and customer terminal costs, making it suitable for competing with DSL and cable in the residential market.

Several operators (including Sprint and WorldCom) have been deploying first-generation broadband fixed wireless networks (using MMDS spectrum) with the objective of providing Internet access with speeds of roughly 1 Mbps to homes and small businesses. In these systems, each antenna serves a large service area, and line of sight between the antenna and receiver is required. Coverage in these systems is roughly 50 to 60 percent of potential subscribers, with the exact figure depending on the topography and foliage density. Customer premises equipment costs are in the neighborhood of $500 to $1,000. The total cost of a base transceiver station is roughly $500,000. Assuming typical coverage over about an 8- to 10-mile radius, the per-passing cost is roughly $2,000 per square mile. The actual range that can be achieved will differ significantly depending on the topography, presence of buildings and trees, and so forth. The area and number of customers served by a base station (and thus the cost per subscriber) depend on signal range, desired bandwidth per customer, and channel capacity.
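As a quick check on the per-square-mile figure, the base station cost can be divided by the area a single cell covers. Both the $500,000 cost and the 8- to 10-mile radius are taken from the text; the calculation below is only a back-of-the-envelope confirmation.

    # Back-of-the-envelope check: base station cost divided by covered area.
    # The $500,000 cost and the 8- to 10-mile radius are figures from the text.
    import math

    BASE_STATION_COST = 500_000   # dollars

    for radius_miles in (8, 10):
        area_sq_miles = math.pi * radius_miles ** 2
        cost_per_sq_mile = BASE_STATION_COST / area_sq_miles
        print(f"radius {radius_miles} mi: about ${cost_per_sq_mile:,.0f} per square mile")

    # The two cases (roughly $2,500 and $1,600) bracket the "roughly $2,000 per
    # square mile" figure quoted in the text.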

As of early 2001, service providers are testing second-generation products that use smaller cells to increase system capacity and enhanced signal processing to enable non-line-of-sight service.

13  

In response to a proposal submitted by participants in the old wireless cable industry, the FCC amended the rules to permit licensees to provide high-speed, two-way services, such as high-speed Internet access, to a variety of users. With wireless cable distribution of video entertainment programming proving a nonstarter, the commission concluded that two-way wireless could produce a continuing stream of leased channel revenues for the educational licensees (viable competition for hardwire cable was also a consideration).


These products are expected to bring the coverage rate up to about 80 to 90 percent. The per-passing cost will be considerably higher because the cell sizes are smaller (3 to 5 miles), but the systems will have considerably higher overall capacity and coverage. Standards for second-generation LMDS and MMDS have been initiated in standards bodies such as IEEE 802.16. The technology of choice for mass-market MMDS is being explored in standards bodies and the marketplace, but it is likely that a variation of orthogonal frequency division multiplexing (OFDM) will be adopted at the physical level. At this point, deployment appears to be gated more by the availability of investment capital and the initial cost and performance of the technology than by the lack of standards, but agreement on a standard would permit component vendors to drive prices down further and, in turn, could prompt more investment.

Other technologies (e.g., wideband code-division multiple access [CDMA]-derivative radios operating at several megabits per second, ultrawideband radio, and free space laser beams) are also under consideration for fixed or semimobile high-speed Internet access. As of 2001, there are several venture-funded companies (e.g., Iospan Wireless, BeamReach, and IPWireless using radio frequency transmissions, and Terabeam using free space laser transmissions) developing broadband wireless Internet access technologies, and some of these activities may lead to significantly improved cost and performance for fixed wireless.

Another alternative for broadband wireless is to extend technologies developed for wireless local area networks, which make use of low-power transmitters in unlicensed frequency bands. These have improved substantially in the past few years, with the mass-market IEEE 802.11b standard supporting speeds up to 11 Mbps in the 2.4-GHz band. Although the coverage area for wireless LANs is limited to small areas (microcells with a typical radius of less than several hundred feet, though favorable topography and directional antennas can extend this range), rapidly improving cost-performance makes it a viable option for public services in locations such as airports, shopping centers, rural communities, and dense urban areas. Future 802.11 and European Telecommunications Standards Institute (ETSI) Hiperlan II standards, which are still under development, are intended for use in the unlicensed 5-GHz band to provide speeds of roughly 50 Mbps.

There has been a dramatic surge of interest in 802.11b wireless local area network (WLAN) deployment (by individuals, community networking activists, and corporations) during the period of this committee’s work. Much of this investment is driven by the fact that WLANs can be readily deployed at a grass-roots level with modest investment: a few hundred dollars for a home, increasing to a few thousand dollars for a small office building or campus. This investment provides multimegabit (nominally 11 Mbps for recent 802.11b equipment) access capability to both fixed and mobile computing devices within the coverage area, as long as there is an appropriate broadband backhaul service such as Ethernet, HFC, or DSL to connect the access point. Depending on the number of users sharing the WLAN infrastructure, the costs per user can be relatively low (hundreds of dollars, as compared with thousands of dollars for other forms of broadband access), so long as average traffic contributed by each user is not too high. This favorable deployment cost model, along with the strategic advantage of being able to handle both fixed and portable devices with the same access network, seems to be driving a great deal of commercial interest. Most of the activity is of a grass-roots nature—building, store, and mall operators, individual homeowners, and community networking activists are installing their own WLANs that over time could provide fairly ubiquitous, though not uniform, coverage in populated areas, in a fashion reminiscent of the early deployment of networking within communities and educational institutions. Note that the use of WLAN for “last 100 meters” access does not eliminate the need for broadband wired access such as HFC, cable, or fiber, which is still needed for backhaul of traffic. WLAN may help the overall economics of each of these wired solutions by increasing the end-user’s utility and facilitating sharing of the wired link among multiple devices and/or subscribers. This also applies to rural areas where a single T1 connection along with WLAN access might be more affordable than DSL or cable service to each home, depending, of course, on the density of the population cluster.
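The sharing economics sketched above can be illustrated with a small calculation. In the sketch below, the access-point cost, backhaul charge, user count, and usable fraction of the nominal air rate are hypothetical assumptions, not figures from this report.

```python
def wlan_sharing(ap_and_install_cost=300.0,   # hypothetical one-time cost of an 802.11b access point
                 backhaul_monthly=50.0,       # hypothetical DSL or cable backhaul charge per month
                 months=36,                   # amortization period for the access point
                 users=10,                    # hypothetical number of subscribers sharing the cell
                 nominal_mbps=11.0,           # nominal 802.11b air rate
                 mac_efficiency=0.5):         # rough share of the air rate usable as throughput
    monthly_cost_per_user = (ap_and_install_cost / months + backhaul_monthly) / users
    per_user_mbps_if_all_active = nominal_mbps * mac_efficiency / users
    return monthly_cost_per_user, per_user_mbps_if_all_active

cost, rate = wlan_sharing()
print(f"~${cost:.2f} per user per month, ~{rate:.2f} Mbps each if all users are active at once")
# Per-user cost falls as more subscribers share the cell, but so does each user's
# worst-case share of the wireless channel and of the wired backhaul link.
```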

Scalable deployment of public unlicensed band services poses additional challenges, such as improvements in spectrum etiquette to prevent destructive interference among multiple operators. The year 2001 also saw reports of breaches in the default 802.11b security technology that will require attention. Looking to the long term, one can anticipate that access could be provided by a heterogeneous mix that combines short-range wireless access points (using technologies such as 802.11, Bluetooth, and new higher-speed solutions) with the more traditional DSL/cable/fiber/fixed wireless solutions.14 This scenario becomes of particular interest if a large base of users comes to value mobile devices.

Mobile Wireless

While fixed wireless is an important near-term broadband access alternative, it is generally agreed that over time, wireless technologies and the spectrum associated with them will be aimed increasingly at communications for next-generation portable and mobile communication and computing devices rather than at fixed broadband. Users are likely not only to seek mobile service but also to look for broadband services that work in a more seamless fashion between home and mobile environments. As market demand and performance expectations grow, fixed wireless will be at a performance (or cost-to-performance ratio) disadvantage compared with wireline alternatives in most areas. At the same time, providers will likely find it more profitable to use spectrum for the mobile market, which wireline cannot serve. (This presumes regulatory changes in the licensing rules for that spectrum.) As a result, fixed wireless will have a long-term niche only in areas of low to medium population density, where wireline options will remain costly and the bandwidth feasible with fixed wireless is sufficient to meet demand.

14

See, for example, David Leeper, “A Long-term View of Short-Range Wireless,” IEEE Computer, June 2001, pp. 39-44.

In the mobile arena, solutions for third-generation (3G) digital cellular systems based on wideband CDMA have been standardized at the ITU, and early deployments are expected in 2001-2002, particularly in Japan and Europe. Deployment is expensive, requiring that the provider install new infrastructure and that the consumer purchase new phones or other receiver equipment. The 3G standard, which provides a theoretical 2-Mbps user bit-rate, is in practice limited to medium bit-rate services up to hundreds of kilobits per second due to both system capacity constraints and realistic wireless channel properties. Thus, 3G mobile, while a major step forward from current digital cellular systems, is unlikely to meet the needs of the full range of broadband access requirements that might be expected in the mobile services arena over the next 5 to 10 years. However, given that 3G chipsets will be available in the mass market within 1 to 2 years, there are efforts underway to leverage its wideband CDMA core technology to provide several-megabit fixed wireless access as well. There are also interim “2.5G” solutions, going by the names EDGE, GPRS, and HDR, which provide packet data services at moderate bit-rates (~10 to 100 kbps per user) using available “2G” digital cellular infrastructure.

Although the speeds of 3G represent a significant improvement over second-generation digital cellular in terms of peak bit-rate, 3G services appear likely to fall short of consumer expectations for broadband when they reach the marketplace. Despite the hype, and notwithstanding their usefulness for certain applications, 3G services may turn out not to meet either the capacity or performance needs of truly scalable mass-market services that deliver several megabits to each mobile device. This suggests that attention will continue to be devoted to developing broadband mobile technology. One interesting possibility is that derivatives of WLAN technologies—802.11, Hiperlan, or new standards—will be able to supply high bandwidth more effectively, so long as additional features to support mobility, user registration, and the like are gradually added in. In contrast to the 3G model, in which large carriers are using government-allocated spectrum, the WLAN scenario is a bottom-up, small-operator approach that leverages unlicensed spectrum. There is the potential for rapid growth owing to the lower capital investment requirements; the ability to target service to urban areas, airports, and the like and to expand as needed; and the absence of spectrum licensing costs.

Satellite

Broadband local access via satellite provides another wireless alternative. Satellite services have been available for many years, based on geosynchronous Earth orbit (GEO) satellites. These satellites have been used for telephone communications, television distribution, and various military applications. Satellite access clearly has significant advantages in terms of rapid deployment (once a satellite is launched) and national coverage, but it has cost, performance, and system capacity limitations, particularly for uplink traffic. The utility of satellite’s broadcast capabilities (one-way broadband) has already been seen in digital video via satellite (e.g., direct broadcast satellite [DBS]), which became pervasive in the 1990s. This has also been leveraged to deliver a mixed-technology Internet access service (e.g., DirecPC) with satellite downlink and dial modem uplink. A bidirectional service, in which both the uplink and the downlink use the satellite, requires solving significant technical problems while keeping costs low enough to be attractive to consumers. An example of such a service is the recently introduced Starband service, which promises a peak rate of 500 kbps downstream and 150 kbps upstream, using a 2- by 3-foot antenna, at a current price of $70 per month plus $400 initial investment (figures that may change as the market grows).

While their coverage is very broad, GEO satellite systems possess a number of limitations. Power constraints and dish size (which is limited to a roughly 2-ft diameter for mass-market installation) limit the downlink transmission from a GEO satellite to about 100 to 200 Mbps today (systems under development for launch in the 2003 time frame are being designed to offer at least 400 Mbps downstream). Statistical multiplexing effects permit this capacity to be shared over more users than is suggested by simply dividing this number by the peak load per user, but the total number of customers that can be served per satellite is nonetheless limited. While other performance characteristics of satellites have increased significantly (along Moore’s law-like curves), the efficiency of power panels on satellites has not increased substantially over recent decades. New frequency bands can also be used to increase system capacity given a finite number of orbital spots. Spot beams and on-board switching would provide roughly a factor-of-10 improvement in capacity at the expense of reduced geographic coverage and a heavier and costlier payload. There have been several earlier attempts to develop commercial satellites of this sort, but each encountered various technical and cost problems.
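A rough calculation shows why the per-satellite subscriber base is bounded. In the sketch below, the per-user peak rate matches the downstream service mentioned above, while the oversubscription (statistical multiplexing) ratio is a hypothetical assumption, not a figure from this report.

```python
def geo_subscriber_estimate(downlink_mbps=150.0,      # midpoint of the 100 to 200 Mbps range above
                            peak_rate_mbps=0.5,       # e.g., a 500-kbps downstream service
                            oversubscription=20.0):   # hypothetical statistical-multiplexing factor
    # Without statistical multiplexing, only capacity / peak-rate users could be active at once.
    simultaneous_users = downlink_mbps / peak_rate_mbps
    # Because subscribers are rarely all active at the same time, more of them can be signed up.
    return int(simultaneous_users * oversubscription)

print(geo_subscriber_estimate())   # ~6,000 subscribers per satellite under these assumptions
# Even generous oversubscription leaves the per-satellite subscriber count orders of magnitude
# below the tens of millions of households a national wireline footprint can reach.
```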

Even though GEO satellites may have a limited total capacity for broadband on a national scale, they may occupy a long-term market niche if the demand is restricted to a small, bounded set of (mostly rural) subscribers. In this respect, GEO systems clearly illustrate that the broadband market will be served by a range of technology options, not a single technology winner.

GEO satellite systems also have a high round-trip transmission delay. The propagation time (speed of light) up to the satellite and back is about 250 milliseconds (ms), so the round-trip delay is 500 ms. This compares with a terrestrial cross-country round-trip delay over a fiber link of between 75 and 100 ms. This has caused some to conclude that GEO satellites are useless for data purposes. In fact, whether this delay matters depends on the application being used. For Web access the delay may be noticeable, but it does not seriously degrade the experience so long as the end-node software is properly set up. For other applications, the satellite delay is a more serious issue. For Internet telephony, the long delays cause a real degradation in usability, since there are well-known human-factors issues that arise when the round-trip delay in a conversation approaches 200 ms.
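These delay figures follow directly from geometry, as the minimal sketch below illustrates (it assumes a subscriber roughly beneath the satellite and ignores coding, queuing, and switching delays).

```python
SPEED_OF_LIGHT_KM_S = 299_792          # propagation speed in free space
GEO_ALTITUDE_KM = 35_786               # geostationary orbit altitude

# One direction of communication traverses the uplink and the downlink (up and back down).
one_way_ms = 2 * GEO_ALTITUDE_KM / SPEED_OF_LIGHT_KM_S * 1000
round_trip_ms = 2 * one_way_ms         # request up/down plus response up/down

print(f"one-way ~{one_way_ms:.0f} ms, round trip ~{round_trip_ms:.0f} ms")
# Roughly 240 ms one way and close to 500 ms round trip, versus the 75 to 100 ms
# cross-country fiber round trip cited above; the gap matters most for conversational
# applications such as Internet telephony.
```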

An alternative that has received a great deal of attention over the last several years is to use low Earth orbit (LEO) satellites, which in contrast to GEO satellites do not occupy a constant position in an assigned orbital slot, and to rely on multiple satellites to provide coverage. LEO satellite proponents claim that power limitations are less serious than those with GEO satellites, though both types of system are constrained by power considerations. LEO satellite technology, while challenging, can be fielded, as the pioneering Iridium and Globalstar deployments have demonstrated. LEO satellite deployment for broadband data services requires the solution of additional difficult technical problems, such as antennas that can track a moving satellite at a price point suited for a consumer.

However, the feasibility of LEO satellites for mass-market broadband access is constrained more by economic considerations than by the technology challenges. A LEO satellite system requires the launching of many satellites, because in their low Earth orbit, the satellites are in rapid motion overhead, and there must be enough of them that one is always in range. This means that the system has a very high initial cost to build and launch, which in turn implies that there must be a significant user pool to justify the investment. (Even if the per-passing cost is low, there must be a sufficient total customer base.) Satellite-based solutions must also compete with terrestrial networks as they expand to reach more customers.15

Wireless Roadmap

While wireless systems come in multiple flavors, some high-level observations are broadly applicable. First, a variety of technology options are possible depending on the planned customer density. Systems that have very few towers and more expensive residential equipment have lower per-passing costs and higher per-subscriber costs, which may provide a more favorable business case for initial deployment. Installing new antennas on existing cellular towers allows the leveraging of past investment in towers and related site costs. Thus, for wireless there is at least to some extent a path that provides incremental performance improvement for incremental cost. Because little or no wireline infrastructure needs to be installed, deployment can proceed comparatively rapidly, but the capacity for adding more customers is limited.

System capacity is limited by the amount of radio spectrum available, which depends on the total amount of radio spectrum suitable (in terms of its propagation characteristics) for broadband, the fraction of that spectrum that is allocated through government spectrum-licensing policies to broadband services, and the extent to which clever system design can increase the performance obtainable from a given amount of bandwidth.

One option for increasing performance is to make cell size even smaller, so that the same frequencies can be reused in more locations (a strategy called spatial reuse). The need to deploy more transceivers (and install wireline to connect them to the provider’s network) makes these improvements costly, so that systems with very small cell sizes have costs, dominated by labor, that start to approach those of wireline systems.
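The trade-off between spatial reuse and deployment cost can be sketched numerically. In the illustration below, the service area, per-cell capacity, and per-site cost are hypothetical values chosen only to show how the quantities scale with cell radius; none of them come from this report.

```python
import math

def spatial_reuse(service_area_sq_miles=300.0,   # hypothetical market to be covered
                  cell_radius_miles=2.0,         # smaller radius means more cells and more reuse
                  capacity_per_cell_mbps=20.0,   # hypothetical capacity per cell in the licensed band
                  cost_per_site=150_000.0):      # hypothetical transceiver, backhaul, and site cost
    cells = math.ceil(service_area_sq_miles / (math.pi * cell_radius_miles ** 2))
    total_capacity = cells * capacity_per_cell_mbps   # the same spectrum is reused in every cell
    return cells, total_capacity, cells * cost_per_site

for radius in (5.0, 2.0, 1.0):
    cells, capacity, cost = spatial_reuse(cell_radius_miles=radius)
    print(f"radius {radius} mi: {cells} cells, {capacity:.0f} Mbps aggregate, ${cost:,.0f}")
# Halving the cell radius roughly quadruples both the aggregate capacity and the number of
# sites (and backhaul connections) that must be built and paid for.
```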

The relative immaturity of wireless broadband technologies compared with wireline alternatives gives reason to believe that innovative research will yield significant improvements in performance. Even so, the total amount of spectrum that is suitable and allocated for broadband services remains limited. Moore’s law decreases in the cost of processing power that can be inserted into broadband transceivers permit use of more complex coding schemes, which make higher data rates affordable. Some of the next-generation technologies under active consideration make use of increased signal processing to permit non-line-of-sight operation, which is important to increasing both the number of viable markets and the fraction of customers that can be served. More processing power may also permit more of the base-station functions to be implemented in software, which would permit new services to be introduced without a hardware upgrade. Another option is to exploit new wireless innovations such as space-time processing techniques with multiple antennas (so-called multiple-in, multiple-out, or MIMO, approaches) or smart antenna beam steering approaches, which hold the promise of roughly a factor-of-10 improvement in per-user throughput. These approaches rely heavily on improvements in the processing power of application-specific integrated circuits (ASICs) or digital signal processors (DSPs) that enable the requisite signal processing to be performed in real time in affordable hardware.

15

This economic challenge has been seen in the case of satellite voice services, where terrestrial cellular voice service, which is much cheaper and requires much smaller handsets, was deployed on a more widespread basis than was contemplated when the initial Iridium business plans were formulated. If the terrestrial broadband services discussed above are deployed over enough of the world during the time it takes to design and launch a LEO satellite broadband service, the pool of underserved users with the wealth to purchase this new satellite service may be too small to recover the high up-front cost.

Even so, the wireless roadmap does not seem to offer performance improvements of the same magnitude as are possible with wireline. Even in the best case, wireless cannot match the ultimate transmission capacity of wire or fiber—though it could surpass the performance of today’s deployed DSL or cable systems. And in contrast to wireline, wireless does not have as clear a roadmap regarding the 5-year potential of further research. Wireless will, however, be an important technology option in view of several potential advantages, which include rapid deployment, lower initial capital investment, and the ability to serve portable or mobile devices.

Over time, as wireline facilities are built out, it is quite possible that the wireless spectrum used by fixed service will, for the most part, be shifted to mobile uses. In this scenario, fixed wireless would shift from playing a role as a facilities-based alternative even in densely populated areas to one where it provides niche service in lower-density and remote areas, and in newly developed areas where it complements wireline solutions.

Note that even though today’s 3G technology may be immature, it is relatively safe to predict that there will be a growing demand for broadband wireless service to portable and mobile devices. This is because of fundamental trends toward smaller personal computing and communication devices (e.g., laptops, PDAs, and cellphones) that in the long term are likely to account for the majority of end-user devices, in contrast to the fast-growing minority that they represent today. Once tetherless computing devices become ubiquitous, today’s PC-centric broadband access network (HFC, DSL, and so on) will have to evolve toward hybrid wired and wireless networks in which the “last mile,” “last 100 m,” or even “last 10 m” are mostly wireless. The cost and performance of broadband wireless access networks will thus be crucial to the user’s overall experience.

In that context, it is worth noting that there are significant challenges associated with delivering true broadband services, as defined in this report, to mobile devices. For example, a 5-MHz chunk of 3G spectrum can support only about 10 simultaneous broadband users per cell, a number that has to increase by orders of magnitude to make the service viable beyond narrowband uses. Research and development (R&D) challenges faced by developers of “4G” wireless standards include higher speeds (on the order of 1 to 10 Mbps), maintenance of service quality under mobile fading conditions, integration of mobile and fixed network architectures, and greater spectral efficiency, capacity, and scalability. There has been recent interest in mobile Web access, media streaming applications aimed at portable devices, and the like, but consumer demand and the shape of the market are still evolving. Significant R&D investment will be needed to reach the scalability and cost and performance levels appropriate for ubiquitous mobile/portable broadband wireless deployment. Supportive FCC spectrum regulation policies that encourage efficient spectrum usage and easier access to new spectrum, rapid technology evolution, and market competition will also be needed to drive this important scenario forward.
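The sketch below reproduces the kind of back-of-the-envelope estimate that lies behind the figure of about 10 simultaneous broadband users per cell; the spectral efficiency and per-user rate are hypothetical assumptions, not numbers from this report.

```python
def users_per_cell(channel_mhz=5.0,            # one 3G carrier
                   bits_per_hz=0.4,            # hypothetical realistic cell spectral efficiency
                   per_user_kbps=200.0):       # hypothetical sustained rate per "broadband" user
    cell_capacity_kbps = channel_mhz * 1000 * bits_per_hz
    return cell_capacity_kbps / per_user_kbps

print(f"~{users_per_cell():.0f} simultaneous users per cell")   # ~10 under these assumptions
# Supporting orders of magnitude more users at megabit rates requires some combination of
# more spectrum, many more (smaller) cells, and better spectral efficiency.
```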

The Diverse Technology Landscape

The different technology options—including HFC, DSL, fiber, wireless, and satellite—differ in detail. Some have higher delay, some have lower overall bandwidth, some may have higher prices, and so on. Different technologies can be deployed to advantage in different circumstances. In dense urban and suburban areas, the present generation of wireline broadband—HFC and DSL—is being utilized successfully today. Fiber will be used in access networks wherever a combination of economics, demand, and capabilities (compared with alternatives, including the infrastructure already in place) justifies it. Fixed wireless is being used to support market entry by providers that do not own or have access to existing wireline assets. In less densely populated areas, fixed wireless may offer a longer-term solution for broadband access. Finally, in the most remote areas, a small percentage of the U.S. population may best be served by satellite, where the very high fixed cost of constructing and launching satellites is offset by the very low per-passing costs, given the enormous area that a satellite system can serve. One of the consequences one may have to accept for living in rural areas is that the available broadband service has some particular characteristics, such as higher delay and greater cost per unit of bandwidth. This may be an issue for certain applications, but one should look at this as just one consequence of technology diversity, not as a fatal flaw of one or another technology.

The market will continue to test whether or not the time has come to deploy fiber, and it will also continue to push the capabilities of other technologies as far as economically practical. In different parts of the nation (and the world), with different demographics and population distribution, these different technology options will play out in a different mix, but each will play a role in the diverse world of today.

LAYERING AND UNBUNDLING

This chapter devotes considerable attention above to the characteristics of different technology options for broadband access. But consumers do not normally care about the communications technology for its own sake; they care about the services that can be delivered over it—Internet applications (Web, audio, video), entertainment television, and so on. Hiding these details, by separating the underlying communications technologies from the applications and services accessible to end users, is accomplished through the engineering practice of layering. Communication systems are often designed (and described) in a layered fashion: that is, with a physical layer at the bottom that differs depending on the particular communications technology chosen; a top layer that represents the specific applications that users run over the network; and some intermediate layers that help organize the engineering of the overall system.

As a simple example, consider the problem of sharing the total capacity of a cable system among a number of users and applications. At the physical layer is a coaxial cable capable of carrying radio frequency signals. The capacity of the cable system is divided into a number of channels of 6-MHz bandwidth, each capable of carrying a TV channel or other information. At the layer above that, one or another form of content is assigned to each frequency. Most channels are used today to carry a single TV signal, but channels can also be used for the Internet or for telephone service. Also, using new digital representations, multiple TV channels can now be carried in a single 6-MHz channel.

The Internet’s design is layered so that it works over a wide range of communications technologies, including all of the wireline and wireless broadband technologies discussed above. Consider Internet transmission over a cable system. First, one or more physical channels are assigned to Internet transmission. Then, at the lowest layer of the Internet’s design, the data to be transmitted over an individual cable system channel are divided into a sequence of small messages called packets. Multiple users share a single channel by sending their own packets one after another. Finally, the packets used by each user are assigned to one or another “application,” such as Web access, e-mail, or streaming audio. So the layers of sharing for the Internet over cable are as follows: a physical cable, which is divided into channels, then divided into packets, which are each in turn assigned to a particular user and application.
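A short sketch can make this layered sharing concrete. It assumes, for illustration only, that one 6-MHz downstream channel carries on the order of 38 Mbps of packet data (a figure typical of digital cable modulation but not stated in this report) and that only a fraction of subscribers are sending or receiving packets at any instant.

```python
def per_user_share(channel_mbps=38.0,        # assumed packet capacity of one 6-MHz channel
                   subscribers=500,          # hypothetical homes whose packets share the channel
                   active_fraction=0.05):    # hypothetical share of subscribers busy at once
    active = max(1, int(subscribers * active_fraction))
    return channel_mbps / active             # rough throughput seen by each active user

print(f"~{per_user_share():.1f} Mbps per active user")   # ~1.5 Mbps under these assumptions
# The physical channel is fixed; what each application actually sees is determined higher in
# the stack, by how many users' packets share the channel at any instant.
```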

To the user of the Internet, the details of layering are largely irrelevant. But the detail does matter in the debate about unbundling. The term “unbundling” is used to describe the situation in which the owner of physical facilities is required to make some portion of that resource available to a competitor. Unbundling of the incumbent local telephone company facilities is required by the Telecommunications Act of 1996. There are two distinct ways of unbundling the local loop: physically and logically.

In the case of the copper infrastructure of the telephone company, one form of unbundling is physical, where an actual copper pair is assigned to a competitor. In this way, the competitor has direct access to the electronic signals being carried over the wire (or to the light carried over a fiber) and can adopt whatever transmission scheme it chooses. For use of the loop, the competitor pays the rate negotiated with the incumbent or, in the absence of a negotiated agreement, the rate established by regulators through arbitration, and in turn directly implements the service and bills the customer. Physical-layer unbundling requires that the competitor have the ability to colocate equipment and upstream connectivity at the network termination point of the loop (typically but not exclusively at the central office).

Physical-layer unbundling offers several potential advantages for the competitor. First, it provides the competitor the freedom to select the type of transmission technology it chooses to implement over the copper loop, independent of whatever decisions the incumbent may make, permitting the competitor to compete with the incumbent on the basis of a variety of attributes, including speed, quality, and maximum loop length. Second, in the case of a loop running from the central office to the subscriber, it is in some sense a well-defined, easily separable network element.

Physical-layer unbundling may also impair the ultimate performance of the copper plant. While it holds true for voice signals, the assumption that copper loops are fully separable is not correct for high-speed data transmission using DSL because of crosstalk among wires within the telephone plant. This means that ultimate performance and reach are hampered because corrective measures—such as coordinated assignment of copper pairs and coordination of transmitted signals among pairs—cannot be implemented if competitors are left free to implement the technology of their choosing.

Unbundling also raises new issues when applied to new facilities. As an initial matter, an unbundling obligation may deter an incumbent local exchange carrier (ILEC) from pushing fiber farther into the neighborhood. In addition, plant access on the network end of the loop today is generally at the central office. To improve the performance and reach of DSL service, it is natural to deploy fiber deeper into the telephone network. Then the copper pair terminates at a remote terminal, which may be a curbside pedestal or even a small box on a telephone pole. Issues raised in this context include these:

  • Unbundling at remote terminals is problematical because of space limitations and because the relatively small number of subscriber lines terminated at each remote terminal makes colocation and interconnection (linking the copper loop to the competitor’s network) more difficult to achieve here than was the case at the central office.

  • As fiber is pushed deeper into the network, the copper loops become shorter, each remote terminal serves fewer customers, and (if only the copper is unbundled) each provider would need to separately provision fiber to interconnect at the remote terminal. The fiber running to the terminals might also be unbundled, which would require some sort of time-division multiplexing (i.e., each provider has its own time slots) or wavelength division multiplexing (each provider has its own wavelengths) of the incumbent-owned fiber.

  • Continuation of physical-layer unbundling requirements complicates establishment of technology-neutral rules because unbundling rules must take into account the particular details of each new communications technology used by incumbents.

The other unbundling option is logical—above the physical layer. Higher-layer services concerned with transmitting bits are implemented in some fashion on top of protocols concerned with transmitting electrical signals across the wire, which means that they can be implemented independent of the particulars of the physical-layer connection used to provide the higher-level service. That is, a competitor need not control the actual signals running over the wires if it can implement its service using bit transport capabilities provided by the incumbent. With logical-layer unbundling, the incumbent specifies the customer-premises equipment and operates the termination equipment.

Logical-layer unbundling offers several advantages. Colocation requirements are confined to those necessary for the competitor to interconnect with the incumbent’s network. Another advantage to logical-layer unbundling is that it may be easier to verify service-level agreements between the incumbent and competing service provider, because data on logical-layer service (throughput, quality of service, and so on) can be compiled more readily. Finally, with logical-layer unbundling, one avoids much of the argument over such hard-to-measure issues as whether incumbent personnel “made life difficult” for CLEC employees while installing equipment or agreeing on a frequency plan.

A principal disadvantage of logical-layer unbundling for the competitor is that the performance characteristics of the link implemented by the incumbent may restrict the types of services the competitor can offer and limit the competitor’s ability to differentiate itself from the incumbent. While the motivation of incumbents is certainly a matter of speculation and debate, it is often suggested that one reason incumbents have favored the lower-speed, asymmetric DSL technology is that symmetrical high-speed DSL service for business customers could undercut profits on more expensive T1 data service. Incumbents might also select a transmission technology that accommodates the typical copper loop but which may not be optimal for subscribers with longer loops. A competitor restricted to logical-layer unbundling cannot provide a symmetrical service or otherwise compete with the incumbent by offering higher performance than the incumbent’s system permits.

The nature of the local access technology affects what unbundling options are viable. In the case of the cable infrastructure, one could propose that different frequencies could be allocated to different providers, or that different providers could be assigned a share of the packets being sent in a single frequency, and so on. In practice, allocation schemes have not proved workable, and cable open access is being implemented at the packet level. So the fact that there are different ways of sharing at different layers, and that different technologies have different layering structure, makes the debate about unbundling complex.

ECONOMICS OF INFRASTRUCTURE INVESTMENT

As in any other business, a broadband service provider’s revenue must, at least in the long term, be sufficient for it to be profitable (or at least to break even, in the case of a public sector enterprise). As the previous discussion suggests, different technologies have different cost structures that shape their attractiveness in different market segments. At the same time, uncertainty about demand for broadband, consumer willingness to pay, and the interaction of these factors with different business models shapes investment in broadband deployment.

Understanding Costs

Broadband deployment costs fall into two broad categories: fixed (or per-passing) costs, which are roughly independent of the number of subscribers, and variable (or per-subscriber) costs. Fixed costs include those of upgrading or installing wireline infrastructure within the neighborhood and installing or upgrading central office or head-end equipment. For wireless, the costs of acquiring wireless spectrum licenses are another per-passing cost. The most significant variable costs are the per-subscriber capital costs, including line cards, customer-premises equipment, and the costs of upgrading or installing connections to individual premises. Other variable costs include installation at the customer premises (which drives shifts to customer-installed solutions) and customer support and maintenance. Providing upstream connectivity involves both fixed costs, such as installation of regional or national transport links, and variable costs associated with provisioning regional and national connectivity to support the traffic load imposed by customers.

These costs are greatly shaped by density and dispersion. Where new wireline infrastructure is installed, more remote or sparsely populated areas will have significantly higher per-passing costs, reflecting per-mile construction costs that make investment riskier, and the lower per-passing costs of satellite or other wireless systems will be more attractive. Each particular circumstance will involve its own set of cost trade-offs, however. For instance, because installing remote terminal equipment imposes substantial costs, home-run fiber to the premises could turn out to be cheaper than a fiber-to-the-cabinet strategy in some rural cases.

Take-Rate Tyranny

Perhaps the most important implication of per-passing costs is the “take-rate tyranny” that dominates investment decisions. Because costs are dominated by the dollars-per-mile cost of installation, investment in wireline infrastructure has a cost structure in which most of the cost is determined by the number of houses passed, and a minority of the costs is determined by the number of subscribers. (Because they lend themselves to a strategy in which the cell size can be scaled to the take-rate, wireless systems can have an advantage, though the cost of spectrum must also be factored in.)

A very simplified cost model indicates the general shape of the financial dilemma facing those who invest in broadband infrastructure. If there are two providers instead of one—assuming no differentiation between the products, no first-mover advantage, and that costs are per-passing—the costs for each are unchanged but the revenues are halved. As a very rough example, if a provider makes an incremental investment in the distribution infrastructure that has a cost of $200 per passing and must recover this investment in 3 years, this is approximately $5 per month per passing. If the provider has the whole market and 50 percent of the homes are subscribers, this would imply that $10 per month of the per-subscriber payment would have to be allocated for return on this investment. If, however, there were two providers and each held half of the total 50 percent market share, then each provider would have to collect $20 per month from each subscriber as a return on this investment, in a market where the typical consumer payment is just $40 per month. More generally, when the market is split among multiple providers, some cost and revenue models for residential broadband become unprofitable. This also amplifies the advantages held by a provider that can make incremental upgrades to existing wireline infrastructure (the advantage depends on the cost of any required upgrades) over a de novo facilities-based competitor.
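The simplified model in the preceding paragraph can be written out directly; the sketch below restates the report's own arithmetic, with the number of competitors as the variable.

```python
def required_monthly_revenue(per_passing_cost=200.0,   # incremental investment per home passed
                             payback_months=36,        # 3-year recovery, ignoring cost of capital
                             overall_take_rate=0.5,    # share of homes passed that subscribe at all
                             competitors=1):           # providers splitting those subscribers evenly
    per_passing_per_month = per_passing_cost / payback_months
    subscribers_per_passing = overall_take_rate / competitors
    return per_passing_per_month / subscribers_per_passing

for n in (1, 2, 3):
    print(f"{n} provider(s): ~${required_monthly_revenue(competitors=n):.0f} "
          "per subscriber per month just to recover the per-passing investment")
# About $11, $22, and $33 per month, respectively (the text rounds the first two to $10 and
# $20): splitting a fixed-cost market quickly consumes a $40 monthly retail price.
```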

As a result of the take-rate impact on per-subscriber costs, the provider with the highest penetration rate may have a substantial cost advantage over its competitors if per-passing costs are significant. According to rough figures supplied to the committee,16 the present per-passing costs to install fiber-to-the-curb are as follows: $150 per passing if only voice is offered, an additional $150 per passing if data service is provided as well, plus $300 per passing to provide video. Fiber cable installation adds another $350 to $400 per passing if done aerially and $700 to $800 per home passed if buried. Given these figures, consider two providers serving a local market, each offering voice, data, and video. Suppose each passes all homes, that 50 percent of homes subscribe in aggregate, and that one provider, the incumbent, serves 60 percent of all broadband homes, while a more recent entrant serves 40 percent of broadband homes. Both providers use aerial installations that cost $400 per passing. Then per-subscriber costs for the incumbent would be $1,000/0.30 = $3,333 (plus subscriber-specific installation costs), while for the entrant per-subscriber costs would be $1000/0.20 = $5,000 (plus subscriber-specific installation costs). The entrant’s costs would be 50 percent greater than the incumbent’s. This type of relationship means that competition that truly drove prices to costs would eliminate all but one firm unless markets were evenly divided among competitors or competitors offered differentiated services that appealed to different subsets of subscribers. The risks for entrants inherent in this type of relationship are obvious, especially if subscriptions are at all sticky (e.g., where customer loyalty or switching costs are significant). Unless competitors can find ways to substantially differentiate their services, entry may well be risky and vigorous competition difficult to sustain.

16  

From Mark MacDonald at Marconi.
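The per-subscriber arithmetic above generalizes to any split of the market. The sketch below simply restates it, using the per-passing figures supplied to the committee.

```python
def per_subscriber_cost(per_passing_total=1000.0,   # $150 voice + $150 data + $300 video + $400 aerial fiber
                        overall_take_rate=0.50,     # share of homes passed subscribing to anyone
                        market_share=0.60):         # this provider's share of those subscribers
    penetration = overall_take_rate * market_share  # subscribers as a fraction of homes passed
    return per_passing_total / penetration

incumbent = per_subscriber_cost(market_share=0.60)   # ~$3,333
entrant = per_subscriber_cost(market_share=0.40)     # ~$5,000
print(f"incumbent ~${incumbent:,.0f}, entrant ~${entrant:,.0f}, "
      f"penalty {entrant / incumbent - 1:.0%}")       # the entrant pays ~50 percent more per subscriber
```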


Paying for Broadband

The economic challenge of building and upgrading broadband infrastructure has proved daunting. Cast simply in terms of consumer willingness to pay and ability to attract investment in terms of that demand, it may be difficult to sustain high growth in penetration, upgrades, and new facilities construction. But broadband involves more players than simply the consumer and the infrastructure owner, and there are various ways in which the costs could be allocated among the various players. Other industries that share costs include telephony, in which different types of customers (e.g., residential versus business) pay different prices; commercial broadcast radio and television, in which consumers are sold as audiences to advertisers; and newspaper publishing, in which the subscription price is only a fraction of the cost of production and distribution. Such arrangements have been instrumental in building other communications infrastructures. Complex arrangements among multiple parties are possible, as is seen in broadcasting, where both costs and revenue can be shared between broadcast networks and local affiliates. The critical role of content suggests that issues related to copyright protection of digital content will be intertwined with broadband for some time to come.

Because broadband is a service capable of supporting each of these types of services and many new ones as well, there are potentially many different options for cost sharing. Figure 4.5 depicts a cluster of other players that surround the consumer and broadband infrastructure builder. Notably, broadband subscribers generally are interested in a wide range of content and applications that are not provided directly by the broadband provider itself—today this is largely the universe of content and services available through the Web. These services have been supported through a combination of e-commerce, transaction and subscription charges, and advertising (both direct, in the form of banner ads and the like, and indirect, as when a Web site is used to draw the user into other media channels). One opportunity—and challenge—is to find ways of better aligning the economic interests of content or applications providers and infrastructure owners in order to share the costs of the access link with the end users. Another is to explore how government incentives or contributions from employers interested in various flavors of telecommuting or employee education could contribute to the overall investment required. New approaches to financing broadband include homebuilders that include fiber connections in the price of the home (and can then promote the homes as broadband-ready) and municipalities that provide mechanisms for amortizing the investment over a relatively long time period.


FIGURE 4.5 Paying for broadband.

Focus on the Consumer

The factors discussed in the previous section notwithstanding, the consumer is the pivot around which all of the economic issues swing. Without consumer demand and a (somewhat) predictable willingness to pay (or evidence that advertising will be a large source of revenue), there is no market. Evidence from early deployment demonstrates demand. The national average penetration (somewhat more than 8 percent as of summer 2001) reflects and masks an uneven pace of deployment. In localities where the service has been available for a reasonable time, cable industry reports on markets that have had cable modem service available for several years suggest considerable demand.17

Although the committee is not aware of definitive studies of consumer willingness to pay for broadband (and the notion proposed in the past, that consumer willingness to pay for entertainment and/or communications is a fixed percentage of income, is generally discounted by economists today), the general shape of the market for communications, entertainment content, and information technology is beginning to emerge. Over 50 percent of homes in America have some sort of PC, with prices that averaged near $2,000 in recent years, and which are now dropping below $1,000 for lower-end machines, illustrating that many consumers are willing to make a significant investment in computing hardware and software. In rough terms, a typical $1,200 home computer replaced after 4 years costs around $25 per month.

A majority of the homes that have PCs are going online and connecting to the Internet, and it is a reasonable projection that only a very small fraction of machines will remain offline in the coming years. For a household using the primary residence phone line and purchasing a somewhat more limited dial-up Internet service, the price approaches $10 per month (providers have also experimented with service and PCs that are provided free, so long as the consumer will allow advertisements to be displayed during network sessions, although recent reports from this market segment put in question the long-term viability of this approach). The entry price today for broadband is not dramatically different from that for high-end dial-up service. A separate phone line costs as much as $20 per month, and unlimited-usage dial-up Internet service generally runs $20 or more per month. Of course, the market offers a range of price and performance points from which the consumer can pick. At the high end, high-speed DSL can cost up to several hundred dollars per month, and business-oriented cable services are offered at a premium over the basic service.

The total consumer expenditure for such a computer plus basic broadband service is potentially as much as $90 per month, of which the Internet provider can expect to extract less than half. From this revenue base a business must be constructed. If 100 million homes were to purchase broadband service at $50 per month, this would result in total annual revenues to broadband Internet providers of more than $50 billion, which is similar in magnitude to current consumer expenditures on long-distance services.
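A back-of-the-envelope version of this household budget and the resulting industry revenue is sketched below; the split of the roughly $90 between broadband and other spending is an illustrative assumption, not a figure from this report.

```python
# Household budget: computer amortization plus connectivity.
pc_per_month = 1200 / 48                   # $1,200 computer replaced after 4 years => $25/month
broadband_per_month = 40.0                 # hypothetical broadband subscription
other_per_month = 25.0                     # hypothetical other spend (second line, content, etc.)
household_total = pc_per_month + broadband_per_month + other_per_month
print(f"household spend ~${household_total:.0f}/month, "
      f"provider share {broadband_per_month / household_total:.0%}")   # ~$90/month, under half

# Aggregate revenue under the scenario quoted in the text.
homes, price = 100_000_000, 50.0
print(f"100M homes at $50/month => ${homes * price * 12 / 1e9:.0f} billion per year")   # ~$60 billion
```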

17  

For example, information supplied to the committee by Time Warner Cable is that take-rates have reached 17.5 percent of subscribers in Boston, Massachusetts, and 25 percent of subscribers in Portland, Maine.


One question that the market has not yet explored is whether the consumer would make a significant capital investment, similar to the $1,000 to $2,000 that a computer costs today, as part of obtaining Internet service. For example, if there were a home-run system with fiber running to the residence (making it a relatively future-proof investment), but the consumer had to activate that fiber by purchasing the end-point equipment, would this be an attractive option if the equipment costs were comparable? Would residents be willing to finance the capital costs of installing that fiber in the first place? While there is no hard evidence, wealthier consumers, who have demonstrated a willingness to make purchases such as multiple upscale multimedia PCs and expensive consumer electronics, might well be willing to make such investments, and some residential developers have opted to include fiber.

The Pace of Investment

The rapid evolution of some aspects of the Internet can lead observers into thinking that if something does not happen within 18 months, it will not happen. But the phenomena associated with deployment cycles measured in months have generally been in the non-capital-intensive software arena. The cost of entirely new broadband infrastructure—rewiring to provide fiber-to-the-home to all of the roughly 100 million U.S. households—would be some $100 billion, reflecting in considerable part construction costs that are not amenable to dramatic cost reductions. Even for cable and DSL, for which delivering broadband is a matter of upgrading existing infrastructure, simple economics gates the pace of deployment. For both new builds and incremental improvements, an accelerated pace of deployment and installation would bring with it an increased per-household cost. Some broadband deployment will be accomplished as part of the conventional replacement and upgrade cycles associated with telephone and cable systems. In some cases, this process will have dramatic effects—two examples are HFC replacement of all-coaxial cable plants and aerial replacement of copper with fiber as part of a complete rehabilitation of old telephone plant—but in many other cases, the improvements will be incremental. To accelerate beyond this pace means increasing and training an ever-larger workforce devoted to this task. As more new people are employed for this purpose, workers with increasingly higher wages in their current jobs will have to be attracted away from those jobs. Similar considerations apply to the materials and manufacturing resources needed to make the required equipment.

The investment rate also depends critically on the perspective and time horizon of the would-be investor. For an owner of existing facilities—the incumbent local exchange carriers and cable multiple system operators—realistic investment is incremental, builds on the installed base, and must provide return on a relatively short timescale. The tendency to make incremental upgrades to existing telephone and cable plants reflects the view that a replacement of the infrastructure (such as with fiber) would necessitate installation costs that can be avoided by opting to upgrade. The perception is that users would not be willing to pay enough for the added functionality that might be achieved with an all-fiber replacement to offset the extra costs of all-new installation. Changes in either costs or perceived willingness to pay could, of course, shift the investment strategy.

Once the provider has a broadband-capable system, it will only have incentives to spend enough on upgrades to continue to attract subscribers and retain existing customers by providing a sufficiently valuable service. Where facilities-based competition exists, these efforts to attract and retain customers will help drive service-performance upgrades. From this perspective, the level of investment associated with building entirely new infrastructure is very difficult for the incumbents to justify. Viewing the incumbent’s incentives to invest in upgrades from the perspective of the two broadband definitions provided above, investment to meet definition 1 will be easier than that to meet definition 2. That is, it is easier to justify spending so that the local access link supports today’s applications, while it is harder to justify spending enough to be in front of the demand so as to stimulate new applications.

Two types of nonincumbent investor have also entered the broadband market, tapping into venture capital that seeks significant returns— and generally seeks a faster investment pace. One is the competitive local exchange carrier, which obtains access to incumbent local exchange carrier facilities—primarily colocation space in central offices and the copper loops that run from the central office to the subscriber—to provide broadband using DSL. The other is the overbuilder, which seeks to gain entry into a new market by building new facilities, most commonly hybrid fiber coax for residential subscribers, but also fiber-to-the-premises and terrestrial wireless. Satellite broadband providers in essence overbuild the entire country, though with the capacity to serve only a fraction of the total number of households. The 2000-2001 drying up of Internet-related venture capital has presented an obstacle to continued deployment, and the CLECs have also reported obstacles in coordinating activities with the ILECs that control the facilities they depend on.

Because public sector infrastructure investment generally is based on a long-term perspective, public sector efforts could both complement and stimulate private sector efforts. The key segment of the public sector for such investment is likely to be subfederal (state, local, regional), though the federal sector can provide incentives for these as well as private sector investment. But decision making for such investments is not a simple matter, and, if present trends are any indication, such investments will be confined to those locales that project the greatest returns from accelerated access to broadband or possess a greater inclination for a public sector role in entrepreneurship.

Investment, Risk Taking, and Timelines

The myth of the “Internet year,” by analogy to a “dog year,” is well known. Where the Internet is concerned, people have been conditioned to expect 1-year product cycles, startups that go public in 18 months, and similar miracles of instant change. The 2000-2001 downturn in Internet and other computing and communications stocks dampened but did not eliminate such expectations. In fact, some things do happen very rapidly in the Internet—the rise of Napster is a frequently noted example. These events are characterized by the relatively small investments required to launch them. Software can diffuse rapidly once conceived and coded. But this should not fool the observer into thinking that all Internet innovation happens on this timescale.

As noted earlier, broadband infrastructure buildout will be a capital-intensive activity. In rough figures, a modest upgrade that costs $200 per passing would cost $20 billion to reach all of the approximately 100 million homes in the United States. Broadband deployment to households is an extremely expensive transformation of the telecommunications industry, second only to the total investment in long-haul fiber in recent years. In light of these costs, the availability of investment capital, be it private sector or otherwise, imposes a crucial constraint on broadband deployment—it is very unlikely that there will be a dramatic one-time, nationwide replacement of today’s facilities with a new generation of technology. Instead, new technology will appear piecemeal, in new developments and overbuild situations. Old technology will be upgraded and enhanced; a mix of old, evolving, and new should be anticipated. Whether national deployment takes the form of upgrades or new infrastructure, the relevant timescale will be “old fashioned”—years, not days or months.

As a consequence, observers who are conditioned to the rapid pace of software innovation may well lose patience and assume that deployment efforts are doomed to fail—or that policies are not working—simply because deployment did not occur instantly. One should not conclude that there is something wrong—that something needs fixing—when the only issue is incorrectly anticipating faster deployment.

Much private sector investment, especially by existing firms, is incremental, with additional capital made available as investments in prior quarters show acceptable payoff. As a result, the technological approach chosen by an incumbent is likely to make use of existing equipment and plant, and the deployment strategy must be amenable to incremental upgrades. The evolution of cable systems is a good example. The previous generation of one-way cable systems is in the process of being upgraded to hybrid fiber coax systems, and these in turn are being upgraded to provide two-way capability, greater downstream capacity, and packet transport capabilities. The various incumbents now in the broadband marketplace have very different technology and business pasts—the telecommunications providers selling voice service over copper, the cable television companies using coaxial cable to deliver video, the cellular companies constructing towers for point-to-point wireless telephony, and so forth. Each will evolve to support broadband by making incremental improvements to its respective technologies and infrastructure. Incumbents seeking to limit regulators’ ability to demand unbundling have an incentive to avoid technologies that facilitate such unbundling.

Because they exist to take on greater risk in pursuit of much greater returns by identifying promising new areas, venture capitalists seek to invest in opportunities that offer high payoff, not incremental improvements. So it is no surprise that the more mature technologies, such as cable and DSL, have attracted relatively little venture capital in recent years. Another investment consideration for the venture capitalist is the total available market, with niche markets being much less attractive than markets that have the potential to grow very large. Finally, because the eventual goal is usually to sell a company (or take it public in an initial public offering) once it has been successfully developed, venture capitalists must pay attention to trends in the public equity markets.18

Uncertain Investment Prospects in the Private Sector

Over the past few years, broadband infrastructure has to some extent followed the overall trend of technology-centered enthusiasm in venture capital investment and high-growth planning. Broadband may similarly be affected by the current slowdown in investment and by the more careful scrutiny of business models to which companies are now being subjected. At this time, broadband providers, as well as Internet service providers more generally, face shortages of capital and cash flow. This could lead to consolidation, and perhaps to a slowdown in the overall rate of progress.

18. In a white paper written for this project in mid-2000, George Abe of Palomar Ventures characterized venture capital investing as “faddish” and observed that “there is a bit of a herd mentality.” There are hints that with the 2001 market drop, venture capitalists have adopted a longer-term view and are seeking well-thought-out opportunities rather than chasing fads.

Investment Options for the Public Sector

If and when the public sector chooses to intervene financially to encourage service where deployment is not otherwise happening, it will face a different set of constraints. Governments have access to bond issues and other financial vehicles that are well matched to one-time capital investments paid back over a number of years, and they also have access to a tax base that reduces the risk of default. If a major, one-time investment is to be made, the implication is that the technology chosen must be as future-proof as possible, because it must remain viable over the payback period. The most defensible technology choice in this case is fiber-to-the-home, with a separate fiber to each residence. Fiber has an intrinsic capacity that is huge, but the actual service is determined by the equipment that is installed at the residence and at the head end. With dark fiber running to each customer, the end equipment need not be upgraded for all users at once but can be upgraded for each consumer at the time of his or her choosing. Thus, this technology base permits different consumers to use the fibers in different ways, for different services, and with different resulting costs for end-point equipment. The consumer can make these subsequent investments, reusing the fiber over the life of the investment. Upgrades are not, however, fully independent, because they depend on the backhaul infrastructure: an upgrade will require not only new central office or remote terminal line cards but also a compatible infrastructure beyond that, and the remote terminal or central office rack itself may not be able to switch or route a higher-speed input owing to hardware or software constraints.

Businesses treat risk as an intrinsic part of doing business and manage it as part of normal planning. Some investments pay off; others may not. For residential access, for example, demand may or may not meet expectations, and a business will mitigate these risks by investing across a number of situations—different communities, services, and so on.

In contrast, a municipality serves only its own citizens, so any risk of bad planning must be carried within that community. Further, the voter reaction to miscalculation may amplify the perception of the error, which can have very bad personal implications for individual politicians. Long-term investment in services that do not bring visible short-term value to the citizens may be hard for some politicians to contemplate, because the payoff from this investment may not occur in a time frame that is helpful to them. So a planner in the public sector must balance the fact that most sources of capital imply a long-term investment with the fact that citizens may not appreciate the present value of long-term investment, and may assess the impact of investment decisions based on short-term consequences. This may lead to decision making that is either more or less risk-averse (given the level of knowledge among the citizens and apparent level of popular demand) than the decision making of the private sector.

Moore’s Law and Broadband

This report defines broadband deployment as an ongoing process, not a one-time transition. The first proposed definition of what it means for a service to be broadband reflects this reality: Access is broadband if it is fast enough on an ongoing basis to not be the limiting factor for today’s applications. With that definition in mind, unfavorable comparisons are sometimes made between the sustained improvements in the performance-to-price ratio of computing (which relate to what is known as Moore’s law, the 18-month doubling of the number of transistors on an integrated circuit) and improvements in the capacity of broadband access links. In fact, communications technologies, as exemplified by sustained improvements in fiber optic transmission speeds, have by and large kept pace with or surpassed improvements in computing. The gap one sees is between deployed services and underlying technology, not an inherent mismatch of technology innovation.

This committee spent some time exploring why broadband local access has not kept pace with other areas in computing and communications, and it considered how the economics of broadband service providers, long-haul communications providers, and computer equipment vendors might differ. In the end, the committee concluded that present understanding is too limited to reach definitive conclusions on this question. Why productivity growth in access has not kept pace with other communications sectors is an interesting question worthy of further research.

ECONOMICS OF SCALING UP CAPACITY: CONGESTION AND TRAFFIC MANAGEMENT

Once initial systems are deployed, successful broadband providers are almost certain to experience continued demands on their networks owing to increased subscribership and increased traffic per subscriber. These demands have implications both for how the access links themselves are configured and managed and for the network links between the provider and the rest of the Internet. This section provides an overview of traffic on the Internet and discusses some of the common misunderstandings about broadband technology.

The term “congestion” describes the situation in which there is more offered traffic than the network can carry. Congestion can occur in any shared system; it leads to queues at emergency rooms, busy signals on the telephone system, inability to book flights at the holidays, and slowdowns within the Internet. As these examples illustrate, congestion may be a universal phenomenon, but the way it is dealt with differs in different systems. In the telephone system, certain calls are just refused, but this would seem inhumane if applied to an emergency room (although it sometimes happens—emergency rooms close their doors to new emergencies and send the patients elsewhere). In the Internet, the “best effort” response to congestion is that every user is still served, but all transfers take longer, which has led to the complaints and jokes about the “World Wide Wait.”

Congestion is not a matter of technology but of business planning and level of investment. In other words, a service provider chooses whether to add new capacity (the cost of which presumably has to be recovered from the users) or to subject the users to congestion (which may require the provider to offer a lower-cost service in order to keep them).

Shared links can be viewed as either a benefit or a drawback, depending on one’s viewpoint. If a link is shared, it represents a potential point of congestion: if many users attempt to transmit at once, each of them may see slow transfer rates and long delays. Looked at another way, sharing a link among users is a central reason for the Internet’s success. Because most Internet traffic is very bursty—transmissions are not continuous but come in bursts, as when a Web page is fetched—a shared communications path lets a single burst draw on the link’s total unused capacity, which can make the transfer complete faster.

In this respect, the Internet is quite different from the telephone system. In the telephone system, the capacity to carry each telephone call is dedicated to that one connection for its duration—performance is established a priori. There is still a form of sharing—at the time the call is placed, if there is not enough capacity on the links of the telephone system, the call will not go through. Callers do not often experience this form of “busy signal,” but it is traditionally associated with high-usage events such as Mother’s Day. In contrast, the Internet dynamically adjusts the rate of each sender on the basis of how many people are transferring data, which can change in a fraction of a second.

The links that form the center of the Internet carry data from many thousands of users at any one time, and the traffic patterns observed there are very different from those observed at the edge. While the traffic from any one user can be very bursty (for a broadband user on the Web, a ratio of peak to average receiving rate of 100 to 1 is realistic), in the center of the network, where many such flows are aggregated, the result is much smoother. This smoothness is a natural consequence of aggregating many bursty sources, not the result of the traffic being “managed.” With enough users, the peaks of some users align with the valleys of other users with high odds. One of the reasons that the Internet is a cost-effective way to send data is that it does not set up a separate “call” with reserved bandwidth for each communicating source, but instead combines the traffic into one aggregate that it manages as a whole.
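A minimal simulation sketch of this smoothing effect appears below. The on/off traffic model and the 1 percent per-slot activity probability are illustrative assumptions, not figures from the report; the point is simply that the peak-to-average ratio of the aggregate falls as more bursty sources are combined.

```python
import numpy as np

def peak_to_average(num_sources, slots=10_000, p_active=0.01, seed=0):
    """Aggregate traffic from on/off sources, each active in a given time
    slot with probability p_active, and return its peak-to-average ratio."""
    rng = np.random.default_rng(seed)
    aggregate = rng.binomial(num_sources, p_active, size=slots)
    avg = aggregate.mean()
    return aggregate.max() / avg if avg > 0 else float("inf")

for n in (1, 10, 100, 1_000, 10_000):
    print(f"{n:>6} sources: peak/average = {peak_to_average(n):.1f}")
```

In a typical run of this sketch, a single source shows a peak-to-average ratio near the 100-to-1 figure cited above, while 10,000 aggregated sources fall to roughly 1.4, which is the intuition behind provisioning shared links for aggregate rather than peak demand.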

For dial-up Internet users, the primary bottleneck to high throughput is the modem that connects the user to the rest of the Internet. If broadband fulfills its promise to remove that bottleneck, the obvious question is, Where will the bottleneck go? There has been a great deal of speculation about how traffic patterns on the Internet will change as more and more users upgrade to broadband. Some of these speculations have led to misapprehensions and myths about how the Internet will behave in the future.

In a cable system, the coaxial segment that serves a particular neighborhood is shared. This has led to the misconception that broadband cable systems must slow down and become congested as the number of users increases. This may happen, but it need not. Indeed, shared media in various forms are quite common in parts of the Internet. For example, the dominant local area network standard, Ethernet, is a shared technology with some of the same features as HFC cable-modem systems, and it has proved very popular in the market even though it, too, can become congested if too many people are connected and using it at once. Cable operators have the technical means to control congestion: they can allocate more channels to broadband Internet service, and they can divide their networks into smaller and smaller regions, each fed by a separate fiber link, so that fewer households share bandwidth in each segment. Whether systems are, in fact, so upgraded is a business decision, relating to costs, demand, and the potential for greater revenue. Of course, less sharing would tend to reduce the cost advantage of HFC relative to other higher-capacity solutions such as FTTH.
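The back-of-the-envelope sketch below illustrates why those two levers (more channels and smaller node segments) matter. All of the numbers used here, including a roughly 38 Mbps downstream channel, a 500-home node, a 20 percent take rate, and half of subscribers active at the busy hour, are assumptions chosen for illustration rather than figures from the report.

```python
def avg_downstream_per_active_sub(channels, mbps_per_channel, homes_passed,
                                  take_rate, busy_hour_active_fraction):
    """Average downstream capacity available per simultaneously active
    cable-modem subscriber on one HFC node segment."""
    capacity = channels * mbps_per_channel
    active_subs = homes_passed * take_rate * busy_hour_active_fraction
    return capacity / max(active_subs, 1)

# One data channel shared by a 500-home node (illustrative assumptions):
print(avg_downstream_per_active_sub(1, 38, 500, 0.20, 0.5))   # ~0.76 Mbps each
# Node split to 125 homes and a second channel allocated to data:
print(avg_downstream_per_active_sub(2, 38, 125, 0.20, 0.5))   # ~6.1 Mbps each
```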

DSL is generally thought to suffer from fewer access-network congestion problems because each user has a dedicated link from the residence to the central office. It is true that the user will never see contention from other users on that dedicated DSL link; however, the dedicated link also means that the user can never go faster than its fixed capacity, in contrast to being able to draw on the total unused capacity of a shared system.

Both the cable and DSL systems bring the traffic from all their users to a point of presence (central office or head end), where this traffic is combined and then sent out over a link toward the rest of the Internet. This link from the termination point to the rest of the Internet is, in effect, shared by all of the subscribers connected to that point of presence, whether the broadband system behind it is a shared cable system or a dedicated DSL system, making the link a common source of congestion for all of the subscribers. The cost of the link depends on both the capacity of the physical link and the compensation that must be paid to other Internet providers to carry this traffic to the rest of the Internet. The cost of these links can be a major issue in small communities where it is difficult to provision additional capacity for broadband. So there is an incentive not to oversize that link. The economics and business planning of this capacity are similar for a cable or a DSL system.
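A rough sizing sketch for that shared backhaul link appears below. The subscriber count, busy-hour activity, average per-subscriber rate, and headroom factor are all illustrative assumptions; the point is that the required, and paid-for, capacity scales with aggregate demand regardless of whether the access plant is shared coax or dedicated DSL loops.

```python
def backhaul_capacity_mbps(subscribers, busy_hour_active_fraction,
                           avg_rate_per_active_mbps, headroom=1.5):
    """Rough backhaul sizing: busy-hour average demand plus headroom for bursts."""
    return (subscribers * busy_hour_active_fraction
            * avg_rate_per_active_mbps * headroom)

# Illustrative assumptions: 5,000 subscribers behind one head end or central
# office, half active at the busy hour, 50 kbps average per active subscriber.
print(backhaul_capacity_mbps(5_000, 0.5, 0.05))   # 187.5 Mbps
```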

The fact that the links from the point of presence to the rest of the Internet are often a source of congestion illustrates an important point: the number of users whose traffic must be aggregated to make the total traffic load smooth is measured in the thousands, not hundreds. So there may be a natural size below which broadband access systems become less efficient. For example, if it takes 10,000 active users to achieve good smoothing on the path to and from the rest of the Internet, then a provider that captures 10 percent of the market,19 and that can expect half of its subscribers to be active in a busy hour, needs a total population of 200,000 households as a market base in a particular region.
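The worked example in the text reduces to a one-line calculation, which makes it easy to vary the (assumed) smoothing threshold, market share, and busy-hour activity:

```python
def market_base_needed(active_users_for_smoothing, market_share,
                       busy_hour_active_fraction):
    """Households a provider must have in its footprint so that enough
    subscribers are simultaneously active to smooth the aggregate load."""
    return active_users_for_smoothing / (market_share * busy_hour_active_fraction)

# 10,000 active users needed, 10% market share, half of subscribers active:
print(market_base_needed(10_000, 0.10, 0.5))   # 200,000 households, as in the text
```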

Even if the broadband local access links themselves are adequately provisioned, bottlenecks may still exist, owing to such factors as peering problems between the broadband service provider and the rest of the Internet or heavy load on the hosts being accessed. Performance will also depend on elements other than the communications links themselves, such as caches and content servers located at various points within the network (or even the performance limitations of the user’s computer). These problems, which will inevitably occur on occasion, have the potential to confuse consumers, who will be apt to blame the local broadband provider, whether rightly or wrongly.

19. For an examination of the smoothing phenomenon, see David D. Clark, William Lehr, and Ian Liu, “Provisioning for Bursty Internet Traffic: Implications for Industry Structure,” to appear in L. McKnight and J. Wroclawski, eds., 2002, Internet Service Quality Economics, MIT Press, Cambridge, Mass.
