The Evolution of Untethered Communications (1997)


2
Technology Limits, Trade-offs, and Challenges

Wireless communications networks incorporate a broad range of technologies, including electrochemical materials, electronic devices and circuits, antennas, digital signal processing algorithms, network control protocols, and cryptography. Although all of these technologies are well advanced in other applications, wireless systems introduce a set of constraints and challenges beyond those addressed in the evolution of other communications networks, such as the (wireline) public switched telephone network and the Internet. These special constraints make it exceedingly difficult to design affordable wireless systems that meet every need. The challenges can be grouped into three categories: mobility, connectivity, and energy.

Mobility is a fundamental feature of untethered communications networks. Portable, wireless communications devices significantly enhance the mobility of users, but they also pose network design difficulties. As the communications devices move, the network has to rearrange itself. To deliver information to a mobile terminal, the network has to learn the new location(s) of the terminal and change the routing of information accordingly, sometimes at very high speeds. The rerouting must be done seamlessly without any perceived interruption of service.

A wide variety of problems arise when mobile wireless communications terminals send and receive signals over the air. The signals of all the terminals are subject to mutual interference. The characteristics of the propagation medium change randomly as users move, and the mobile radio channel introduces random variation in the received signal power
and other distortions, such as frequency shifts and the spreading of signals over time. Signals that travel over the air are also more vulnerable to jamming and interception than are those transmitted through wires or fibers. These limitations are often addressed with a combination of sophisticated signal processing techniques and antennas. However, these solutions add to the complexity of portable communications devices and increase power requirements.

Wireless systems pose two types of power challenges. First, when power is radiated from an antenna, very little of it typically reaches the receiver, a phenomenon known as path loss. This problem can be partly overcome with increased transmit power, special types of antennas, and other solutions. Second, wireless terminals often carry their own power supplies in the form of batteries. Battery life is limited and is influenced by many aspects of terminal design as well as the technology of the network infrastructure. Scarce power constrains the signal processing capabilities and transmit power of the mobile terminal, motivating efforts to keep these units as simple as possible. However, a low-power design cannot accommodate the most sophisticated techniques available to cope with the vagaries of the wireless channel and support the network protocols of mobility management. In the absence of research breakthroughs that simplify these techniques, the only solution is to increase the complexity of the network, which needs to compensate for the simplicity of portable communications devices.

The challenges related to mobility, connectivity, and energy have stimulated a high level of R&D activity in the telecommunications industry and academia. Still, a chasm remains between the capabilities of wired and wireless communications systems. Even as commercial wireless systems evolve, additional features will be needed to meet military requirements for untethered communications. Military applications introduce additional challenges because the systems need to be rapidly deployable on mobile platforms in any one of a diverse range of operating environments; they need to interoperate with other systems; and they need protection against enemy attempts to jam, intercept, and alter information.

This chapter provides the technical basis for the analysis of military-commercial synergy in Chapter 3 by examining the challenges of mobility, connectivity, and energy and the technologies devised to overcome them. The discussion refers to the various layers of a network as defined in the Open Systems Interconnection (OSI) model (see Box 2-1). Section 2.1 is a tutorial on the wireless channel, its capacity limits, techniques for overcoming channel impairments, and the access and operational issues that arise when multiple users share the same channel. The next three sections address network, system, and hardware issues with an emphasis on military needs.

BOX 2-1

Open Systems Interconnection Model

The Open Systems Interconnection model identifies seven layers, some or all of which are implemented by virtually any network system. The physical layer includes the mechanical, electrical, and procedural interfaces to the transmission medium. The link layer converts the transmission medium into a stream that appears to be free of undetected errors. This layer includes error-correction mechanisms and the protocols used to gain access to shared channels. The network layer chooses a route from the sender to the receiver and deals with congestion and address issues. The IP protocol falls into this layer. The transport layer is responsible for the end-to-end delivery of data. The TCP protocol falls into this layer. The session layer allows multiple transport-layer connections to be managed as a single unit. The presentation layer chooses common representations (typically application dependent) for the data being carried. The applications layer deals with application-specific protocol issues.

Section 2.2 examines network design issues including architecture, resource allocation and discovery, interoperability, mobility management, and simulation and modeling tools. Section 2.3 addresses end-to-end systems design issues including application-level adaptation, quality of service, and security. Section 2.4 reviews hardware issues of particular military concern, focusing on radio components.

2.1 Communication Link Design

The ideal wireless communications system would provide high data rates with high reliability and yet use minimum bandwidth and power. It would perform well in wireless propagation environments despite multiple channel impairments such as signal fading and interference. The ideal system would accommodate hardware constraints such as imperfect timing and nonlinear amplifiers. The mobile units would have low power requirements and yet still provide adequate transmit power and signal processing. In addition, despite the system complexity required to achieve this performance level, both the transmitter and receiver would be affordable.

Such a system has yet to be built. In fact, many of the desired properties are mutually exclusive, meaning that trade-offs need to be made in system design. A case in point is the choice of approaches for overcoming the limitations and impairments of the wireless channel. The impairments inherent in any wireless channel include the rate at which received signal power decreases relative to transmitter-receiver distance (path loss),
attenuation caused by objects blocking the signal transmission (shadow fading), and rapid variations in received signal power (flat fading). The impairments determine the types of applications that can be supported in different propagation environments. Applications require different data rates and bit-error rates (BER, or the probability that a bit is received in error). For example, voice applications require data rates on the order of 8 to 32 kbps and a maximum tolerable BER of 10^-3, whereas database access and remote file transfer require data rates up to 1 Mbps and a maximum tolerable BER of 10^-7.[1] Both sets of specifications are difficult to achieve in many radio environments.

In general, systems designed for the worst-case propagation conditions assume high error rates, which limit their capability to support high-speed data and video teleconferencing applications. The random nature of the radio channel makes it difficult to guarantee quality and performance for demanding applications. However, a wireless system can be designed to adapt to the varying link quality at both the link and network level, such that the system can support improved data rates and quality. Applications can also be designed to adapt to deteriorating channel conditions to minimize the degradation perceived by the user. The overall system can be optimized by making trade-offs among various performance measures such as BER, outage probability, and spectral and power efficiency. These trade-offs dictate the choice of modulation, signal processing, and antenna techniques used to mitigate channel impairments.

These techniques require fairly intensive digital signal processing at the mobile unit. The extent of the computation that can be performed is limited by the power available to drive the DSP chips and the microprocessor. Thus, in addition to being power limited, the mobile unit is also complexity limited, which means that trade-offs need to be made in designing the communication link. For example, the transmit power requirements of the mobile unit can be reduced if error-correction coding is used, but then additional power is needed to drive the encoding and decoding hardware. In cellular systems it is preferable to place much of the computational burden at the base station, which has fewer power restrictions than do the mobile units. Research aimed at simplifying DSP and antenna processing techniques (Section 2.4) can also help mitigate the computational burden.

The remainder of this section outlines the characteristics of the wireless channel, focusing on fading and interference problems (Section 2.1.1); key communications technologies, including modulation and coding (Sections 2.1.2 through 2.1.4); the countermeasures available to address fading and interference (Section 2.1.5); and the various ways in which users access wireless systems (Section 2.1.6).

2.1.1 Characteristics of the Wireless Channel

The characteristics of the radio channel impose fundamental limits on the range, data rate, and quality of wireless communications. The performance limits are influenced by several factors, most significantly the propagation environment and user mobility pattern. For example, indoor radio channels typically support higher data rates with better reliability than do outdoor channels used by rapidly moving users.

Electromagnetic signals can be characterized by the features of the waveform: amplitude (the power, or magnitude, of the signal); phase (the timing of the peak or trough of the signal variations); and frequency (the number of repetitions of the signal per second).[2] The effects of the wireless channel on the received signal power are typically divided into large-scale and small-scale effects. Large-scale effects involve the variation of the mean received signal power over large distances relative to the signal wavelength, whereas small-scale effects involve the fluctuations of the received signal power over distances commensurate with the wavelength. Path loss effects are noticeable over large distances (i.e., distances on the order of 100 m or more). Signal power variations due to obstacles such as buildings or terrain features are observable over distances that are proportional to the length of the obstructing object. Very rapid variations result from multipath reflections, which are copies of the transmitted signal that reflect or diffract off surrounding objects before arriving by different paths at the receiver. These reflections arrive at a receiver later than the nonreflected signal path and are often shifted in phase as well. The multipath reflections either reinforce or cancel each other and the nonreflected signal path depending on the exact position of the receiver (if moving) or the transmitter (if moving). The overall effects of multipath propagation involving a moving terminal are rapid variation in the received signal power and nonuniform distortion of the frequency components of the signal.

The first four subsections below discuss path loss, fading, and various sources of interference as they apply to the path between two terrestrial RF devices. The fifth subsection details the characteristics of satellite RF links.

2.1.1.1 Path Loss

Path loss is equal to the received power divided by the transmitted power, and this loss is a function of the transmitter-receiver separation. For a given transmit power, a path loss model[3] predicts the received power level at some distance from the transmitter. The simplest model for path loss, which captures the key characteristics for most channels, is an exponential relationship: The received signal power is proportional to the transmit power and inversely proportional to the square of the transmission
frequency and the transmitter-receiver distance raised to the power of a "path loss exponent."[4] In free space the path loss exponent is 2, whereas for typical outdoor environments it ranges from 3 to 5. In environments with dense buildings or trees, path loss exponents can exceed 8. Thus, systems designed for typical suburban or low-density urban outdoor environments require much higher transmit power to achieve the same desired performance in a dense jungle or downtown area packed with tall buildings.

The BER of a wireless link is determined by the received signal power, noise introduced by the receiver hardware, interference, and channel characteristics. The noise is typically proportional to the RF bandwidth. For the exponential path loss model just described, the received signal-to-noise ratio (SNR) is the product of the transmit power and path loss, divided by the noise power. The SNR required for faithful reception depends on the communications technique used, the channel characteristics, and the required BER. Because path loss affects the received SNR, path loss imposes limits on the data rate and signal range for a given BER. In general, for a given BER, high-data-rate applications typically require more transmit power or have a smaller coverage range (sometimes both) than do low-data-rate applications. For example, given a transmit power of 1 W, a transmit frequency of 1 GHz, and an omnidirectional antenna, the transfer of data through free space (for which the path loss exponent is 2) at 1 Mbps and 10^-7 BER can be accomplished between radios that are 728 m apart, whereas in a jungle (for which the path loss exponent is 10) the range can be as low as 4 m.
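
As a rough illustration of this trade-off, the sketch below evaluates the power-law path loss model described above for several path loss exponents. The 1-m reference distance, unity antenna gains, and the specific distances are assumptions made for the example, not values from the report.

```python
import math

def received_power_dbm(pt_dbm, freq_hz, dist_m, exponent, ref_dist_m=1.0):
    """Power-law path loss model described in the text: beyond a reference
    distance, received power falls off with the path loss exponent; free-space
    (Friis) loss with unity-gain antennas is assumed out to the reference point."""
    wavelength = 3.0e8 / freq_hz
    free_space_loss_db = 20 * math.log10(4 * math.pi * ref_dist_m / wavelength)
    extra_loss_db = 10 * exponent * math.log10(dist_m / ref_dist_m)
    return pt_dbm - free_space_loss_db - extra_loss_db

# Illustrative comparison at 100 m with 1 W (30 dBm) at 1 GHz, as in the text's example:
for exponent, label in [(2, "free space"), (4, "typical outdoor"), (10, "dense jungle")]:
    print(f"{label}: {received_power_dbm(30, 1e9, 100, exponent):.1f} dBm at 100 m")
```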

2.1.1.2 Shadow Fading

A received signal is often blocked by hills or buildings outdoors and furniture or walls indoors. The received signal power is in fact a random variable that depends on the number and dielectric properties of the obstructing objects. Signal variation due to these obstructions is called shadow fading. Measurements have shown that the power, measured in decibels (dB), of signals subject to shadow fading exhibits a Gaussian (i.e., normal) distribution, a pattern referred to as log-normal shadowing. The random attenuation of shadow fading changes as the mobile unit moves past or around the obstructing object. Because the signal coverage is not uniform even at equal distances from the transmitter, the transmit power needs to be increased to ensure that the received-SNR requirements are met uniformly throughout the coverage region. The power increase imposes additional burdens on the transmitter battery and can cause interference for other users of the same frequency band.
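
One standard way to quantify this power increase is to add a shadowing margin to the link budget. The sketch below assumes log-normal shadowing with an illustrative 8-dB standard deviation and a 95 percent coverage target; neither number is taken from the report.

```python
from statistics import NormalDist

def shadowing_margin_db(sigma_db, coverage_probability):
    """Extra transmit power (in dB) needed so that a location at the nominal cell
    edge still meets its SNR target with the given probability, assuming
    log-normal shadowing with standard deviation sigma_db."""
    return NormalDist().inv_cdf(coverage_probability) * sigma_db

# Illustrative numbers: an 8-dB shadowing standard deviation and 95% edge coverage.
print(f"{shadowing_margin_db(8.0, 0.95):.1f} dB shadowing margin")
```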

2.1.1.3 Small-Scale (Multipath) Fading

Small-scale fading is caused by interference between multiple versions of the signal that arrive at the receiver at different times. Multipath can be helpful if the signals add constructively to produce a higher power (a random event), but more often it results in harmful interference. The overall effect is a standing wave pattern of the received signal power. Harmful interference can cause the received signal power to drop by a factor of 1,000 below its average value at nulls in the standing wave pattern. Moreover, for practical speeds of wireless terminals, the changes in signal power are extremely rapid: At a frequency of 900 MHz the signal power changes every 30 centimeters, or every 23 milliseconds if the terminal is moving at 50 km per hour. In many practical environments, these changes are referred to as "Rayleigh fading" because the received signal amplitude conforms to a Rayleigh probability density function.

Signal fading can be characterized by determining the delay spread of the fading relative to the signal bandwidth. The delay spread is defined as the time delay between the direct-path signal component and the component that takes the longest path from the transmitter to the receiver. Because the delay spread is a random variable, it is often characterized by its standard deviation, called the root mean square (RMS) delay spread of the channel. If the product of the RMS delay spread and the signal bandwidth is much less than 1, then the fading is called flat fading. In this case the received signal envelope has a random amplitude and phase (commonly described by a Rayleigh distribution), but there is no additional signal distortion.

When the product of the RMS delay spread and signal bandwidth is greater than 1, the fading becomes frequency selective. Frequency-selective fading introduces self-interference because the delay spread is so large that multipath reflections corresponding to a given bit transmission arrive at the receiver simultaneously with subsequent data bits. This intersymbol interference (ISI) establishes an "error floor" in the received bits that cannot be reduced by an increase in signal power because doing so also increases the self-interference. Without compensation, the ISI forces a reduction in the data rate such that the product of the RMS delay spread and signal bandwidth is less than 0.1. For a 10^-3 BER and a rural environment, the delay spread is approximately 25 microseconds and the corresponding maximum data rate is only 8 kbps; the data rates for lower BERs are even more limited. Some form of compensation, either signal processing or sophisticated antenna design, clearly is needed to achieve high-rate data transmission in the presence of ISI. These techniques impose additional complexity and power requirements on the receiver.
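
The rule of thumb above can be turned into a quick back-of-the-envelope calculation. The sketch below assumes the signal bandwidth roughly equals the symbol rate and that about 2 bits are carried per hertz, an assumption chosen only so the result lands near the 8-kbps figure quoted in the text.

```python
def max_uncompensated_rate_bps(rms_delay_spread_s, product_limit=0.1, bits_per_hz=2.0):
    """Rough bit-rate ceiling below which ISI can be ignored: keep the product of
    RMS delay spread and signal bandwidth under about 0.1 (the rule of thumb in
    the text); the bits-per-hertz factor is an illustrative assumption."""
    max_bandwidth_hz = product_limit / rms_delay_spread_s
    return max_bandwidth_hz * bits_per_hz

# Rural example from the text: a 25-microsecond RMS delay spread.
print(round(max_uncompensated_rate_bps(25e-6)), "bps")  # roughly 8,000 bps
```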

Movement of a receiver relative to the transmitter (or vice versa) causes the received signal to be frequency shifted relative to the transmitted signal. The frequency shift, or Doppler frequency, is proportional to the mobile velocity and the frequency of the transmitted signal. For a transmitted signal frequency of 900 MHz and a receiver or transmitter speed of 96 km per hour, the Doppler frequency is roughly 80 Hz. This Doppler shift creates an irreducible error floor for noncoherent detection techniques (which use the previous bit to obtain a phase reference for the current bit). In general the irreducible BER is not a problem when data are transmitted at high speed (faster than 1 Mbps), but it is an issue for moderate-rate (slower than 100 kbps) data applications.
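
The Doppler figure quoted above follows directly from the proportionality just described; a minimal sketch, assuming the usual free-space propagation speed, is shown below.

```python
def doppler_shift_hz(speed_m_per_s, carrier_hz, propagation_speed=3.0e8):
    """Maximum Doppler shift: proportional to terminal speed and carrier frequency."""
    return speed_m_per_s * carrier_hz / propagation_speed

# The text's example: a 900-MHz carrier and 96 km/h (about 26.7 m/s) give roughly 80 Hz.
print(round(doppler_shift_hz(96 / 3.6, 900e6), 1), "Hz")
```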

In general the signal changes slowly with time because of path loss, more quickly because of shadow fading, and very quickly because of multipath flat fading; all of these effects are simultaneously superimposed on the transmitted signal. As noted above, the shadow fading needs to be addressed by an increase in transmit power. The deep fades in signal power caused by flat fading also need to be counterbalanced by an increase in transmit power or some other approach (see Section 2.1.5.1). Otherwise the transmitted signal typically exhibits bursts of errors that are difficult to correct.

2.1.1.4 Interference

Users of wireless communications systems can experience interference from various sources. One source is frequency reuse, a popular technique for increasing the number of users in a given region who can be supported by a particular set of frequencies. Cellular systems reuse frequencies at spatially separated locations, taking advantage of the falloff in received signal power with distance (which is indicated by the path loss model). The downside of frequency reuse is the introduction of co-channel interference (see Section 2.2.1.1), which increases the noise floor and degrades performance.

Other sources of interference include adjacent channels and narrow bands of problem frequencies. Adjacent-channel interference can be mitigated by the introduction of guard channels between users, although this technique consumes bandwidth. Narrowband interference can be removed by notch filters or spread-spectrum techniques. Notch filters are simple devices that block the band of frequencies containing the interference; these devices are effective only if the specific frequencies of concern are known. Spread-spectrum techniques (see Section 2.1.5.2), which spread a signal across a larger band of frequencies than is required for normal transmission, can reduce the effect of interference and hostile jamming signals.

2.1.1.5 Satellite Channels

Satellite channels (the link between a receiver or transmitter on Earth and an orbiting receiver or transmitter) have inherent advantages over terrestrial radio channels. Multipath fading is rare because a signal propagating skyward does not experience much reflection from surrounding objects (except in downtown areas with densely packed buildings). Moreover, most satellite systems operate in the gigahertz frequency range, allowing for large-bandwidth communication links that support very high bit rates.

The primary limitation of satellite channels is very high path loss, which generally follows the formula described earlier in this chapter. For satellites the path loss exponent is 2. Because satellites operate at high frequencies and the path distance is long (500 to 2,000 km for a LEO satellite), much higher transmit power is needed than is the case for terrestrial systems operating at the same data rate. Satellite signals are also subject to attenuation by Earth's atmosphere. The effects are especially adverse at frequencies above 10 GHz, where oxygen and water vapor, rain, clouds, fog, and scintillation cause random variations in signal amplitude, phase, polarization, and angle of arrival (similar to the adverse effects of multipath fading in terrestrial propagation). Satellite systems compensate somewhat for the large path loss and adverse atmospheric effects by using high-gain directional antennas to boost the received power.

2.1.2 Capacity Limits of Wireless Channels

The pioneering work of Claude Shannon determined the total capacity limits for simple wired and wireless channel models: These limits established an upper bound on the maximum spectral link efficiency, measured as the data rate per unit of bandwidth as a function of the received SNR. For a channel without fading, ISI, or Doppler shift, this maximum bandwidth efficiency was identified by Shannon to be the base-2 logarithm of the term [1 + SNR] (Shannon, 1949).

Determining the capacity limits of wireless channels with all the impairments outlined in the previous section is quite challenging. A relatively simple lower bound for a channel capacity that varies over time is the Shannon capacity under the worst-case propagation conditions. This is often a good bound to apply in practice because many communication links are designed to have acceptable performance even under the worst conditions. However, this design wastes resources because typical operating conditions are generally much better than the worst-case scenario. For channels that exhibit shadow fading or multipath fading, the channel
capacity under worst-case fading conditions is close to zero. The capacity of these fading channels increases greatly when the data rate, power, and transmission are adapted using sophisticated modulation techniques, which are discussed in the next section. As measured by spectral link efficiency, these adaptive techniques in both Rayleigh fading and lognormal-shadowed channels can support much higher data rates than are typical in today's wireless systems. For example, typical digital voice systems deliver 8 kbps in a 30-kHz channel, which corresponds to a spectral link efficiency of 8/30, far less than 1. If this channel experiences Rayleigh fading, then an SNR of approximately 30 dB is required. At this SNR, a spectral link efficiency of approximately 8 can be achieved in Rayleigh fading by using adaptive techniques—a 30-fold improvement over the typical voice system of today (see Figure 2-1).[5]
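
The comparison above can be reproduced from the Shannon expression given earlier. The sketch below computes the spectral efficiency of the 8-kbps/30-kHz voice channel and the ideal-channel (no-fading) bound at 30 dB SNR; the exact adaptive-technique result shown in Figure 2-1 depends on fading assumptions not reproduced here.

```python
import math

def shannon_efficiency_bps_per_hz(snr_db):
    """Ideal-channel Shannon bound on spectral link efficiency: log2(1 + SNR)."""
    return math.log2(1 + 10 ** (snr_db / 10))

# Typical digital voice channel cited in the text: 8 kbps in a 30-kHz channel.
print(round(8e3 / 30e3, 2), "bits/s/Hz for today's voice channel")

# Ideal-channel bound at the ~30 dB SNR cited for Rayleigh fading; the ~8 bits/s/Hz
# attributed to adaptive techniques in the text sits below this limit.
print(round(shannon_efficiency_bps_per_hz(30), 2), "bits/s/Hz Shannon bound at 30 dB SNR")
```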

2.1.3 Modulation

Modulation is the process of encoding information into the amplitude, phase, and/or frequency of a transmitted signal (Ziemer and Tranter, 1995). This encoding process affects the bandwidth of the transmitted signal and its robustness under impaired channel conditions. In the case of bandwidth-limited channels, digital modulation techniques encode several bits into one symbol. The rate of symbol transmission determines the bandwidth of the transmitted signal: the larger the number of bits encoded per symbol, the more efficient the use of bandwidth but the greater the power requirement for a given BER in the presence of noise.

Modulation techniques fall into two categories: linear and nonlinear. In general, linear modulation techniques use less bandwidth than do nonlinear techniques. However, linear modulation techniques also tend to produce large fluctuations in signal amplitude. This is a disadvantage when using nonlinear amplifiers such as class C amplifiers (the least expensive, most readily available, and most power-efficient amplifiers), because they distort linear modulation signals. Thus, the bandwidth efficiency of linear modulation is generally obtained at the expense of the additional power needed for very linear amplifiers (and reduced battery life).

2.1.4 Channel Coding and Link-Layer Retransmission

Channel coding improves performance by adding redundant bits in the transmitted bit stream that are used by the receiver to correct errors introduced by the channel, thus reducing the average BER. This approach enables a reduction in the transmit power required to achieve a target BER. Conventional forward-error-correction (FEC) codes, which reduce the required transmit power for a given BER at the expense of increased signal bandwidth or a reduced data rate (Lin and Costello, 1983), use block or convolutional code designs.

FIGURE 2-1 Spectral link efficiency can be greatly increased using adaptive techniques. Theoretical efficiencies are shown.

Block codes add parity bits to blocks of messages. Convolutional codes map a continuous sequence of information bits onto a continuous sequence of encoded bits. Trellis codes combine channel code design and modulation to reduce the BER without bandwidth expansion or rate reduction (Ungerboeck, 1982). More recent advances in coding technology, such as Turbo codes (Berrou et al., 1993), exhibit superior error-correction performance, although they are generally very complex and impose large delays on end-to-end transmission, drawbacks that make them unsuitable for many wireless applications.

Another way to reduce the link errors prevalent in wireless systems is to implement link retransmission (part of the protocol known as automatic repeat request, or ARQ). The data are encoded with a checksum (the sum of the 1s and 0s in the transmitted digitized data), which the receiver compares against a checksum computed from the data received; if the data are corrupted, then the receiver requests a retransmission. Link-layer retransmission wastes system resources because of the added power requirements and interference with other users. In addition, retransmission schemes can result in the delivery of data in the wrong order: When a block is lost on the link because of an error burst, a subsequent block is likely to be sent and received before the lost block is sent again. This phenomenon triggers duplicate acknowledgments and end-to-end retransmissions at the transport layer, further burdening the network. (When countermeasures for fading are used to reduce link errors, as discussed in the next section, the problems introduced by retransmission are similarly reduced.) Even so, ARQ is the only alternative in many cases because FEC is not sufficient in applications with stringent BER requirements. Some link-layer schemes, such as asymmetric reliable mobile access in link layer (AIRMAIL; Ayanoglu et al., 1995), can avert out-of-order delivery to higher-protocol layers, but this approach increases delays and variability in the interpacket delivery times.
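
A minimal sketch of the link-layer ARQ decision described above is given below; the checksum follows the text's simple definition (a sum over the transmitted bits), with the one-byte modulus added purely as an illustrative choice.

```python
def checksum(bits):
    """Checksum in the simple sense used in the text (a sum over the transmitted
    bits); the one-byte modulus is an illustrative choice."""
    return sum(bits) % 256

def needs_retransmission(received_bits, transmitted_checksum):
    """Link-layer ARQ decision: request the block again when the checksum
    recomputed from the received bits does not match the transmitted one."""
    return checksum(received_bits) != transmitted_checksum
```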

2.1.5 Countermeasures for Fading

Numerous signal processing and design techniques have been developed to counter the effects of fading on the wireless channel. Of particular interest are countermeasures for the two types of small-scale fading described above: flat fading and frequency-selective fading.

2.1.5.1 Flat-Fading Countermeasures

The random variation in received signal power caused by multipath flat fading results in a very large increase in BER. For example, to maintain
an average BER of 10^-3 (a typical requirement for point-to-point voice systems at the link level) using binary phase-shift keying modulation, 60 times more power is required than would be needed in the absence of flat fading. The difference in required power is even larger at the much lower BER required for data transmission. It follows that the required transmit power can be significantly reduced by combating the effects of flat fading. The most common flat-fading countermeasures are diversity, coding and interleaving, and adaptive modulation. Spread-spectrum techniques also mitigate fading effects (see Section 2.1.5.2).

In diversity, several separate, independently fading signal paths are established between the transmitter and receiver and the resulting received signals are combined. Because there is a low probability of separate fading paths experiencing deep fades simultaneously, the signal obtained by combining several such paths is unlikely to experience large power variations. Independent fading paths can be achieved by separating the signal in time, frequency, space, or polarization. Time and frequency diversity are spectrally inefficient because information is duplicated; polarization diversity is of limited effectiveness because only two independent fading paths (corresponding to horizontal and vertical polarization) can be created. That leaves space diversity as the most efficient of these techniques. Independent fading paths in space are obtained using an antenna array, in which each element receives a separate path. Multiple antenna elements are mounted at the receiver with a separation greater than or equal to half the signal wavelength (Yacoub, 1993). Almost all of the multipath variation is removed by first creating and then later combining four independent paths, with each path weighted by its received signal power. Because the wavelength is inversely proportional to frequency, antenna arrays can be mounted on handheld units when using superhigh frequencies (above 10 GHz) but not when using frequencies below the 1-GHz range.
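
The combining step can be sketched as follows, weighting each antenna branch by its received signal power as the text describes; the normalization and the use of real-valued samples are simplifying assumptions for illustration.

```python
import numpy as np

def combine_branches(branch_signals, branch_powers):
    """Combine independently fading copies of the same signal from an antenna
    array, weighting each branch by its received signal power as described in
    the text (a maximal-ratio-style combiner over real-valued samples)."""
    signals = np.asarray(branch_signals, dtype=float)   # shape: (branches, samples)
    weights = np.asarray(branch_powers, dtype=float)
    return np.average(signals, axis=0, weights=weights / weights.sum())
```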

Coding and interleaving can also be used to combat flat fading. Coding and interleaving involves the spreading of a burst error over many "code words." If the errors are sufficiently spread out that each code word has at most one error, then these errors can be corrected easily. This technique results in time diversity without the need for repeat transmissions. However, error-correcting codes typically result in the loss of spectral link efficiency. The cost of coding and interleaving—increased delay and complexity of the interleaver—can be large if the fading rate is slow relative to the data rate, as is typically the case for high-speed data. For example, at a Doppler frequency of 10 Hz and a bit rate of 10 Mbps, an error burst will last for approximately 300,000 bits, and so the interleaver needs to be large enough to handle at least that much data.
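
A simple block interleaver of the kind described above can be sketched as follows; the row and column dimensions are illustrative and would in practice be sized to span the longest expected error burst.

```python
def interleave(bits, rows, cols):
    """Block interleaver: write bits row by row, read them out column by column,
    so a channel error burst is spread across many code words."""
    assert len(bits) == rows * cols
    table = [bits[r * cols:(r + 1) * cols] for r in range(rows)]
    return [table[r][c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Inverse operation performed at the receiver before decoding."""
    table = [bits[c * rows:(c + 1) * rows] for c in range(cols)]
    return [table[c][r] for r in range(rows) for c in range(cols)]

message = list(range(12))
assert deinterleave(interleave(message, 3, 4), 3, 4) == message
```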

In general, flat fading causes bit errors to occur in bursts corresponding to the times when the channel is in a deep fade. Channel codes (discussed in Section 2.1.4) are best suited for correcting one or two simultaneous errors; code performance deteriorates rapidly when errors occur in large bursts.

The third type of countermeasure is adaptive modulation. In theory, the receiver can make an estimate of the channel conditions and send it back to the transmitter, which can then adapt its transmission scheme as appropriate. Adaptation to signal fading enables adjustments in the power level and data rate to take advantage of the favorable conditions, saving more than 20 dB of power. But most modulation and coding techniques do not enable sufficiently rapid adaptation to typical fading conditions. If the channel is changing more rapidly than the rate at which condition estimates are fed back to the transmitter, then adaptive techniques perform poorly. Another drawback is the additional complexity required in the transmitter and receiver to carry out all the requisite steps. Finally, the channel estimate needs to be relayed to the transmitter on a feedback path, which occupies a small amount of bandwidth on the return channel.
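
In its simplest form, the adaptation described above is a lookup from an SNR estimate fed back by the receiver to a constellation size. The sketch below is purely illustrative; the thresholds and rate choices are assumptions, not values from the report.

```python
def pick_bits_per_symbol(estimated_snr_db):
    """Toy rate-adaptation table: map an SNR estimate fed back by the receiver to
    a constellation size; thresholds and rates are illustrative only."""
    if estimated_snr_db > 25:
        return 6   # e.g., 64-QAM in favorable conditions
    if estimated_snr_db > 18:
        return 4   # e.g., 16-QAM
    if estimated_snr_db > 10:
        return 2   # e.g., 4-QAM
    if estimated_snr_db > 4:
        return 1   # e.g., BPSK
    return 0       # defer transmission during a deep fade
```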

2.1.5.2 Countermeasures for Frequency-Selective Fading

Techniques for combating the ISI delay spread of a frequency-selective fading channel fall into two categories: signal processing (at the transmitter or receiver) and antenna solutions. Transmitter-based signal processing techniques, including equalization, multicarrier modulation, and spread spectrum, can make the signal less sensitive to delay spread. Antenna solutions, including distributed antenna systems, small cells, directive beams, and "smart" antennas, change the propagation environment to reduce or eliminate delay spread.

The goal of equalization is to invert the effects of the channel or cancel the ISI. Channel inversion, or linear equalization, can be achieved by passing the received signal through a filter with a frequency response that is the inverse of the channel frequency response (the channel being the original "filter" for the transmitted signals). This process neutralizes the effects of the channel. Although linear equalization can be implemented using relatively simple hardware, the technique has drawbacks in that noise and interference are also passed through the inverse filter. If the channel frequency response is small anywhere within the signal bandwidth, then the noise and interference components at those frequencies are amplified. Thus, on channels with deep spectral nulls, a linear equalizer enhances the noise, resulting in poor performance on frequency-selective fading channels.

A more effective technique is the nonlinear decision-feedback equalizer (DFE), which determines the ISI from previously detected symbols and subtracts it from the incoming symbols. The DFE does not enhance noise because it estimates the channel frequency response rather than inverting it. On frequency-selective fading channels the DFE has a much lower probability of error than does a linear equalizer but also slightly higher complexity. The main drawback of the DFE is the chance of error propagation: If a symbol is detected incorrectly then the associated ISI is still subtracted from subsequent symbols, possibly causing errors in these symbols as well. Moreover, because of decoding delays, the ISI estimates cannot benefit from error-correction coding. Therefore, a DFE can be used only on channels where the probability of error without coding is reasonably low. In addition, because the ISI at low data rates is small, the DFE does not yield substantial BER improvement for data rates much less than 100 kbps. In general, equalizers (especially the DFE) are most beneficial at high data rates, when the product of RMS delay spread and the data rate is much greater than 1. The BER can be improved by as much as 2 to 3 orders of magnitude depending on the data rate and SNR. In indoor environments, 20 Mbps can be achieved at a BER of 10^-3 using a DFE (Pahlavan et al., 1993). The achievable rates on outdoor channels are generally much less because delay spreads are much greater and there are other channel impairments. Commercial outdoor wireless systems using delay-spread countermeasures currently achieve on the order of tens to hundreds of kilobits per second, depending on the available bandwidth.

The use of equalizer techniques requires continuous, accurate estimates of the channel frequency response, usually obtained with finite-impulse-response (FIR) filters in the receiver. The number of filter delay elements (or equalizer taps used to track variations in the channel) is proportional to the delay spread. Updates are needed at the Doppler rate. To assist in the estimation process, the transmitter sends training sequences that have known characteristics. Because they consume bandwidth, training sequences need to be as short as possible to maximize spectral link efficiency. The trade-off is that short training sequences require rapid estimation of the channel and, typically, increased signal-processing complexity.
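
One common way to exploit such a training sequence is to adapt the taps of an FIR equalizer directly against the known symbols with a least-mean-squares (LMS) update. This is only one of several estimation methods; the sketch below is illustrative, and the tap count and step size are assumptions.

```python
import numpy as np

def train_equalizer_lms(received, training, num_taps=8, step=0.01):
    """Adapt the taps of an FIR equalizer against a known training sequence using
    the least-mean-squares (LMS) update; 'received' and 'training' are aligned,
    real-valued sample streams, and the tap count and step size are illustrative."""
    received = np.asarray(received, dtype=float)
    taps = np.zeros(num_taps)
    for n in range(num_taps, len(training)):
        window = received[n - num_taps:n][::-1]   # most recent samples first
        output = taps @ window                    # equalizer output for symbol n
        error = training[n] - output              # error against the known symbol
        taps += step * error * window             # LMS coefficient update
    return taps
```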

Multicarrier modulation is another technique that compensates for delay spread. The transmission bandwidth is divided into subchannels and the information bits are divided into an equal number of streams, which are transmitted in parallel. Each stream is used to modulate one of the subchannels. Ideally, the subchannel bandwidths are narrow enough that the fading on each subchannel is flat as opposed to frequency selective, thereby eliminating ISI. The simplest approach is to
implement nonoverlapping subchannels, but spectral link efficiency can be increased by overlapping the subchannels in such a way that they can be separated at the receiver. This is called orthogonal frequency division multiplexing, which can be implemented efficiently using the fast Fourier transform (invertible mapping from the time domain to the frequency domain) to separate the subchannels in the receiver. In this case the entire signal bandwidth experiences frequency-selective fading because wideband channels tend to have different fading characteristics at different frequencies, and so some of the subchannels will have weak SNRs. Their performance can be improved by coding across subchannels, frequency equalization, or adding more bits in subchannels with high SNRs. Multicarrier modulation offers an advantage in that less training is required for frequency equalization than for time equalization. However, time-varying fading, frequency offset, and timing mismatch impair the separation of the subchannels, resulting in self-interference. Moreover, multicarrier signals tend to have a large peak-to-average signal-power ratio, which severely degrades the power efficiency when nonlinear amplifiers are used.
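
The FFT-based implementation mentioned above can be sketched in a few lines; the cyclic prefix, pulse shaping, and per-subchannel equalization used in real systems are omitted, and the four-symbol block is illustrative.

```python
import numpy as np

def ofdm_modulate(subchannel_symbols):
    """Map one block of subchannel symbols onto overlapping orthogonal subcarriers
    with an inverse FFT (cyclic prefix and pulse shaping omitted)."""
    return np.fft.ifft(subchannel_symbols)

def ofdm_demodulate(samples):
    """Separate the subchannels at the receiver with a forward FFT."""
    return np.fft.fft(samples)

# Over an ideal channel the round trip recovers the transmitted symbols.
tx = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
assert np.allclose(ofdm_demodulate(ofdm_modulate(tx)), tx)
```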

Spread-spectrum techniques increase the signal bandwidth—beyond what is needed to transmit the information—to reduce the effects of flat fading, ISI, and narrowband interference. Each channel is spread over the larger bandwidth by a pseudo-noise sequence, which is used by receivers to invert the spreading operation and recover the original data. Spread-spectrum techniques first achieved widespread use in military applications because they "hide" the signal below the noise floor during transmission, reduce the effects of narrowband jamming, and reduce multipath fading. There are two common forms of this technique: direct sequence, in which the data sequence is multiplied by the pseudo-noise sequence, and frequency hopping, in which the carrier frequency is varied by the sequence.

During the demodulation process, multipath signal components and interference are reduced in two stages: First the spectrum-spreading modulation is removed, and then the remaining signal is demodulated using conventional frequency- or phase-shift techniques to obtain the original data signal. In direct-sequence systems, the received signal is multiplied with an exact copy of the code sequence, perfectly synchronized in time. When narrowband interference and delayed multipath signal components are multiplied by the spreading sequence, their signal power is spread over the bandwidth of the spread-spectrum code. A narrowband filter can be used in the demodulator to remove most of their power. Alternatively, a RAKE receiver can be used to combine all multipath components coherently.[6] A RAKE is a means of implementing diversity (see Section 2.1.5.1).
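
A minimal direct-sequence example is sketched below: each data bit multiplies a short pseudo-noise chip sequence, and a synchronized correlator recovers the bits. The 8-chip code and the +/-1 signaling are illustrative assumptions.

```python
import numpy as np

def spread(data_bits, pn_chips):
    """Direct-sequence spreading: each +/-1 data bit multiplies the whole
    pseudo-noise chip sequence, widening the occupied bandwidth."""
    return np.concatenate([bit * pn_chips for bit in data_bits])

def despread(received_chips, pn_chips):
    """Despreading with a synchronized copy of the code: correlate each bit-length
    block of chips against the code and take the sign."""
    blocks = received_chips.reshape(-1, len(pn_chips))
    return np.sign(blocks @ pn_chips)

pn_code = np.array([1, -1, 1, 1, -1, -1, 1, -1])   # illustrative 8-chip code
data = np.array([1, -1, 1])
assert np.array_equal(despread(spread(data, pn_code), pn_code), data)
```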

2.1.6 Channel Access

In many modern wireless systems, multiple users share the same bandwidth, creating a need for a protocol that ensures efficient, equitable channel access. Wireless-channel access issues are complicated by the variability and statistical nature of user traffic: Voice traffic typically requires a 40 percent duty cycle (i.e., the channel is used 40 percent of the time), whereas data traffic tends to come in bursts with a much lower duty cycle. All traffic generally varies depending on how many transmitters are active. In addition, many new applications do not exhibit the symmetric two-way flow of voice data that is characteristic of standard telephone service. In typical surfing of the World Wide Web, for example, 100 to 1,000 times more data flows to the user than from the user. This variability and asymmetry are creating a need for new access strategies for digital integrated networks.

Channel sharing through fixed-allocation, demand-assigned, or random-allocation modes is called multiple access.[7] The three basic multiple-access techniques—FDMA, TDMA, and CDMA (all introduced in Chapter 1)—can be implemented in any of the three modes.

2.1.6.1 Fixed-Allocation Multiple Access

Fixed-allocation multiple-access techniques assign dedicated channels to multiple users through some type of channel resource division. The assignments are made by a protocol for the duration of a single transmission.[8] In FDMA the total system bandwidth is divided into channels that are allocated to the different users. In TDMA time is divided into orthogonal slots that are allocated to different users. In CDMA (which is the same as direct sequence spread spectrum) time and bandwidth are used simultaneously by different users, modulated by different spreading signals, or codes. The spreading codes allow receivers to separate the signal of interest from the CDMA channel. The three primary competing U.S. standards for cellular and personal-communications multiple access are mixed FDMA/TDMA with three time slots per frequency channel (IS-54), mixed FDMA/TDMA with eight time slots per frequency channel (GSM), and CDMA (IS-95).

The debates over multiple access among standards committees and equipment providers have led to numerous analytical studies claiming the superiority of one technique or another (e.g., Gilhousen et al., 1991; Gudmundson et al., 1992; Baier et al., 1996). However, there is no widespread agreement as to which access technique is the best. Theoretical analysis indicates that under heavy traffic conditions CDMA combined with some form of detecting all users simultaneously[9] (using knowledge
of all spreading codes to eliminate interference) results in higher spectral efficiency than does TDMA or FDMA (Gallager, 1985; Goldsmith, 1997). Without simultaneous detection and in the absence of fading, TDMA and FDMA are more spectrally efficient than is CDMA.[10] The spread spectrum gives CDMA the advantage of soft capacity (there is no absolute limit on the number of users), but performance is degraded in proportion to any increase in users on the system. The TDMA and FDMA techniques place hard limits on the number of users sharing a given bandwidth because each time or frequency slot can support a maximum of one user (less than one if multiple slots are assigned to the same user). In general FDMA is the simplest technique to implement, TDMA is slightly more complex because of the requirement for time synchronization among all the users, and CDMA is the most complex because of the need for code synchronization. Another consideration with respect to CDMA is the need for stringent power control to prevent the "near-far problem," which arises when signals from mobile units close to the base station overwhelm those of units farther away. Such control is difficult to maintain in a fading environment and is one of the major challenges of spread-spectrum multiple access.

Fixed-allocation multiple-access techniques are inefficient for many voice and data applications because the variability in traffic from a single transmitter limits throughput on dedicated channels. For example, a single channel in a two-way voice conversation usually occupies less than half of the available bandwidth; for many data applications the traffic is even more intermittent. Cellular and satellite systems generally serve a slowly changing set of active terminals with a relatively fixed traffic pattern. The inability to predict terminal traffic requirements accurately and the need to handle a dynamic set of active terminals create a need for more flexible forms of multiple access.

2.1.6.2 Demand-Assigned Multiple Access

One method of providing flexibility is the assignment of network channels to remote terminals on demand. In these systems a common signaling channel is assigned to handle requests from transmitters for network capacity. Demand-assigned multiple access (DAMA) is very efficient as long as the "overhead" traffic required to assign channels is a small percentage of the message traffic and as long as the message traffic is fairly steady. Otherwise two types of problems can arise. First there is a set-up delay, or latency period. For transmissions of sufficient length this is not a serious limitation, but for networks with a considerable amount of short, interactive messages the delay and overhead of each message make demand-based assignment impractical.[11] Second, the
transmission of requests on the signaling channel is not possible when the network is overloaded and the transmitter cannot communicate with the hub station, effectively shifting the multiple-access problem from the data channel to the signaling channel.

2.1.6.3 Random Access

When networks serve a wide variety of data rates and the traffic consists of small messages that are roughly the same size as the overhead messages of the access protocol, DAMA is not an efficient use of channel resources. In these cases a connection-free protocol, such as random-access CDMA or ALOHA, is preferable. The random-access CDMA approach requires complex receivers that can demodulate all possible spreading codes. In ALOHA random access, channel packets are stored at each terminal and transmitted over a common channel to the hub station; no attempt is made to synchronize transmissions from the various users. This technique has high reliability in moderate network traffic, but the probability of a collision between packets from different users increases with the traffic. Therefore, such channels are usually sized to operate at about 10 percent of the peak data rate.[12]
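
The sizing rule mentioned above is usually justified with the classical pure-ALOHA throughput curve, sketched below; this standard result is included for illustration rather than taken from the report.

```python
import math

def pure_aloha_throughput(offered_load):
    """Classical throughput of an unslotted (pure) ALOHA channel as a fraction of
    the raw channel rate: S = G * exp(-2G)."""
    return offered_load * math.exp(-2 * offered_load)

# The curve peaks near 18% of the raw rate at G = 0.5, which is why such channels
# are normally sized to carry only a small fraction of their nominal capacity.
print(round(pure_aloha_throughput(0.5), 3))
```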

Conventional ALOHA is a narrowband technology. In wireless networks, ALOHA channels rarely operate at more than 10 kbps or 20 kbps in terrestrial systems and 56 kbps in satellite systems. Because ALOHA can be viewed as a random-access version of TDMA, the same peak-power constraints that limit a TDMA channel also apply to conventional ALOHA channels. Recent work on spread ALOHA, a combination of ALOHA and spread-spectrum transmission, shows how these limits can be overcome for high-data-rate applications.

Throughput is not necessarily the most appropriate performance measure for a multiple-access channel. In the case of power-limited satellite channels or battery-operated transmitters, access efficiency is a more appropriate measure. The access efficiency of an ALOHA random-access channel is the ratio of spectral link efficiency using the ALOHA protocol to the spectral link efficiency of a continuously transmitting channel with the same average power and total bandwidth. The access efficiency of an ALOHA channel approaches 1 (meaning no restrictions are needed on throughput) when most users are idle and transactions are brief, as can be the case for some data communications systems. In other words, this technique offers the highest throughput of any random-access protocol under these conditions.

It is much easier to design an access protocol for a single type of network traffic rather than for a range of traffic types. All the major
digital cellular standards designed for voice applications use a DAMA architecture with some form of ALOHA request channel superimposed over a TDMA or CDMA channel structure. The resulting throughput is adequate for voice applications, but when a network handles data as well as voice the connection-oriented architecture limits the channel throughput. It is difficult to size channels that are assigned on demand for a wide and unpredictable range of user data rates. New, highly flexible random-access structures will probably be needed to enable the seamless integration of data service within a voice network as promised in some new personal communications networks.

2.2 Network Issues

2.2.1 Architecture

The choice of an architecture for a two-way wireless network involves numerous issues dealing with the most fundamental aspects of network design. The primary issue is whether to use a peer-to-peer or a base-station-oriented network configuration. In a peer-to-peer architecture, communication flows directly among the nodes in the network and the end-to-end process consists of one or more individual communication links. In a base-station-oriented architecture, communication flows from network nodes to a single central hub.

The choice of a peer-to-peer or base-station-oriented architecture depends on many factors. Peer-to-peer architectures are more reconfigurable and do not necessarily have a single point of failure, enabling a more dynamic topology. The multiple hops in the typical end-to-end link offer the advantage of extended communication range, but if one of the nodes fails then the localized link path needs to be reestablished. Base-station-oriented architectures tend to be more reliable because there is only one hop between the network node and central hub. In addition, this design tends to be more cost-efficient because centralized functions at the hub station can control access, routing, and resource allocation. Another problem with peer-to-peer architecture is the significant co-site interference that arises for multiple users in close proximity to each other—a problem that can be averted in a base-station-oriented architecture by the coordinated use of transmission frequencies or time slots.

The wireless base-station-oriented architecture is exemplified by cellular telephone systems, whereas the most common peer-to-peer architecture for wireless systems is a multihop packet radio. Fundamental differences between the two types of systems are indicated in Table 2-1.

TABLE 2-1 Comparison of Cellular and Multihop Packet Radio Architectures

Feature            Cellular System           Multihop Packet Radio
Topology           Static                    Dynamic
Number of hops     One                       Multiple
Network control    Centralized               Distributed
Link distance      Fixed (by cell size)      Variable ("over the horizon")

2.2.1.1 Cellular System Design

One of the biggest challenges in providing multimedia wireless services is to maximize efficient use of the limited available bandwidth. Cellular systems, which exploit the falloff in power at increased distances, reuse the same frequency channel at spatially separated locations. Frequency reuse increases spectral efficiency but introduces co-channel interference, which affects the achievable BER and data rate of each user. The interference is small if the users operating at the same frequency are far enough apart; however, area spectral efficiency (i.e., the data rate per unit bandwidth per unit area) is maximized by packing the users as close together as possible. Thus, good cellular system design places users that share the same channel at a separation distance such that the co-channel interference is just below the maximum tolerable level for the required BER and data rate. Because co-channel interference is subject to shadowing and multipath fading, the design of a static cellular system needs to assume worst-case propagation conditions in determining this separation distance. System performance can be improved through dynamic resource allocation, which involves allocating power and bandwidth based on propagation conditions, user demands, and system traffic; however, the increases in spectral and power efficiency are achieved at the price of increased system complexity.

In cellular systems, a given geographical area such as a city is divided into nonoverlapping cells (see Figure 2-2) and different frequencies are assigned to the cells. For FDMA and TDMA, cells using the same frequency are spatially separated such that their mutual interference is tolerable. Frequency reuse in FDMA and TDMA systems introduces co-channel interference in all cells using the same channel. In CDMA, interference can be introduced from all users within the same cell as well as from users in other cells.

Regardless of the multiple-access technique used, area spectral efficiency is maximized when the system is interference limited—that is, when the receiver noise power is much less than the interference power and can be ignored. The received SNR for each user is determined by the amount of interference at the receiver; if the system is not interference limited, then spectral efficiency could be further increased by allowing additional users on the system or reusing the frequencies at reduced distances.

FIGURE 2-2 In cellular systems each cell has a central hub (the base-station-oriented design) and is assigned different frequencies, time slots, or codes, depending on how users access the system.

The transmitter in each cell is connected to a base station and switching office, which allocates channels and controls power. In analog cellular systems, the switching office also coordinates handoffs to neighboring cells when a mobile terminal traverses a cell boundary. In digital cellular systems and low-tier systems, base stations and terminals play a more active role in coordinating handoffs. The spectral efficiency can be increased by dividing each existing cell into several smaller cells because more users can then be accommodated in the system. However, reducing the cell size increases the rate at which handoffs occur, sometimes affecting higher-level protocols. In general, if the rate of handoff increases, then the rate of call dropping will increase proportionally. Routing is also more dynamic with small cells because routes need to be reestablished whenever a handoff takes place.

2.2.1.2 Packet Radio System Design

A packet radio system provides communications to fixed and mobile network nodes that use radios to form the physical links (Lauer, 1995; Leiner et al., 1997). The earliest such systems were the ALOHANET, which operated at the University of Hawaii in the early 1970s, and the DARPA PRNet, a multihop peer-to-peer packet network that operated in the late 1970s and early 1980s. Commercial packet radio networks have been built around single-hop base-station-oriented architectures, as in the Ardis or Mobitex systems, and multihop peer-to-peer architectures, as in the Metricom system (see Chapter 1, Section 1.6). These networks can be constructed with fixed-location infrastructure elements (as in Metricom) or can achieve connectivity in a completely ad hoc manner. In general, multihop ad hoc packet-radio networks can be set up, deployed, and redeployed rapidly. These characteristics are important to military operations. However, multihop ad hoc packet radio networks can pose difficulties in defense applications, because a peer-to-peer architecture does not correspond to the military command structure.

Many of the challenges in packet radio system design are the same as those for any wide-area wireless communications system. These issues include how best to deal with the fading characteristics of RF propagation and whether to use a random or reservation access strategy. Packet radio networks also pose special challenges related to the dynamic nature of the network topology. The terrain, distance between nodes, antenna height, and antenna directionality all influence whether network connectivity can be established and maintained. Physical connectivity in ad hoc packet radios is more complex than in cellular systems because cell sites cannot be surveyed in advance and may be situated in locations that are difficult to access. Furthermore, it is not economical (commercially at least) to use large antennas or extensive antenna processing and directionality at each peer-to-peer node; the network nodes are all more or less identical, highly portable, and always moving, although additional repeaters can be added within the system to improve performance. Repeaters demodulate packets, remodulate them, and send them again.

Military packet radio systems typically operate at lower frequencies than do cellular systems so as to cover large areas within the battlefield. Active interference needs to be considered in system design, and transmitter power is chosen not only to ensure successful reception at the receiver but also to hide the network from adversaries. Military packet radio systems make extensive use of spread-spectrum methods for channel access and in general require a higher degree of flexibility in coding schemes than do commercial systems. Preamble spreading codes (simple versions of the data spreading codes used for synchronization or header information) may be different from the codes used during the data portion of the packet, and codes can be changed on a bit-by-bit basis to reduce the probability of interference (a feature of second-generation DARPA packet radios). All transmitters use either a common preamble code or a receiver-directed preamble code that directs the transmission to a single node that is tuned to the specific code. The latter approach makes it possible for multiple packets to be in the air yet have a low probability of interference.

2.2.2 Physical Resource Allocation

Any system using a fixed assignment of network resources needs to be designed based on worst-case signal propagation and interference assumptions. A more efficient strategy is dynamic resource allocation, in which channels, data rates, and power levels are assigned depending on the current interference, propagation, and traffic conditions. For cellular systems, dynamic resource allocation includes assignment of channels to base stations. Dynamic channel allocation in cellular systems improves channel efficiency by a factor of two or more, even when using simple algorithms (Katzela and Naghshineh, 1996). However, analyses of dynamic resource allocation to date have been based on fairly simplistic system assumptions, such as fixed traffic intensity, homogeneous user demands, fixed reuse constraints, and static channels and users. Little work has been done on resource allocation strategies that consider simultaneous, random variations in traffic, propagation, and user mobility. The extent to which system performance can be improved under realistic conditions remains an open and challenging research problem with respect to both cellular and packet-radio architectures. Previous research has focused primarily on cellular systems; little attention has been devoted to peer-to-peer networks.
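As a rough illustration of the idea (and not one of the algorithms surveyed by Katzela and Naghshineh), the sketch below assigns each new call the first channel not already in use within the cell or by any interfering neighbor; the interference graph, channel count, and function names are assumptions made only for this example.

# Illustrative dynamic channel assignment: pick the first channel not in use
# by the cell itself or by any interfering neighbor. The interference graph
# and channel count are assumed inputs, not parameters from this report.
from typing import Dict, Set, Optional

def assign_channel(cell: str,
                   in_use: Dict[str, Set[int]],
                   interferers: Dict[str, Set[str]],
                   num_channels: int) -> Optional[int]:
    """Return a channel free of co-channel interference for `cell`, or None."""
    busy = set(in_use.get(cell, set()))           # channels the cell already uses
    for neighbor in interferers.get(cell, set()):
        busy |= in_use.get(neighbor, set())       # channels used within reuse distance
    for ch in range(num_channels):
        if ch not in busy:
            in_use.setdefault(cell, set()).add(ch)
            return ch
    return None  # block the call: no interference-free channel available

if __name__ == "__main__":
    interferers = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
    in_use: Dict[str, Set[int]] = {}
    print([assign_channel(c, in_use, interferers, num_channels=3)
           for c in ("A", "B", "C", "B")])        # [0, 1, 0, 2]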

An emerging and important research area focuses on reducing the complexity of dynamic resource allocation, particularly in systems with small cells and rapidly changing propagation conditions and user demands. Even under simplistic assumptions of fixed conditions, optimizing channel allocation is highly complex. Current allocation procedures are not easily generalized to incorporate power control, traffic classes (e.g., multimedia), cell handoff, or user priorities. In addition, the efficiency of dynamic resource allocation is most pronounced under light loading conditions. Thus, the optimal dynamic resource allocation strategy is also dependent on traffic conditions.

2.2.3 Interoperability

For elements of a system to communicate, they must be compatible. One way to achieve compatibility is to mandate a "point design" in which all devices conform to the same standard. As described in Chapter 1, this approach was taken for first-generation cellular systems in the United States and in second-generation systems in Europe. As used in this report, the term "interoperability" refers to the capability of network elements that do not conform to the same standard to communicate. Interoperability can be achieved in two ways using different enabling devices: gateways and adapters. In the compatibility context, a gateway is a device that conforms to more than one standard, whereas an adapter translates information formats between two standards. A cable-ready television set is an example of a gateway, and a set-top box is an example of an adapter.

With respect to military wireless communications systems, there will be no convergence to a single technology in the foreseeable future, for many reasons. The number of incompatible systems will remain high, and yet evolving military missions will require increasing communications between individuals and machines using different systems. As a consequence, interoperability among these systems will be essential.

In sophisticated multimedia networks such as the ones required for military operations in the next century, interoperability is necessary at all layers of a communication protocol. The Internet approach to interoperability is a narrow-waist protocol suite with compatibility at the middle layers (TCP and IP) and diversity at higher (i.e., application) and lower (i.e., physical) layers (Computer Science and Telecommunications Board, 1994). Although this approach can be adopted in all types of communications systems, there are several physical-layer problems unique to wireless communications. For example, wireless systems may operate in different frequency bands and use different modulation and coding techniques. Multimode radios are gateways that address these problems. The commercial dual-mode cellular telephone is an example of this type of radio.

The software radio is a promising means of achieving interoperability at the physical layer. Software radio receivers digitize the RF signal and implement most receiver functions by means of software running on general-purpose hardware (see Section 2.4). Similarly, transmitters synthesize waveforms in digital format and convert them to analog prior to amplification. A software radio can be programmed to be compatible with a number of communications systems and provide interoperability across the required data encoding, transmit waveforms and bandwidths, timing, and clock accuracy of the individual modes. Chapter 3 (Section 3.4.3.1) notes the initial success of experimental SpeakEASY radios in implementing gateways between several military waveforms. However, current-generation software radios are limited in terms of the range of radio waveforms they can handle.

The implementation of gateways and/or adapters raises the issue of where they would best be situated in a communications system. There are various possibilities depending on network architecture. In peer-to-peer networks, the terminals need to be capable of implementing all coexisting technologies. In this case any terminal would be capable of communicating with any other terminal within range; the disadvantage would be the added cost, weight, and power drain relative to single-mode terminals. On the other hand, in base-station-oriented networks, it may be possible to concentrate the tasks of interoperability in base stations. This approach has the disadvantage of disabling communications between terminals when base stations are out of service. This issue could be addressed in research on network architectures (see Chapter 4).

2.2.4 Routing and Mobility Management
2.2.4.1 Multihop Routing

The routing of messages through a multihop packet-radio network requires the identification of existing communication links and an assessment of their relative quality. Routing protocols perform these tasks. The best route is the one with the smallest number of hops providing acceptable connectivity; the link quality can be determined by measuring signal strength, SNR, or BER. Poor link quality can be improved to some extent through the use of higher transmission power, wider spreading codes, aggressive hop-by-hop error correction, or retransmission schemes. However, link capacity is also a function of the traffic on nearby links; it may be necessary to route around nodes experiencing heavy congestion.
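The route-selection rule just described can be sketched as a search for the fewest-hop path over links whose measured quality exceeds a threshold. The graph representation, SNR values, and 10-dB threshold below are illustrative assumptions rather than parameters from any fielded system.

# Illustrative route selection: fewest hops over links whose quality meets a threshold.
from collections import deque
from typing import Dict, Tuple, List, Optional

Link = Tuple[str, str]

def fewest_hop_route(links: Dict[Link, float],   # (node_a, node_b) -> measured SNR (dB)
                     src: str, dst: str,
                     min_snr_db: float = 10.0) -> Optional[List[str]]:
    """Breadth-first search over acceptable links; returns the node list or None."""
    adj: Dict[str, List[str]] = {}
    for (a, b), snr in links.items():
        if snr >= min_snr_db:                    # discard links with unacceptable quality
            adj.setdefault(a, []).append(b)
            adj.setdefault(b, []).append(a)
    parent = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in adj.get(node, []):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None  # network partition, or no link meets the quality threshold

if __name__ == "__main__":
    links = {("A", "B"): 18.0, ("B", "C"): 12.0, ("A", "C"): 6.0, ("C", "D"): 15.0}
    print(fewest_hop_route(links, "A", "D"))     # ['A', 'B', 'C', 'D'] with a 10 dB threshold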

In general, network topologies vary rapidly in mobile packet radio networks, with links constantly being lost and new ones established. Therefore, the network management component needs to disseminate connectivity information more rapidly than is necessary in wired networks. The network also needs to be able to handle gracefully any network partitions caused by link outages, which are more likely to occur in mobile packet radio networks than in a conventional wired network.

Routing algorithms choose a hop-by-hop path based on information about link connectivity. The simplest scheme is flooding, in which a packet is transmitted on all links from the source to neighboring nodes, which then repeat the process. Flooding is inefficient but can be the best strategy when a network topology changes rapidly. Another scheme, connection-oriented routing, maintains a sequence of hops for communications between a single source and specific destination in the network. Given rapid topology changes, network partitions, and large numbers of nodes, keeping this information updated and available to all nodes is difficult at best. A third scheme is connectionless routing, which requires no knowledge of end-to-end connections. Packets are forwarded toward their destination, with local nodes adapting to changes in network topology.

Connection-oriented and connectionless approaches require that routing information be distributed throughout the network. In small networks this distribution was originally accomplished by a centralized routing server; by now, distributed algorithms with improved scaling behavior have largely replaced centralized servers, especially in large networks. Each node independently determines the best hop in the direction of the destination, and updated routing tables are periodically exchanged among neighboring nodes.

Routing schemes that combine elements of the centralized and distributed approaches have also been used. For very large multihop packet radio networks, such schemes impose a hierarchy on the network topology, hiding changes in the distant parts of the network from local nodes (the next-hop routes to distant network nodes are not likely to change as rapidly as are routes within each cluster). A combined strategy is to use a centralized route server, known as a cluster head, to maintain routes between clusters in the direction of "border radios."

A final routing issue relates to packet forwarding, which is initiated when several transmission attempts fail to deliver a message to the next node. In these cases a node engages in localized rerouting, broadcasting the message to any node that can complete the route. Packet forwarding can cause flooding if multiple nodes hear the request and choose to forward the packet. The process can be optimized by filtering based on overheard traffic: If a node has a packet in its send queue and "hears" the same packet being forwarded from a second node, then the first node assumes that the packet has been sent and removes it from the queue.
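A minimal sketch of the overheard-traffic filter described above follows; the packet identifiers and queue structure are assumptions made only for illustration.

# Illustrative duplicate suppression: if a queued packet is overheard being
# forwarded by another node, assume it has been delivered and drop our copy.
from typing import List

class ForwardQueue:
    def __init__(self) -> None:
        self.pending: List[str] = []          # packet identifiers awaiting (re)transmission

    def enqueue(self, packet_id: str) -> None:
        if packet_id not in self.pending:
            self.pending.append(packet_id)

    def on_overheard(self, packet_id: str) -> None:
        # Another node has forwarded this packet; no need to send our copy.
        if packet_id in self.pending:
            self.pending.remove(packet_id)

if __name__ == "__main__":
    q = ForwardQueue()
    q.enqueue("pkt-17")
    q.enqueue("pkt-18")
    q.on_overheard("pkt-17")                  # heard a neighbor forward pkt-17
    print(q.pending)                          # ['pkt-18']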

2.2.4.2 Terminal Mobility

The mobile internetworking routing protocols (Mobile IP) were designed to accommodate the mobility of Internet users. There is some disagreement concerning whether Mobile IP was originally designed for an individual user moving from one fixed location to another (Myles et al., 1995) or for on-the-move wireless operations. In any case, the Internet Engineering Task Force (IETF) has addressed the issue of full mobility, and Mobile IP is now suited for highly mobile users.13 On the Internet, every node (fixed or mobile) has a unique identifying address, its IP address.

Mobile IP has to circumvent the association of IP addresses with specific networks because mobile nodes can attach to and detach from multiple networks as they roam. Changing an IP address on the fly is not always possible. If a node has an open TCP connection when its IP address changes, then the TCP connection will fail. If the node requires an accurate Domain Name System (DNS) entry, then the entry will need to be updated as the address changes, and in today's implementation of DNS such an update can be very slow.

Communications take place between a sender and receiving mobile host (MH). In the Mobile IP specifications, every MH has a home network and its IP address is called the home address. A router called the home agent, which resides in the MH's home network, is responsible for intercepting each packet destined for the home address of a roaming MH. The packet is placed inside another IP packet through a process called "IP-in-IP encapsulation." The source address in the encapsulating packet is that of the home agent. The packet is usually sent "in care of" another agent, the foreign agent, which resides in the network in which the MH is roaming. The packet is sent by conventional IP routing to the foreign agent, where the contents (i.e., the original packets) are removed and delivered to the MH. The MH can transmit information directly to the sender but the sender always directs its own communications to the home network. The MH can also request a locally assigned care-of IP address in its roaming domain by invoking the dynamic host configuration protocol; this address could be used by the home agent directly, eliminating the foreign agent.

When an MH enters a new mobile subnetwork it needs to obtain a care-of address. It can find a foreign agent using a process built on top of the existing Internet control message protocol capabilities for router discovery. Once accepted by the local network, the MH registers its new care-of address with its home agent. All registration attempts need to be carefully authenticated to prevent a malicious user from hijacking the packets simply by furnishing another care-of address. The Mobile IP specifications use a message authentication code (similar to a digital signature) based on a secret key shared by the MH and home agent, typically using a secure one-way function called MD5. Only the MH that knows the secret key can provide the digital signature expected by the home agent. Replay protection is required to prevent a malicious user from falsely registering an MH with a stale care-of address.
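The sketch below illustrates the kind of shared-secret authenticator described above: a keyed MD5 digest computed over an assumed registration message that carries an identification field for replay protection. The message layout and function names are illustrative and do not reproduce the actual Mobile IP packet format.

# Illustrative registration authenticator: keyed MD5 over the registration data,
# with an identification field (timestamp plus nonce) for replay protection.
# The message layout here is an assumption, not the actual Mobile IP format.
import hashlib
import os
import time

def make_authenticator(secret: bytes, registration: bytes) -> bytes:
    """Keyed MD5 in "prefix+suffix" form: MD5(secret || message || secret)."""
    return hashlib.md5(secret + registration + secret).digest()

def build_registration(home_addr: str, care_of_addr: str) -> bytes:
    # The identification field lets the home agent reject stale (replayed) requests.
    identification = f"{time.time():.0f}-{os.urandom(4).hex()}"
    return f"{home_addr}|{care_of_addr}|{identification}".encode()

if __name__ == "__main__":
    shared_key = b"mh-and-home-agent-secret"        # assumed shared secret
    request = build_registration("10.0.0.7", "192.0.2.44")
    auth = make_authenticator(shared_key, request)
    # The home agent recomputes the digest; a mismatch or a stale
    # identification value causes the registration to be rejected.
    assert auth == make_authenticator(shared_key, request)
    print(request.decode(), auth.hex())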

The major performance challenge is to circumvent the indirect routing among the sender, home agent, and foreign agent. This path can be eliminated if the sender caches bindings between the MH's home and care-of addresses. The management of these bindings is called route optimization.14 For example, when a sender first sends a packet to an MH through its home agent, the home agent could send a binding-update message to the original sender. Until the binding expires because of a time-out, the sender can use the care-of address directly. If the MH moves to a new subnetwork, then it can ask its former foreign agent to forward packets to the new care-of address while also alerting senders of that new address.

2.2.4.3 Wireless Overlay Networks

No single network technology can simultaneously offer wide-area coverage, high bandwidth, and low latency. In general, networks that span small geographical areas (e.g., LANs) tend to support high bandwidth and low latency, whereas networks that span large geographical areas (e.g., satellite networks) tend to support low bandwidth and high latency. To yield flexible connectivity over wide areas, a wireless internetwork needs to be formed from multiple wide-, medium-, and local-area networks interconnected by wired or wireless segments (Katz and Brewer, 1996). This internetwork is called a wireless overlay network because the WANs are laid on top of the medium- and local-area networks to form a multilayered network hierarchy. A user operating within the LAN enjoys high bandwidth and low latency, but when communicating outside the local coverage area the user accesses a wider-area network within the hierarchy, typically sacrificing some bandwidth or latency in the process.

Future mobile information systems will be built on heterogeneous wireless overlay networks, extending traditional wired and internetworked processing "islands" to hosts on the move over a wide area. Overlay technologies are used in buildings (wireless LANs), in metropolitan areas (packet radio), and in regional areas (satellite). The software radio, with its capability to change frequencies and waveforms as needed, is a critical enabling technology for overlay networks.

Handoffs may take place not only "horizontally" within a single network but also "vertically" between overlays. If each overlay network assigns the MH a different IP address, then Mobile IP needs to be extended to correlate all the addresses for one user. Alternatively, the mobile node can treat each new IP address as a new care-of address. The home agent maintains a table of bindings between the home and locally assigned addresses. The applications running on the MH may participate in the choice of route. For example, an application might specify that high-priority traffic traverse an overlay with low latency. Less-critical traffic might travel over higher-latency connections. Signal quality, BER, and packet loss and retransmission need to be considered. Under certain conditions such as the transmission of urgent data, a slow-speed overlay with a strong signal strength might be preferred to one with a higher bandwidth but a weaker signal.
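The following sketch illustrates one way such an application-directed overlay choice might look: urgent traffic favors the overlay with the strongest signal and lowest latency, while bulk traffic favors bandwidth. The overlay attributes and selection rules are assumptions for illustration only.

# Illustrative overlay selection: pick a network layer per traffic class.
# Attribute values and the selection rules are assumptions, not measured data.
from dataclasses import dataclass
from typing import List

@dataclass
class Overlay:
    name: str
    bandwidth_kbps: float
    latency_ms: float
    signal_margin_db: float   # received signal strength above the receiver threshold

def choose_overlay(overlays: List[Overlay], urgent: bool) -> Overlay:
    if urgent:
        # Urgent data: favor a reliable, low-latency link even if it is slow.
        return max(overlays, key=lambda o: (o.signal_margin_db, -o.latency_ms))
    # Bulk data: favor raw bandwidth.
    return max(overlays, key=lambda o: o.bandwidth_kbps)

if __name__ == "__main__":
    candidates = [
        Overlay("in-building LAN", 2000.0, 10.0, 3.0),
        Overlay("metropolitan packet radio", 100.0, 80.0, 12.0),
        Overlay("regional satellite", 64.0, 500.0, 6.0),
    ]
    print(choose_overlay(candidates, urgent=True).name)    # metropolitan packet radio
    print(choose_overlay(candidates, urgent=False).name)   # in-building LAN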

2.2.5 Resource Discovery

New protocols are being developed to support convenient operations by mobile users. One example is the service location protocol, which allows user agents to determine access information for generic network services such as printing, faxing, schedule management, file system access, and backup. A directory agent delivers universal resource locators (URLs) to user agents, which use the URLs to access service agents. New service agents can register or withdraw their URLs as needed. Much of the protocol research is geared toward enabling the identification of directory agents in unfamiliar environments. Other strategies based on modifications to directory services have been proposed as well.

2.2.6 Network Simulation and Modeling Tools

Network performance analysis can take three forms: mathematical analysis, experimental trials, or system simulation. Mathematical analyses can incorporate only a limited range of realistic phenomena, and field trials are expensive as well as difficult to set up under repeatable conditions. Consequently simulation is often the best tool for optimizing system design and predicting performance.

Network-level simulation tools are used to simulate the dynamic behavior of routing, flow, and congestion-mitigation schemes in packet-switched data networks. These tools can model arbitrary network topologies, link-error models, router scheduling algorithms, and traffic.15 Network simulators have been used to investigate new link-layer algorithms for packet scheduling and retransmission, new mechanisms within routers for determining local congestion conditions and sending this information to transmitters and receivers, and transport-layer algorithms for retransmission and rate control. Performance tools can also help troubleshoot problems in real networks by collecting statistics about the throughput of the various nodes and links. This information can be used to identify bottlenecks and develop remedial strategies such as changing the topology of the network. Debugging tools enable the protocol designer to capture detailed traces of network activity (McCanne and Jacobson, 1993); these tools are invaluable for tracking down errors in protocol implementation.

To model mobile networks accurately, simulators require special features, some of which have yet to be developed. They need to model the nature of errors on the wireless link precisely, because errors are not uniformly distributed but rather tend to cluster (Nguyen et al., 1996). They also need to model node mobility, especially in the case of packet radio networks. Existing simulation technology consists of good models of radio propagation at microwave frequencies but only standard teletraffic models and few abstract mobility models. Some proprietary tools integrate geographical modeling, propagation, and cellular networking behavior, but no integrated tools are available commercially to predict the performance of the next generation of wireless technologies, such as smart radios.
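The clustering of wireless bit errors noted above is commonly captured with a two-state Markov (Gilbert-Elliott-style) model, in which a "bad" state produces bursts of errors. The sketch below uses assumed transition probabilities and per-state error rates chosen only for demonstration; none of the numbers come from this report.

# Illustrative two-state burst-error channel (Gilbert-Elliott style):
# a "good" state with rare bit errors and a "bad" state with frequent errors.
# All probabilities below are assumed values chosen only for demonstration.
import random

def simulate_bit_errors(n_bits: int,
                        p_good_to_bad: float = 0.001,
                        p_bad_to_good: float = 0.05,
                        ber_good: float = 1e-6,
                        ber_bad: float = 1e-2,
                        seed: int = 1) -> int:
    rng = random.Random(seed)
    state_bad = False
    errors = 0
    for _ in range(n_bits):
        # Errors cluster because the chain lingers in the bad state once it enters it.
        errors += rng.random() < (ber_bad if state_bad else ber_good)
        if state_bad:
            state_bad = rng.random() >= p_bad_to_good
        else:
            state_bad = rng.random() < p_good_to_bad
    return errors

if __name__ == "__main__":
    print(simulate_bit_errors(1_000_000), "bit errors in 10^6 bits")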

Similarly, existing tools can simulate the creation of relatively narrowband waveforms at the transmitter and analyze the effects of radio propagation on the received signal, but they cannot model the antenna radiation or reception properties of a signal that spans more than 1 GHz of spectrum. No existing tool can model the propagation performance of urban, suburban, rural, or free-space radiation of wideband signals containing components with diverse propagation characteristics. No tool can analyze the effects of the motion of network elements on the received signal's multipath characteristics, such as spectral nulls and Doppler shift over wide bandwidths. No widely available tools allow for the geographical or topological analysis of specific network deployments. Finally, no analysis tool is sophisticated enough to examine the performance of software radios or radio networks in the presence of interference sources common to wideband mobile communications. The evaluation and optimization of mobile wireless networks would be enhanced by the development of sophisticated, flexible models of communications traffic and node mobility.

2.3 End-to-End System Design Issues

Most end-to-end system design issues, such as security, design tools, and interoperability with other systems, are relevant to any wireless application (Katz, 1994). However, some end-to-end design issues depend on the application(s) to be supported by the network. For example, videoconferencing is an extremely challenging application for a wireless system because of its high bandwidth requirement and strict constraints on end-to-end delay. To support this application, the capability to adapt to channel conditions, perhaps through a slight degradation in image quality, might be built into the end-to-end system protocols. Similarly, many military command-and-control operations require the capability to assign priorities to certain messages, and this flexibility needs to be built into the system. The following sections deal with three key end-to-end design issues for wireless systems: application-level adaptations, quality of service (QoS), and system security.

2.3.1 Application-Level Adaptation

A system can adapt to the variability in mobile client applications in three ways. One approach is to exploit data-type-specific lossy (i.e., involving some distortion) compression mechanisms and use data semantics to determine how information can be compressed and prioritized en route to the client.16 A second approach is on-the-fly adaptation involving the transcoding of data into a representation that can be handled by the end application. The third approach is to push the complexity away from the mobile clients and servers into proxies, which are often used in wired networks but are not currently optimized for wireless applications.

Introduced in response to security concerns, the proxy approach is a new paradigm for distributed applications. A proxy is an intermediary that resides between the client and server—outside the client's security firewall—to filter network packets on behalf of the client. Proxies provide a convenient place to change data representations en route to the client (thereby mitigating the lossy, bandwidth-constrained nature of wireless links), perform type-specific compressions, cache data for rapid re-access, and fetch data in anticipation of access. By supporting the adaptation to network variations in bandwidth, latency, and link error rates as well as to hardware and software variations, proxies enable client applications running on limited-capability end nodes to appear as if they were running on high-end, well-connected machines. Low-end clients (e.g., PDAs) have limited capabilities because of small displays and memory, relatively slow processors, and limited-capability software and run-time environments.

2.3.2 Quality of Service

Quality of service refers to traffic-dependent performance metrics—bandwidth, end-to-end latency, or likelihood of message loss—that a connection must have or can tolerate for the type of data transmitted. A network's admission-control mechanisms, which are invoked whenever a new connection is initiated, provide assurance that QoS requirements will be met; a new connection is aborted if its QoS requirements cannot be met. Attention to QoS issues is increasing because of at least two converging trends: the growing market for applications (e.g., video) that require real-time service, and the evident interest in using the Internet for a range of activities that are critical to both the public and private sectors.

2.3.2.1 Approaches to Quality of Service

Within the Internet, three categories of QoS are currently defined: guaranteed, predictive, and best-effort service. Guaranteed service is achieved if the connection conforms to a well-specified traffic specification. If the network determines that it can support this traffic, then it allows the connection to be established and guarantees that its requirements will be met. Guarantees of this type are required when the application needs tight, real-time coupling between the end points of the connection. An example of guaranteed service is the telephone system, which is designed to meet the level of perceived audio/speech quality, end-to-end switching delays, and likelihood of call blocking required for telephone calls. Certain values for these metrics are determined and the network is designed to offer the desired number of simultaneous connections. Another example of guaranteed service is that provided by ATM, a transmission protocol that handles voice, data, and video. If absolute guarantees are too expensive, then it is often sufficient to provide predictive service, indicating that the application's requirements are highly likely to be met.

The Internet as it exists today does not provide explicit QoS to different packet flows; instead it is based on a best-effort model that makes no performance guarantees. (Flows are groups of packets that share common characteristics, including the tolerated delay.) Best-effort services are appropriate for applications that do not demand real-time performance and that can adapt gracefully to the bandwidth available in the network. Best-effort service tolerates simple network components and is a good match for data transmitted in interactive bursts, interactive bulk transfer, and asynchronous bulk transfer. The common Internet data transfer applications are sensitive to losses but tolerant of latency. However, the reverse is true for emerging real-time Internet applications, which are tolerant of losses but sensitive to latency. The IETF is working to provide guaranteed services on the Internet (Peterson and Davie, 1996; Tanenbaum, 1996).

The Internet carries two broad classes of applications: delay tolerant and delay intolerant. The former applications, such as file transfer, tolerate some packet losses and delays. For these services, which are common today, the network does not reserve resources or limit the number of transfers in progress at any one time. Instead, it shares the available bandwidth among all the active applications as best it can. This is the so-called best-effort service. Delay-intolerant applications require data that is delivered with little or no delay. For these applications, different services are being developed. The components of these services consist of traffic and network descriptions, admission control procedures, resource reservation protocols, and packet scheduling mechanisms. These specifications are associated with flows. Given a service specification, the network can admit a new flow or deny access when the specifications exceed what the network can provide. The network can also police a flow to ensure that it meets the traffic specification.

Real-time Internet applications are developed on top of the real-time transport protocol (RTP). With RTP, a node moderates its transmission rate based on periodic reports of successfully received data at the receiver. If the sender's rate exceeds that which is reported as received, then the sending rate is reduced. Periodically the sender probes the network by attempting to increase the rate to see if a higher rate can be supported. In this way, the sender and receiver adapt to the available bandwidth without requiring any special support from the network itself. The RTP protocol is appropriate for real-time data streams, such as video and audio, that can tolerate some losses.
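A simple sketch of this probing behavior appears below; the back-off factor, probe step, and loss threshold are assumed values and do not represent the actual RTP/RTCP algorithms.

# Illustrative sender rate adaptation driven by receiver reports.
# The multiplicative decrease and additive probe step are assumed values.
def adapt_rate(current_kbps: float,
               reported_loss_fraction: float,
               min_kbps: float = 16.0,
               max_kbps: float = 512.0) -> float:
    if reported_loss_fraction > 0.02:
        # Receiver reports losses: back off.
        return max(min_kbps, current_kbps * 0.75)
    # No significant loss: probe for more bandwidth.
    return min(max_kbps, current_kbps + 16.0)

if __name__ == "__main__":
    rate = 128.0
    for loss in (0.0, 0.0, 0.10, 0.05, 0.0):   # hypothetical per-report loss fractions
        rate = adapt_rate(rate, loss)
        print(f"loss {loss:4.2f} -> send at {rate:6.1f} kbps")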

The real-time Internet services proposed by the IETF use the resource reservation protocol (RSVP), which enables dynamic changes in QoS and permits receivers to specify different QoS requirements.17 The RSVP protocol is closely integrated with multicast services in which receivers determine a path through the network on which senders distribute their traffic specifications and receivers distribute their network service requirements. These sender-directed path messages and receiver-directed reservation messages are built on top of the existing multicast protocols. Senders and receivers are responsible for periodically signaling the network about their changing specifications. Once the reservations have been made, the final step is to implement them in every router on the path from sender to receiver through packet classification and scheduling. The router implementation achieves the performance specified by the network end points. The classification process maps packets on flows into their associated reservation, and scheduling drives queue management to ensure that the packets obtain their requested service.

Proponents of ATM networks view them as the foundation of integrated voice, video, and data services because they combine flexibility with performance guarantees. The ATM approach involves breaking up data into short packets of fixed size called cells, which are interspersed by time division with data from other sources and delivered over trunk networks. An ATM network can scale up to high data rates because it uses fast switching and data multiplexing based on these fixed-format cells, which contain 48 bytes of traffic combined with a 5-byte header defining the virtual circuits and paths over which the data are to be transported. The virtual circuits need to be established before data can flow. In setting up connections, the network makes resource allocation decisions and balances the traffic demands across network links, thereby separating data and control flows and enabling switches to be simple and faster.
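For reference, the fixed cell format implies a constant header overhead of 5/53, or roughly 9.4 percent of link capacity, regardless of the traffic carried.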

Two types of guaranteed service are available on ATM networks: constant bit rate (CBR) and variable bit rate (VBR). The CBR service is appropriate for constant-rate data streams that demand consistency in delay. An example of such a data stream is telephone traffic that uses constant-bit-rate encodings for audio and places bounds on delivery latency. The VBR service is appropriate for traffic patterns that have a fairly sustained rate but also may feature short bursts of data at the peak transmission rate. Burst-type traffic tolerates higher delays and higher variations in delay than does constant-rate traffic. Two best-effort traffic classes are available on ATM networks: available bit rate, which guarantees zero losses (but makes no other guarantees) if the source follows the traffic management signals delivered by the network; and unspecified bit rate, which provides no performance guarantees.

The connection orientation of ATM presents problems when dealing with network mobility. Movement between cells requires existing connections to be reestablished: Even brief transmissions invoke the full latency of connection setup. In addition, ATM is not appropriate for lossy links because there is no agreed-upon mechanism for error recovery or retransmission at the link layer. Considerable controversy exists as to whether ATM will be used throughout a system or only at the link or subnetwork level. Full ATM connectivity, from one end of a system to the other, is required to take advantage of the service guarantees.18 Many believe that it will be necessary to run TCP/IP over ATM to ensure interoperability across heterogeneous subnetworks. Debate continues over how to interface ATM's performance guarantees with the emerging Internet capabilities for predictive service.

Wireless communications introduce additional QoS issues. The QoS guarantees for expected loss rates, latencies, and bandwidths were developed based on the assumption that switched, fiber-optic wired networks would be used. Such networks feature low link-error rates, easily predicted link bandwidths, and QoS parameters that are largely determined by how the queues are managed within the switches. As a result, losses are due almost entirely to congestion-related queue overflows. Wireless links, on the other hand, have high bit-error rates, high latencies due to link layer retransmissions, and unpredictable link bandwidths.19 Furthermore, the quality of a wireless link varies over time, and connections can be lost completely. Two wireless end nodes sharing the same link can experience vastly different link bandwidths depending on their relative proximity to the base station, location in a radio fade, or loss of receiver synchronization in a multipath environment. Link quality can also be degraded by interference from a nearby transmitter. In addition, hidden terminals cause time-consuming back-off (i.e., waiting before resending) that further degrades network performance.

Because link quality varies on small time scales, it is difficult to improve a wireless link through agile error coding or increased transmit power. Moreover, attempts to improve a link for one user can adversely affect others. Guarantees are elusive in this complex environment.20 The ATM end-to-end QoS model is difficult to implement if the limiting link is wireless. In general, however, approaches such as adaptive spreading codes, FEC, and transmit-power control can be used at the media access control and link layers to improve the performance of higher protocol layers in a wireless environment (Acampora, 1996).

2.3.2.2 Transport-Layer Issues

The most widely used reliable transport protocol is TCP, a connection-oriented protocol that combines congestion control with "sliding-window" flow control at the sender and cumulative acknowledgments from the receiver. As each segment is received in sequence, the receiver generates an acknowledgment indicating the number of bytes received. In the current generation of TCP, congestion is controlled by the sender, which maintains a variable called the congestion window that regulates how much data the sender can have in flight across the network at any one time. The sender adjusts the congestion window in response to perceived network conditions.

When a TCP connection first starts, or after a major congestion event, the congestion window is set to one packet, which means the sender cannot send a second packet until it receives an acknowledgement of the first. The sender then adjusts the congestion window by doubling it each round-trip across the net. This part of the algorithm is called slow-start. Once a certain threshold is crossed, the congestion-avoidance phase is triggered and the window size grows in increments of a single packet for each round-trip. The sender uses these mechanisms to probe the network to discover how much data can be in flight.

A lost packet creates a gap in the sequence number of data arriving at the receiver. When this occurs, the receiver generates a duplicate acknowledgment for the last segment received in order. When a threshold number of duplicate acknowledgments is received, the sender retransmits the lost segment and halves the congestion window; this part of the algorithm is known as fast retransmission and recovery. A more serious congestion event can cause the loss of so many packets that no duplicate acknowledgments are generated by the receiver. The sender detects and corrects this situation using a timer. The TCP protocol sets time-outs as a function of the mean and standard deviation of the round-trip time. If no acknowledgment is received within this interval, then the sender retransmits the first unacknowledged segment, sets its window to one packet, and reenters the slow-start phase. This event causes a major reduction in throughput until the window opens and also can cause a silent period until the timer expires.
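The sketch below traces the window dynamics just described: doubling during slow-start, linear growth during congestion avoidance, halving on fast retransmission, and collapse to one segment after a timeout. The event trace, the initial threshold, and the factor of four applied to the round-trip-time deviation are illustrative assumptions rather than the behavior of any particular TCP implementation.

# Illustrative TCP congestion-window evolution (units: segments).
# The event trace and thresholds are assumed inputs chosen to exercise each phase.
def evolve_cwnd(events, initial_ssthresh=16.0):
    """events: per-round-trip strings 'ack', 'dupack' (fast retransmit), or 'timeout'."""
    cwnd, ssthresh = 1.0, initial_ssthresh
    history = []
    for ev in events:
        if ev == "timeout":
            # Retransmission timer expired: collapse the window and restart slow-start.
            ssthresh = max(cwnd / 2, 2.0)
            cwnd = 1.0
        elif ev == "dupack":
            # Fast retransmission and recovery: halve the window.
            ssthresh = max(cwnd / 2, 2.0)
            cwnd = ssthresh
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)   # slow-start: double each round-trip
        else:
            cwnd += 1.0                      # congestion avoidance: one segment per round-trip
        history.append(cwnd)
    return history

def retransmission_timeout(mean_rtt_ms, rtt_deviation_ms):
    # The time-out grows with both the average and the variability of the round-trip
    # time; the factor of four is a conventional choice, assumed here.
    return mean_rtt_ms + 4.0 * rtt_deviation_ms

if __name__ == "__main__":
    trace = ["ack"] * 6 + ["dupack"] + ["ack"] * 3 + ["timeout"] + ["ack"] * 4
    print([round(w, 1) for w in evolve_cwnd(trace)])
    print("RTO:", retransmission_timeout(200.0, 50.0), "ms")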

Fast retransmission works well in many circumstances today. However, specific issues arise in wireless systems. When a packet is lost or damaged because of bit errors on the wireless link, this loss is detected and corrected by the fast-recovery algorithm, but fast retransmission reduces the window size as a side effect, thus keeping throughput low. These problems can be mitigated through a TCP-aware link layer (Balakrishnan et al., 1996), in which the base station triggers local retransmissions of lost segments. By intercepting the duplicate acknowledgments, the base station shields the sender from the effects of local losses that would have the effect of shrinking the congestion window and reducing throughput. More seriously, burst errors on the wireless link can cause the loss of several packets, which will trigger the slow-start algorithm even though there is no congestion.

Asymmetric connections, in which the bandwidth in one direction far exceeds that available for the opposite path, can present problems for transport-layer connections because the effective bandwidth on the forward path is limited by the amount of acknowledgment traffic that can be sent along the reverse path. An example of asymmetry is direct-broadcast satellite, which sends data to the user at several hundred kilobits per second but uses substantially smaller-bandwidth technologies (such as conventional telephone or wide-area wireless) for the return path at tens of kilobits per second. Asymmetries also arise because of the nature of data traffic patterns. For example, Web access involves much more data transmitted from servers to users than in the opposite direction. Yet poor performance on the acknowledgment path limits the performance on even a high-bandwidth forward path. One solution is to compress the acknowledgment packets; another is to delay acknowledgments so that each one acknowledges an expanded range of received data.
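To picture the second remedy, the sketch below has the receiver return one cumulative acknowledgment for every fourth in-order segment, cutting reverse-path traffic roughly fourfold; the class structure and the choice of four are assumptions for illustration.

# Illustrative "stretch" acknowledgments for an asymmetric link: send one
# cumulative ACK per k in-order segments instead of one per segment.
from typing import Optional

class DelayedAckReceiver:
    def __init__(self, ack_every: int = 4) -> None:
        self.ack_every = ack_every
        self.highest_in_order = 0
        self.unacked = 0

    def on_segment(self, seq: int) -> Optional[int]:
        """Return a cumulative ACK number when one is due, else None."""
        if seq == self.highest_in_order + 1:
            self.highest_in_order = seq
            self.unacked += 1
            if self.unacked >= self.ack_every:
                self.unacked = 0
                return self.highest_in_order      # one ACK covers all segments so far
            return None
        return self.highest_in_order              # out of order: duplicate ACK right away

if __name__ == "__main__":
    rx = DelayedAckReceiver(ack_every=4)
    acks = [rx.on_segment(s) for s in range(1, 10)]
    print(acks)   # ACKs only after segments 4 and 8; far fewer reverse-path packets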

Asymmetries can undermine reliability in the application of TCP to wireless links. The TCP protocol can adapt the duration of its time-outs as long as the round-trip time estimate is not highly variable. Asymmetries in the connection bandwidth, coupled with differential loss rates and congestion effects on the forward and reverse paths, increase the variability in estimated round-trip time. This means that when losses do occur, the retransmission time-outs can become very large, significantly degrading a connection if losses occur often.

2.3.3 Security

Wireless communication systems are inherently less private than are wired systems because the radio link can be intercepted without any physical tap, undetected by the transmitter and receiver. Wireless networks are therefore especially vulnerable to eavesdropping, usage fraud, and activity monitoring, threats that will grow as wireless banking and other commercial services become available. In addition, both wired and wireless networks need to be designed to maintain the integrity of data and systems and assure the appropriate availability of services. Thus, security is an important issue for both commercial and military applications. For purposes of this discussion, which considers key aspects of the information security challenge but is not comprehensive, the issues can be divided into three categories: network security, radio link security, and hardware security.21

Network security encompasses end-to-end encryption and measures to prevent fraudulent network access and monitoring. One user-oriented framework distinguishes several levels of end-to-end encryption (Garg and Wilkes, 1996). Level 0 has no encryption, meaning that anyone with a scanner and knowledge of the communication link design can intercept a transmission. Analog cellular telephones offer this level of security, which has been a problem and has motivated security upgrades in the digital cellular standards.22 Level 1 provides low-level security such that individual conversations might take a year or more to decrypt. This level is probably secure enough for commercial telephony applications, provided that an equivalent effort would be needed to decrypt subsequent conversations ("perfect forward secrecy"). Level 2 provides increased (perhaps by a factor of 10) security for sensitive information related to electronic commerce, mergers and acquisitions, and contract negotiations. Level 3 provides the most stringent level of security, meeting government and military communications requirements as defined by the appropriate agencies.

Radio link security prevents the interception of radio signals, ensuring the privacy of user location information and, for military applications, AJ and low-probability-of-detection-and-interference (LPD/I) capabilities. Link security was primarily a military concern before commercial wireless communications became prevalent. Military systems are designed to avert the detection of radio signals, jamming of communication links, and interception and decoding of messages. Many military radios are based on spread-spectrum technology, which provides both AJ and LPD/I capabilities. However, because knowledge of the spread-spectrum code would enable an adversary to intercept a spread signal, encryption is usually applied as well to prevent signal interception and message recovery. Many military techniques for reducing interception and detection are classified.

For commercial systems the primary link-security issue is privacy, which is not typically assured. Conversations on analog cellular telephones are accessible to anyone with an FM scanner, as demonstrated by recent publication of communications involving public figures. Moreover, the location of a cellular user can be determined by triangulating the signal from two or more base stations, a feature that has been exploited successfully by law enforcement authorities. It is difficult to prevent the interception of commercial radio signals, not only because communications protocols are publicized in patents and standards but also because most communications devices have a "maintenance" mode for monitoring calls (a capability intended for testing purposes that could also be used to eavesdrop).

It is unlikely that commercial devices will ever require a level of security equivalent to military systems and may not even provide the "hooks" enabling the addition of LPD/I capabilities. Similarly, although the growing use of wireless systems and growing dependence on networked communications have heightened concerns about the possible denial of service in commercial contexts, there is probably greater tolerance for private-service outages than for jamming in a military situation at this time.

Hardware security also has different implications for commercial and military applications, although encryption keys typically need to be protected in both contexts.23 Commercial systems require sufficient security to prevent the fraudulent use of information in the event of theft or loss, and user databases need to be secured against unauthorized access. The military has similar requirements but at a much higher security level. It also has additional requirements: Military devices need to be protected so that opening them will not reveal any of the specialized hardware or software technology.

2.4 Hardware Issues

Among the hardware issues that are critical to third-generation wireless systems, radio stands out as being central to the military mission. The radio receiver consists of an antenna, RF amplifier, mixer, filters, demodulator, and decoder (see Figure 2-3). Radio signals are received by the antenna, amplified, passed through the mixer and filters, demodulated, and decoded. Transmitters have similar architectures but the operations are performed in reverse order: The data are encoded, modulated, passed through the filters and mixer, amplified, and transmitted through the antenna.

FIGURE 2-3 A radio receiver has six basic components. In transmit mode the operations proceed in the reverse order.

Section 2.4.1 reviews antenna technology, which has become increasingly sophisticated with the addition of adaptive capabilities. Section 2.4.2 discusses other radio components, emphasizing the transition from analog to digital technology and from single-purpose to multipurpose systems. Traditional radios were designed for a single air interface (i.e., one modulation type occupying a particular bandwidth). Given the proliferation of standards and the need for compatibility with older equipment, the general trend in radio design is to build flexible systems that can handle multiple air interfaces. Section 2.4.3 discusses portable terminals. In modern mobile devices, the radio system is integrated with sophisticated user interfaces and computing capabilities in lightweight, modular packages. The design of portable terminals relies on advanced microprocessors, displays, user interface devices, power sources, and software.

2.4.1 Antennas

An antenna serves as the interface, or transducer, between the electronic circuitry of a transmitter or receiver and the medium through which radio waves travel. Classical antenna designs include simple stub or "whip" antennas such as those found on cellular telephones, as well as massive, parallel panels that are aligned in phase to provide flexible electronic steering (examples include the phased-array radars used on some warships). While in transit between the transmitter and receiver, the RF signals are subject to a variety of distortions (see Section 2.1.1.3). In addition, they create interference for other communications and provide opportunities for interception. To limit interception and interference and also to conserve power, antennas can be designed so that the RF energy radiates in only a particular direction, providing gain along the intended direction and attenuation in undesired directions.

Various antenna structures have been developed to direct electromagnetic signals. Receiving antennas also have directional properties: The most common examples are rooftop television antennas that point in the direction of the local television transmitter and satellite dishes that point at the orbiting satellite. Directional antennas in cellular-system base stations focus power in a particular direction, thereby minimizing the required transmitter power and significantly reducing the amount of interference. Directional antennas need to be positioned carefully. Positioning is not difficult in local television broadcasting because both the transmitter and receiver are stationary and a fixed, narrow beam works well. In cellular and personal communications systems, however, the transmitter and receiver locations are mobile. Therefore, most directional antennas in mobile communications have a fairly large beam width (60 to 120 degrees). Although narrower beams would enable the use of low-power transmitters and reduce interference, many such beams would be required to cover even a small service area, and mobile users would constantly be moving from one beam to another.

User mobility is a key motivation behind the development of steerable antennas—so-called smart or adaptive-array antennas—that can change the shape and direction of their transmission beams depending on user location. A steerable transmitting antenna controls the phases of the electromagnetic signals generated at each of its numerous elements, thereby changing the physical locations at which the wave-like signals add constructively (to create a beam) or destructively (to create a null). Using feedback control, an antenna beam can follow the movement of a mobile unit. Spot beams can be created by both the transmitting and receiving antennas. These technologies increase system capacity, reduce transmitter power requirements and interference, and dramatically reduce the likelihood of unwanted signal interception. Future mobile systems will pinpoint the relative positions of the transmitter and receiver with even greater accuracy than is currently possible, making sophisticated location-based services feasible.
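The phase-control principle described above can be made concrete with the array factor of a uniform linear array: a progressive phase shift across the elements moves the direction in which the signals add constructively. The element count, half-wavelength spacing, and steering angle in the sketch below are assumptions chosen only to illustrate the effect.

# Illustrative beam steering for a uniform linear array: a progressive phase
# shift across elements points the main beam toward a chosen angle.
# Element count, half-wavelength spacing, and the steering angle are assumptions.
import cmath
import math

def array_gain_db(angle_deg: float, steer_deg: float, n_elements: int = 8,
                  spacing_wavelengths: float = 0.5) -> float:
    """Normalized array factor (dB) at `angle_deg` when steered to `steer_deg`."""
    k_d = 2 * math.pi * spacing_wavelengths
    steer_phase = -k_d * math.sin(math.radians(steer_deg))
    total = sum(
        cmath.exp(1j * m * (k_d * math.sin(math.radians(angle_deg)) + steer_phase))
        for m in range(n_elements)
    )
    return 20 * math.log10(max(abs(total) / n_elements, 1e-6))

if __name__ == "__main__":
    # Beam steered to 30 degrees: full gain there, attenuation elsewhere.
    for angle in (0, 15, 30, 45, 60):
        print(f"{angle:3d} deg: {array_gain_db(angle, steer_deg=30):6.1f} dB")

At the steering angle every element's contribution adds in phase, giving the 0 dB (full) response; off the steered direction the contributions partially cancel, which is the gain and attenuation behavior the text describes.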

The physical size of the elements in an antenna is related to the wavelength of operation, which is inversely proportional to the transmission frequency. Thus, higher operating frequencies mean shorter wavelengths, smaller antenna features, more elements per antenna, and the possibility of more complicated and precise beam patterns. Adaptive antennas are already used in military operations, particularly at frequencies above 20 GHz, to accommodate very wideband signals used for communications, tracking, or guidance. Such antennas are composed of many elements and are fully capable of electronic beam forming and steering. Each element is controlled electrically through changes in the properties of dielectric materials; the antenna does not change physically and therefore needs no moving parts.
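A quick calculation illustrates this scaling; the frequencies below are chosen arbitrarily for the example.

```python
# Wavelength shrinks in inverse proportion to frequency; element dimensions scale with it.
c = 3.0e8  # speed of light, m/s
for f_ghz in (0.9, 2.0, 20.0, 60.0):
    wavelength_cm = c / (f_ghz * 1e9) * 100
    print(f"{f_ghz:5.1f} GHz -> wavelength {wavelength_cm:6.2f} cm, "
          f"half-wave element ~{wavelength_cm / 2:5.2f} cm")
```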

Existing adaptive antennas have a number of shortcomings. Some offer multiband capabilities but are expensive and bulky. Commercial applications for adaptive antennas are limited to relatively low-cost, single-band units with limited flexibility in beam pattern. Moreover, virtually all existing adaptive antennas for mobile radio applications are designed for use at base stations rather than mobile units. The key technical challenges in the design of adaptive antennas for military applications are to reduce the size and cost of the RF and signal-processing technology and to achieve additional gain in a handset through three-dimensional rather than planar designs. The commercial sector is likely to need such designs in the future as new spectrum is allocated at higher frequencies (above 2 GHz) and multiband radios become available. For the time being, the DOD may need to fund its own R&D in this area.

TABLE 2-2 Use of Digital Components in Commercial Communications Products

Product                     RF^a Amplifier   Mixer     Filter    Demodulator   Decoder
Car radio with equalizer    Analog           Analog    Analog    Analog        Analog
DirecTV receiver            Analog           Analog    Analog    Digital       Digital
Dual-mode cell phone        Analog           Analog    Digital   Digital       Digital
PC telephone modem          Analog           Digital   Digital   Digital       Digital
FDDI^b modem                Digital          n/a       n/a       Digital       Digital

a Radio frequency.
b Fiber-distributed data interface.

2.4.2 Other Radio Components

The evolution of digital technology is transforming radios. Other than antennas, all the components of the radio system—RF amplifier, mixer, filter, demodulator, and decoder—are amenable to either analog or digital implementation. Many commercial radios and other communications products already use programmable digital modules (see Table 2-2).

There are many advantages to replacing analog hardware with programmable digital technology, although trade-offs are involved. As noted above, digital technology offers inherent security advantages. Another benefit is time to market: As with PCs, product development time can be reduced because changes and improvements can be implemented through software. Digital technologies also make it easier to achieve temperature stability and reliability and to manufacture, support, and test equipment. Digital radios can be designed for peak performance, whereas analog radios sacrifice some performance because their filters are detuned to make the system easier to manufacture and more tolerant of component variability. Finally, digital components can reduce costs by providing increased functionality per unit and reducing the need for multiple types of radios.

The design of wideband (i.e., multiband) digital radios has been enabled by rapid advances in microelectronics, including DSPs, A/D converters, ASICs, and field-programmable gate arrays (FPGAs). In new radio architectures, referred to variously as software-defined radio, programmable radio, or simply software radio, analog functions such as tuning,
filtering, demodulating, and decoding are replaced with software directing the digital equivalents. The mixers and filters can process multiple modulations spanning multiple bandwidths; the demodulation and decoding processes are programmed; and modulation and coding are usually performed using DSP chips.

The wideband A/D conversion of the software radio enables the implementation in handsets of direct frequency conversion (i.e., eliminating the typical intermediate steps between the baseband and transmit frequencies, thereby reducing noise and the need for filtering). This design is not yet appropriate for commercial systems, because it is not feasible at this point to service a large number of subscribers using such receivers simultaneously. In the handset, the RF amplifier is required to obtain a reasonable noise figure and input intercept. The anti-alias filter selects which of the multiple subbands to digitize. The wideband (multiband) digitizer converts all RF signals into a digital representation. The processor uses software to implement all legacy and future radio systems. The processor is capable of implementing multiple simultaneous radios, much like a PC can run multiple applications simultaneously.
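As a rough sketch of the receive chain just described (RF amplification, anti-alias filtering, wideband digitization, and software demodulation), the following fragment uses assumed sampling rates, carrier frequencies, and stage models; it is illustrative only and is not drawn from any particular radio design.

```python
import numpy as np

# Schematic software-radio receive chain: amplify, band-limit (anti-alias), digitize,
# then handle each air interface in software. All parameters here are assumed values.
fs = 1.0e6                      # sampling rate of the wideband digitizer (assumed)
t = np.arange(0, 1e-3, 1 / fs)  # 1 ms of signal

# Two narrowband carriers standing in for two different "legacy" air interfaces.
rf = np.cos(2 * np.pi * 100e3 * t) + 0.5 * np.cos(2 * np.pi * 200e3 * t)

def rf_amplifier(x, gain_db=20.0):
    return x * 10 ** (gain_db / 20)

def antialias_filter(x):
    # Crude moving-average low-pass standing in for the anti-alias filter.
    return np.convolve(x, np.ones(3) / 3, mode="same")

def digitize(x, bits=12):
    # Uniform quantizer standing in for the wideband A/D converter.
    full_scale = np.max(np.abs(x))
    levels = 2 ** (bits - 1)
    return np.round(x / full_scale * levels) / levels * full_scale

def software_demodulator(x, carrier_hz):
    # Mix the chosen carrier to baseband and estimate its amplitude, entirely in software.
    baseband = x * np.exp(-2j * np.pi * carrier_hz * t)
    return np.abs(np.mean(baseband))

samples = digitize(antialias_filter(rf_amplifier(rf)))
for carrier in (100e3, 200e3):
    print(f"{carrier/1e3:.0f} kHz channel amplitude estimate: "
          f"{software_demodulator(samples, carrier):.2f}")
```

The same digitized samples feed both demodulators, which is the sense in which one processor can host several radios at once.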

The use of digital radio hardware still presents challenges and requires trade-offs that may not be readily apparent. A digitally implemented radio needs to be at least as good as the analog radio it replaces in terms of QoS parameters such as reception sensitivity or power. This challenge is being met: Coding and decoding improvements, driven by DSP advances, are making digital radio systems not only equal to analog systems but also better. However, four limiting technologies need to be developed further if wideband (i.e., multiband) software radios are to become a practical reality: advanced A/D converters, DSP chips, filters, and RF amplifier components.

2.4.2.1 Analog-to-Digital Converters

The key enabling component, and the most complex and misunderstood element of wideband software radios, is the A/D converter. Most A/D converters are characterized by maximum clock rate and number of output bits, digital metrics similar to those used to characterize microprocessors or memory devices. However, because signal quality is key, A/D converters are better characterized using analog characteristics such as SNR, spurious-free dynamic range, and usable bandwidth (known as the Nyquist bandwidth). The use of these metrics helps ensure that the critical A/D transition can be accomplished with minimum degradation in signal quality.
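For orientation, the standard ideal-converter relations (textbook results, not stated in the report) connect these analog metrics to the digital ones: an ideal B-bit converter sampling at a rate fs offers a quantization-limited SNR of roughly 6.02B + 1.76 dB over a Nyquist bandwidth of fs/2.

```python
def ideal_adc_metrics(bits, sample_rate_hz):
    # Standard ideal-quantizer relations: SNR ~ 6.02*bits + 1.76 dB;
    # usable (Nyquist) bandwidth is half the sampling rate.
    snr_db = 6.02 * bits + 1.76
    nyquist_bw_hz = sample_rate_hz / 2
    return snr_db, nyquist_bw_hz

# Bit depths and sampling rates below are arbitrary examples.
for bits, fs in ((8, 100e6), (12, 65e6), (14, 10e6)):
    snr, bw = ideal_adc_metrics(bits, fs)
    print(f"{bits:2d} bits at {fs/1e6:5.0f} Msps -> ~{snr:5.1f} dB SNR, "
          f"{bw/1e6:5.1f} MHz Nyquist bandwidth")
```

Real converters fall short of these ideals, which is why the analog figures of merit, rather than the raw bit and clock counts, govern software radio performance.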

An A/D converter can be implemented in many different architectures; the three significant modern architectures are known as flash (or
parallel), subranging, and sigma-delta. The choice involves trade-offs between the accuracy and conversion rates. The flash A/D uses the most hardware and power but also operates at the fastest sampling rates and produces the largest usable Nyquist bandwidth. These converters are generally inadequate because of a lack of dynamic range. Subranging A/D converters are slower but offer both the bandwidth and dynamic range needed for software radios. Sigma-delta A/D converters generally have very low sample rates and are appropriate for narrowband applications. It is not clear at this stage which technology can be improved most readily to provide the requisite low-power converters with high dynamic range that process bits as rapidly as possible. A demonstration of ultrafast A/D converters was planned as part of the DOD-funded Millennium program. The commercial sector is also performing R&D in this area and is likely to produce advances that would be appropriate for military applications.

2.4.2.2 Digital Signal Processors

For the past few decades semiconductor manufacturing has followed Moore's Law, which predicts that the number of devices on an IC will double every 18 months. This trend is directly related to the steady reductions in the minimum feature size, or linewidth, that can be manufactured in large volumes. The Semiconductor Industry Association road map calls for the production of devices with more than a billion transistors by the year 2010. Such densities could enable entire systems to be built on a few or even a single chip. Indeed, software radio architectures may be implemented increasingly in small numbers of ICs (see Figure 2-4).
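A simple projection consistent with the 18-month doubling period cited above; the 1997 starting point of roughly 10 million transistors per chip is an assumption made for illustration.

```python
# Doubling every 18 months from an assumed ~10 million transistors per chip in 1997.
transistors = 10e6
year = 1997.0
while year < 2010:
    year += 1.5
    transistors *= 2
print(f"by {year:.0f}: roughly {transistors / 1e9:.1f} billion transistors per chip")
```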

FIGURE 2-4 Future wideband (i.e., multiband) software radios may implement some functions, such as analog-to-digital conversion and signal processing, in single integrated circuits.

The general-purpose, programmable DSPs now on the market constitute a specialized segment of the IC industry. Although DSP speed is improving every year, single-chip performance is still very limited for software radio applications. High speed can be achieved with large arrays of DSPs, but the size, weight, power, and cost of this design are not attractive for small or handheld radio applications. Because filters are critical to the performance of a software radio, gains could be achieved through the use of integrated FIR-filtering ICs. These devices, developed very recently, could perform the processing function at a small fraction of the complexity and cost of a programmable DSP.24 Regardless, the rapid commercial advances in signal processing technology are likely to produce chips that meet military needs.
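For context, the finite impulse response (FIR) filtering that such ICs hard-wire is a sliding multiply-and-accumulate operation; a minimal software version, with an arbitrary windowed-sinc low-pass design, is sketched below.

```python
import numpy as np

# Minimal FIR low-pass example: the hard-wired ICs mentioned above perform this
# multiply-accumulate at rates a general-purpose programmable DSP cannot match per dollar.
fs = 48_000.0
t = np.arange(0, 0.02, 1 / fs)
signal = np.sin(2 * np.pi * 500 * t) + np.sin(2 * np.pi * 15_000 * t)  # wanted + unwanted tone

# 31-tap windowed-sinc low-pass with a ~2 kHz cutoff (assumed design parameters).
n_taps, cutoff = 31, 2_000.0
n = np.arange(n_taps) - (n_taps - 1) / 2
taps = np.sinc(2 * cutoff / fs * n) * np.hamming(n_taps)
taps /= taps.sum()

filtered = np.convolve(signal, taps, mode="same")
# The drop in variance reflects removal of the 15 kHz tone.
print("variance before:", round(float(np.var(signal)), 3),
      "after:", round(float(np.var(filtered)), 3))
```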

2.4.2.3 Filters

Filters influence not only a radio's signal-processing speed but also its sensitivity, dynamic range, and capability to avert co-site interference. Their importance is reflected in their physical presence: Filters constitute 25 percent of the volume of a typical software radio, in part because several different filters are needed (i.e., for receive preselectors, amplifier output, local oscillators, and mixers). Improvements in frequency-tuning range and selectivity as well as miniaturization would be helpful, especially for application in handheld devices. The commercial sector continues to rely on older technology (e.g., mechanical filters are used in cellular telephone systems) whereas the military has unique needs to reduce co-site interference, both within software radios and across multisystem platforms, and cover wide frequency ranges. Existing radios that span wide frequency ranges require combined filters made of new materials that have remarkably flexible and adaptive electrical properties, far beyond older static inductors and capacitors. The new materials and modern filter fabrication techniques will lead to new and smaller implementations of wideband filtering based on the fundamentals of transmission-line techniques. Thus, filters may merit a significant military R&D investment.

2.4.2.4 Radio Frequency Amplifiers

The commercial sector is designing ultralinear amplifiers that will process many signals from multiple transmitters and add them coherently to achieve good fidelity. These designs will improve power efficiency and consume less space than traditional amplifiers. However, the commercial sector is unlikely to produce multiband amplifiers, which will be very expensive, anytime soon. Alternative materials might offer
advantages in the design of future military systems and could be a topic for DOD-funded research.

The most important recent development in RF technology is the reemergence of semiconductors (i.e., silicon) as an alternative to the semi-insulator materials (e.g., gallium arsenide) traditionally used for RF device manufacturing. Semiconductors offer two advantages. First, they form a natural ground plane on an IC such that microwave devices can be fabricated much closer together, resulting in smaller chips and enabling the design of circuits that cost less and support higher frequencies and performance than do conventional circuits. Second, semiconductors cost less than semi-insulators because they are produced in higher volumes (they are also used in the most advanced CMOS microprocessors and random-access memory [RAM] chips).

As a consequence, silicon is now being used in moderate-performance RF front ends for cellular and personal communication systems. Further, a new technology involving the implantation of germanium atoms in silicon to create heterojunction bipolar transistors promises an extremely low cost, silicon-based approach for RF (30 MHz to 2 GHz) and microwave (2 GHz to 40 GHz and above) analog front ends and power amplifiers.

2.4.3 Portable Terminal Design

The small size and portability of wireless communicators provide obvious benefits for users but also introduce challenges for system designers because they limit display, processing, power, and storage capabilities. The following subsections review the limitations and the new technologies designed to overcome them. The commercial sector is making rapid advances in all these areas that the DOD can exploit to good advantage.

2.4.3.1 Displays, User Interfaces, and Input Devices

Small, highly portable devices contain relatively low-quality displays. There are three reasons for this. First, portable devices have limited physical space and power available for the display. Second, display pixels cannot be made smaller than the resolving limit of the human eye, so the number of pixels in a given display (i.e., the resolution) is limited. Third, bright colors can be produced only if there is sufficient power for backlights and display elements; otherwise the display is dim and monochrome. For these reasons the user interface of a portable device needs to be designed for monochrome presentations in a very small screen area—a significant impediment to the display of video or high-quality
images. Nevertheless, significant market forces are fueling a trend toward ubiquitous information displays, and commercial displays offering high resolution, full color, and reduced power requirements are likely to be developed.

Because portable devices lack the space for standard keyboards, icon-based interfaces and pen-based input have been considered as alternatives. In some devices the keyboard is replaced with a small number of function-specific buttons. These devices still support functionally specific virtual keyboards, which are displayed on a touch screen and can be operated by applying pressure to the keys with a stylus. The pen-based devices either support handwriting recognition or simply capture pen strokes ("digital ink").

Ideally, mobile communication devices will be able to send images from remote sites. This capability will be enabled by charge-coupled devices (CCDs)25 and CMOS camera chips. These chips are already used in commercial camcorders and have become inexpensive and widespread as a result. Highly integrated cameras have been declining in price, and such a camera is integrated into at least one state-of-the-art Japanese PDA.

2.4.3.2 Processors

The successful development of low-power devices with long battery life has placed limits on the raw performance of embedded processors because processing speed and clock rate directly influence power consumption. New metrics are therefore required to measure the performance of processors for portable applications: millions of instructions per second (MIPS) per watt, a measure of the impact on battery life and heat dissipation in highly integrated systems; MIPS per square millimeter, a measure of the silicon manufacturing costs of the processor; and bytes per task, a measure of the amount of memory that devices need to incorporate to perform signal-processing functions. Because consumers are demanding highly integrated yet portable computing devices, the commercial sector is performing R&D with the aim of increasing processor capabilities while also reducing power requirements.
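A hypothetical example of these metrics in use; all of the numbers below are invented for illustration and do not describe any particular processor.

```python
# Hypothetical processor comparison using the portable-device metrics described above.
candidates = {
    # name: (MIPS, watts, die area in mm^2) -- invented figures
    "embedded DSP":      (100.0, 0.5, 25.0),
    "desktop-class CPU": (400.0, 10.0, 120.0),
}
for name, (mips, watts, area_mm2) in candidates.items():
    print(f"{name:18s} {mips / watts:7.1f} MIPS/W   {mips / area_mm2:6.2f} MIPS/mm^2")
```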

2.4.3.3 Batteries

The commercial sector has made tremendous strides in battery technology in recent years because it plays a role in many technologies, ranging from surgical implants to electric cars. Nickel cadmium (NiCd) batteries are the most widely used rechargeable batteries, found in many consumer electronic devices. Most laptop computers now use nickel metal hydride batteries, which have slightly better energy storage per weight
and substantially improved energy storage per volume. Lithium ion (LiIon) batteries are used in some new portable products, such as small cellular telephones. The energy-storage capacity of LiIon batteries is more than twice that of NiCd technology by both weight and volume. Lithium polymer (LiPoly) batteries are about 10 percent more efficient than LiIon batteries and use solid electrolytes, making it possible to form the battery into arbitrary shapes, a significant improvement over other battery technologies.

2.4.3.4 Storage

The disk-drive capacity of information processing devices continues to increase while physical size shrinks, but the 2.5-inch disk widely used in notebook and laptop computers is still too large for handheld devices. In PDAs the disk is replaced by RAM in the form of battery-backed-up static RAM and flash RAM on Personal Computer Memory Card International Association (PCMCIA, or just PC) cards, which can cost 30 times more than disk storage per megabyte. Commercial R&D in this area is producing steady, impressive advances that are likely to meet military needs.

2.5 Summary

The design of wireless communications systems presents countless challenges. Some solutions are available and many more are on the horizon. Although the review presented in this chapter is general in nature, consideration of this information in the context of DOD's needs suggests a number of areas deserving careful attention in the design of future military systems.

Specifically, network architecture is a fundamental issue that defines all other aspects of the system design. The basic choice is between a peer-to-peer and base-station-oriented design, but there are also other questions related to how infrastructure elements are connected and the nature of communications with other networks. The commercial and defense sectors have differed in their choice of network architectures in the past and continue to have some different needs and concerns. The selection of an optimal military network design could be assisted by simulation and modeling. However, current tools are inadequate to the task of modeling an untethered communications system that uses wideband signals and advanced components such as software radios.

The DOD also has unique needs for interoperability and security of communications systems, although commercial concerns about system integrity and service availability are growing. The evolution of software radios will enable interoperability among advanced and legacy systems,
but this technology presents co-site interference problems that will require new solutions. Similarly, the available AJ and LPD/I technologies will need to be complemented with security advances that accommodate global, heterogeneous communications systems and multiple security levels. The emergence of wideband, programmable radios for military applications will also depend on advances in hardware components such as antennas, which need to be designed for mobile units, and filters, which need to be miniaturized and designed for wideband applications.

These issues are examined further in Chapter 3, which explores the opportunities for synergy between the commercial and military sectors in the development of advanced wireless communications systems.

Notes

1. These error rates are associated with the link layer of the OSI model and are commonly accepted as tolerable for these applications. At higher levels of the OSI model the use of error-correction protocols can improve the rates.

2. These definitions apply only to unmodulated waveforms. Modulation changes the phase and frequency with time (see Section 2.1.3) such that the definitions are no longer accurate.

3. There are numerous path loss models that conform to a variety of propagation mechanisms, including free space, reflection, diffraction, scattering, or some combination of these (Rappaport, 1996).

4. The relationship is L = Pr/Pt = K/(f^2 d^n), where Pr is received power, Pt is transmitter power, f is the center frequency of the transmitted signal, and K is a constant that depends on the average path loss at a reference distance d0 from the transmitter (d0 lies in the far field of the antenna, typically 1 m for indoor environments and 0.1–1 km for outdoor environments). The exponent n is the path loss exponent.
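A direct transcription of this relationship into code; the constant K and the path-loss exponent n below are placeholder values chosen only to illustrate the scaling.

```python
def received_power(pt_watts, freq_hz, distance_m, k=1.0, n=2.0):
    # Pr = Pt * K / (f^2 * d^n), per the relationship in note 4.
    # K and the path-loss exponent n are placeholder values here.
    return pt_watts * k / (freq_hz ** 2 * distance_m ** n)

# Example: doubling the distance with n = 4 (a typical non-line-of-sight exponent)
# cuts received power by a factor of 16.
p1 = received_power(1.0, 900e6, 100.0, n=4.0)
p2 = received_power(1.0, 900e6, 200.0, n=4.0)
print(p1 / p2)  # -> 16.0
```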

5. This analysis is based on the assumption that the channel is changing slowly enough to allow for adaptation, and that the channel fading can be estimated accurately at the receiver and this information fed back to the transmitter with minimal delay.

6. A RAKE receiver produces a coherent sum of individual multipath components of the received signal. The components can be weighted based on their signal strength to maximize the SNR of the RAKE output. The sum provides an estimate of the transmit signal. A RAKE receiver is essentially another form of diversity because the spreading code induces a time diversity on the transmitted signal such that independent multipath components can be resolved.
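For illustration, a toy numerical version of this weighted coherent combining is sketched below; the multipath gains and noise levels are invented values, not drawn from any measured channel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy RAKE combining: three resolved multipath copies of the same transmitted symbol,
# each with its own (assumed) complex gain and independent noise.
symbol = 1.0 + 0.0j
gains = np.array([0.9, 0.5, 0.3]) * np.exp(1j * np.array([0.2, -1.0, 2.5]))
noise = 0.1 * (rng.standard_normal(3) + 1j * rng.standard_normal(3))
fingers = gains * symbol + noise

# Weight each finger by the conjugate of its channel estimate so that the strongest
# components dominate the coherent sum (maximal-ratio style weighting).
combined = np.sum(np.conj(gains) * fingers) / np.sum(np.abs(gains) ** 2)
print("combined symbol estimate:", np.round(combined, 3))
```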

7. If multiple systems share the same bandwidth without any channel access coordination and are not interoperable, then some technique is still needed to enable efficient operations. Etiquette rules permit incompatible systems to coexist when using the same bandwidth (whereas interoperability requires standardization—agreement on all waveforms and protocols before systems are built and deployed). The Wireless Information Network Forum, an industry group, has
defined etiquette rules for the unlicensed personal communications bands and has taken the same basic approach for the 60-GHz spectrum allocation (Steer, 1994). The key elements of etiquette rules are (1) listen before transmitting to ensure that the transmitter is the only user of the spectrum, thereby minimizing the possibility of interfering with other spectrum users; (2) limit transmission time to allow others to use the spectrum in a fair manner; and (3) limit transmitter power so as not to interfere with users in a nearby spectrum.
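A schematic sketch of the first etiquette rule (listen before transmitting) follows; the busy threshold, burst limit, and channel-sensing stand-in are assumptions for illustration and are not taken from the published etiquette rules.

```python
import random

# Schematic listen-before-transmit rule: measure the channel, defer if it is busy,
# and cap the transmission time so that others get a fair share. Values are assumed.
BUSY_THRESHOLD_DBM = -80.0
MAX_BURST_MS = 10.0

def sense_channel_dbm():
    # Stand-in for a real power measurement on the shared band.
    return random.uniform(-95.0, -60.0)

def try_transmit(payload_ms):
    if sense_channel_dbm() > BUSY_THRESHOLD_DBM:
        return "defer: channel busy"
    return f"transmit for {min(payload_ms, MAX_BURST_MS):.1f} ms, then release the channel"

random.seed(1)
for _ in range(3):
    print(try_transmit(payload_ms=25.0))
```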

8. For many networks, including voice-oriented cellular networks, the number of transmitters active at any one time is much smaller than the total number of possible transmitters that need be recognized by the hub station. The unpredictable and dynamic nature of the set of active transmitters clearly precludes the fixed assignment of separate channels to each transmitter.

9. Simultaneous detection of multiple users is not currently practical because of the increased complexity required in the receiver. Multiuser detection schemes also require low BERs because bits that are incorrectly detected are subtracted from the signals of other users, possibly causing those signals to be decoded in error as well.

10. These analyses are based on simplifying assumptions about the hardware and communications environment; many of these assumptions would break down in a real operating environment. Moreover, it is not known which technique has a higher spectral efficiency in flat or frequency-selective fading, particularly when countermeasures to fading are used.

11. Techniques are available to avert the delay. For example, a certain number of packet slots can be allocated for unreserved transmissions using a contention scheme. The successful sending of a packet in this slot is taken as a request for a reserved slot (or two) in the next round-trip. As long as the slots are used this reservation continues to be available, and as long as there is capacity the reservation is allowed to grow. But the reservation is abandoned as soon as the sender does not use the slot. This approach involves no delay (except for contention failures on the first packet), poses contention issues only for the first packet in a burst, and matches the natural behavior of the TCP slow-start phase (which is described later in this chapter). However, for applications that alternate a short message in each direction (e.g., transaction processing) the procedure still produces latency equal to one round-trip for each message, and, assuming fixed-length slots and a perfect fit between the data to be transmitted and a slot, has a fundamental throughput limit of 33 percent. If the transmission is smaller than a slot, then the throughput will be even lower (and lower still in many realistic applications with short transaction times). If the information to be transmitted is less than or even comparable to the amount of information required to set up the DAMA resources, then efficiency will be compromised.

12. Assuming that a collision results in the loss of two packets, the maximum throughput in an ALOHA channel is about 18 percent of the peak data rate if the probability of a collision is to be reduced to a level acceptable to the user. Various modifications of ALOHA channels, such as slotted ALOHA or CSMA/CD, can increase efficiency, but they also impose restrictions on data transmission.
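The 18 percent figure is consistent with the classical pure-ALOHA analysis, in which throughput is S = G·exp(-2G) for offered load G and peaks at 1/(2e), or about 0.184, when G = 0.5. The short check below reproduces that standard result; it is not a derivation taken from the report.

```python
import math

# Pure ALOHA: a packet succeeds only if no other packet starts within one packet
# time before or after it, giving throughput S = G * exp(-2G).
def aloha_throughput(offered_load):
    return offered_load * math.exp(-2 * offered_load)

best = max((aloha_throughput(g / 100), g / 100) for g in range(1, 300))
print(f"peak throughput {best[0]:.3f} at offered load {best[1]:.2f}")  # ~0.184 at G = 0.5
```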

13. The latency of Mobile IP is typically much less than a second—the time it
takes for one round-trip between the foreign agent and home agent, or perhaps two round-trips counting the time for message receipt verification.

14. Route optimization is an enhancement to the base specification for Mobile IP and has not to date reached an equivalent level of standardization within the IETF. Mobile IP is a proposed standard (RFC 2002-2006), whereas route optimization has yet to be standardized or shown to be interoperable in multiple implementations.

15. A typical simulator accepts as input a description of network topology, protocols, workload, and control parameters. The output includes a variety of statistics, such as the number of packets sent by each source of data, the queuing delay at each queuing point, and the number of dropped and retransmitted packets. Visualization packages have been developed to allow the simulator's dynamic execution history to be made visible to the network designer. The simulators are designed in such a way that they can be modified easily by users.

16. Recent research has investigated the adaptation of compression algorithms to a channel of varying quality (i.e., a channel with fading or varying noise or interference levels). Such adaptation can reduce distortion significantly. This design is based on the idea that, because the transmission rate is constant, this rate needs to be divided between the compression algorithm and the channel code. The optimal way to divide the transmission rate and minimize distortion is the following: On a channel with high SNR, no channel coding is needed, and all the rate is allocated to the compression scheme; as the channel quality degrades, more of the rate is allocated to the channel coding to remove most of the effects of channel errors. However, joint compression and channel coding creates some problems. First, this approach requires that the compression algorithms, which typically sit at the application layer, have access to information about the link layer, which means that the layer separation of the open-systems interconnection model breaks down. Second, the design can become very complicated. It is often easier to design the compression algorithms and the channel coding independently and then "glue" them together (the compression and coding communities prefer this approach because they have developed separate languages and perspectives, which make it difficult for them to work together). Some future cellular systems will implement a crude form of this joint design using "vocoders" (compression schemes for voice) that operate at multiple rates. If the channel has a high SNR, then the higher-rate vocoder (which performs poorly at low SNRs) is used, and the vocoder rate is decreased as the channel quality decreases.
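A highly simplified sketch of this rate-splitting idea follows; the SNR thresholds, total rate, and code rates are invented for illustration and do not correspond to any particular vocoder or standard.

```python
# Simplified joint source/channel rate allocation: a fixed transmission rate is split
# between the vocoder (source) rate and channel-code redundancy, with more redundancy
# as channel quality drops. All thresholds and rates below are assumed values.
TOTAL_RATE_KBPS = 12.0

def allocate_rates(channel_snr_db):
    if channel_snr_db > 15.0:
        code_rate = 1.0          # clean channel: no channel coding
    elif channel_snr_db > 8.0:
        code_rate = 3.0 / 4.0    # moderate protection
    else:
        code_rate = 1.0 / 2.0    # poor channel: half the rate spent on redundancy
    source_rate = TOTAL_RATE_KBPS * code_rate
    return source_rate, code_rate

for snr in (20.0, 10.0, 4.0):
    src, cr = allocate_rates(snr)
    print(f"SNR {snr:4.1f} dB -> vocoder {src:4.1f} kb/s, channel code rate {cr:.2f}")
```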

17. The Internet community is carrying forward two proposals for real-time service: Guaranteed service provides per-flow hard guarantees (i.e., no statistical aggregation or probabilistic bounds), whereas controlled-load service provides a probabilistic bound based on aggregation of a number of real-time flows into one scheduling class. Although guaranteed service provides a delay bound that is computed in advance, controlled load provides a bound that is stable but not explicitly computed. The application must adapt to the service it receives. Both are set up using RSVP. The soft and hard states differ in terms of what happens when a route fails. In ATM the connection is cleared and no traffic is delivered until a new connection is established. In Internet/RSVP the packets start flowing once the routing tables have found a new route, but only with default QoS until
RSVP reestablishes the state. In both cases the new request may fail if there is not enough capacity after the failure.

18. The Wireless ATM Working Group of the ATM Forum (an industry group) is addressing the problems of end user mobility. This effort may be the only avenue for extending ATM to the end user.

19. Latencies in the wireless channel are not only high but also variable over time because the number of retransmissions fluctuates. Forward error correction can mitigate this problem somewhat but imposes a penalty even when the channel quality is good.

20. Because of the error characteristics of wireless links, some of the QoS issues need to be addressed locally at the link layer rather than from an end-to-end perspective. The DARPA PRNet had a strategy of accomplishing enough at the link level that TCP could handle the remaining reliability issues. However, this approach requires interaction between the link layer and higher layers (e.g., if the link layer needs to implement a stronger channel code, then its transmission rate may be reduced or its delay increased). In addition, the wireless channel may be so degraded that little can be done at the link level to improve matters. There needs to be a way to cope with this situation through higher-layer protocols.

21. Software security is another category but it is not unique to wireless communications and therefore is not addressed here.

22. Some security concerns are being alleviated in the transition from analog to digital systems, which offer an inherent advantage because the meaning of a pattern of 1s and 0s cannot be casually discerned.

23. For example, systems based on the GSM standard keep the key in a separate smart card, not in the telephone.

24. For example, most contemporary software radios use commercial filters by Graychip, Inc., or Harris Corp. for highly programmable channel access to FDMA, TDMA, and CDMA systems with the low size, weight, and power of ASICs.

25. A CCD detector turns light into an electric charge, which is then transformed into the binary code recognized by computers. Some commercial cameras use this technology, but they remain expensive.
