Application Needs for Computing and Communications
The requirements of national-scale applications for computing and communications pose both opportunities and challenges that derive, ultimately, from the increasing capabilities of the technologies on which these applications depend. Significant increases in computation and communications performance in recent years have made qualitative differences in what can be done with information technology. For example, widespread deployment of data networks and the increasing processing and display capabilities of personal computers and workstations have made possible a powerful and highly adaptable new medium of communication, the World Wide Web. Advances in performance have raised application users' expectations about what their information technology tools can be counted on to accomplish; as Box S.2 notes, computing and communications are becoming part of the essential national infrastructure on which important sectors of the nation's economy and society depend.
This chapter identifies opportunities for taking advantage of information infrastructure to support the missions of people and organizations in five important application areas—crisis management, digital libraries, electronic commerce, manufacturing, and health care. Reflecting the language that often is used by people seeking to apply technology to solve a problem, the chapter sometimes characterizes these opportunities as "needs" for technology. Society's dependence on information technology is not absolute; certainly, fire fighters can continue to put out fires without computerized maps, and doctors can write clinical reports with pen and paper. However, continued dramatic improvements in the
quality, efficiency, accessibility, and dependability of nationally important industries and services are realizable through advances in information technology and the integration of those advances into the work modes of organizations and individuals (CSTB, 1994a,b). Whether the proposed advances are expressed as needs or as opportunities, research relating to enabling technologies remains essential; it is the foundation for progress in information technology generally and for advances in the nature and uses of information infrastructure. In addition, actual growth in the use of electronic information and communications systems in the United States and worldwide creates a need for research into the complex problems of managing information and integrating information and communications services into broader human activities that involve ordinary citizens, including specialists in areas other than information technology.1
To explore needs and opportunities for use of computing and communications in crisis management and other selected application areas, workshop participants examined four classes of technologies, loosely reflecting a layered model of information infrastructure, with each set of technologies providing capabilities used by the higher layers. The organization of each section in this chapter reflects this classification scheme, proceeding from lower to higher layers.
- Networking—technologies related to networked voice, video, and data communications, including physical facilities (e.g., circuits, switches, routers), the communications services that make use of them, and the architectures, protocols, and management mechanisms that make networks function. Key aspects include, for example, bandwidth, reliability, security, quality of service, and architectural support for the integration of higher-level functions across the network.
- Computation—technologies related to computer processing, particularly in a distributed context. Traditional computation-intensive functions include modeling, simulation, and some aspects of visualization, among others. Key aspects include, for example, strategies for maximizing the use of processing power (such as parallelism and distribution), programming models, software system composition, and management of processing and data flows across networks, including representation of time and temporal constraints in distributed computing.
- Information management—technologies contributing to the creation, storage, retrieval, and sharing of information across networks. Components that may be integrated within an information management system include traditional databases, object databases for design applications, multimedia servers, digital libraries, and distributed file systems, as well as software applications that process or manage information. They also include remote sensors attached to networks. Key aspects include, for example, balance between central and distributed control, exchange of diverse types and formats of information across boundaries, integration of real and synthetic information (e.g., in virtual environments), and easy construction of new applications from existing components.
- User-centered systems—technologies for maximizing the utility of computer-based systems for the people who use them, including natural human-computer interfaces, alternative modes of information representation (e.g., speech, hypertext, visualization), artificial intelligence-based decision support (including knowledge-based systems and newer techniques for coping with uncertainty), and work-group collaboration technologies. Key aspects include, for example, ease of use for individuals and groups and the ability of applications and systems to adapt to user-specific skills and needs.
The technologies for communicating and using information are highly interrelated, and this scheme is not intended to be rigid or perfectly consistent in applying a layered approach. To simplify discussion, the application area demands for computing and communications that are examined in this chapter are distributed somewhat arbitrarily among these four areas. A particular computing or communications application (e.g., tool, system) may span all of these levels—for example, an information system that helps a user answer a question. The system would assist by translating a need for information into a formal expression that automated systems can understand, identifying potential information sources (including the vast array of sources available across networks such as the Internet), formulating a search strategy, accessing multiple sources across the network, integrating the retrieved data consistent with the user's original requirement, displaying the results in a form appropriate to both the user's needs and the nature of the information, and interacting with the user to refine and repeat the search. This system would incorporate both information management and user-centered technologies, and these would rely on a supporting infrastructure of networking and computation.
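As a rough illustration of how a single application can span all four layers, the question-answering assistant described above might be sketched as a toy pipeline. Everything here — the function names, the two-source "catalog," and the records returned — is invented for illustration; the report does not specify any such design.

```python
# Hypothetical sketch of a question-answering assistant spanning the four
# layers described above. All names and data are invented placeholders.

def translate_question(question):
    # User-centered layer: turn a natural-language need into a formal query.
    return {"terms": question.lower().rstrip("?").split()}

def identify_sources(query):
    # Information-management layer: pick candidate repositories whose
    # topics overlap the query terms.
    catalog = {"library": ["flood", "earthquake"], "newswire": ["flood", "fire"]}
    return [name for name, topics in catalog.items()
            if any(t in topics for t in query["terms"])]

def fetch(source, query):
    # Networking/computation layers: access a remote source (stubbed here).
    return [f"{source}: record about {t}" for t in query["terms"]]

def answer(question):
    query = translate_question(question)
    results = []
    for source in identify_sources(query):
        results.extend(fetch(source, query))
    # Integrate and present the merged results (user-centered layer again).
    return sorted(set(results))

print(answer("flood?"))
```

A real system would replace the stubbed `fetch` with network access and add relevance ranking and interactive refinement at the user-centered layer; the point is only that each function sits at a different layer of the scheme.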
Definition and Characteristics
Crisis management was selected as the focus for Workshops II and III in the Computer Science and Telecommunications Board's series of three workshops on high-performance computing and communications because crises place heavy demands on computing, communications, and information systems, and such systems have become crucial to providing necessary support in times of crisis. Crises are extreme events that cause significant disruption and put lives and property at risk. They require an immediate response, as well as coordinated application of resources, facilities, and efforts beyond those regularly available to handle routine problems. They can arise from many sources. Natural disasters such as major earthquakes, hurricanes, fires, and floods clearly can precipitate
crises. Man-made crises can be accidental, such as oil spills or the release of toxic substances into the environment, or they may be intentional, such as bombings by terrorists. Warfare clearly presents a continuing set of crises, and although operational warfare concerns were largely outside the scope of the workshop series, many of the characteristics and computing and communications requirements of crisis management in other contexts overlap with the needs of warfare.2 The military requirements for command, control, communications, computing, and intelligence (C4I), for example, have much in common with the nonmilitary crisis management requirements for understanding a complex situation and preparing a coordinated response. The relatively more centralized and hierarchical structure of military command in comparison to civilian organizations, however, introduces differences in the needs for and the available approaches to computing and communications in the two contexts. As John Hwang, of the Federal Emergency Management Agency (FEMA), observed, "Military command and control is becoming a discipline; however, civil crisis management is still in its infancy as a discipline."
When does a situation become a crisis? One workshop participant observed that when he had to call up staff to run the crisis center, it was a crisis. This tautological comment underscores that the human decision to invoke extraordinary resources and management priorities implies a situation distinct from "business as usual": standard practices no longer apply. Beyond this commonsense observation, experts whose careers revolve around crisis management sometimes offer differing perspectives on crises and crisis management. To simplify the discussion, and consistent with the limited scope of its investigation, the steering committee has framed these issues in somewhat general terms in examining the relationships between the crisis-related conditions in which computing and communications may be used and the features or functions of those technologies that are needed.
Crisis management has several phases or components with different time horizons. Among these are preparedness (including planning and training), crisis avoidance (averting a developing crisis), response, and recovery.3 Much of the discussion at the workshops centered on response-related activities, which offer particularly severe challenges across a range of technologies. Response to a crisis involves an initial reaction with available resources, a rapid assessment to determine the scope of the problem, mobilization of additional resources (such as personnel, equipment, supplies, communications, and information), and integrating resources to create an organization capable of managing and sustaining the required response and recovery. During and after the response, the need to disseminate information to the public, including the press, is an important part of the context within which crisis managers operate. The workshops also addressed questions of preparedness, since preparations and plans can alleviate difficulties associated with response and recovery from a crisis.
Requirements at each phase differ. For example, conventional (e.g.,
scheduled) training is needed for the earlier phases, while at crisis time, "just-in-time" training is needed to bring people up to speed. Recognition of pre-crisis phases illuminates opportunities for specific preparations, such as the simulation of possible crises to identify likely needs, which can guide the pre-positioning of resources in anticipation of predictable kinds of crises (e.g., earthquakes, floods, or tornadoes in areas prone to such natural disasters) or the formation of plans to access them when needed. An analogy may be made to the emergency room of a hospital. Statistical expectations may help to pre-position equipment, supplies, and trained staff. During holidays, traffic accidents tend to increase. This situation can be handled with an increase in emergency room staffing and supplies to meet the predicted demand; however, the next emergency that is wheeled in the door is usually not predictable as to specifics. A major crisis that overwhelms the capacity of the emergency room in a way that cannot be predicted requires contingency plans and coordination with other organizations, in order to locate and bring in additional resources or to divert patients elsewhere.
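The emergency-room analogy lends itself to a small worked example: if arrivals are modeled as a Poisson process, the pre-positioning question becomes "what capacity covers, say, 95 percent of shifts?" The arrival rates below are invented; the point is only that statistical expectations can size resources for routine surges, while a true crisis is precisely the arrival pattern that falls outside any such quantile.

```python
import math

def poisson_quantile(rate, p):
    # Smallest n with P(arrivals <= n) >= p for a Poisson(rate) process,
    # computed by summing the probability mass function term by term.
    n = 0
    term = math.exp(-rate)   # P(X = 0)
    total = term
    while total < p:
        n += 1
        term *= rate / n     # P(X = n) from P(X = n - 1)
        total += term
    return n

# Hypothetical arrival rates (patients per shift): normal vs. holiday traffic.
normal_rate, holiday_rate = 6.0, 10.0
coverage = 0.95  # plan capacity sufficient for 95% of shifts

print(poisson_quantile(normal_rate, coverage))   # baseline capacity target
print(poisson_quantile(holiday_rate, coverage))  # pre-positioned holiday target
```

Planning at the holiday rate rather than the normal rate is the statistical pre-positioning the analogy describes; a ten-fold surge beyond either figure is the contingency-plan case.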
These tasks grow increasingly complex at scales larger than a single emergency room, where many organizations and kinds of resources become involved. Many such tasks relate directly to or make use of computing and communications, since important resources for crisis response and recovery include information repositories, computing capacity, and emergency communications links. Two sets of broad goals for using information resources to support crisis management, one from FEMA and one from the nongovernmental National Institute for Urban Search and Rescue (NI/USR), are presented in Box 1.1.
Workshop participants identified several distinctive characteristics of crises and factors relevant to managing them:
- Magnitude. Crises overwhelm available resources. (This is the distinction made, at least for the purposes of the workshop series, between crises and emergencies.) In many cases, problems that are manageable at one level become crises as the magnitude of the problem increases beyond normal or expected bounds, thus overwhelming the resources on hand. An automobile accident or a fire in a single building requires emergency services—fire engines and ambulances are dispatched—but does not overwhelm those services and so is not a crisis. Overload situations may lead to crises. They may arise, for example, in telephone systems, power plants, weather centers, and hospital emergency rooms. Hospitals in a region may be prepared for a certain number of emergency patients within a 24-hour period, but will experience a crisis if ten times as many patients arrive.
- Urgency. Crises have a serious, immediate impact on people and property and require an immediate response. Lifesaving fire, rescue, and emergency medical services are clear examples. Citizens also want immediate access to information about obtaining disaster relief, such as emergency loans to replace lost homes and property, and rapid processing of claims. In some crises, a fast
At Workshop II, John Hwang, of the Federal Emergency Management Agency (FEMA), identified four major areas for applying information technology:
The FEMA Information Systems Directorate's "Strategic Plan for Information Resources" (September 30, 1994)1 sets the following goals:
The National Institute of Urban Search and Rescue provides the following vision statement:2
"Vision 2000": Crisis Information System
Without such a [crisis information] system there can be no coordinated, cost effective, efficient response. We have established the following goals for the crisis communication architecture:
To Deliver the Right Information
To the Right People
Within the "Action Cycle"
To Save the Greatest Number of Lives
To Protect the Largest Amount of Property
To Contain the Event at the Lowest Possible Level
To Guarantee a Sustainable Economy for the United States.
response may reduce the need for later countermeasures. For example, in a communications network overload, a cascading problem may be avoided by isolating the failure quickly, thereby diminishing the need for greater corrective measures later. Although more slowly developing, broad-scale problems such as global climate change, disease, or overpopulation are crises of a long-term nature, workshop discussion generally centered on shorter-duration events with severe time pressures. (However, it is important to note that long-term effects may influence planning for short-term crises; for example, Steven Smith, of the National Center for Atmospheric Research, noted research suggesting that global warming is linked to an increase in the intensity of extreme weather-related disasters, such as floods and hurricanes.)
- Infrequency and unpredictability. Some high-magnitude events, such as earthquakes, are not necessarily unexpected, but they occur infrequently and their location and magnitude are unpredictable. Therefore, it is not feasible for agencies with constrained budgets to keep on hand the extraordinary resources needed to handle crises in every location where they might occur. The nature of the warning influences the ability to respond; earthquakes, for example, occur with effectively no warning, whereas approaching hurricanes can be tracked, although their exact landfall is difficult to predict more than a few hours in advance. Crisis management thus requires contingency plans for identifying needed resources—including resources that other agencies or organizations can offer—and deploying them rapidly.
- Uncertainty and incompleteness of information and resources (combined with a need to respond in spite of these shortfalls). Even with complete information, chaotic conditions during a crisis make the prediction of future conditions uncertain. A strategy of waiting and watching is not generally viable in a crisis, and so decision makers must be prepared to act despite these limitations and to change course as new information becomes available.
- Special need for information and access methods. Both prior to and during a crisis, there may be extraordinary needs for more and different sorts of information (both from the crisis scene and from remote sources of information and expertise), as well as for sharing and presentation of information to decision makers, analysts, workers in the field, and the public. These parties' needs create demands for information flows into, within, and out of the crisis area. Often, special tools and access methods are needed to consolidate information from disparate sources. For example, in the search and rescue efforts after the Oklahoma City bombing in April 1995, information was consolidated from many sources—including agencies with offices in the Alfred P. Murrah Building and nearby damaged buildings, architectural diagrams, city maps, digitized photographs of the scene, and reports from rescue workers—to map the buildings and determine the high-probability locations of missing people. This allowed the searches to focus on those locations, thereby avoiding useless and dangerous searches of lower-probability locations.
- Multidimensionality. Some events become crises because of their multidimensional nature and side effects. A crisis that damages the transportation system can create crises in systems that depend on transportation, such as medical services; it may also inhibit a rapid response, thus worsening the problem. A power failure in New York during a heat wave may cause not only health and safety risks for people caught in a subway system, but also economic disruption due to the interruption of computer-based financial transactions (e.g., stock trading). Several workshop participants commented on the greater consequences associated with physical events that caused economic disruption, especially disruption to the financial system of the country or world.
- Location and social context. Where a crisis occurs influences its nature and the ability to respond. Many communities apply a rational cost-benefit analysis that gives planning for highly unlikely events a low priority. Thus California, which expects to have earthquakes, is better prepared for them than are other states. Crises may be international, national, regional, state, or local in scope. International events have the broadest set of issues, but perhaps lower expectations from the U.S. public for speed and comprehensiveness of response.
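The multi-source consolidation described under "Special need for information and access methods" — the Oklahoma City probability mapping in particular — can be caricatured as Bayesian evidence fusion over a grid of locations. The 2 × 3 grid and all likelihood values below are invented placeholders, not data from the actual search.

```python
# Toy illustration of fusing evidence layers into a search-priority map.
# The six-cell "building grid" and all likelihoods are invented.

def fuse(prior, *evidence_layers):
    # Multiply per-cell likelihoods into the prior, then normalize so the
    # result is a probability distribution over cells.
    scores = list(prior)
    for layer in evidence_layers:
        scores = [s * e for s, e in zip(scores, layer)]
    total = sum(scores)
    return [s / total for s in scores]

prior = [1 / 6] * 6                               # uniform: no initial information
floor_plan = [0.9, 0.9, 0.1, 0.1, 0.5, 0.5]       # occupied areas per diagrams
rescuer_reports = [0.2, 0.8, 0.2, 0.2, 0.2, 0.2]  # sounds reported near cell 1

posterior = fuse(prior, floor_plan, rescuer_reports)
best = max(range(6), key=lambda i: posterior[i])
print(best)  # cell with highest search priority
```

Each evidence layer (architectural diagrams, photographs, field reports) contributes one likelihood per location; combining them concentrates search effort where the layers agree, which is the essence of the consolidation the workshop participants described.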
The political and social context can also constrain the resources available for local crisis management, with obvious implications for communities' preparedness, including a limited ability to acquire and use information technologies effectively. As Nicole Dash, of the University of Delaware, stated,
In addition to technological advancements, we must also look at the human elements. One of our first priorities is to recognize that emergency management is often not a high priority in many communities. Community risk assessment tends to employ a rational choice approach in an attempt to balance cost and benefit. Because disaster is seen as rare, emergency planning is associated with high cost and low benefit. . . . In addition, emergency management personnel often lack the computer skills and hardware to utilize . . . technology oriented toward crisis management needs.4
The crisis management budget constraints of communities are outside the sphere of computing and communications research, but their implications are not. They demonstrate the potential value of research to make technology more affordable by reducing its complete life-cycle costs—making it not only cheaper to purchase, but also easier to set up and maintain, easier to integrate into existing organizational processes, and more usable without extensive training. Remote access to network-based resources and rapid deployment to crisis locations can also reduce costs to communities by making it possible to share resources. Comments in the workshops from crisis management professionals about the impracticality of learning and using complex, feature-overloaded equipment in the time- and resource-limited context of crisis management, however, showed that to realize these cost reductions, technology development must be informed by testing, measurement, and experience gained through deployment.5
Realistic crisis scenarios provide a context for understanding and analyzing the needs of crisis managers for computing and communications capabilities. The characteristics of crises discussed in the preceding section, "Definition and Characteristics," can be used in developing typical cases to motivate and test elements of a research agenda for computing and communications. Numerous crisis scenarios exist, developed by various civilian and military organizations for training and planning purposes. Access to some scenarios is necessarily restricted, in order to avoid spreading knowledge of vulnerabilities and response plans to potential adversaries. One publicly available scenario, which was used in Workshop III to stimulate and focus discussions, is summarized in Box 1.2. The scenario illustrates some of the range of demands that crises may raise.
The steering committee also developed the fictional scenario presented in Box 1.3, describing a future crisis and some of the means by which relief officials might respond, given computing and communications capabilities beyond those currently available or tested in experimental contexts such as the Joint Warrior Interoperability Demonstrations (JWIDs) discussed in Box 1.2. These capabilities are extrapolated from current areas of research. The scenario draws on workshop discussions with both experienced crisis management officials and researchers in computing and communications. The scenario is somewhat fanciful and is not intended as a prediction of future capabilities or a recommendation for particular technical solutions. Its purpose is to illustrate specific ways in which breakthroughs and incremental advances in high-performance computing and communications could be motivated by the broad range of crisis management needs that workshop participants identified.
Crisis Management Needs for Computing and Communications
Networking and Communications
When a crisis occurs, the first order of business is to find out what happened—to perform a situation assessment. Nicole Dash observed that a situation assessment poses two requirements related to communications. First, authorities (such as emergency services managers) at the location of the crisis must be able to communicate their community's situation to the world outside the crisis area; second, rapid response teams must be able to enter the area, perform an assessment, and communicate back what they find in real time. In many crises, the normal infrastructure of telephone and data networks will not be able to support these initial communications requirements, for one or more of the following reasons: the crisis is in a location with little communications infrastructure in normal times (such as a remote location or a developing country with weak infrastructure), the crisis itself has destroyed the infrastructure (as large natural
disasters often do), or people overload the public networks by trying to call in or out of the area.
The U.S. wireline telephone network is designed to maintain or restore basic voice communications in the event of emergencies, but it may not be possible to depend on complete restoration of telephone service. Walter McKnight, of the National Communications System (NCS), reported that a review by NCS found recurring communications shortfalls for national- and regional-level emergency users responding to disasters.6 These included the following:
- Inadequate voice services;
- Congested wireline and wireless services;
- Unknown radio frequencies for various relief organizations;
- Limited access to distributed information resources;
- Limited information sharing among different functional branches ("emergency support functions," such as transportation, communications, fire fighting, health and medical, hazardous materials, and food);
- Inability to send and receive electronic mail among users and regional offices (including difficulty finding users' addresses); and
- Lack of service provisioning (rapid setup) for telecommunications equipment and facilities.
Commenting on the current state of crisis communications, John Hwang observed,
I think one of the misconceptions is that . . . we have a very robust infrastructure already . . . that automatically, in times of crisis, is ready to deal with the emergency situation. It turns out that's just not true. . . . [I]n emergencies, there are a lot [of] variables like mobility, survivability, breakdowns, things which just don't work the way you think [they're] supposed to work. Now, what happens is instead of depending on healing the entire infrastructure and bringing it back up again, what you have to do is find a way through it, which I always call the emergency lane problem.
To respond to concern about congestion, federal agencies and telephone companies (both long-distance and local carriers) have worked together to develop the Government Emergency Telecommunications Service (GETS; see Government Issue, 1995).7 This is a program to reserve voice-grade, analog communications capacity (suitable for fax and modem as well as voice) for priority emergency users, such as federal, state, and local governments and industry personnel. Users access GETS by dialing a special 710 area code and entering a personal identification number (PIN). GETS became available in 1995 and was used in the JWID '95 exercise and in response to the Oklahoma City bombing, Louisiana floods, and (through international calling) the Kobe, Japan, earthquake.
The Department of Defense (DOD) conducts annual exercises, called Joint Warrior Interoperability Demonstrations (JWIDs), for training and planning purposes and to demonstrate interoperability of the military services' information and communications systems. These exercises seek to test and demonstrate technologies such as distributed collaboration and the use of intelligent decision aids; improved battle space management and a common tactical picture including integrated collateral intelligence information; improved joint, combined, and non-DOD agency interoperability; expanded use of commercial satellites and new switching technology; multilevel security; knowledge-based information presentation; expanded use of modeling and simulation including enhanced operations and simulation integration; telemedicine; and improved network management and planning, among others.
In the JWID exercises, scenarios are used to create a framework for evaluating the performance of systems and planners in relation to valid, simulated operational requirements. Consistent with the growing military emphasis on operations other than war, recent JWIDs have addressed crisis management applications and have involved civilian agencies along with the military. The following scenario, related to natural disasters and subsequent complications, is excerpted from the description of Phase 3 of JWID '95 (conducted in September 1995).1
An earthquake measuring 7.6 on the Richter scale is registered by the U.S. Geological Survey as having occurred near New Madrid, Missouri. The epicenter is located at coordinates 36.5N–89.6W. The Director of the Arkansas Office of Emergency Services initiates response measures for the state. The Governor of the State of Arkansas, reacting to these actions, declares a State of Emergency and forwards request for Federal assistance. In response, elements of the Federal Response Plan [the federal coordination plan for responding to crises] are activated and deployed to provide immediate response assistance and collection of data necessary to determine actual extent of damages. An initial Disaster Field Office is established at the State Emergency Operations Center to facilitate emergency response teams.
Most of the state utilities and thoroughfares in the northeast quarter of the state are severely damaged or destroyed. Communications are limited to wireless in the damaged area. Loss of life and critical injuries are substantial and basic medical, shelter, power, food and water supplies are decimated. Some of the initial damages include:
The JWID '95 Phase 3 exercise linked participants distributed in Arkansas and throughout the nation using the Government Emergency Telecommunications Service (GETS), which provides crisis managers with priority voice service over facilities of the public long-distance and local telephone services (Hazard Technology, 1995a). The National Aeronautics and Space Administration (NASA) Advanced Communications Technology Satellite (ACTS) and a commercial mobile data network were used for mobile communications. As part of the exercise, a state trooper "discovered" the spilled ordnance, identified it as dangerous using a database of chemical and biological hazards previously installed on his portable computer, and reported it via wireless e-mail to the Emergency Operations Center in Conway, Arkansas. There, an atmospheric dispersion model was run to predict areas in danger and to plan an evacuation and cleanup operation. Crisis managers shared maps, situation reports, briefings, weather data, and similar information over an "emergency information network," a secure subnetwork deployed over the Internet using World Wide Web technology.
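The atmospheric dispersion model run in the exercise is not described in detail, but a common minimal form of such models is the Gaussian plume equation, sketched below with invented source and weather parameters. Real emergency models derive the dispersion widths from atmospheric stability class and downwind distance rather than taking them as constants, as is done here for brevity.

```python
import math

def gaussian_plume(q, u, x, y, sigma_y, sigma_z, z=0.0, h=10.0):
    # Minimal Gaussian plume concentration (g/m^3) from a continuous point
    # source with ground reflection: q = emission rate (g/s), u = wind speed
    # (m/s), x = downwind and y = crosswind distance (m), z = receptor height
    # (m), h = release height (m), sigma_y/sigma_z = dispersion widths (m).
    if x <= 0:
        return 0.0  # no concentration upwind of the source
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2)) +
                math.exp(-(z + h)**2 / (2 * sigma_z**2)))
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level concentration 500 m downwind on the plume centerline
# (all inputs invented for illustration).
c = gaussian_plume(q=50.0, u=3.0, x=500.0, y=0.0, sigma_y=40.0, sigma_z=20.0)
print(f"{c:.2e} g/m^3")
```

Evaluating such a model over a grid of receptor points, for forecast wind fields, is what turns a spill report like the state trooper's into a map of areas to evacuate.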
Priority reservation systems of this kind do little for crisis response in regions where telephone infrastructure is damaged and not yet restored or has never existed. For these situations, wireless alternatives include terrestrial and satellite services. However, GETS does not have a mechanism for securing priority access to cellular telephone circuits, which typically become jammed during a crisis; this reduces its utility for users who must be mobile at the scene of a crisis. NCS has experimented with crisis communications integrating voice and data service via the T-1 (1.5 megabits per second) transponder of the National Aeronautics and Space Administration's (NASA's) Advanced Communications
Technology Satellite (ACTS). The U.S. Army set up a transportable ACTS T-1 very small aperture terminal (VSAT) in a few hours during the Haiti operations in 1994 (Dixon et al., 1995, p. 27). John Hwang explained that FEMA can deploy to a field command center a mobile (truck-mounted) satellite terminal capable of digital communications at T-1 data rates (1.5 megabits per second).8 This is sufficient for multiple voice conversations and some data communications between a field command center and authorities outside the crisis area, but it does not solve the problem of communications among mobile workers at the crisis. There is also the drawback of the delay involved in driving the van to the crisis. Because of congestion or damage to local cellular telephone networks, local communications generally must rely on fire and police radios, which do not support data networking.
The trip to the opera was the high point for the thousands of international visitors to the conference. They are streaming out of the new center, which had been built in a decaying downtown area where old warehouses are mixed with the new buildings of the city's economic redevelopment zone.
Luke is on duty at the crisis center when the first images from emergency video-911 calls show the horrifying sight. Gigantic explosions rock a set of old chemical warehouses, and fires and fumes of unknown composition ring the new opera complex. The frightened audience panics and scatters into the surrounding alleys and buildings, where some become trapped. Television crews covering the opera immediately switch their cameras to this catastrophe. Within a few seconds after the initial alarms, all the digital video channels on the global information infrastructure (GII) are presenting the chaos, damage, and injuries live to a world whose virtual eyes are trained on Luke's and the other crisis officials' every action.
Luke, unlike many today, is well prepared for this event. His graduate specialty was computer-supported intuitive judgment—the science of making difficult decisions under deadline pressure with unprepared, uncertain, and incomplete information. This education has been augmented with specialized simulations in the Federal Emergency Management Agency's training facility, where various disasters and collaborative response exercises were presented using experiences and technology developed from distributed interactive simulation activities of the previous decade. Within seconds of the crisis' occurrence, the command center system suggests—and Luke and his colleagues confirm and refine—the reservation of key GII resources. These include priority communications links—so-called emergency lanes on the information highway—and a wide array of communications, computational, and information resources carried atop these lanes.
Academic supercomputers and distributed metacomputers roll out their simulations of colliding black holes and other physical phenomena. Now they stand ready to model the movement of the chemical plumes and raging fires. Specialized intelligent software agents roam the GII, and key resources are identified and activated. Some of the audio and database streams associated with the crisis are routed through translation service bureaus on the GII, so that Luke, his colleagues, and the many doctors, scientists, and decision makers from different countries who will become involved in the crisis can have information presented to them in their native tongues. Advanced distributed metacomputer support on the GII allocates and links the reserved computing resources with specialized resource centers (anchor desks) for chemical and atmospheric modeling, which apply the necessary databases and reaction simulations needed for plume prediction. The software codes were written in a highly scalable language descended from High Performance Fortran so that they run efficiently on multiple, heterogeneous hardware platforms and adapt smoothly to the scale of distributed computing resources that can be brought to bear on the crisis. Parts of the modeling code were in fact written years before but had scaled successfully as underlying hardware and communications technologies advanced in performance by orders of magnitude.
The distributed models and information systems rely on fault-tolerant high-performance networking protocols and recently developed neural network-based network management strategies to ensure that the GII's high-performance communications backbone supplies the necessary secure, low-latency bandwidth on demand. The backbone evolved from a confluence of ideas, such as the fine-grained multiplexing capabilities of asynchronous transfer mode (ATM), the need to accommodate delays in communications over global distances (imposed by the speed of light), integrated services using heterogeneous hardware and tunable requests for network resources, research on microkernel protocol composition, and functional abstraction—all areas of research in the 1980s and 1990s.
The judgment support environment that Luke and his colleagues use—which extends the rule-based decision support techniques of knowledge-based systems further into the realm of incomplete and uncertain information, unpredictable demands, and support for intuitive decision making by people—was adapted from commercial products to support military, law enforcement, and civilian crisis needs. Focused, minimally restrictive interconnection standards allow the crisis management application to incorporate components and build on top of GII services designed for larger commercial markets such as health care. The thriving middleware industry supplies the necessary integration technologies, including agents, rapidly configurable wrappers and mediators, and graphical scripting environments.
Luke benefits from a natural, intuitive user interface, which maximizes his effectiveness under stress and fatigue. This capability builds on advanced virtual reality ideas and tailors the computer interface to the problem at hand. Luke sees a three-dimensional geographic information system (GIS) when viewing the spatial confusion of the catastrophe; a virtual podium when briefing news media; a boardroom when defending his actions to angry politicians; and a summer wildflower meadow in moments of thought. Monitors record Luke's actions so that the system can learn for future events; they also note any increase in errors or stress and signal it to Luke. The information filtering, data fusion, and presentation tools also adapt to Luke's condition, reducing the number of inputs to which he must react.
Luke shares the virtual environment with others from federal, state, and local agencies and private institutions (such as hospitals and universities). These people form a virtual instant response organization customized for the situation at hand. Whether supported by supercomputer or handheld personal assistant, all interact over the GII through a common environment with a range of collaboration and productivity tools. However, the presentation of the bandwidth- and computation-intensive aspects of the environment varies—for example, from text to still images to video—depending on the available computing and communication resources. In this way the GII enables adaptive linking of "come-as-you-are" computational, communications, and personnel resources.
Jane, one of the leaders of tactical operations for the crisis, is on vacation hundreds of miles away in the northern Adirondacks, but she is able to collaborate effectively with other leaders and people at the disaster site. In the area of the catastrophe, a digital infrastructure installed at the end of the previous century is augmented by wireless connections and supplies digital video and other data from thousands of sensors to image processing and high-performance multimedia server resources on call outside the crisis area. Fortunately, basic mathematics research has developed adaptive compression algorithms, which are included in the GII protocol stack, so an order of magnitude more data can be carried over these links than would have been possible 10 years earlier. Jane observes multiple, three-dimensional perspectives of the crisis scene composed of data from hundreds of separate cameras, global positioning system (GPS) detectors, satellites, and other sensors, both fixed and carried by relief workers, as well as views from the news media. These data are integrated into continuously updated simulations of the vapor plume spread. They are also used to verify critical, uncertain information, such as the actual location of bridges and roads that may be misplaced on outdated or incorrect city maps.
The local authorities and institutions in the area of the catastrophe had fully implemented the new meta-data standards in their public records, so that Jane is able quickly to access and integrate the necessary community databases to identify medical and other crisis-relevant resources. Jane issues an alert for hospitals that can care for the unusual chemical poisonings. Medical records are fetched from distributed databases throughout the globe so that each patient is given the appropriate care. Digitized maps of the area are superimposed on the real-time images to optimally plan search and rescue operations. Maps of specific streets and buildings from tax records and architectural plans are downloaded to portable flat-screen devices carried by rescue workers at those buildings, who modify and update the maps with information obtained firsthand. Within the security perimeter of the crisis management system, proprietary data are made available, with the crisis priority temporarily overriding normal intellectual property safeguards so that crisis managers can use the best multimedia commercial yellow pages to help their personnel in the area find key resources. Tracers and trusted information agents monitor the cryptographically marked proprietary data to ensure they do not migrate out of the virtual subnet reserved for the crisis management effort.
Jane superimposes a view of the latest predicted spread of toxic plumes with a GIS representation of first-aid stations and determines that one of the stations soon must be moved. When she selects an evacuation route, the judgment support system offers up live video of potential choke points along the proposed route, and Jane notices debris blocking the way. She could open a voice link with workers on the scene to make sure they clear the road, but Jane decides that those workers' current relief activities (as displayed by the judgment support system) have higher priority and selects a different, but still adequate, evacuation route. Thus, the judgment system helps Jane and other decision makers make the best use of available police, medical, and fire fighting personnel.
By morning, the crisis is over. Authorized relatives and colleagues of injured people are able to discover and remain aware of their status on a moment-by-moment basis over the GII, including public information kiosks placed at all shelters and hospitals to which survivors have been dispersed. Information gathered during the response is integrated and maintained to enable prompt resolution and settlement of insurance claims.
An example of a crisis in which initial response teams went into the field with portable computers and satellite-capable telephones (which are limited to rates much lower than T-1) was Hurricane Marilyn, which struck the U.S. Virgin Islands in September 1995. The U.S. Army sent a 12-person early assessment team, called an "Away Team," to St. Croix before the hurricane arrived (Hazard Technology, 1995c). The team carried a 27-pound kit consisting of a laptop computer with commercial crisis-oriented database software and a communications set that linked with the commercial Inmarsat satellite communications service. In the first 24 hours after the storm, theirs was the only working communications system on the island, and so all official calls passed through their link. Not enough official channels were available over this link to meet the demand, and in the future, Away Teams are slated to carry more communications sets. However, no local networking among the laptops on the scene is currently supported, which limits team members' ability to share information collaboratively.
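The gap between the van-mounted T-1 terminals and the portable satellite kits described above can be made concrete with simple channel arithmetic. The sketch below assumes standard 64-kbps (DS0) digital voice channels and a typical portable Inmarsat data rate of the period; these figures are illustrative assumptions, not measurements from the deployments discussed.

```python
# Rough capacity arithmetic for the crisis links discussed above.
# Assumes uncompressed 64-kbps (DS0) voice channels; framing overhead
# and voice compression would change these figures somewhat.

T1_BPS = 1_544_000       # T-1 line rate, bits per second
DS0_BPS = 64_000         # one standard digital voice channel
INMARSAT_BPS = 9_600     # assumed portable satellite-kit data rate

voice_channels = T1_BPS // DS0_BPS         # channels a T-1 terminal can carry
link_ratio = T1_BPS // INMARSAT_BPS        # T-1 vs. portable-kit throughput

print(voice_channels)   # 24
print(link_ratio)       # 160
```

The two-orders-of-magnitude disparity is why a single truck-mounted terminal supports "multiple voice conversations and some data" while a 27-pound kit can carry only a few official channels.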
After the initial situation assessment, a rapidly assembled response structure with many people from different agencies and organizations must have the ability to communicate to coordinate their actions. As the NI/USR's Vision 2000 statement9 observes, current capabilities are limited to voice telephony, which is inadequate for crisis information and computing needs; furthermore, the lack of interoperability among equipment of different organizations is a serious problem that adds cost to the overall response—all of which suggests that research investments in solving this difficult problem could have high payoffs. The NI/USR statement adds,
An [interoperable] crisis information system is not available to the civilian side of [crisis management in] the United States today. A new system must have certain capabilities to function in the worst of circumstances. . . . The system must be interlinking, have open architecture, agreed-upon standards and consistent protocols. A lack of interoperable emergency communications is the greatest cause of the unreasonable escalation of dollar costs of disasters today. The cost of response to large scale, multiagency, multijurisdictional emergencies has soared off the top of the charts. The crisis information structure must be accessible to the scene of the need.
The control structure for emergency response must function in a bottom-up manner. People bleed at the site of the disaster, not in the halls of either the State Capitol nor on the floor of Congress. However, the uniformity in communications must be implemented from a national perspective within a united framework. We have dozens of layers of overlapping technologies with overlapping and often inconsistent characteristics. Thirty or 40 years of undirected growth in emergency communications has left us with our only means of cross-communications being the telephone. Telephones alone cannot and do not provide the robust crisis information system necessary today nor for the future.
James Beauchamp, of the Commander in Chief, Pacific Command (CINCPAC; a U.S. military command organization), noted that interoperability can be an especially significant problem in overseas disaster relief missions, in which military forces, government agencies, and humanitarian relief agencies from many countries may have to interoperate with each other.10 Moreover, solutions that are complex and difficult to implement, however technically brilliant, are of no value in the urgent context of a crisis. As Beauchamp pointed out,
The last [communications equipment] I need in a time of crisis is something I have never worked with before. If you're going to send me something brand new, one, don't ask me to transport it for you if it's very big. And two, you had better have somebody that knows how to operate it. I haven't got time . . . in the first 24 to 48 hours to train somebody new. I'm going to go with what I know. . . . I have to integrate everything I'm doing across the wide function . . . at my CINC [theater Commander in Chief] level when I'm out with the JTF [joint task force], . . . the foreign country forces, . . . probably anywhere from 30 to 75 nongovernmental or private or volunteer organizations. None of them have the same communications gear. If they have anything, a lot of them have AM [radio]; some of them have nothing at all. All of them have a different agenda and about half of them don't trust the military at all.
Security of the network is an obvious concern in crises where there is an active adversary seeking to obstruct the response. This is clearly the case in warfare and may also apply in confronting terrorism and criminal acts. The response team must keep its plans secret from hostile parties, and it must protect its communications against denial of service. Security needs are not limited to active, hostile situations. Crisis managers may need to communicate sensitive information, such as personal medical records and national security-related satellite imagery; the threat of disclosure of such information over an insecure crisis response network could leave the owners of information unwilling to share it with crisis managers. Robert Kehlet, of the Defense Nuclear Agency, observed, "When you operate at a federal level, though, you get access to databases and information that are very sensitive in nature, and you don't want to pass that out to the world in general and make it totally and completely public accessible. You just can't. That is Privacy Act information." In practice, this restriction has prevented FEMA from sharing some types of information outside a narrow sphere; instead, FEMA must handle the data itself and share the results in sanitized form (such as map images without the underlying data). Lifting this limitation might improve the flexibility of responses, but before that could happen, security technologies (as well as information security practices and guidelines) would have to offer greater assurance than they do now.
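The sanitization practice Kehlet describes, sharing derived results while withholding the sensitive underlying records, can be sketched in a few lines. The field names and records below are hypothetical; real Privacy Act handling involves far more than field stripping, so this shows only the shape of the step.

```python
# Minimal sketch of releasing sanitized results rather than raw records,
# as in the FEMA practice described above. Field names are invented.

def sanitize(records, allowed_fields):
    """Keep only fields explicitly cleared for release."""
    return [{k: r[k] for k in allowed_fields if k in r} for r in records]

raw = [
    {"name": "A. Smith", "ssn": "redacted", "block": "12-A", "damage": "severe"},
    {"name": "B. Jones", "ssn": "redacted", "block": "9-C", "damage": "minor"},
]

# Only operationally needed, non-identifying fields leave the enclave.
released = sanitize(raw, allowed_fields=("block", "damage"))
print(released)
```

A map of damage by block can then be shared with the wider response organization while the identifying data stay within the narrow sphere the text describes.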
Another emerging communications issue is the challenge associated with distributed sensor networks. The Crisis 2005 scenario (see Box 1.3) illustrates how real-time data might be applied not only in simulation but also to complement other information sources. Isolated and experimental examples discussed by workshop participants illustrate the value of fixed sensor networks for both anticipating and responding to crises, if they survive the crisis itself and can be managed and integrated effectively with other resources. Kelvin Droegemeier, of the University of Oklahoma, discussed the real-time integration of weather sensors in Oklahoma's Mesonet and the National Oceanic and Atmospheric Administration's (NOAA's) Doppler weather radars with high-performance modeling and simulation for severe storm prediction. Mesonet sensor data are communicated over wireless spectrum on loan from Oklahoma law enforcement users (Oklahoma State University and University of Oklahoma, 1993). A more demanding load on networks is posed by the California Institute of Technology's (Caltech's) pilot digital seismographic data network, which uses 56-kbps (kilobits per second) frame relay services granted by Pacific Bell's California Research and Education Network to carry data in real time to earthquake-modeling computers. Caltech's Egill Hauksson noted,
The goal of real-time earthquake monitoring is to collect data in real time from sensors in the field and to deliver near real-time information and analysis with high reliability to the users in the field. Today, continuous data collection is much more demanding of bandwidth and speed than event-driven information distribution. . . .
Hauksson added that sustained long-term maintenance of a real-time earthquake monitoring system is infeasible: past experience shows that analog networks yield unacceptably noisy data and have too limited a bandwidth, while the alternative of commercial digital network services is too expensive.11
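Hauksson's point that continuous collection is bandwidth-hungry can be checked with a back-of-the-envelope calculation. The station parameters below are assumptions chosen to be typical of digital seismographs of the era, not Caltech's actual figures.

```python
# Back-of-the-envelope telemetry load for continuous seismic data,
# illustrating why continuous collection strains a 56-kbps link.
# Station parameters are assumptions, not the pilot network's real values.

COMPONENTS = 3          # sensor axes per station
SAMPLE_HZ = 100         # samples per second per component
BITS_PER_SAMPLE = 24    # digitizer resolution
LINK_BPS = 56_000       # the frame relay service mentioned above

station_bps = COMPONENTS * SAMPLE_HZ * BITS_PER_SAMPLE   # raw bits/s per station
stations_per_link = LINK_BPS // station_bps

print(station_bps)         # 7200
print(stations_per_link)   # 7 -- one link saturates after a handful of stations
```

Under these assumptions a single 56-kbps circuit carries only about seven stations of raw data, which is consistent with Hauksson's observation that commercial digital services quickly become too expensive for a network of useful size.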
Sensors that could be deployed rapidly during crisis response could provide additional inputs and perhaps increase the resolution of existing sensors' coverage. Affordable sensor systems for crisis management, however, may not be available until larger commercial markets demand their development. David Kehrlein, of the Office of Emergency Services, State of California, speaking of the need for spatial data including maps, suggested that "10 years from now, when they have . . . locators in vehicles and Chrysler and Ford and GM say, 'We want this country mapped, by golly,'. . . you will get that [mapping] done. But today, those databases don't exist at that quality level."
Sensor networks do not automatically ensure data quality. Whereas the global positioning system (GPS) provides information of known reliability, other sensor networks may decrease in quality during a crisis, in ways that make their integration with models or with other databases challenging. Egill Hauksson noted that the delivery of noise-free data to models can be crucial during a crisis because "computer algorithms and models designed to deal with noisy and unpredictable data are inherently unstable and prone to failure under high load conditions, when they are most needed. If noise-free digital data are available—as opposed to noise-contaminated analog data—the data processing can be simplified and made more reliable." His comments underscore the interdependence of computing and communications technologies.
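Hauksson's contrast between noise-contaminated analog data and clean digital data can be illustrated with a toy example: a single telemetry glitch corrupts a naive average, and rejecting it requires the kind of extra robust-filtering machinery that, as he notes, adds complexity and potential failure modes. The readings below are invented.

```python
# Toy illustration of why noisy data complicate processing: one glitch
# in a five-sample window drags the mean far from the signal, while a
# median filter (extra machinery clean data would not need) rejects it.
from statistics import mean, median

window = [5.1, 5.0, 99.7, 5.2, 4.9]   # one telemetry spike among valid samples

naive = mean(window)      # badly corrupted by the single spike
robust = median(window)   # the spike is rejected

print(naive, robust)
```

With noise-free digital input, the simple mean would suffice; with noisy input, every downstream model inherits either the corruption or the added filtering complexity.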
The vision expressed by NI/USR (see Box 1.1)—of a well-integrated, interoperable communications network supporting a mix of voice, data, and video communications—is beyond the reach of current crisis management, for reasons including both technology limitations and cost. Experimental systems of high cost, unknown reliability, and doubtful ease of use are not appropriate for widespread operational deployment. However, targeted deployment and experience with real users in realistic exercises (such as JWID '95) and actual crises are crucial for testing and developing technologies and research ideas (such as those elaborated in Chapter 2) to make communications systems affordable and usable in the future.
Crisis management can benefit from computation at all levels—from the forward scene of action to strategic planning and coordination at state, regional, and national levels. Crises place demands on traditional high-performance computing applications, such as modeling and simulation. They also underscore the need for a broader notion of delivering computational performance to users who require it, wherever they are located, through a balanced, integrated collection of computers, communications, and data storage spread throughout the response organization.
Traditional high-performance computing has been applied for years to modeling phenomena that are relevant to crises, such as severe storms, earthquakes, and atmospheric dispersion of toxic substances (OSTP, 1993, 1994a; NSTC, 1995). However, high-performance modeling resources have been used primarily for scientific research rather than real-time crisis response. Forecasting by the National Weather Service, including hurricane track prediction, appears to be one of the few products of high-performance computation that is operationally available for crisis response planning in real time. (Kelvin Droegemeier described a storm prediction model, discussed in Chapter 2, that has been tested experimentally for real-time applications.) The ability to rapidly requisition computers engaged in scientific research and other activities, as envisioned in the Crisis 2005 scenario, would make high-performance resources available for other case-specific applications during crises, but it will require both new administrative arrangements and further advances in the flexibility, affordability, and ease of use of these resources.
Not all simulation problems require high-performance computation to yield useful results. Robert Kehlet described some ways that FEMA uses workstation-based models, integrated into a tool called the Consequences Assessment Tool, in actual operations, such as speeding up relief assistance based on earthquake model outputs. For example, after the Northridge, California, earthquake, officials approved checks to homeowners without waiting for a site inspection if their residences were in areas that simulations identified as heavily damaged. Kehlet's examples are proof of the concept that modeling also can have practical operational value in predicting how an impending crisis will evolve and in planning a response. Given some advance notice of the path and severity of an approaching hurricane, FEMA can simulate the likely damage to population centers, helping officials plan the assembly and deployment of relief supplies (e.g., food, shelter, and medicine) to an affected area.12 However, the ability to generate accurate inputs to this analysis—the hurricane's path and severity—is itself a very difficult simulation problem in which further advances are necessary. As John Hwang observed,
What we are not very good at is phenomenology modeling; e.g., actually modeling a hurricane. What we are good at is [estimating damage]. Given some kind of hurricane, a particular path, and the intensity, we certainly can do a lot of analyses of economic, populace, and infrastructure damage, and estimate what will happen to a particular area.
Hwang's comment is especially significant in light of the particular importance of weather-related phenomena to crisis management. Kelvin Droegemeier relayed the statistic that between 1967 and 1991, 67 percent of the world's major disasters were meteorological or hydrological in nature. Modeling many of these phenomena requires high-performance computation. NOAA's High Performance Computing and Communications Office anticipates that increases in computing power are needed to improve understanding of weather and climate effects, for example, by improving the resolution of weather models and more accurately representing key features such as weather fronts and ocean eddies (Sawyer, 1995).13 Roger Ghanem, of the State University of New York, Buffalo, noted at Workshop II that many other natural and technological disaster phenomena will also be amenable to high-performance modeling, such as forest fire and urban fire spreading, detailed structural analysis of damaged buildings, and chemical and nuclear plant accidents.
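The computational cost of the resolution improvements NOAA anticipates grows much faster than the resolution itself. A common rule of thumb (a sketch, not any specific NOAA model's actual scaling) is that refining the horizontal grid spacing by a factor r multiplies the number of grid points by r in each horizontal dimension and also forces proportionally shorter time steps (a CFL-type stability constraint):

```python
# Rule-of-thumb cost scaling for finer weather-model grids: refining
# horizontal spacing by `refinement` multiplies grid points in each of
# two horizontal dimensions, and the finer grid also forces a
# proportionally shorter time step. Illustrative only.

def relative_cost(refinement, horizontal_dims=2):
    grid_factor = refinement ** horizontal_dims   # more points per layer
    timestep_factor = refinement                  # more steps per forecast hour
    return grid_factor * timestep_factor

print(relative_cost(2))   # 8x the work to double horizontal resolution
print(relative_cost(4))   # 64x to quadruple it
```

An eightfold cost for each doubling of resolution is why better representation of fronts and ocean eddies translates directly into demand for substantially more computing power.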
Performance is needed not only to produce more accurate modeling results, but also to deliver them in a timely manner. As Lois Clark McCoy, of NI/USR, said, "The greatest hazard with which we deal in crisis management is time." James Beauchamp noted that timely results are a function of more than processor speed; fast processing is useless if it cannot be applied to current (real-time or near-real-time) data, which could be the case if large efforts at preprocessing or formatting the inputs are necessary before a system can get to work on the actual problem at hand. Timeliness also requires communications and storage to deliver the results of computations where they are needed. This is reflected in Lois Clark McCoy's call for making remote resources available:
Systems are available today off the shelf. . . . They work on PCs [personal computers]; therefore they have limited memory and their processing time is slow. To date there has been no attempt within the emergency management domain to centralize the needed high-performance computer capability for off-site processing. The product of this off-site processing could then be suitable in near real time for downloading onto the field PCs. This seems to be the next near-term solution to increasing the power of the present emergency management software.
David Kehrlein gave an illustration of the potentially useful combination of centralized high-performance computing and field-based PCs. Computer-aided design (CAD) software proved useful in the search and rescue operation at the Murrah Building in Oklahoma City to map the areas to be searched and to correlate estimated locations of victims (based on where their offices were located before the blast) with the actual scene. A useful application, but one that was beyond the available computational resources, would have involved transferring the CAD data into a structural model and using finite-element analysis to predict the loads on various parts of a damaged building. This would indicate where shoring was necessary to prop up damaged structures and reduce the danger to survivors and rescuers from further collapses. Remote computation is appropriate for this application because relief teams in the field have, at most, personal computers available on the scene.
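The structural-analysis step Kehrlein describes has, at its core, the solution of a stiffness system K u = f and a check of member forces against remaining capacity. The following toy version idealizes a damaged structure as two springs in series; real finite-element analysis of a building involves thousands of elements and is exactly the part that exceeded field resources. All stiffnesses, loads, and capacities below are invented.

```python
# Toy sketch of the finite-element idea: solve K u = f for a fixed-base
# chain of two springs (spring 2 models a damaged member), then flag
# members whose force exceeds an assumed residual capacity as needing
# shoring. Illustrative numbers only.

k1, k2 = 8.0e6, 2.0e6      # stiffnesses (N/m); k2 is the damaged member
f1, f2 = 3.0e4, 5.0e4      # loads (N) applied at the two free nodes

# Stiffness matrix of the chain:
#   [k1+k2  -k2] [u1]   [f1]
#   [ -k2    k2] [u2] = [f2]
det = (k1 + k2) * k2 - k2 * k2
u1 = (f1 * k2 + k2 * f2) / det          # Cramer's rule, 2x2 system
u2 = ((k1 + k2) * f2 + k2 * f1) / det

member_forces = [k1 * u1, k2 * (u2 - u1)]   # axial force in each spring
CAPACITY = 6.0e4                            # assumed residual strength (N)
needs_shoring = [f > CAPACITY for f in member_forces]

print(member_forces)    # bottom member carries the full accumulated load
print(needs_shoring)
```

Here the bottom (intact) member carries the accumulated load and is flagged, which is the kind of where-to-shore answer remote computation could have returned to the PCs on the scene.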
Simulation can potentially be useful for testing alternative operational choices, for decision support during crises, and as an aid to planning and personnel training before crises occur. Workshop participants suggested that simulation of an ensemble of related options and their outcomes in scenarios or during actual crises could improve decision making, if the models were sufficiently realistic. Before a threatened terrorist act, for example, there may be enough time to simulate a range of tactical approaches and select the one most likely to succeed.
However, more is required than modeling of physical phenomena. Phenomena that depend fundamentally on human individual and organizational behavior are complex and difficult to model realistically, making the simulation of human judgments such as the actions of adversaries and the political consequences of decisions particularly challenging. Nevertheless, there is a need for ways to model these phenomena, because decision makers require training to develop good judgment skills. Speaking of military involvement in international disaster relief operations, James Beauchamp observed:
All of a sudden every decision [the operational commander makes] not only has a military application to it, it has a political application. . . . You have to train a guy to do that. . . . I haven't found a good model yet to really train that guy to change his mind-set from a tactical commander today to an operational commander tomorrow. We've got to give him models that show him the value of public affairs, the value of doing news interviews, how to manage the press, how to manage information, how to deal with the customs and courtesies of another country, how to deal with coalition warfare when the day before he wasn't doing any of that.
Modeling and simulation are not the only applications requiring computation; all elements of an information infrastructure can be made more capable by increased computing power. In the information arena, applications relevant to crisis management that demand high-performance computation include data mining to detect anomalous entries (outliers) in federated databases; data fusion to integrate sensor inputs with other information sources; geographic information systems (perhaps, given sufficient computing power, with three-dimensional terrain rendering); and stereo reconstruction from multiple images and video streams.
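The first task in that list, detecting anomalous entries, can be illustrated with the simplest possible statistical screen: flag values far from a field's mean in units of its standard deviation. Production data mining over federated databases is far more sophisticated; the claim figures below are invented.

```python
# Sketch of outlier detection in a database field: a z-score screen.
# With a small sample, a single extreme value inflates the standard
# deviation, so a modest threshold is used here. Figures are invented
# damage claims in thousands of dollars.
from statistics import mean, stdev

def outliers(values, threshold=2.0):
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) / sigma > threshold]

claims = [12, 15, 11, 14, 13, 950, 12, 16]
print(outliers(claims))   # [950]
```

Scaling such screens across millions of records and many joined databases, fast enough to matter during a response, is where the demand for computing power arises.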
Computation applications for information management call for a balance of performance and accessibility. David Kehrlein argued, for example, that almost any step toward placing information technology at the front lines of a crisis, such as a PC in every relief shelter, would be a valuable improvement in the accessibility of computing resources. Even maintaining a roster of survivors at each shelter and hand-carrying data diskettes between shelters would be a first step toward improving the current situation. Informing rescue teams that someone they are seeking in a collapsed home is actually alive and well in a nearby shelter is a major benefit to a search and rescue operation. In general, information management is a crucial need that can be highly complex in crises, as discussed in the next section, and it requires access to computing power at all levels of the response effort.
In a crisis, problems can arise from both a scarcity and an excess of information. Scarcity of information about an unfolding situation must be overcome by locating and obtaining information from many different sources. Once the response organization begins pulling in information, however, a flood of information can overwhelm decision makers. As Donald Brown, of the University of Virginia, observed, "There is too much information for human decision makers to use effectively in a crisis response situation. Computer-based data fusion systems can aid human decision making by quickly assimilating and filtering information."
Computing and communications technologies can help to identify, retrieve, filter, and integrate relevant information into a manageable, coherent picture of the crisis. Alan McLaughlin, of Lincoln Laboratory, Massachusetts Institute of Technology, noted great similarity between crisis management and military command and control, in that both require improved "situation awareness . . . and a common relevant picture of the area of engagement." Lois Clark McCoy stated:
The essence of crisis management is an effective information handling capability. Commanders must have it; analysts must have it; tactical operators must have it. Local emergency managers now realize it is possible to obtain a rapid and clear picture of the disaster . . . yet, we still have not applied these tools and capabilities to the actual command and control of emergency response operations.
The urgency of crises forces an ad hoc response—piecing together whatever sources are at hand, by any means available. For example, search and rescue workers in major floods and earthquakes have been guided to victims by images from news helicopters (Gillies, 1994). Urgency can lead to extraordinary efforts to bridge the gaps between data sources, such as printing maps and correlating data from different systems by hand. David Kehrlein related how, following the 1991 fires in the Oakland hills, California relief officials obtained local utility maps and overlaid them with GPS data collected from the field as a way of identifying the owners of various pieces of unrecognizably devastated ground, who could claim disaster benefits. Manually registering this information against printed maps is laborious and slow.
Data sources maintained by many different federal, state, and local agencies may be relevant in a crisis. Walter McKnight listed, for example, geographic data, demographic data, medical files, and real-time weather data. Because such data are developed in separate contexts specific to each agency, they often follow different formal and de facto standards, which makes translation and integration difficult. Those who hold data may have little incentive to make major efforts to accommodate external needs such as crisis management. Thus, efforts such as a recent initiative by the Emergency Management and Engineering Society to develop and obtain compliance with common crisis information standards are likely to progress only slowly (Newkirk, 1994, p. 305).
Geographic information systems (GISs) provide a good example of both the opportunities and current limitations of integration across different data standards. Data fusion from multiple sources, managed and presented within a GIS, can support current assessments of situations and planning for future evolution of the crisis. For example, a GIS map with building locations (drawn from a database of residences and businesses) could be combined with sensor data on wind speed, direction, and chemical composition of a toxic vapor cloud to show where evacuation must take place. Integrating additional GIS-formatted data about the current location of emergency vehicles, shelters, and relief supplies could facilitate evacuation planning. In addition to the technical challenges posed by fusion of data from mixed sources, however, variations among different vendors' GIS standards currently impede such uses. Although existing commercial GIS standards allow for the import and export of data files in different formats, the main operational processing of geographic information occurs within proprietary internal structures (Newkirk, 1994).
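The kind of GIS overlay described above—combining building locations with a modeled toxic plume to identify an evacuation zone—can be sketched in a few lines. This is a minimal illustration, not any vendor's GIS API; the data, names, and the simple ray-casting test are all illustrative assumptions.

```python
# Illustrative sketch of GIS-style data fusion for evacuation planning:
# overlay building locations on a modeled toxic-plume polygon and flag
# the buildings that fall inside it. All names and data are hypothetical.

def point_in_polygon(x, y, polygon):
    """Ray-casting test: is point (x, y) inside the polygon?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def buildings_to_evacuate(buildings, plume_polygon):
    """Return names of buildings whose coordinates lie inside the plume."""
    return [name for name, (x, y) in buildings.items()
            if point_in_polygon(x, y, plume_polygon)]

# Illustrative data: a square plume footprint and three buildings.
plume = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
buildings = {"school": (1.0, 1.0), "clinic": (3.5, 3.5), "depot": (6.0, 1.0)}
print(buildings_to_evacuate(buildings, plume))  # → ['school', 'clinic']
```

A production GIS would of course use georeferenced coordinates and vendor-specific data structures; the point here is only that the overlay itself is a well-defined computation once the data sources can be brought into a common format.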
In addition to integrating across different standards and types of data, computational help is necessary for abstracting, adding value, and thereby turning data into useful knowledge. This problem involves much more than just translation between data formats. Integration requires recognizing connections and patterns among completely different kinds of data, such as video images from aircraft, map coordinates of structures and roads, and spoken or written field reports from relief workers. It also requires a capability to cope with missing,
inaccurate, or deliberately falsified data. Chapter 2 discusses opportunities to develop and improve what workshop participants characterized as "judgment support" capabilities—information technology tools that can support the crisis manager in making judgments in unexpected, urgent situations, in which information is uncertain and incomplete.14
In crises, integration and analysis must happen rapidly to be useful. As Joseph Stewart II, of MITRE Corporation, observed, information management has been addressed in the battlefield context, but solving the problem there requires much better integration of specifically high-performance computing:
Decision makers . . . must be presented with timely intelligence . . . The chore is to turn the data into useful, corroborated, validated information that may be presented to decision makers with confidence. This accrual, sorting, corroboration, consolidation and dissemination of continuously arriving data is a major task. . . .
In the military situation, data arrive by electronic means and generally in a format that is prescribed. This format contains the essential elements of friendly information in easy-to-extract form, but there is much additional information that is sent along as plain text. Some data may be missing from early reports. Some information on the same contact may be referenced to differing coordinate systems if it comes from more than one observer. Latency of the information derives from delays in the communications system, poor time coordination in the field, or the inability of an observer to transmit it until he returns to friendly territory. . . . In the civil context, sources of data are "less trusted" and more varied, and no single corps has the responsibility for ensuring that data get consolidated. Moreover data may not be released by the organization that collected them, or the release may be delayed, thereby adding to the latency problem.
The application of [high-performance computing] to this problem must provide a real-time solution with an in-line system that is capable of parsing standard formatted information from a variety of sources. . . . Input data could also be weighted in this system such that data with a high degree of positional and time accuracy from systems that access GPS or a triangulation system would count more heavily than other data. . . . Computers could be assigned to process data from simultaneously arriving messages, until some time-based sorting and ordering can be done. Contacts could also be compared with databases, most of which are countable but large. Processed information must then be presented to the decision makers for fusion with other sources of data that are not automated. . . . High performance is required to allow calculations to be done in real time, so that the means of processing does not add to the latency problem for later users.
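The weighting scheme Stewart describes—counting data with high positional and time accuracy (e.g., GPS-derived fixes) more heavily than other reports—can be sketched as a confidence-weighted fusion. The weights and report values below are illustrative assumptions, not drawn from any fielded system.

```python
# Illustrative sketch of confidence-weighted data fusion: position
# reports for a single contact are averaged, with GPS-derived fixes
# weighted more heavily than rough visual estimates. Data hypothetical.

def fuse_position(reports):
    """Confidence-weighted average of (x, y, weight) position reports."""
    total = sum(w for _, _, w in reports)
    x = sum(xi * w for xi, _, w in reports) / total
    y = sum(yi * w for _, yi, w in reports) / total
    return x, y

reports = [
    (10.0, 20.0, 0.9),   # GPS-derived fix: high weight
    (11.0, 21.0, 0.9),   # second GPS-derived fix
    (30.0, 40.0, 0.1),   # rough visual estimate: low weight
]
x, y = fuse_position(reports)
print(round(x, 2), round(y, 2))  # → 11.53 21.53
```

The fused estimate stays close to the two high-confidence fixes despite the discrepant visual report, which is exactly the behavior the quoted passage calls for.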
Data quality is another important issue. The quality of commercially available GIS databases poses obstacles to automated integration, because data in the GIS cannot always be trusted, and it is not apparent from the GIS which data
points are likely to be out of date or otherwise incorrect. David Kehrlein related that in the response to the Northridge earthquake, the commercial GIS database that was used had a 40 percent error rate in locating and identifying hospitals, primarily because of ownership changes, telephone number changes, and so on. Usually, maps must be updated and corrected at the crisis scene against aerial photographs and field reports, to identify roads and buildings that are not found where the crisis managers' maps say they are. A national effort to improve the completeness, quality, and standardization of relevant data would be one solution to the problem, but this is unlikely to occur in response to the relatively small marketplace demand for crisis management tools.
Access to databases specific to a crisis region can also be inhibited by proprietary and security classification constraints. For example, participants from FEMA reported that during Hurricane Andrew, FEMA was unable to obtain some necessary data from Dade County until it paid the county for the data. Data and system protection mechanisms, some potentially developed for such applications as electronic commerce, could help implement more rapid transfer of authority to access data, particularly if there were a way to ensure that the data's privacy or intellectual property value could not be compromised by release outside the circle of crisis management.
If the need for specific information can be anticipated, certain problems related to location and integration of information from varied sources can be worked out in advance. John Hwang described FEMA's ongoing development of a National Emergency Management Information System, in which subject area databases related to crisis management activities (e.g., regulations and requirements for obtaining federal disaster assistance) are accessible by network to federal, state, and local authorities. Among other benefits, this approach makes resource sharing possible, thus reducing costs. It also hastens response by enabling "one-stop shopping" for key information. It is not always feasible, however, to predict the need for specific kinds of information (e.g., treatment alternatives for a mass outbreak of a rare or unknown disease). It may be equally infeasible to preassemble information concerning all possible specific instances whose general usefulness is clear. For example, rescue workers need building plans—ideally in a form that can be loaded into computer structural models. However, the need for detailed plans of the Murrah Building could not have been anticipated, and it would likely be infeasible to preassemble plans for every building in the nation that might suffer a bombing attack. Rapid response therefore calls for an ability to locate, retrieve, and integrate such information during a crisis.
A powerful message from the workshops was that crisis management systems must be usable by technical nonexperts working under extraordinary
conditions; ease of use is therefore a central goal. Perhaps the most visible aspect is the interface between the user and the machine. The purpose of this interface is to enable effective human-machine communication; technological capabilities such as graphical display and speech recognition ultimately are relevant only in relation to that goal. Simplicity is not necessarily the highest virtue for such interfaces; rather, what is required is appropriateness to the task at hand and to the capabilities of each user. Training and familiarity with tools are crucial if the tools are to be useful during a crisis, as James Beauchamp's comments above illustrate. The finite resources available to crisis management organizations put a premium on reducing the amount of training time needed.15
The issue of usability arises not only in training for crises, but during them as well. Crises put severe stress on people, because of the extreme pressures to save lives and avert damage, as well as the fatigue that comes with overwork. David Kehrlein observed that stress can lead to a measurable decline in the cognitive capabilities of crisis managers. Considering users as part of the total system makes it clear that the ability of tools to adapt to user needs and capabilities is important to overall system performance.
The system environment should provide support for communication and collaboration between people, as well as the interactions of people and computers—in the extreme, an instant "electronic administration" to support a newly created response organization. Noting the complexity of the organizational management tasks involved, Lois Clark McCoy identified the need for ways to track and control the constantly changing information flow throughout the crisis organization as a way of reducing wasted effort and improving the organization's effectiveness. In addition, the varied backgrounds, procedures, and methods of working that different collaborating groups bring to a crisis response increase the need for clear, complete communications and information sharing; a photograph or map, for example, might convey information with a persuasiveness and clarity missing from verbal communications between people under stress who are not used to working together.
To achieve the goal of what Don Eddington, of the Naval Research and Development Laboratory, described as a consistent picture of the situation shared by everyone involved in responding to a crisis, there is a need for information sharing that involves more than just multiparty voice communications and can be done without face-to-face meetings in conference rooms. To coordinate complex response efforts involving many parties, there could be value in collaboration support systems (e.g., teleconferencing) that integrate both person-to-person communications (in multiple modes, such as text, audio, and perhaps video images) and other forms of shared data, including multimedia and sensor data, in real time. Multiple levels of computing and communications performance must be accommodated, however, because crisis management necessarily involves cooperation among people with widely varying resources. In particular, integrating workers in the field—whose upper limit of resources may be portable telephones
and laptop computers—into the collaborative environment involves an ability to scale across different levels of resources and adapt to variable or unstable resources in a crisis. User-controlled adaptivity may be useful, allowing the user, for example, to select trade-offs between video image quality and frequency of image redrawing and between still and moving images; alternatively, there may be automated ways to optimize these decisions. These types of scalable collaborative applications are relevant not only in crisis management but also in other application domains, including distributed "collaboratories" for academic research and enterprise systems for business.
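The user-controlled adaptivity described above—trading video image quality against frequency of image redrawing, or choosing between still and moving images, under a bandwidth constraint—can be sketched as a simple mode-selection policy. The modes, bit rates, and preference rules below are illustrative assumptions only.

```python
# Illustrative sketch of user-controlled adaptivity: given a bandwidth
# budget, pick the video configuration the user prefers, trading image
# quality (resolution) against motion (frame rate). Figures hypothetical.

def kbps_needed(width, height, fps, bits_per_pixel=0.1):
    """Rough compressed-video bandwidth estimate in kilobits per second."""
    return width * height * fps * bits_per_pixel / 1000

def choose_mode(budget_kbps, prefer="frame_rate"):
    """Pick the best mode that fits the budget, favoring either
    image quality (pixels) or motion (frames per second)."""
    modes = [  # (label, width, height, frames per second)
        ("still image", 640, 480, 0.1),   # one high-res frame per 10 s
        ("low-res video", 320, 240, 10),
        ("high-res video", 640, 480, 10),
        ("smooth low-res", 320, 240, 30),
    ]
    feasible = [m for m in modes if kbps_needed(*m[1:]) <= budget_kbps]
    if not feasible:
        return "text only"  # degrade gracefully when even stills won't fit
    if prefer == "frame_rate":
        return max(feasible, key=lambda m: (m[3], m[1] * m[2]))[0]
    return max(feasible, key=lambda m: (m[1] * m[2], m[3]))[0]

print(choose_mode(100))                     # → low-res video
print(choose_mode(250, prefer="quality"))   # → still image
```

Note that under the "quality" preference a generous budget still yields a high-resolution still image rather than low-resolution video, which mirrors the still-versus-moving trade-off the text describes; an automated optimizer would make the same kind of decision without user input.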
OTHER APPLICATION DOMAINS
Although the workshop series ultimately focused on crisis management as a tool for uncovering valuable research areas in computing and communications, the steering committee and workshop participants spent time considering other application areas. These served both as additional input from which to identify research issues and as a means of testing the generality of conclusions based on crisis management. The following sections, drawn primarily from input at the workshops, highlight similarities and differences between these domains and crisis management, including specific research opportunities that, with respect to crisis management, are discussed further in Chapter 2. All of these areas have been addressed more thoroughly in other, focused reports. They are reviewed here briefly to provide a context for—and to examine their interdependence with—crisis management. Citations are provided to more extensive treatments.
The first two areas, digital libraries and electronic commerce, represent both end-user applications in themselves (e.g., educational use of libraries, consumer banking, and retail transactions) and infrastructural services that enable specific capabilities within other application areas. For example, crisis managers could turn to digital libraries for information discovery and retrieval tools or to electronic commerce for secure authentication and payment services in order to obtain proprietary information on an expedited basis.
The other two areas, manufacturing and health care, are applications that, like crisis management, may derive significant benefit from broadly distributed computing and communications technologies. Manufacturing and health care applications (other than emergency medicine) place less emphasis on urgent, ad hoc response than does crisis management, and so integration and other technical challenges can, in principle, be addressed in a less ad hoc manner. Nevertheless, these areas face many of the same challenges as crisis management for coping with complexity and diversity, integrating information and software resources, and adapting to user capabilities and needs.
The interconnected demand for and use of resources among application areas illustrate the potential for technological advances in one application area to benefit others. They also indicate the drawbacks, in terms of lost flexibility, of failing to accommodate the interdependencies—for example, failing to accommodate demands for service and access across architectures and standards that, as noted in Box S.2, are owned and controlled by multiple parties and are inevitably diverse. Indeed, one observer characterized it as a firm requirement that research on computing and communications in each application area take into account the others, noting, "You can't address one or two of them and let the others slide."
Digital Libraries

Digital libraries make more intensive demands for storage and bandwidth to manage and interchange image, audio, video, and numeric information than do activities with traditional high-performance computational requirements such as modeling and simulation. Digital libraries require substantial advances in software; information management technology and practices; and the ability to process, navigate, manage, and classify not only textual data but also multimedia, sensor feeds, and numeric data. Digital libraries also represent a primary focus of research in the scaling of very large, autonomously managed distributed systems. Central issues in the successful development of digital libraries encompass the identification, development, and adoption of appropriate standards, as well as fundamental questions about the definition of interoperability among systems and collections of information at various levels and the mechanisms that can be used to accomplish such interoperability.
Finally, it is important to recognize that digital libraries are not purely technological constructs; rather, they also encompass complex sociological, legal, and economic issues that include intellectual property rights management, public access to the scholarly and cultural record, preservation, and the characteristics of evolving systems of scholarly and mass communications in the networked information environment. The requirements for reflecting this broader context in software and network protocols are poorly understood but may generate substantial computational and infrastructure demands—for example, to examine intellectual property rights and ancillary evaluative or rating information associated with very large numbers of digital objects as part of query processing and result ranking. Designing technical approaches to support the social, legal, and economic framework of digital libraries that are also flexible enough to recognize and support reuse in new contexts is a challenging problem that itself has significant legal and economic dimensions. As resources that comprise digital libraries are reused in the crisis management environment, it may not be feasible, for example, to stop to negotiate a license agreement for access to a networked information resource that is needed urgently to respond to a crisis.
Digital libraries place extensive and challenging demands on infrastructure
services relating to authentication, integrity, and security, including determining characteristics and rights associated with users. Needed are both a fuller implementation of current technologies, such as digital signatures and public-key infrastructure for managing cryptographic key distribution, and a consideration of tools and services in a broader context related to library use. For example, a digital library system may have to identify whether a user is a member of an organization that has some set of access rights to an information resource (analogous to the privileges discussed below in the section "Electronic Commerce"). Use of digital libraries will require both adaptivity to changing bandwidth and computational resource constraints and the ability to reserve network resources. Because digital libraries are an international enterprise serving a very large range of users, they must be designed to detect and adapt to the varying connectivity of individual resources accessible through networks. Digital libraries will also build on a range of other infrastructure services such as electronic payments and contracting.
The availability or reliability of resources is a less central issue for digital libraries than for crisis management. If a data source is temporarily unreachable or otherwise unavailable, the library user can be told to try again later; crisis managers, however, must make use of the best data available at a given time. Both application domains require adapting to the capabilities of user workstations and the bandwidth available to those workstations. Strategies that are merely alternatives in the digital library environment are in many cases mandatory for crisis management. For example, a digital library system can simply rank results and present them at some later time, but crisis management applications must summarize data and provide immediate overviews.
Digital libraries require substantial computational and storage resources both in servers and in a distributed computational environment. Little is known about the precise scope of the necessary resources, and deployment and experimentation are needed (Lynch and Garcia-Molina, 1995; OSTP, 1994b). From the 1960s to the 1980s, much of the research and development in the information retrieval community was constrained by the limited computational capacity of machines available to most users, particularly the inability to perform computations on large databases in near real time. Current increases in the availability of computational power are leading to a reconsideration of much of this work and may point toward the use of algorithms that are extremely intensive in both their computational and their input-output demands as they evaluate, structure, and compare large databases that exist within the distributed environment. In many areas that are critical to digital libraries, however, such as knowledge representation and resource description, or summarization and navigation, even the basic algorithms and approaches are not yet well defined, which makes it difficult to
project computational requirements. It appears likely that many breakthroughs in digital libraries will be computationally intensive—for example, distributed database searching, resource discovery, automatic classification and summarization, and graphical approaches to presenting large amounts of information that range from information visualization through virtual-reality-based modeling.
In addition, distributed queries may be computationally intensive. Digital library applications call for the aggregation of large numbers of autonomously managed resources and their presentation to the user as a coherent whole. Computation can compensate where individual resources are poorly optimized for uses that involve aggregation with other resources in ways that go far beyond their original design goals. The ability of digital libraries to reuse information resources could support crisis management applications. In crisis management, for example, information in a GIS or a digital library repository may have to be reused as the basis of a modeling or simulation activity.17 Current digital library systems, however, tend to be designed to facilitate specific classes of use of information stored in the digital library.
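The aggregation pattern described above—fanning a query out to autonomously managed repositories and presenting the merged results as a coherent whole—can be sketched simply. The repository names, contents, and scoring below are illustrative assumptions; real systems would involve network protocols and far richer metadata.

```python
# Illustrative sketch of a distributed query across autonomously managed
# repositories: unreachable sources are skipped rather than failing the
# whole query, and results are merged into one ranked list. Data hypothetical.

def query_repositories(repositories, term):
    """Fan the query out to each repository and merge scored results.
    A repository that raises ConnectionError is recorded as unavailable."""
    merged, unavailable = [], []
    for name, search in repositories.items():
        try:
            merged.extend(search(term))
        except ConnectionError:
            unavailable.append(name)
    # Present the aggregate as a coherent whole: one list ranked by score.
    merged.sort(key=lambda hit: hit[1], reverse=True)
    return merged, unavailable

def county_gis(term):
    return [("parcel map of burn area", 0.9)]

def state_archive(term):
    return [("1991 fire damage report", 0.7), ("shelter roster", 0.4)]

def federal_db(term):
    raise ConnectionError("link down")  # simulated outage

repos = {"county": county_gis, "state": state_archive, "federal": federal_db}
hits, down = query_repositories(repos, "oakland fire")
print([title for title, _ in hits])
print(down)  # → ['federal']
```

Tolerating the unreachable repository rather than aborting is precisely the behavior that differs between the library setting (try again later) and the crisis setting (use the best data available now).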
Information management is at the core of digital library applications. As in crisis management, the digital library user requires access to collections of information scattered among a range of autonomously managed repositories. This information must be processed through sophisticated user interfaces and viewing applications that may offer simulation, visualization, modeling, and related capabilities. Major advances are needed in methods for knowledge representation and interchange, database management and federation, navigation, modeling, and data-driven simulation; in effective approaches to describing large complex networked information resources; and in techniques to support networked information discovery and retrieval in extremely large-scale distributed systems. In addition to near-term operational problems, approaches are also needed to longer-term issues such as the preservation of digital information across generations of storage and processing technology (which evolves quite rapidly) and even information representation standards.
Work on information management approaches for digital libraries has to some extent proceeded on two levels simultaneously, corresponding to different models of how people use the applications. One level deals with what are philosophically extensions of existing, physical libraries. These are characterized by the assumption that a person is the direct consumer of information and is managing the navigation and retrieval processes, using methods analogous to a visit to the library. The other level assumes that the human user is more distant from the actual mechanics and management of the processes of information discovery, retrieval, evaluation, and use; this level deals with intelligent agents, knowledge representation and interchange, shared ontologies, mediators, and related
technologies. Information management technologies are central to both lines of development, but the specific technologies and approaches differ substantially between the two.
To be effective, digital library systems must be user-centered systems. Research is necessary to better characterize the needs and requirements of different classes of (potential) users of digital library systems, and to gain insight into how to adapt systems to specific user needs and behaviors. Although much digital library research has focused on "public" digital library services, public digital libraries form one end of a continuum that also encompasses personal information spaces and work-group or organizational information spaces. Linking digital libraries to personal and work-group information management systems is a central research and design issue—for example, to develop distributed systems for collaborative data exploration. There are also major demands for training and user support, as well as effective management by librarians of information repositories.
As in crisis management, one of the key issues involves information filtering, categorization, and ranking in situations where there is likely to be too much relevant information for the user of the system to cope with. However, the range of information that must be processed in the crisis management context is likely to be more tentative and questionable, and the qualification, authentication, and filtering of information constitute a much more difficult issue. In addition, crisis management has a much more demanding real-time constraint. This largely precludes the benefits of librarians skilled in evaluating and organizing information, whereas digital libraries can draw on a great deal of human or machine preprocessing. In both digital libraries and crisis management, incoming information may sometimes be incomplete, anomalous, suspect, or even actively falsified. The real-time constraint of the crisis management application requires adapting to and compensating for these problems, whereas a digital library can simply defer the data for later human review or confirmation from supplementary input sources.
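The contrast drawn above can be sketched as two handling policies for the same stream of incoming reports: a library-style policy that ranks trusted items and defers suspect ones for later review, and a crisis-style policy that must produce an immediate overview, down-weighting suspect input rather than deferring it. The reports, scores, and thresholds are illustrative assumptions.

```python
# Illustrative sketch: library mode defers suspect items for review;
# crisis mode must summarize immediately, discounting rather than
# deferring low-confidence input. All data and thresholds hypothetical.

def rank_items(items):
    """Library mode: full ranked list of trusted items, suspect deferred."""
    trusted = [i for i in items if i["confidence"] >= 0.5]
    deferred = [i for i in items if i["confidence"] < 0.5]
    trusted.sort(key=lambda i: i["relevance"], reverse=True)
    return trusted, deferred

def crisis_overview(items, top_k=2):
    """Crisis mode: immediate top-k summary, weighting relevance by
    confidence instead of waiting for human confirmation."""
    scored = sorted(items, key=lambda i: i["relevance"] * i["confidence"],
                    reverse=True)
    return [i["text"] for i in scored[:top_k]]

items = [
    {"text": "bridge out on Route 9", "relevance": 0.9, "confidence": 0.8},
    {"text": "unverified gas leak", "relevance": 0.95, "confidence": 0.3},
    {"text": "shelter at capacity", "relevance": 0.6, "confidence": 0.9},
]
print(crisis_overview(items))  # → ['bridge out on Route 9', 'shelter at capacity']
```

The unverified report is the most "relevant" item, yet the crisis overview ranks the corroborated reports first; a library would instead hold it back for a human evaluator.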
In summary, many aspects of crisis management are functionally equivalent to digital library applications, but with real-time processing constraints (to meet urgent deadlines) and a requirement to operate successfully in an environment of questionable data inputs and high penalties for failures or errors.
Electronic Commerce

Electronic commerce involves both retail and wholesale commercial transactions—purchase of goods and services—across networks. These range, for example, from consumer on-line banking services to procurement of parts by manufacturers through electronic data interchange (EDI). Electronic commerce
involves the use of processing and storage resources in multiple locations (both fixed and mobile), owned and managed by a variety of end users, suppliers of goods and services, and go-betweens. Because it comprises fundamental economic activities, electronic commerce cuts across—and is part of the infrastructure for—other application domains. Thus, electronic commerce can enable the procurement of medical supplies or reimbursement by third-party payers in health care, as well as the acquisition of new holdings and transfer of royalties in digital libraries, or the procurement of relief supplies and the filing and processing of insurance claims in crisis management. These examples are more a promise than a reality today, although the number of relevant pilot and actual (if small-scale) programs is growing. Limitations of current electronic commerce implementations include the inability to automate entire transaction processes18 and restrictions on users' choice among payment mechanisms.19 Nevertheless, simply listing the future possibilities illustrates the interrelatedness of national-scale applications and the potential for technical advances in one area to confer broad benefit. Moreover, the effective applicability or extension of electronic commerce to embrace virtually every person and organization that participates in the economy underscores the importance of technology (and standards) to ensure the interoperability of different commercial solutions without stifling technical and service innovation.
Both the nature of electronic commerce, which fundamentally revolves around financial transactions, and its interconnection with other activities make security a paramount concern. Motivations include protection of personal privacy (e.g., personal spending records, health status, preferences), protection against theft and fraud (against individuals and businesses), and protection of the integrity of the systems and of the organizations that use them. Privacy relates not only to unauthorized access to specific items of data, but also to aggregation of separate pieces of information (greatly facilitated by their placement on networks) to yield a sensitive result, such as a marketer's profile of an individual's overall buying habits. The greater exposure of institutions to financial risks will change the business model, which is currently oriented to managing as opposed to eradicating risk.
The importance of system integrity is increasingly seen as national or international in scope: the dependence of financial markets on network-based systems and the network-based interdependence of businesses, industries, and sectors lead many to link economic and national security. For example, a denial-of-service attack on a hypothetical Internet-based gateway handling a large share of U.S. retail credit card transactions would create a crisis; without substantial improvements in the security of gateways, such an attack would be much easier to arrange on the Internet than on the current telephone-based system.
Computing and communications technologies are relevant to both vulnerabilities and countermeasure mechanisms; electronic commerce motivates considerable activity in the development and application of security mechanisms,
concepts for security architecture, and implementation infrastructure (e.g., infrastructure needed to support public-key encryption). For example, commercial transactions require authentication and authorization of users and protection against repudiation of commitments by both buyer and seller. Mechanisms include identification technologies (from passwords to biometrics), digital signatures, and audit trails. Current public-key infrastructure development efforts focus on linking cryptographic keys with specific user identity; a more robust infrastructure would incorporate the notion of a user's privileges, which depend on potentially changeable characteristics such as credit card membership, rank within a company or organization, membership in a frequent-flyer program, U.S. citizenship, and others. This capability is also relevant to crisis management, in which privileges such as authority to access sensitive data may have to be rapidly but securely conferred on specific relief officials.
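The notion of privileges sketched above—authorization based on changeable attributes rather than identity alone, conferrable rapidly but securely on specific relief officials—can be illustrated with a minimal attribute store. The attribute names, users, and expiry mechanism are illustrative assumptions; a real infrastructure would bind such attributes cryptographically to keys or certificates.

```python
# Illustrative sketch of privilege-based (attribute-based) authorization:
# access depends on attributes a user currently holds, and attributes
# granted during a crisis expire automatically. Names hypothetical.

import time

def grant(privileges, user, attribute, ttl_seconds, now=None):
    """Confer an attribute on a user, with an expiry time so that
    access granted during a crisis lapses automatically."""
    now = time.time() if now is None else now
    privileges.setdefault(user, {})[attribute] = now + ttl_seconds

def authorized(privileges, user, required, now=None):
    """A user is authorized only if every required attribute is held
    and unexpired at the time of the check."""
    now = time.time() if now is None else now
    held = privileges.get(user, {})
    return all(held.get(attr, 0) > now for attr in required)

privs = {}
# Confer county GIS read access on a relief officer for one hour.
grant(privs, "relief_officer_7", "county_gis_read", ttl_seconds=3600, now=0)
print(authorized(privs, "relief_officer_7", ["county_gis_read"], now=100))   # → True
print(authorized(privs, "relief_officer_7", ["county_gis_read"], now=4000))  # → False
print(authorized(privs, "bystander", ["county_gis_read"], now=100))          # → False
```

The automatic expiry captures the "rapidly but securely conferred" requirement: access can be opened quickly during a crisis without leaving standing entitlements behind afterward.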
Construction of large commercial software systems, such as those used by banks (and in other domains, for example, manufacturing), continues to face the very difficult, decades-old problem of inefficiency in the programming process. Workshop participants identified overcoming this "programming bottleneck" as an area for continued research, citing approaches such as hardware-platform-independent programming as a source of potential advance.
Bandwidth and architecture are key issues for networking in electronic commerce. Bandwidth currently constrains the introduction of new services. For example, bandwidth for two-way video links between tellers and customers through automated teller machines (ATMs) could allow banks to improve service while reducing the number of bank branches. Increasing the bandwidth to tetherless systems is important if graphics-rich services, like those available through the World Wide Web, are to be ubiquitous.20 As these examples suggest, there is a trade-off between using information retrieval mechanisms that scale the types of information presented to fit the available bandwidth and increasing the available bandwidth to achieve a higher level of service for tetherless and other intrinsically limited-bandwidth access mechanisms. Of course, some transactions, such as account-balance inquiries, require only small amounts of bandwidth, but the concept of "anytime, anywhere" banking and commerce implies a suitably provisioned, broadly deployed fixed infrastructure and support for tetherless access.21
There are two architectural challenges for networks in electronic commerce: accommodating heterogeneity in the commercial environment, which implies a general and flexible architecture, and achieving security in the fullest sense, which includes ensuring reliable and convenient service in the face of unpredictable conditions (e.g., user errors, malicious attacks, mergers and acquisitions that
change entities and their relationships). Although electronic commerce does not face the extremes of demands on and availability of resources that characterize crisis management, the dynamism and ubiquitous scope of the commercial market nevertheless call for adaptive, self-healing networks. These are particularly important in light of the threat of economically motivated attacks on commercial networks to steal services or assets (such as intellectual property, personal information, and electronic funds) or to deny service for malicious ends.
The computation required to support electronic commerce is a function of the kind of transaction and business process being supported—or the aggregate of many kinds. Broad experimentation has already begun to test the relative merits of micropayment, transaction aggregation, service subscription, and other models for electronic commerce. Daniel Schutzer, of Citibank, observed that computational performance in distributed systems (including communications and storage as well as processing cycles) currently constrains the ability to perform commercial transactions at very low cost, which is necessary if a market for microtransactions (goods and services purchased for cents or fractions of cents) is to emerge.
Within the confines of a single institution such as a bank, traditional, highly computation-intensive tasks such as transaction processing and fraud detection (through identifying purchasing anomalies) benefit from continued improvements in distributed computing. Emerging requirements imply still further support; real-time pattern and anomaly detection for deterring fraud, for example, poses the key challenge of obtaining useful results from massive amounts of data that come from unknown sources and are of unknown reliability. The widespread experimentation with software agents, such as brokers that search across and evaluate a wide range of suppliers, is beginning to raise questions about qualitative changes to existing computing system architectures. Brokers imply a cross-service lookup problem that is emerging in other domains as well and is a special emphasis in digital library research. Network-distributed catalogs, directories, and independent appraisal services (such as those of Consumers Union) could also aid resource discovery, as could scalable, network-wide advertising mechanisms.
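As a simple illustration of the anomaly-detection idea (a sketch, not any institution's actual method), a purchase can be flagged when it deviates sharply from a cardholder's spending history; production systems use far richer models, but the principle is the same:

```python
import statistics

def flag_anomalies(history, new_amounts, threshold=3.0):
    """Return the new purchase amounts lying more than `threshold` standard
    deviations from the mean of the cardholder's history."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [a for a in new_amounts if abs(a - mean) > threshold * stdev]

# Hypothetical spending history for one cardholder (dollar amounts).
history = [12.50, 8.00, 15.25, 9.75, 11.00, 14.10, 10.40]
print(flag_anomalies(history, [13.00, 950.00]))  # [950.0]
```

The real-time and data-quality challenges noted above arise because the "history" is never this clean: it must be assembled on the fly from massive transaction streams of uneven reliability.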
Workshop participants also noted that simulation and modeling of firm and user behavior in large-scale commercial systems, such as banking and retail, may help smooth the deployment of electronic commerce applications, to the extent that important aspects of integrating technology into organizations can be simulated and tested prior to full-scale deployment. This demand—and the difficulty of fulfilling it—is similar to the call, noted above, for more realistic modeling of human and organizational behavior in crisis management training and operational exercises.
For end users to benefit from many electronic commerce services, they will need to be able to locate and learn about them. This requires improved information search and retrieval mechanisms that are usable across differing kinds and capabilities of equipment. As in other application areas, this implies addressing complex challenges in the management of distributed information resources, including distributed file and program synchronization and replication, and tools such as Web servers and Web searchers.
The extreme heterogeneity of electronic commerce implies a great concern for data standards that support information management tools and facilitate interfaces among planning and design, provisioning, production, and business systems (e.g., inventory, ordering, billing, fulfillment, and customer support). Support for multiple media, including images, sound, video, and hypertext, implies the need for continued development not only of standards for interpreting graphical and nongraphical data formats, but also of mechanisms for adapting to different quality demands (e.g., image compression) and access capabilities (end-user access and storage devices and communications links).
Because it is unrealistic to expect that all users would shift to any single set of standards, whether a current or a new one, a major challenge in electronic commerce is incorporating legacy systems, such as databases and communications systems in differing or outmoded formats.
The development of easy-to-use tools and other methods for locating information and other resources, conducting transactions, and implementing security, among other needs, is as significant for electronic commerce as for crisis management and other domains because of the expectation of involving people without significant technical training. The history of automation in retail banking (e.g., ATMs) attests to the recognition that consumers often need to be convinced that a new system is an improvement, and convenience or transparency of user interfaces and processes is a major part of winning that acceptance. The growing need for system security is the area in which this practical reality is most likely to be challenged: achieving better authentication of users will place a premium on methods that are effective yet do not overly inconvenience customers (suggesting possibly greater interest in physical tokens and biometrics as opposed to personal identification numbers).
Computing and communications are enabling significant changes in manufacturing. These relate to a very broad range of capabilities and functions, from
initial design to delivery of products. Many aspects are captured in the concept of highly collaborative design and manufacturing by distributed "virtual corporations." Such enterprises use information technology to enable them to design and manufacture products in rapid response to customer demand. Discussions in the workshop series addressed mainly this aspect of manufacturing. Among the technological requirements that this perspective illuminates are networked computing and information resources to support collaborative design; virtual reality "test drives" that allow customer input to the design process beginning early in product development, when changes are easier and less costly to implement; and simulating the entire manufacturing process so designs can be optimized to make products that are higher in quality and faster and less costly to produce. (It should be recognized that although this view of integrated design and manufacturing presents a fairly broad perspective on manufacturing applications, there are many other issues that are more closely oriented toward production per se, such as robotic monitoring and control of assembly lines, plant capacity management, inventory management, and automated inspection for quality control, among others.)22
Manufacturing begins with design; high performance in computation, storage, and networking is important to support rapid design, as well as redesign and customization based on past designs. In the past, much effort in manufacturing complex systems such as automobiles and aircraft has been spent on improving performance parameters (e.g., speed, range, altitude, size). These are still recognized as critical under extreme conditions, but more generally they form a design framework that is a minimum requirement. Today the key design criterion for manufacturing is competitiveness, including time to market and total affordability (CSTB, 1995b). Thus, design is far from the entire story; concurrent engineering involves a whole corporate information infrastructure, integrating the different component disciplines such as design, manufacturing, and product life-cycle support. Each of these presents its own challenges; manufacturing process optimization, for example, requires complex, multidimensional modeling and analysis. Simulation of manufacturing and assembly layout, logistics (material in, finished goods out), production flow, and material and process variability are additional computation- and data-intensive activities.
It is worth noting that less than 5 percent of the initial development costs of the Boeing 777 aircraft were incurred in computational fluid dynamics (CFD) airflow simulations—a classic Grand Challenge in this field (see Appendix B); more than 50 percent of these development costs could be attributed to overall systems issues. Thus, from the perspective of improving manufacturing efficiency, it is useful but not sufficient to advance the Grand Challenge application of high-performance computing for large-scale CFD. If only 5 percent of a problem is addressed with high-performance computing, one can at best influence fundamental goals such as affordability and time to market by this small amount.23 Computing must be fully integrated into the entire engineering
enterprise to be effective. However, the difficulty of integrating across these engineering functions is far from trivial. As David Jack, of the Boeing Company, said,
Rationally we should be designing [a Boeing plane] from the tools [already installed] to reduce the manufacturing costs. We have some codes which we use for simulating the tooling. They tend to be rule-based. I haven't seen any clever way of handling those rules where the same rule may be used in configuring the airplane as is used in building the airplane. And you have got that huge logistical gap between the two. If you change one rule, does it change the other one? How do you manage that information? That's a problem that we're only starting to scratch up against.
Simulation for prototyping purposes could yield more useful results if integrated with both virtual and actual tools that are to be used in production. Randy Katz, then of the Defense Advanced Research Projects Agency, discussed computational prototyping as
. . . the ultimate dream of hyper-simulation that has been with the computer-aided design community for the last 40 years: the idea that you could have specialized accelerator hardware that could run simulations for you, [located] at special places across the network. You might include in your simulation actual processing equipment (e.g., ovens, furnaces and photolithography equipment); they will be connected, have a network interface on them. You'll like to be able to understand whether you can build a particular semiconductor process from end to end where some of the equipment exists, some is being designed, the process itself is being designed, combining a capability for simulation with the actual use of hardware devices that may exist.
There are a lot of discovery, linkage, conversion, authentication, payment kinds of issues that take place in this kind of environment. You have to find the service providers . . . [and] be able to have assurances about intellectual property rights, just as you would with anything else you might decide to publish which could be copied and handed out without your knowledge. And, of course, you would like the use of these specialized pieces of equipment to be fee-for-service.
Although the design phase is not itself a major cost item, decisions made at this stage lock in most of the full life-cycle cost of an aircraft, with perhaps 80 percent of total cost split roughly equally between maintenance and manufacturing. Thus, computational analysis should be applied in the design phase not only to optimize the product's performance parameters, but also to shorten the design and development cycle itself (reducing time to market) and to lower the later ongoing costs of manufacturing and maintenance.
A hypothetical scenario from aircraft design illustrates how the integrated, design-for-manufacturability approach to engineering demands advances in computing and communications. The example considers design of a future military aircraft, perhaps 10 years in the future. This analysis is taken from a set of NASA-sponsored activities centered on a study of the Affordable Systems
Optimization Process (ASOP), which involved an industrial team including Rockwell International, Northrop Grumman, McDonnell Douglas, General Electric, and General Motors.24 ASOP is one of several possible approaches to multidisciplinary analysis and design (MAD) and the results of the study should be generally valid for these other approaches. ASOP is designed as a software backplane (distributed across the nation) linking eight major services or modules. These are the design (process controller) engine; visualization toolkit; optimization engine; simulation engine; process (manufacturing, producibility, supportability) modeling toolkit; costing toolkit; analytic modeling toolkit; and geometry toolkit. These are linked to a set of databases defining both the product and the component properties. The hypothetical aircraft design and construction project could involve 6 major companies and 20,000 smaller subcontractors. This impressive virtual corporation would be very geographically dispersed on both a national and, probably, an international scale. The project could involve some 50 engineers at the first conceptual design phase. The later preliminary and detailed design stages could involve 200 and 2,000 engineers, respectively.
The design would be fully electronic and would demand major computing, information systems, and networking resources. For example, some 10,000 separate programs would be involved in the design. These would range from a parallel CFD airflow simulation around the plane to an expert system to plan location of an inspection port to optimize maintainability. There is a correspondingly wide range of computing platforms, from personal computers to high-performance systems, and a range of languages, from spreadsheets to High Performance Fortran. The integrated multidisciplinary optimization does not involve linking all these programs together blindly, but rather carrying out a large number of suboptimizations, each involving a small cluster of base programs at any one time. However, these clusters could well require linking geographically separated computing and information systems.
Because an aircraft is a system that must function with very high reliability, strict coordination and control of the many different components of the aircraft design are needed. In the ASOP model, there will be a master systems database with which all activities are synchronized at regular intervals, perhaps every month. The clustered suboptimizations represent a set of limited excursions from this base design, managed in a loosely synchronous fashion on a monthly basis. The configuration management and database systems are both critical and represent a major difference between manufacturing and crisis management; in the latter case, a real-time "as good as you can do" response is more important than a set of precisely controlled activities.
Intra- and interfirm collaboration among engineers and linked simulations and databases requires reliable, secure, and interoperable communications. The
need for simulations to exchange large proprietary datasets leads to major requirements on both security and bandwidth for the communications infrastructure. Integrating actual tools together with virtual ones poses a specific research challenge for new control protocols that behave in predictable, understood ways across the actual-virtual boundary. More generally, information infrastructure supporting communication both between collaborating firms and within firms (e.g., manufacturing process control) is crucial to enabling the agile, distributed style of manufacturing envisioned in this section.
The computing resource for multidimensional optimization reflected in the ASOP scenario requires linkage of a wide variety of distributed machines ranging from small to large systems. This area is a severe test for metacomputing systems that support the synchronization and linkage of heterogeneous computing devices. These distributed simulations must be linked to the many databases involved in design and to the engineers making design decisions. Availability and performance requirements of distributed resources are likely to be much more predictable and stable than in the crisis management context; nevertheless, the ease of setting up operational systems across organizational boundaries is a challenge to the success of distributed, collaborative projects.
In manufacturing, there is a very structured set of databases that needs to be reliably interfaced with work flow, configuration management, and other tools. Crisis management, by contrast, emphasizes good interfaces to unanticipated databases. Manufacturing databases need to have high-performance capabilities when used to drive or support simulations. Critical to the successful linkage of many corporations with (logically if not physically) central information systems is the use of standards both in system (software) interfaces and in product data definitions. In the latter case, there could be some useful interactions between information technology standards development activities, such as Virtual Reality Modeling Language (VRML) for three-dimensional object representation, and industrial production standards development such as PDES/STEP (Product Data Exchange using the Standard for the Exchange of Product model data; CSTB, 1995b).
Another critical problem in ASOP is integrating legacy systems. It is not economically reasonable to assume that industry will rewrite from scratch the large number of existing programs (10,000 in the scenario above), nor will firms rebuild all their databases to new information infrastructure standards. Using these resources across a broadly deployed information infrastructure requires advances in general-purpose, easily configurable technology for software
integration (discussed in Chapter 2), for example, to take existing codes in multiple languages (e.g., Fortran, C, Lisp, Excel) and integrate them into a single, distributed system.
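One common approach to such integration, sketched here under illustrative names, wraps each legacy code behind a uniform adapter interface rather than rewriting it: routines already callable from the host environment are wrapped in-process, while executables in other languages are invoked through the operating system. The sketch is a design illustration, not a description of any actual ASOP component:

```python
import subprocess
from abc import ABC, abstractmethod

class LegacyTool(ABC):
    """Uniform interface presented to the integrated, distributed system."""
    @abstractmethod
    def run(self, inputs: dict) -> dict: ...

class InProcessTool(LegacyTool):
    """Wraps a legacy routine already callable from the host language."""
    def __init__(self, fn):
        self.fn = fn
    def run(self, inputs):
        return {"result": self.fn(**inputs)}

class ExternalTool(LegacyTool):
    """Wraps, e.g., a Fortran executable invoked through the operating system."""
    def __init__(self, command):
        self.command = command
    def run(self, inputs):
        proc = subprocess.run(self.command + [str(v) for v in inputs.values()],
                              capture_output=True, text=True, check=True)
        return {"result": proc.stdout.strip()}

def pipeline(tools, inputs):
    # Heterogeneous tools can now be chained through one interface.
    for tool in tools:
        inputs = tool.run(inputs)
    return inputs

doubler = InProcessTool(lambda result: result * 2)
print(pipeline([doubler, doubler], {"result": 3}))  # {'result': 12}
```

The hard problems noted in the text (data format conversion, distribution, configuration management) live behind this interface; the adapter pattern only ensures that they are solved once per tool rather than once per pair of tools.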
Both in crisis management and in manufacturing, critical decisions are made from composite systems involving humans, computers, and information systems. In crisis management, the emphasis is on intuitive judgment making with incomplete information. Manufacturing also requires good judgment for decision makers, but it represents a more classic decision support context that supplies engineers with information targeted very precisely at well-defined questions. These decisions need to be made by collaborations of geographically distributed engineers. This implies a need for collaboratory systems that link people and the information they need to make decisions.
Computing and communications increasingly affect health care in many different forms. Among those discussed in the workshop series were direct patient care, medical research, development of new medical technologies, and management of financial and other aspects of health services. Health care will continue to be administered by a diverse collection of providers working in a very large number of geographical settings. The health care system in the future likely will be characterized by (1) integration of widespread databases; (2) digitization of most health care data modalities (e.g., x-rays, magnetic resonance imaging (MRI)), allowing their transmission across networks; and (3) increased application of telemedicine. Health care providers will need to discover and access information from many sites in order to be able to put together a comprehensive description of a patient's medical history. Although perhaps to a lesser extent than in crisis management, there are significant variability and unpredictability in both the types of information that must be obtained (text records, handwritten notes, medical imagery) and their location. For example, an integrated health care information infrastructure will be able to give providers ready access to an accurate and detailed account of a patient's medical history. Networked access could compensate for the current, almost complete lack of access to patient medical records in some kinds of crises, such as large natural disasters. At the same time, however, the infrastructure must protect the security and confidentiality of personal information.
Medical decision support systems are increasingly used to help providers identify and evaluate different diagnostic workups and treatment plans. The ability to easily obtain large sets of longitudinal patient records will greatly facilitate meaningful comparative analysis for clinical care
and for health science and clinical research. Medical researchers and health care system administrators need to link multiple patient databases to one another and to auxiliary databases used to define such items as hospital facilities and procedures. Data must be encoded in a reasonably uniform fashion using standard vocabularies being developed—in the face of great challenges in achieving consensus among diverse parties—by the health care industry and medical informatics communities with the National Library of Medicine. Delays in formulating and agreeing on these standard vocabularies are part of the implementation context for health care computing and communications, and they are indicative of the challenge of hammering out a consensus on standards in most national-scale application areas.
The health care-specific applications of networking revolve around telemedicine. Telemedicine will enable remote consultation with individuals in their homes (an advantage for both mobility-impaired and rural patients) and with remote specialists. Telemedicine should support not only voice and video communications, but also real-time data from a range of medical sensors such as heart monitors and blood chemistry analyzers. Although the bandwidth requirements associated with textual medical record information are modest, digitization of most health care modalities will lead to increasing bandwidth requirements. The need to deliver data to remote computing resources for processing and integration in real time also adds complexity to the management of the overall application—for example, reconciling, on one hand, the requirements of voice communications for low latency even at the expense of reduced quality with, on the other hand, sensor data that may require low-noise characteristics to be useful. Integrating real-time sensor data—including data from field-deployed sensors, as in telemedicine—into a continuously updated patient record is another potentially valuable application.
In addition to bandwidth and service requirements, difficult security issues arise because of the confidential nature of health care records and the potentially large number of health care providers who have a need to know about particular aspects of a patient's medical record. Strong guarantees of privacy, protection, and authentication will be required.26 New models of access control are needed to accommodate emergency "need-to-know" circumstances while still protecting privacy. The type of de facto protection afforded by the current health care system, which still is based largely on paper and disconnected computer systems, will diminish as medical information is placed on networks and powerful information location and retrieval mechanisms become available.
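One candidate model for emergency need-to-know circumstances, often called "break-glass" access, permits access outside a provider's normal authorization during a declared emergency but records every such override for after-the-fact audit. The sketch below is illustrative, with hypothetical names; a real system would authenticate the requester and protect the audit trail itself:

```python
from datetime import datetime

class MedicalRecordStore:
    def __init__(self, authorized):
        self.authorized = authorized      # {patient_id: {provider_id, ...}}
        self.audit_log = []

    def read(self, provider, patient, emergency=False):
        if provider in self.authorized.get(patient, set()):
            return f"record for {patient}"
        if emergency:
            # Emergency override: grant access, but leave an indelible audit trail.
            self.audit_log.append((datetime.now(), provider, patient, "EMERGENCY"))
            return f"record for {patient}"
        raise PermissionError(f"{provider} may not view {patient}'s record")

store = MedicalRecordStore({"patient-7": {"dr-lee"}})
print(store.read("dr-lee", "patient-7"))                   # normal access
print(store.read("dr-shah", "patient-7", emergency=True))  # logged override
print(len(store.audit_log))                                # 1
```

The model trades prior restraint for accountability: deterrence rests on the certainty that every emergency access will be reviewed.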
There are important similarities between the requirements associated with emergency health care and crisis management. Network management must cope with near-real-time constraints that arise in emergency situations. Priority schemes must give precedence to queries related to caring for emergency and critical care patients. During applications that are critical to life (such as image processing or expert assistance during surgery), uninterrupted, reliable service is vital. If the computational and network resources used for these applications are simultaneously in use by other applications, mechanisms must be in place to prevent denial of service due to resource limits.
The ability to generate large databases of longitudinal clinical records, combined with substantial computational resources, will enable statistically meaningful comparative analysis for clinical care and health science research. This analysis could enable identification of medically distinct models and templates to describe diagnostic workups and care plans, thereby improving the efficiency and effectiveness of health care. Secure methods are required, however, to disaggregate the information needed for such analysis from data that could be used to identify individuals.
Routine testing is another potentially important computational demand. There are a number of high-volume, computationally intensive image screening applications (such as mammograms and Pap smears) in which semiautomated, well-implemented image processing methods could have a strong positive impact on efficiency and accuracy. Although real-time processing is not critical in this area, the huge volume of data to be processed imposes serious requirements for computational power. In addition, whereas some routine testing examples would simply involve the analysis of individual acquisitions, more robust methods would also include database acquisition and manipulation. One potentially valuable example is the use of change detection algorithms in mammography, in which a current scan is normalized and registered to a previously acquired scan of the patient; then the two are compared to highlight potential differences. Such an application would be enhanced further by the ability to register a new scan automatically to a canonical (standard healthy) reference or atlas, including estimating the deformation of the scan to account for patient variability. By registering to an atlas, any detected anatomical changes could be interpreted further based on knowledge of the tissue type associated with the matched portion of the atlas. Image processing is of course just one of many potential data inputs about patients that could benefit from this type of semiautomation. Computer-based patient status tracking, automatic record updating, and detection of changes and anomalies could be applied across a wide range of medical sensor inputs as well as clinical observations by health practitioners.
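The change-detection pipeline described above (normalize, register, compare) can be sketched in toy form. Real mammographic registration involves nonrigid deformation and atlas matching far beyond this illustration; here registration is reduced to a one-dimensional integer shift search, purely to show how the stages fit together:

```python
def normalize(image):
    # Rescale intensities to [0, 1] so scans acquired differently are comparable.
    lo, hi = min(min(r) for r in image), max(max(r) for r in image)
    return [[(p - lo) / (hi - lo or 1) for p in row] for row in image]

def shift(image, dx):
    # Shift columns by dx, padding with zeros (toy 1-D registration).
    w = len(image[0])
    return [[row[c - dx] if 0 <= c - dx < w else 0.0 for c in range(w)]
            for row in image]

def best_shift(prior, current, max_dx=2):
    # Pick the shift minimizing the sum of squared differences.
    def sse(a, b):
        return sum((pa - pb) ** 2 for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))
    return min(range(-max_dx, max_dx + 1), key=lambda d: sse(prior, shift(current, d)))

def changes(prior, current, threshold=0.5):
    prior, current = normalize(prior), normalize(current)
    current = shift(current, best_shift(prior, current))
    return [(r, c) for r, row in enumerate(prior) for c, p in enumerate(row)
            if abs(current[r][c] - p) > threshold]

prior   = [[0, 0, 10, 0], [0, 0, 10, 0]]
current = [[0, 10, 0, 0], [0, 10, 9, 0]]  # same structure shifted left, plus one new spot
print(changes(prior, current))  # [(1, 3)]
```

Registering instead to a canonical atlas, as suggested above, would let any flagged change also be labeled with the tissue type at the matched atlas location.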
Significant computational challenges arise in the context of areas associated with integrating robotics and image processing. The medical community increasingly seeks minimally invasive surgical procedures, with the expected benefits of reduced complications, reduced trauma for the patients, and reduced length of
hospital stays, leading to reduced costs and an increased quality of life for patients. More effective use of minimally invasive procedures requires improvements in automatic or semiautomatic methods to localize anatomical structures for the surgeon and to facilitate presurgical planning. These methods should also support navigation of devices (by robot or surgeon) within the body and delivery of treatment and procedures in minimally invasive ways.
One example of a significant computational challenge is enhanced reality visualization, in which segmented and labeled anatomical models, acquired through three-dimensional medical sensors (such as MRI and computerized tomography (CT)), are automatically registered with the patient and displayed to the surgeon in a superimposed visualization showing internal structures directly overlaid on the patient, from the correct viewpoint. Ideally, such structures would be tracked and their registration refined over time, to maintain a consistent visualization as the surgeon changes view, the patient moves, and the patient's tissues deform. This problem is particularly relevant in endoscopic applications, where the surgeon has a limited field of view and navigation and localization become critically important.
A second challenge is the use of robotic devices to assist a surgeon.27 Such devices include remote manipulation and tactile feedback devices for palpation of internal tissue, systems to deliver surgical tools and procedures to inaccessible locations (e.g., in sinus surgery), and tools to improve the accuracy and reliability of surgical procedures. Key computing requirements in these applications are real-time processing, high-bandwidth data storage and retrieval, and computational and data reliability.
The creation of new medical devices can benefit from more extensive use of computer simulation. Simulations can reduce the time required to complete a design as well as the time needed for testing. With good three-dimensional models, the designer can evaluate the effect of various device parameters in its future physiological environment. For example, the ability to perform accurate simulations of blood flow through the heart with an artificial valve would help in the design of such devices. High-performance computing could allow the implementation of a more accurate model of the heart and greatly reduce the time it takes to perform such a complex simulation. Computational chemistry and molecular modeling are being applied to drug design, with scope for continued improvement as greater computing resources become available.
There are potentially important overlaps between the types of computations that need to be carried out in the contexts of health care and crisis management. Both application areas make significant use of sensor data, and both will potentially benefit from different forms of data fusion. Both areas can benefit from increased use of simulation. Because medical care is an important facet of crisis management, the ability to access patient records would also be of potential use to crisis managers in providing postdisaster medical care. If crisis managers have information about the individuals affected by a disaster, an ability to access their
longitudinal medical records could be used to help prioritize relief efforts by determining which individuals might have preexisting conditions requiring special attention.
To coordinate patient care, it is necessary to be able to integrate inputs reliably from a subset of a very large number of heterogeneous databases. It should be possible to construct longitudinal medical records recording the care and health of each individual by discovering and integrating distributed information obtained from multiple health care providers. Resource discovery is an important need because, in many cases, neither patients nor providers are able to recall or locate key past health care providers. There is also a need to locate representative case histories for comparative purposes. Although some of these tasks can be performed in advance of emergencies, this is not always possible. In addition, integrating medical sensor data to update patient status adds further complexity and real-time constraints. The real-time character of medical emergencies (particularly if they occur in the large-scale context of a disaster or other crisis) highlights the importance of the efficiency of these resource discovery and retrieval mechanisms.
Currently only a small fraction of electronically stored medical data is in a form that is readily usable in automated clinical analyses, such as studies of treatment effectiveness. This situation will change as current practice improves and the health care community moves from computer databases that are largely oriented toward billing to databases aimed at recording information relevant to observing and improving individuals' care and health. The ability to obtain and process large sets of longitudinal patient records would greatly facilitate meaningful comparative analyses, both for clinical care and for health science and clinical research. A range of architectural approaches is available for aggregating data for use in health systems research and in epidemiological studies. At one extreme is World Wide Web technology with knowledge agents accessing the database, which itself is in distributed form. The other extreme involves the occasional collection of needed information into a central aggregated database, which is then mined. (A centralized database incorporating medical records of everyone in the United States would be infeasible with current technology,28 and so this should be understood as an extreme example, beyond current capabilities.) Intermediate solutions correspond to generalizations of data-caching strategies familiar in parallel and distributed computing (e.g., dividing the data and storing each part closest to where it will be needed for access or processing).
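The two ends of this architectural spectrum, and the intermediate data-placement strategy, can be sketched as follows. The record shape and the site-assignment function are illustrative assumptions, not a description of any deployed system:

```python
from collections import defaultdict

def partition_by_site(records, site_of):
    """Divide a data set so that each part can be stored closest to the
    site expected to access or process it: a simple placement strategy
    intermediate between fully distributed and fully centralized."""
    partitions = defaultdict(list)
    for rec in records:
        partitions[site_of(rec)].append(rec)  # place near likely users
    return dict(partitions)

def central_aggregate(partitions):
    """Occasional pull of all partitions into one collection for
    mining, corresponding to the centralized end of the spectrum."""
    return [rec for part in partitions.values() for rec in part]
```

In practice, `site_of` would encode knowledge about expected access patterns, such as where a patient usually receives care.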
Aggregating patient records for health research raises problems of maintaining the privacy of personal information, because it is difficult to sanitize patient records by removing all data that could disclose a patient's identity (including telephone numbers, addresses, birth dates, and other identifiers). These problems become more complex if the subsequent re-identification of individuals, by linking these data with other information sources such as financial records, is also to be deterred. These threats indicate opportunities for technological advances to help prevent such compromises of privacy while facilitating legitimate research (IOM, 1994).
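A minimal sketch of the field-removal step, and of why it is insufficient on its own, might look like the following. The field names are hypothetical:

```python
# Hypothetical direct identifiers, for illustration only.
DIRECT_IDENTIFIERS = {"name", "telephone", "address", "birth_date"}

def strip_direct_identifiers(record):
    """Remove the obvious identifying fields from a patient record.
    Quasi-identifiers that survive (e.g., postal code or dates of
    treatment) may still permit re-identification when linked with
    outside data sources, which is why field removal alone does not
    guarantee privacy."""
    return {field: value for field, value in record.items()
            if field not in DIRECT_IDENTIFIERS}
```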
Both health care and crisis management share a need to search a heterogeneous collection of databases, although in the health care context it usually is not necessary to access databases that have unanticipated qualitative features. Both emergency health care and crisis management also raise analogous security and policy issues associated with the need to access crucial information rapidly without incurring significant security-related delays.
An integrated health care information infrastructure would be capable of giving providers ready access to an accurate and detailed account of a patient's medical history. However, this information is useful only if the caregiver can readily obtain and understand critical information, especially during emergencies. Significant, continued research efforts are needed to improve both the caregiver's ease of using medical information systems and the ease with which caregivers may insert new clinical information electronically into patient records. These efforts involve issues both within and outside computing and communications technology. Examples of the former include user interfaces, natural language processing, and handwriting recognition, whereas broader implementation contexts might include incorporating informatics into medical school curricula.
Even with access to all available information, health care providers are often faced with, and are trained for, making intuitive decisions when the available information is incomplete. Economic pressures in the health care industry, however, have created a need for providers to justify the medical treatment they provide. This pressure is spurring research into the development of health care decision support systems. One important need underlying the development of such systems is for standard encoding schemes to represent care plans and diseases. (This is not only a problem of finding technically optimal encoding schemes; as noted above, there are also challenges in reaching consensus among diverse parties about what names to use to distinguish various diseases, conditions, treatments, and the like.) These techniques should support the development of process representations, the automatic detection of processes from database records, and the identification of similar process representations. This need is analogous to crisis managers' need for support in making judgments, but with less unpredictability about the types of decisions that must be made, and therefore greater ability to tailor rule-based decision support systems toward specific questions.
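A rule-based decision support system tailored to specific questions can be sketched as a list of condition-alert pairs evaluated against a patient record. The condition codes, field names, and rules below are invented for illustration; a real system would rely on an agreed standard coding scheme, which is itself the consensus problem noted above:

```python
# Invented rules keyed on hypothetical standardized condition codes.
RULES = [
    (lambda pt: "anticoagulant" in pt["medications"]
                and "aspirin" in pt["proposed"],
     "Possible interaction: anticoagulant with proposed aspirin."),
    (lambda pt: "renal_impairment" in pt["conditions"]
                and "contrast_imaging" in pt["proposed"],
     "Contrast imaging may be contraindicated with renal impairment."),
]

def evaluate(patient):
    """Return the alert text of every rule that matches the record."""
    return [alert for matches, alert in RULES if matches(patient)]
```

Because the questions are known in advance, each rule can be reviewed and justified individually, which is what distinguishes this setting from the more open-ended judgments faced by crisis managers.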
Health care would also benefit from increased deployment of remote
collaboration technologies optimized for telemedicine, teleradiology, and perhaps telesurgery, along with remote sensing mechanisms to facilitate remote physical examinations. Effective use of these tools requires not only bandwidth and security, but also effective shared environments for communicating and working collaboratively with information about patients and resources. There is a strong overlap between this application need and crisis management, where the expertise and equipment for health care delivery may be damaged or remote from the crisis location.
For further discussion of information technology costs, training needs, and usage patterns in civilian crisis management organizations, see Drabek (1991).
For detailed discussions of the importance of deployment and feedback from actual users in the design and development of information technologies, see Landauer (1995) and CSTB (1994a, pp. 181-184).
See also the NCS's GETS home page, http://188.8.131.52/~nc-pp/html/gets.htm.
Civilian relief agencies sometimes call upon U.S. military units to deploy similar capabilities.
Available from NI/USR home page, http://niusr.org/vision.html.
The Consequences Assessment Tool uses a model to predict damage from high winds that is adapted from a nuclear blast effects model developed by the Defense Nuclear Agency. The assessment tool is described in detail in Linz and Bryant (1994).
For additional details, see NOAA HPCC home page, http://hpcc1.hpcc.noaa.gov/hpcc.
Workshop participants observed that good judgments require not only access to information, but also a good general education on the part of judgment makers.
See Drabek (1991) for results of a detailed investigation of the relationship between training and information technology use in crisis management organizations.
Incorporation of video and sound into Web pages increases the richness of the content provided, but also increases the bandwidth required for access.
Networks among ATMs involve links with known and stable locations and relatively predictable load patterns (unlike the networks needed for crisis management).
For a more complete overview, see CSTB (1995b).
This illustrates what might be called "Amdahl's law for practical HPCC." For a classic discussion of key principles, see Amdahl (1967).
For a detailed description, see Syracuse University and Multidisciplinary Analysis and Design Industrial Consortium Team 2 (1995).
Workshop series participant Joel Saltz, of the University of Maryland, made valuable contributions to this section. For a discussion of these research issues in greater depth and breadth, see Davis et al. (1995).
For a discussion of medical record privacy issues in a networked environment, see IOM (1994).
Robots may find application in other elements of health care, such as handling and inspection of clinical or research specimens.