
1
Overview and Recommendations

We are at risk. Increasingly, America depends on computers. They control power delivery, communications, aviation, and financial services. They are used to store vital information, from medical records to business plans to criminal records. Although we trust them, they are vulnerable—to the effects of poor design and insufficient quality control, to accident, and perhaps most alarmingly, to deliberate attack. The modern thief can steal more with a computer than with a gun. Tomorrow's terrorist may be able to do more damage with a keyboard than with a bomb.

To date, we have been remarkably lucky. Yes, there has been theft of money and information, although how much has been stolen is impossible to know.1 Yes, lives have been lost because of computer errors. Yes, computer failures have disrupted communication and financial systems. But, as far as we can tell, there has been no successful systematic attempt to subvert any of our critical computing systems. Unfortunately, there is reason to believe that our luck will soon run out. Thus far we have relied on the absence of malicious people who are both capable and motivated. We can no longer do so. We must instead attempt to build computer systems that are secure and trustworthy.

In this report, the committee considers the degree to which a computer system and the information it holds can be protected and preserved. This requirement, referred to here as computer security, is a broad concept; security can be compromised by bad system design, imperfect implementation, or weak administration of procedures, as well as by accidents, which can facilitate attacks. Of course, if we are to trust our systems, they must survive accidents as well as attack. Security supports overall trustworthiness, and vice versa.

COMPUTER SYSTEM SECURITY CONCERNS

Security is a concern of organizations with assets that are controlled by computer systems. By accessing or altering data, an attacker can steal tangible assets or lead an organization to take actions it would not otherwise take. By merely examining data, an attacker can gain a competitive advantage, without the owner of the data being any the wiser.

Computer security is also a concern of individuals, including many who neither use nor possess computer systems (Box 1.1). If data can be accessed improperly, or if systems lack adequate safeguards, harm may come not only to the owner of the data, but also to those to whom the data refers. The volume and nature of computerized databases mean that most of us run the risk of having our privacy violated in serious ways. This is particularly worrisome, since those in a position to protect our privacy may have little incentive to do so (Turn, 1990).

The threats to U.S. computer systems are international, and sometimes also political. The international nature of military and intelligence threats has always been recognized and addressed by the U.S. government. But a broader international threat to U.S. information resources is emerging with the proliferation of international computer networking—involving systems for researchers, companies, and other organizations and individuals—and a shift from conventional military conflict to economic competition.2 The concentration of information and economic activity in computer systems makes those systems an attractive target to hostile entities. This prospect raises questions about the intersection of economic and national security interests and the design of appropriate security strategies for the public and private sectors. Finally, politically motivated attacks may also target a new class of system that is neither commercial nor military: computerized voting systems.3

Outside of the government, attention to computer and communications security has been episodic and fragmented. It has grown by spurts in response to highly publicized events, such as the politically motivated attacks on computer centers in the 1960s and 1970s and the more recent rash of computer viruses and penetrations of networked computer systems.4 Commercial organizations have typically concentrated on abuses by individuals authorized to use their systems; those systems generally have a level of security that prevents only the most straightforward attacks.

BOX 1.1 SAMPLER OF COMPUTER SYSTEM PROBLEMS: EVIDENCE OF INADEQUATE TRUSTWORTHINESS

Failures of system reliability, safety, or security are increasingly serious—and apparently increasing in number. Notable are the following:

  • A $259 million Volkswagen currency exchange scam involving phony transactions;

  • The nearly successful attempt to use thousands of phony Bank of America automatic teller machine cards fabricated with personal identification numbers pirated from an on-line database;

  • An almost-successful $15.2 million Pennsylvania Lottery fraud attempt in which the database of unclaimed ticket numbers was used in the fabrication of a ticket about to expire; and

  • Thousands of reported virus attacks and hundreds of different viruses identified (e.g., Stoned, Devil's Dance, 1260, Jerusalem, Yankee Doodle, Pakistani Brain, Icelandic-2, Ping Pong, December 24, to cite just a few).

Penetrations and disruptions of communication systems appear to be increasing:

  • A software design error freezing much of AT&T's long-distance network;

  • The German Chaos Computer Club break-ins to the National Aeronautics and Space Administration's Space Physics Analysis Network;

  • The West German Wily Hacker attacks (involving international espionage) on Lawrence Berkeley Laboratory;

  • The Internet worm incident in which several thousand computers were penetrated; and

  • Several takeovers of TV satellite up-links.

Individual privacy has been compromised. For example, deficient security measures at major credit agencies have allowed browsing and surreptitious assignment of thousands of individuals' credit histories to others.

Health care has been jeopardized by inadequate system quality as well as by breaches of security:

  • An error in the computer software controlling a radiation therapy machine, a Therac 25 linear accelerator, resulted in at least three separate patient deaths when doses were administered that were more than 100 times the typical treatment dose.

  • A Michigan hospital reported that its patient information had been scrambled or altered by a virus that came with a vendor's image display system.

  • A Cleveland man allegedly mailed over 26,000 virus-infected diskettes with AIDS prevention information to hospitals, businesses, and government agencies worldwide.

NOTE: None of the cases cited above involved any classified data. References to all of them can be found in Neumann (1989).

While weak computer security obviously affects direct and indirect users of computer systems, it may have less obvious but still important impacts on vendors of computer systems. The role of security and trust in product development and marketing should grow, and not only because it is in the public interest. In particular, failure to supply appropriate security may put vendors at a serious competitive disadvantage. Even though U.S. firms lead overall in the computer and communications market, several European governments are now promoting product evaluation schemes and standards that integrate other elements of trust, notably safety, with security. These developments may make it difficult for American industry to sell products in the European market.5

Although the committee focuses on technical, commercial, and related social concerns, it recognizes that there are a number of related legal issues, notably those associated with the investigation and prosecution of computer crimes, that are outside of its scope. It is important to balance technical and nontechnical approaches to enhancing system security and trust. Accordingly, the committee is concerned that the development of legislation and case law is being outpaced by the growth of technology and changes in our society. In particular, although law can be used to encourage good practice, it is difficult to match law to the circumstances of computer system use. Nevertheless, attacks on computer and communication systems are coming to be seen as punishable and often criminal acts (Hollinger and Lanza-Kaduce, 1988) within countries, and there is a movement toward international coordination of investigation and prosecution. However, there is by no means a consensus about what uses of computers are legitimate and socially acceptable. Free speech questions have been raised in connection with recent criminal investigations into dissemination of certain computer-related information.6 There are also controversies surrounding the privacy impacts of new and proposed computer systems, including some proposed security safeguards. Disagreement on these fundamental questions exists not only within society at large but also within the community of computer specialists.7

TRENDS—THE GROWING POTENTIAL FOR SYSTEM ABUSE

Overall, emerging trends, combined with the spread of relevant expertise and access within the country and throughout the world, point to growth in both the level and the sophistication of threats to major U.S. computer and communications systems. There is reason to believe that we are at a discontinuity: with respect to computer security, the past is not a good predictor of the future. Several trends underlie this assessment:

  • Networking and embedded systems are proliferating, radically changing the installed base of computer systems and system applications.8

  • Computers have become such an integral part of American business that computer-related risks cannot be separated from general business risks.

  • The widespread use of databases containing information of a highly personal nature, for example, medical and credit records, leaves the privacy of individuals at risk.

  • The increased trust placed in computers used in safety-critical applications (e.g., medical instruments) increases the likelihood that accidents or attacks on computer systems will cost people their lives.

  • The ability to use and abuse computer systems is becoming widespread. In many instances (e.g., design of computer viruses, penetration of communications systems, credit card system fraud) attacks are becoming more sophisticated.

  • The international political environment is unstable, raising questions about the potential for transnational attacks at a time when international corporate, research, and other computer networks are growing.

THE NEED TO RESPOND

Use of computer systems in circumstances in which we must trust them is widespread and growing. But the trends identified above suggest that whatever trust was justified in the past will not be justified in the future unless action is taken now. (Box 1.2 illustrates how changing circumstances can profoundly alter the effective trustworthiness of a system designed with a given set of expectations about the world.) Computer system security and trustworthiness must become higher priorities for system developers and vendors, system administrators, general management, system users, educators, government, and the public at large.

This observation that we are at a discontinuity is key to understanding the focus and tone of this report. In a time of slow change, prudent practice may suggest that it is reasonable to wait for explicit evidence of a threat before developing a response. Such thinking is widespread in the commercial community, where it is hard to justify expenditures based on speculation. However, in this period of rapid change, significant damage can occur if one waits to develop a countermeasure until after an attack is manifest.

BOX 1.2 PERSONAL COMPUTERS: SECURITY DETERIORATES WITH CIRCUMSTANCES

Personal computers (PCs), such as the popular IBM PC running the MS/DOS operating system, or those compatible with it, illustrate that what was once secure may no longer be. Security was not a major consideration for developers and users of early PCs. Data was stored on floppy disks that could be locked up if necessary, and information stored in volatile memory disappeared once the machine was turned off. Thus the operating system contained no features to ensure the protection of data stored in the computer. However, the introduction of hard disks, which can store large amounts of potentially sensitive information in the computer, introduced new vulnerabilities. Since the hard disk, unlike the floppy disk, cannot be removed from the computer to protect it, whoever turns on the PC can have access to the data and programs stored on the hard disk. This increased risk can still be countered by locking up the entire machine. However, while the machine is running, all the programs and data are subject to corruption from a malfunctioning program, whereas a dismounted floppy is physically isolated.

The most damaging change in the operating assumptions underlying the PC was the advent of network attachment. External connection via networks has created the potential for broader access to a machine and the data it stores. So long as the machine is turned on, the network connection can be exercised by a remote attacker to penetrate the machine. Unfortunately, MS/DOS does not contain security features that, for example, can protect against unwanted access to or modification of data stored on PCs.

A particularly dangerous example of compromised PC security arises from the use of telecommunication packages that support connecting from the PC to other systems. As a convenience to users, some of these packages offer to record and remember the user's password for other systems. This means that any user penetrating the PC gains access not only to the PC itself but also to all the systems for which the user has stored his password. The problem is compounded by the common practice of attaching a modem to the PC and leaving it turned on at night to permit the user to dial up to the PC from home: since the PC has no access control (unless the software supporting the modem provides the service), any attacker guessing the telephone number can attach to the system and steal all the passwords.

Storing passwords to secure machines on a machine with no security might seem the height of folly. However, major software packages for PCs invite the user to do just that, a clear example of how vendors and users ignore security in their search for ease of use.
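
To see how little stands between an intruder and those stored passwords, consider a sketch of the kind of "remembered password" file such packages kept. Everything here is hypothetical: the file name, layout, and credentials are invented, no particular product is described, and Python is used purely for exposition.

    # dialdir.dat -- invented layout of a PC package's stored passwords:
    #     mainframe.example    jsmith    s3cret
    #     bank-treasury        jsmith    maple-leaf

    def harvest(path="dialdir.dat"):
        """With no operating-system access control, any program run on the
        PC (or any caller who reaches it by modem) can read this file and
        recover every remote credential in a single pass."""
        creds = []
        with open(path) as f:
            for line in f:
                host, user, password = line.split()
                creds.append((host, user, password))
        return creds

The point is not the few lines of file reading but their cost: one unprotected file converts the compromise of a machine with "nothing sensitive" on it into the compromise of every system its owner can reach.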

On the one hand, it may take years to deploy a countermeasure that requires a major change to a basic system. Thus, for example, the current concern about virus attacks derives not from the intrinsic difficulty of resisting the attacks, but from the total lack of a countermeasure in such popular systems as MS/DOS and the Apple Macintosh operating system. It will take years to upgrade these environments to provide a technical means to resist virus attacks. Had such attacks been anticipated, the means to resist them could have been intrinsic to the systems. On the other hand, the threats are changing qualitatively; they are more likely to be catastrophic in impact than the more ordinary threats familiar to security officers and managers. This report focuses on the newer breed of threat to system trustworthiness.

The committee concludes, for the various reasons outlined above and developed in this report, that we cannot wait to see what attackers may devise, or what accident may happen, before we start our defense. We must develop a long-term plan, based on our predictions of the future, and start now to develop systems that will provide adequate security and trustworthiness over the next decade.

TOWARD A PLANNED APPROACH

Taking a coherent approach to the problem of achieving improved system security requires understanding the complexity of the problem and a number of interrelated considerations, balancing the sometimes conflicting needs for security and secrecy, building on groundwork already laid, and formulating and implementing a new plan for action.

Achieving Understanding

The Nature of Security: Vulnerability, Threat, and Countermeasure

The field of security has its own language and mode of thought, which focus on the processes of attack and on preventing, detecting, and recovering from attacks. In practice, similar thinking is accorded to the possibility of accidents that, like attacks, could result in disclosure, modification, or destruction of information or systems or a delay in system use. Security is traditionally discussed in terms of vulnerabilities, threats, and countermeasures. A vulnerability is an aspect of some system that leaves it open to attack. A threat is a hostile party with the potential to exploit that vulnerability and cause damage. A countermeasure or safeguard is an added step or improved design that eliminates the vulnerability and renders the threat impotent.

A safe containing valuables, for example, may have a noisy combination lock—a vulnerability—whose clicking can be recorded and analyzed to recover the combination. It is surmised that safecrackers can make contact with experts in illegal eavesdropping—a threat. A policy is therefore instituted that recordings of random clicking must be played at loud volume when the safe is opened—a countermeasure.

Threats and countermeasures interact in intricate and often counterintuitive ways: a threat leads to a countermeasure, and the countermeasure spawns a new threat. Few countermeasures are so effective that they actually eliminate a threat. New means of attack are devised (e.g., computerized signal processing to separate "live" clicks from recorded ones), and the result is a more sophisticated threat.

The interaction of threat and countermeasure poses distinctive problems for security specialists: the attacker need find only one of possibly many vulnerabilities in order to succeed; the security specialist must develop countermeasures for all. The advantage therefore lies heavily with the attacker until very late in the mutual evolution of threat and countermeasure.9

If one waits until a threat is manifest through a successful attack, then significant damage can be done before an effective countermeasure can be developed and deployed. Therefore countermeasure engineering must be based on speculation. Effort may be expended in countering attacks that are never attempted.10 The need to speculate and to budget resources for countermeasures also implies a need to understand what it is that should be protected, and why; such understanding should drive the choice of a protection strategy and countermeasures. This thinking should be captured in security policies generated by management; poor security often reflects both weak policy and inadequate forethought.11

Security specialists almost uniformly try to keep the details of countermeasures secret, thus increasing the effort an attacker must expend and the chances that an attack will be detected before it can succeed. Discussion of countermeasures is further inhibited because a detailed explanation of sophisticated features can be used to infer attacks against lesser systems.12 As long as secrecy is considered important, the dissemination of guidelines developed by security experts, stripped of the motivating detail that could aid attackers, will be a key instrument for enhancing secure system design, implementation, and operation. The need for secrecy regarding countermeasures and threats also implies that society must trust a group of people, security experts, for advice on how to maintain security.

Confidence in countermeasures is generally achieved by submitting them for evaluation by an independent team; this process increases the lead times and costs of producing secure systems. The existence of a successful attack can be demonstrated by an experiment, but the adequacy of a set of countermeasures cannot. Security specialists must resort to analysis, yet mathematical proofs in the face of constantly changing systems are impossible.

In practice, the effectiveness of a countermeasure often depends on how it is used; the best safe in the world is worthless if no one remembers to close the door. The possibility of legitimate users being hoodwinked into doing what an attacker cannot do for himself cautions against placing too much faith in purely technological countermeasures.

The evolution of countermeasures is a dynamic process. Security requires ongoing attention and planning, because yesterday's safeguards may not be effective tomorrow, or even today.

Special Security Concerns Associated with Computers

Computerization presents several special security challenges that stem from the nature of the technology, including the programmability of computers, interconnection of systems, and the use of computers as parts of complex systems. A computing system may be under attack (e.g., for theft of data) for an indefinite length of time without any noticeable effects, attacks may be disguised or may be executed without clear traces being left, or attacks may be related to seemingly benign events. Thus "no danger signals" does not mean that everything is in order.13 A further complication is the need to balance security against other interests, such as impacts on individual privacy. For example, automated detection of intrusion into a system, and other safeguards, can make available to system administrators significant information about the behavior of individual system users.

To some extent, those attributes of computing that introduce vulnerabilities can also be used to implement countermeasures. A computer system (unlike a file cabinet) can take active measures in its defense, by monitoring its activity and determining which user and program actions should be permitted (Anderson, 1980). Unfortunately, as discussed later in this report, this potential is far from realized.

Programmability

The power of a general-purpose computer lies in its ability to become an infinity of different machines through programming.14 This is also a source of great vulnerability, because if a system can be programmed, it can be programmed to do bad things. Thus by altering program text a computer virus can transform a familiar and friendly machine into something else entirely (Cohen, 1984).

The vulnerability introduced by programmability is compounded by the degree to which the operation of a computer is hidden from its user. Whereas an individual concerned about security can inspect a mechanical typewriter and safely conclude that the effects of pressing a key are the appearance of a letter on the paper and the imprint of a letter on the ribbon, he can gain no such confidence about the operation of a word processor. It is clear that the pressing of a word processor's key causes the appearance of a letter on the screen. It is in no sense clear what else is happening—whether, for instance, the letters are being saved for subsequent transmission or the internal clock is being monitored for a "trigger date" for the alteration or destruction of files.

Embeddedness and Interconnection

The potential for taking improper irreversible actions increases with the degree to which computers are embedded in processes.15 The absence of human participation removes checks for the reasonableness of an action. And the time scale of automatic decisions may be too short to allow intervention before damage is done.

Interconnection enables attacks to be mounted remotely, anonymously, and against multiple vulnerabilities concurrently, creating the possibility of overwhelming impacts if the attacks are successful. This risk may not be understood by managers and system users. If a particular node on a massive, heterogeneous network does not contain any sensitive information, its owners may not be motivated to install any countermeasures. Yet such "wide-open" nodes can be used to launch attacks on the network as a whole, and little can be done in response, aside from disconnecting. The "Wily Hacker," for example, laundered his calls to defense-related installations through various university computers, none of which suffered any perceptible loss from his activities. The Internet worm of November 1988 also showed how networking externalizes risk. Many of the more than 2,000 affected nodes were entered easily once a "neighbor" node had been entered, usually through the electronic equivalent of an unlocked door.

In many cases, communication and interconnection have passed well beyond the simple exchange of messages to the creation of controlled opportunities for outsiders to access an organization's systems to facilitate either organization's business. On-line access by major telephone customers to telephone system management data and by large businesses to bank systems for treasury management functions are two examples of this phenomenon. A related development is electronic data interchange (EDI), in which companies have computer-communications links with suppliers and customers to automate ordering, queries about the status of orders, inventory management, market research, and even electronic funds transfer (EFT). EDI and EFT may add an additional system layer or interconnection where systems are mediated by third-party suppliers that collect, store, and forward messages between various parties in various organizations. This situation illustrates the need for trustworthiness in common carriage. In short, a wide range of organizations are connected to each other through computer systems, sometimes without knowing they are interconnected.

Interconnection gives an almost ecological flavor to security; it creates dependencies that can harm as well as benefit the community of those who are interconnected. An analogy can be made to pollution: the pollution generated as a byproduct of legitimate activity causes damage external to the polluter. A recognized public interest in eliminating the damage may compel the installation of pollution control equipment for the benefit of the community, although the installation may not be justified by the narrow self-interest of the polluter. Just as average citizens have only a limited technical understanding of their vulnerability to pollution, so also individuals and organizations today have little understanding of the extent to which their computer systems are put at risk by those systems to which they are connected, or vice versa. The public interest in the safety of networks may require some assurances about the quality of security as a prerequisite for some kinds of network connection.

Security Must Be Holistic—Technology, Management, and Social Elements

Computer security does not stop or start at the computer. It is not a single feature, like memory size, nor can it be guaranteed by a single feature or even a set of features. It comprises at a minimum computer hardware, software, networks, and other equipment to which the computers are connected, facilities in which the computer is housed, and persons who use or otherwise come into contact with the computer. Serious security exposures may result from any weak technical or human link in the entire complex. For this reason, security is only partly a technical problem: it has significant procedural, administrative, physical facility, and personnel components as well. The General Accounting Office's recent criticisms of financial computer systems, for example, highlighted the risks associated with poor physical and administrative security (GAO, 1990a), which sets the stage for even amateur attacks on critical systems.

BOX 1.3 SECURITY VS. RELIABILITY: A TELEPHONE BILLING SYSTEM AS AN EXAMPLE

Consider, for example, a telephone billing system that computes the duration of a call by recording the time but not the date at the start and end of a call. The system cannot bill calls over 24 hours. Thus a call of 24 hours and 3 minutes would be billed for 3 minutes. In the normal course of events, such calls are very rare, and in the absence of an active threat it is possible to visualize an analysis whose conclusion is that the error is not worth fixing. That is, the revenue lost from that tiny number of calls that "naturally" last more than 24 hours would not cover the cost of making the fix. But the discovery of this error by an active threat (e.g., bookies) turns it immediately into a vulnerability that will be exploited actively and persistently until it is fixed. The tolerance for error is therefore very much less when one considers "security" than it is when one is simply concerned with "reliability."
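
The arithmetic of the flaw is easy to make concrete. In the sketch below (illustrative only: the report describes no implementation, and Python is used purely for exposition), a duration computed from times of day alone wraps around at 24 hours:

    from datetime import time

    SECONDS_PER_DAY = 24 * 60 * 60

    def billed_seconds(start: time, end: time) -> int:
        # The flawed system records only the time of day, never the date.
        start_s = start.hour * 3600 + start.minute * 60 + start.second
        end_s = end.hour * 3600 + end.minute * 60 + end.second
        # Without dates, duration can only be computed modulo 24 hours.
        return (end_s - start_s) % SECONDS_PER_DAY

    # A genuine 3-minute call and a call lasting 24 hours and 3 minutes
    # both start at 9:00:00 and end at 9:03:00 by the clock, so both are
    # billed as 180 seconds; the extra day is invisible to the system.
    assert billed_seconds(time(9, 0, 0), time(9, 3, 0)) == 180

The bug is identical under both readings; what changes between "reliability" and "security" is only whether someone is deliberately placing calls that exploit the wraparound.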

Paralleling concerns about security are concerns about system safety and the need for assurance that a system will not jeopardize life or limb. Steps that enhance computer security will enhance safety, and vice versa.16 Mechanisms used to achieve security are often similar to those used to achieve safety, reliability, and predictability. For example, contingency planning (which may involve system backup activities and alternative equipment and facilities) can protect an organization from the disruption associated with fires and other natural disasters, and it can help an organization to recover from a security breach.

Nevertheless, the environment in which those mechanisms operate differs when the principal concern is security. In particular, traditional risk analysis relies on statistical models that assume that unlikely events remain unlikely after they have occurred once. Security analyses cannot include such assumptions (see Box 1.3). Security is also distinguished from safety in that it involves protection against a conscious action rather than random unfortunate circumstances.17

Commercial and Military Needs Are Different

There has been much debate about the difference between military and commercial needs in the security area. Some analyses (OTA, 1987b) have characterized so-called military security policies (i.e., those concerned with national security or classified data) as being largely or exclusively concerned with secrecy, and commercial security policies (i.e., those of interest to the private sector) as being concerned with the integrity or reliability of data. This distinction is both superficial and misleading. National security activities, such as military operations, rely heavily on the integrity of data in such contexts as intelligence reports, targeting information, and command and control systems, as well as in more mundane applications such as payroll systems. Private sector organizations are concerned about protecting the confidentiality of merger and divestiture plans, personnel data, trade secrets, sales and marketing data and plans, and so on. Thus there are many common needs in the defense and civilian worlds.

Commonalities are especially strong when one compares the military to what could be called infrastructural industries—banking, the telephone system, power generation and distribution, airline scheduling and maintenance, and securities and commodities exchanges. Such industries both rely on computers and have strong security programs because of the linkage between security and reliability. Nonsecure systems are also potentially unreliable systems, and unreliability is anathema to infrastructure.

Nevertheless, specific military concerns affect the tack taken to achieve security in military contexts. Thus far, system attacks mounted by national intelligence organizations have been qualitatively different from attacks mounted by others (see Appendix E). This qualitative difference has led to basic differences in system design methodology, system vulnerability assessment, requirements for secrecy vs. openness in system design, and so on.

Other differences stem from the consequences of a successful attack. National security countermeasures stress prevention of attack, and only secondarily investigation and pursuit of the attackers, since the concept of compensatory or punitive damages is rarely meaningful in a national security context. Private sector countermeasures, however, are frequently oriented toward detection—developing audit trails and other chains of evidence that can be used to pursue attackers in the courts.

A final set of differences stems from variations in the ability to control who has access to computer systems. Threats can come from outsiders, individuals who have little or no legitimate access to the systems they are attacking, or from insiders, individuals who abuse their right to legitimate access. Embezzlement and theft of trade secrets by employees are familiar insider threats. Effective attacks often combine the two forms: a determined and competent group of outsiders aided by a subverted insider (Early, 1988).

The national security community conducts extensive background checks on individuals before it grants access to systems or information. Its countermeasures, therefore, tend to emphasize attacks by outsiders. Nonetheless, recognition of its own insider threats has led to an increased emphasis on accountability, auditing, and other measures to follow up on improper as well as accidental incidents. The private sector, by contrast, is limited by privacy and civil rights legislation in its ability to deny employment to individuals based on in-depth background investigations. This situation, together with the fact that most commercial applications are wide open to simple physical attacks and historically have lacked external system connections, contributes to the private sector's traditional emphasis on the threats posed by insiders (employees). Of course, the increasing interconnection and globalization of business, research, and other activities should raise the level of concern felt by all segments of the economy about outside threats.

The security needs of both commercial and defense sectors are matters of public interest. Partly because understanding of security is uneven, the computer and communications market has moved slowly and unevenly. Like other complex and sophisticated products, computer software and systems are difficult for the average consumer to understand and evaluate. This situation has depressed potential demand for security, and it has resulted in public and private efforts to stimulate and guide the market that, while well intended, fall short of what is needed. This is one area where it is generally agreed that some form of institutional support is both desirable and valuable.

Putting the Need for Secrecy into Perspective

There is a tension between the need for prudent limits on the dissemination of information on vulnerabilities and the need to inform those at risk of specific security problems. The secrecy imperative has historically dominated the communications security field. Cryptology (the science of making and breaking codes), for instance, is one of two sciences (the other being atomic energy) that are given special status under federal statute (Kahn, 1967). Secrecy has also been self-imposed; government investigators, prosecutors, and insurance representatives have noted the reluctance of companies that have experienced computer system attacks to report their experiences.

Concern for secrecy affects the way computer systems are built and used. Open discussion of the design of a system offers the benefit of collegial review (see Chapter 4) but also involves the risk that attackers may be immediately informed of vulnerabilities. Evaluation and analysis may also yield a list of residual vulnerabilities that cannot be countered for technical or economic reasons, and these become the most important secrets associated with the system. The more complex the system, the more difficult the trade-off becomes because of the increased likelihood that those close to the system will overlook something. General education in the proper use of countermeasures leads to a better-informed user community, but it also leads to a better-informed community of potential attackers. Publicizing specific vulnerabilities will lead some users to correct them, but will also provide a cookbook for attacking sites that do not hear about or are not motivated to install the countermeasure.

Concern for secrecy also impedes technological progress in the security area. It has deterred research in the academic community, which places a premium on open discussion and publication. It increases the difficulties faced by people new to the field, who cannot readily find out what has been done and what the real problems are; there is much reinventing of wheels. Finally, concern for secrecy makes it hard for the few who are well informed to seek the counsel and collaboration of others.

Perhaps the most damaging aspect of the secrecy associated with computer and communications security is that it has led many to assume that no problems exist. "Tomorrow will be pretty much like today" is the rationale that guides most government, corporate, and individual activities. However, with respect to computer security, secrecy makes it extremely hard to know what today is really like.

Building on Existing Foundations

A number of government agencies have addressed portions of the computer system security problem, either by developing relevant technology or by applying relevant tools and practices (see Box 1.4). Two government agencies, the National Security Agency (NSA)—most recently through one of its arms, the National Computer Security Center (NCSC)—and the National Institute of Standards and Technology (NIST; formerly the National Bureau of Standards) have been particularly active for some 20 years, but neither is positioned to adequately address the nation's needs.

The National Security Agency has been the more active of the two organizations. The establishment of the NCSC represented an effort to stimulate the commercial marketplace. Through the NCSC and the publication of the Trusted Computer System Evaluation Criteria, or Orange Book (U.S. DOD, 1985d), which outlines different levels of computer security and a process for evaluating the security of computer systems (see Appendix A), the NSA has had a noticeable effect (Box 1.5). Because of its defense-oriented charter, the NSA cannot, however, more actively foster development or widespread dissemination of technology for use in the nonclassified or commercial world. Indeed, its defense-related focus—specifically, a focus on systems that process classified information—has been narrowed in recent years.

BOX 1.4 RECENT MAJOR COMPUTER SECURITY INITIATIVES UNDERTAKEN BY THE U.S. GOVERNMENT

  • Establishment of the National Computer Security Center

  • The Orange Book, Trusted Network Interpretation, related publications, and the Trusted Products Evaluation Program

  • National Security Decision Directive 145; revised and recast as NSD 42

  • The Computer Fraud and Abuse Act of 1986

  • The Computer Security Act of 1987

  • National Telecommunications and Information System Security Policy 200—C2 by '92

  • The Secure Data Network System project

  • NIST's Integrity Workshop program

  • DARPA's Computer Emergency Response Team program

The National Institute of Standards and Technology's impact on computer security has been concentrated within the federal government. NIST has limited technical expertise and funds; in FY 1990 its appropriations for the computer security program totaled only $2.5 million. Although it can organize workshops, develop procedural guidelines, and sanction standards efforts, it is not in a position to develop technology internally or to provide direct support to external technology development efforts. The newest (FY 1991) NIST budget request called for a doubling of funds to support activities related to computer security, and NIST has made plans to undertake some initiatives (e.g., an industry-oriented program to combat computer viruses). However, the denial of NIST's FY 1990 request for modest additional funds in this area is symptomatic of the lack of stability and predictability of the political process for government funding in general and funding for NIST in particular.18

Tension between commercial and military interests dominated public policymaking relating to computer security during the 1980s. National Security Decision Directive (NSDD) 145, the Computer Security Act of 1987, and the mid-1990 revision of NSDD 145 (resulting in NSD 42) have progressively restricted NSA to an emphasis on defense systems, leaving civilian (notably civil government) system security concerns to NIST. Partly as a result of the changing policy context, NSA has moved to diminish its interaction with commercial organizations, most notably by scaling back the NCSC. The full implications of these moves are not yet apparent as this report is being completed.

BOX 1.5 THE RAINBOW SERIES

Since its formation in 1981, the National Computer Security Center has disseminated a collection of criteria and guidelines to assist developers, evaluators, and users in the development of trusted systems. This set of documents has become known as the Rainbow Series because of the different colors used for each volume's cover. Of these documents, perhaps the most widely known is the so-called Orange Book, which is formally known as the Department of Defense Trusted Computer System Evaluation Criteria. The following are brief descriptions of some of the documents that form the Rainbow Series:

Trusted Computer System Evaluation Criteria (TCSEC) (Orange)

The TCSEC defines criteria for evaluating the security functionality and assurance provided by a computer system. The TCSEC formalizes the concept of a trusted computing base (TCB) and specifies how it should be constructed and used in order to ensure a desired level of trust.

Trusted Network Interpretation (TNI) (Red)

The TNI interprets the TCSEC with regard to networked computer systems. The TNI has been particularly controversial due to the complex security issues that arise when computer networks are used. It has been undergoing revision.

Trusted Database Management System Interpretation (TDI) (forthcoming)

The TDI interprets the TCSEC with regard to database management systems. The TDI is expected to be released in late 1990 or early 1991.

Password Management Guideline (Light Green)

This document describes a set of good practices for using password-based authentication schemes. A similar set of guidelines has also been issued by the National Institute of Standards and Technology as a Federal Information Processing Standards publication.

Glossary of Computer Security Terms (Dark Green)

This document defines the acronyms and terms used by computer security specialists, focusing on DOD contexts.

Magnetic Remanence Security Guidelines (Dark Blue)

This document provides procedures and guidance for sanitizing magnetic storage media (e.g., disks and tapes) prior to their release to nonsecure environments.

Guidance for Applying the Department of Defense Trusted Computer System Evaluation Criteria in Specific Environments (Yellow)

This volume provides guidance for applying the TCSEC to specific environments.

Meanwhile, no industry-based organization or professional association has stepped forward to play a leadership role in increasing computer system security. The 1980s saw the birth or strengthening of a number of volunteer professional associations, and over the past couple of years major computer-related trade associations (e.g., the Computer and Business Equipment Manufacturers Association (CBEMA) and the computer software and services industry association ADAPSO) have begun to explore steps they can take to better track security problems, notably virus incidents, and to encourage better systems development. However valuable, these efforts are piecemeal.

Common technical interests, complementary objectives, and significant differences in resources combine to make the existing separate activities aimed at increasing computer security in commercial and military environments an incomplete solution to the problem of increasing the overall level of system security and trust. A more complete solution calls for the formulation and implementation of a new, more comprehensive plan that would inject greater resources into meeting commercial computer security needs.

SCOPE, PURPOSE, CONTENTS, AND AUDIENCE

This report provides an agenda for public policy, computer and communications security research, technology development, evaluation, and implementation. It focuses on the broad base of deployed computers in the United States; it does not emphasize the special problems of government classified information systems. This committee is particularly concerned about raising the security floor, making sure that the commercial environment on which the economy and public safety depend has a better minimum level of protection.

A number of actions are needed to increase the availability of computer and communications systems with improved security, including:

  • A clear articulation of essential security features, assurances, and practices;

  • Enhanced institutional support and coordination for security; and

  • Research and development of trustworthy computer-based technology.

This is the appropriate time to develop a new strategy that blends research, establishment of requirements and criteria, and commercial incentives. The committee's recommendations in each of the above areas are presented below in the "Recommendations" section of this chapter. These include recommendations for both short- and long-term actions.

This report is intended to address a variety of audiences, including government policymakers, vendors, managers responsible for the purchase and use of computer and communications systems, people involved in computer-related research and development, educators, and interested members of the general public. The chapters and appendixes that follow provide technical and analytical detail to further support the assertions, conclusions, and recommendations presented in this first chapter.

  • Chapter 2 describes basic concepts of information security, including security policies and management controls.

  • Chapter 3 describes technology associated with computer and communications security, relating technical approaches to security policies and management controls.

  • Chapter 4 discusses methodological issues related to building secure software systems.

  • Chapter 5 discusses system evaluation criteria, which provide yardsticks for evaluating the quality of systems. This topic is a current focus of much international concern and activity.

  • Chapter 6 discusses why the marketplace has failed to substantially increase the supply of security technology and outlines options for stimulating the market.

  • Chapter 7 discusses the need for a new institution, referred to as the Information Security Foundation.

  • Chapter 8 outlines problems and opportunities in the research community and suggests topics for research and mechanisms for strengthening the research infrastructure.

  • Appendixes provide further detail on the Orange Book (A), technology (B), emergency response teams (C), models for proposed guidelines (D), high-grade threats (E), and terminology (F).

The nature of the subject of security dictates some limits on the content of this report. Of necessity, this report anticipates threats in order to guide the development of effective security policy; it therefore inherently contains a degree of surmise. It leaves things unsaid so as not to act as a textbook for attackers, and therefore it may fail to inform or inspire some whose information is at risk. And finally, it may carry within it the seeds of its own failure, as the countermeasures it may inspire may also lead to new and more effective threats. Such is the nature of security.

RECOMMENDATIONS

The central concern of this report is how to get more and better computer and communications security into use. Five of the committee's six recommendations endorse actions with medium- to long-range impacts. Another, Recommendation 2, outlines short-term actions aimed at immediately improving the security of computing systems. It is clear that system operators, users, and managers need to take effective steps now to upgrade and stabilize their operating environments; developers and vendors are likewise urged to use existing capabilities for immediate enhancement of computer security. Also of concern are a number of currently unfolding political developments (e.g., development of harmonized international criteria for trusted system design and evaluation) that call for immediate attention from both public policymakers and vendors in particular. The committee has addressed such developments within the body of the report as appropriate.

Although the committee focused on system security, its recommendations also serve other aspects of system trustworthiness, in particular safety and reliability. It does not make sense to address these issues separately. Many of the methods and techniques that make systems more secure make them more trustworthy in general. System safety is tied to security, both in method and in objective. The penetration of computing into the social and economic fabric means that, increasingly, what we may want to protect or secure is public safety.

Increasing the trustworthiness of computer systems requires actions on many fronts—developing technology and products, strengthening managerial controls and response programs, and enhancing public awareness. Toward that end, the committee recommends six sets of actions, summarized as follows:

  1. Promulgating a comprehensive set of generally accepted system security principles, referred to as GSSP (see also Chapter 2);

  2. Taking specific short-term actions that build on readily available capabilities (see also Chapter 6);

  3. Establishing a comprehensive incident data repository and appropriate education programs to promote public awareness (see also Chapters 4 and 6);

  4. Clarifying export control criteria and procedures (see also Chapter 6);

  5. Securing funding for a comprehensive, directed program of research (see also Chapters 3, 4, and 8); and

  6. Establishing a new organization to nurture the development, commercialization, and proper use of trust technology, referred to as the Information Security Foundation, or ISF (see also Chapters 5, 6, and 7).

Recommendation 1 Promulgate Comprehensive Generally Accepted System Security Principles (GSSP)

1a. Establish a set of Generally Accepted System Security Principles, or GSSP, for computer systems. Because of widely varying understanding about vulnerabilities, threats, and safeguards, system vendors and users need guidance to develop and use trusted systems. It is neither desirable nor feasible to make all who come into contact with computers experts in computer and communications security. It is, however, both desirable and feasible to achieve a general expectation for a minimum level of protection. Otherwise, responses to security problems will continue to be fragmented and often ineffective.

The committee believes it is possible to enunciate a basic set of security-related principles that are so broadly applicable and effective for the design and use of systems that they ought to be a part of any system with significant operational requirements. This set will grow with research and experience in new areas of concern, such as integrity and availability, and can also grow beyond the specifics of security to deal with other related aspects of system trust, such as safety. GSSP should articulate and codify these principles.

Successful GSSP would establish a set of expectations about and requirements for good practice that would be well understood by system developers and security professionals, accepted by government, and recognized by managers and the public as protecting organizational and individual interests against security breaches and lapses in the protection of privacy. Analogous broad acceptance has been accorded to financial accounting standards (what have been called the Generally Accepted Accounting Principles, or GAAP) and building codes,19 both of which contain principles defined with industry input and used or recognized by government as well. To achieve a similar level of consensus, one that builds on but reaches beyond that accorded to the Orange Book (see Appendix A), the GSSP development process should be endorsed by and accept input from all relevant communities, including commercial users, vendors, and interested agencies of the U.S. government. The development of GSSP would require a level of effort and community participation that is well beyond the scope either of this report or of organizations currently active in the security arena. The committee therefore recommends that the process of establishing GSSP be spearheaded by a new organization discussed below in recommendation 6.

Presented in Box 1.6 are some potential GSSP elements.

BOX 1.6 POTENTIAL ELEMENTS OF GENERALLY ACCEPTED SYSTEM SECURITY PRINCIPLES

The following set of examples is intended to illustrate the kinds of principles and considerations that might be embodied in GSSP. The committee emphasizes security-related issues but believes that GSSP should also stress safety-related practices.

  • Quality control—A system is safe and secure only to the extent that it can be trusted to provide the functionality it is intended to supply. At a minimum, the best known industrial practice must be used for system development, and some recognized means for potential purchasers or users to obtain independent evaluation must be provided. A stronger requirement would specify that every procedure in the software be accompanied by text specifying its potential impact on safety and security and arguing that those specifications imply the desired properties.* Chapter 5 discusses specific proposals for evaluation of systems relative to GSSP.

  • Access control on code as well as data—Every system must have the means to control which users can perform operations on which pieces of data, and which particular operations are possible. A minimum mechanism has a fixed set of operations (for example read, write, and execute) and may only associate permission with static groups of users, but stronger means, such as the ability to list particular users, are recommended.

  • User identification and authentication—Every system must assign an unambiguous identifier to each separate user and must have the means to assure that any user is properly associated with the correct identifier. A minimum mechanism for this function is passwords, but stronger means, such as challenge-response identity checks, are recommended.

  • Protection of executable code—Every system must have the means to ensure that programs cannot be modified or replaced improperly. Mechanisms stronger than customary access control are recommended, such as a basic system function to recognize certain programs as "installed" or "production" or "trusted,'' and to restrict the access to specified data to only this class of program.

  • Security logging—Every system must have the means to log for later audit all security-relevant operations on the system. At a minimum, this must include all improper attempts to authenticate a user or to access data, all changes to the list of authorized users, and (if appropriate)

require a level of effort and community participation that is well beyond the scope either of this report or of organizations currently active in the security arena. The committee therefore recommends that the process of establishing GSSP be spearheaded by a new organization discussed below in recommendation 6.

Presented in Box 1.6 are some potential GSSP elements that in

Suggested Citation:"Overview and Recommendations." National Research Council. 1991. Computers at Risk: Safe Computing in the Information Age. Washington, DC: The National Academies Press. doi: 10.17226/1581.
×
  • all successful security-related operations (user authentications, file opens, and so on). The log must be implemented in such a way that it cannot be altered or deleted after being written. A stronger version would also prevent the security administrator from deleting the log.

  • Security administrator—All systems must support the concept of a special class of users who are permitted to perform actions that change the security state of the system, such as adding users or installing trusted programs. They must control system code and data sources in appropriate off-line facilities. They must employ standard procedures for system initialization, backup, and recovery from "crashes."

  • Data encryption—While data encryption is not, in itself, an application-level security requirement, it is currently recognized as the method of choice for protecting communication in distributed systems. Any system that can be attached to a network must support some standard means for data encryption. A stronger version would forbid software encryption.

  • Operational support tools—Every system must provide tools to assist the user and the security administrator in verifying the security state of the system. These include tools to inspect security logs effectively, tools to provide a warning of unexpected system behavior, tools to inspect the security state of the system, and tools to control, configure, and manage the off-line data and code storage and hardware inventory.

  • Independent audit—At some reasonable and regular interval, an independent unannounced audit of the on-line system, operation, administration, configuration control, and audit records should be invoked by an agency unrelated to that responsible for the system design and/or operations. Such an audit should be analogous to an annual business audit by accounting firms.

  • Hazard analysis—A hazard analysis must be done for every safety-critical system. This analysis must describe those states of the system that can lead to situations in which life is endangered and must estimate the probability and severity of each under various conditions of usage. It should also categorize the extent to which hazards are independent of each other.

*  

Note that the Internet Engineering Advisory Board has begun to contemplate "security impact statements" for proposed modifications to the large and complex Internet.

fully developed GSSP would be elaborated in greater detail. The committee expects that GSSP would also cover matters of safety that fall outside the scope of this report.
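
To give these elements concrete shape, the minimal sketch below shows how the user-identification, access-control, and security-logging principles in Box 1.6 might surface in code. It is illustrative only: the permission table, user names, and in-memory log are invented for the example, authentication is assumed to have happened upstream, and a real system would require tamper-resistant log storage.

```python
# Illustrative sketch of three Box 1.6 elements: access control on data,
# a fixed set of operations, and security logging. All names are hypothetical.

from datetime import datetime, timezone

# Minimum access-control mechanism: a fixed operation set with static
# permissions per (user, object) pair; users are assumed already authenticated.
PERMISSIONS = {
    ("alice", "payroll.db"): {"read", "write"},
    ("bob", "payroll.db"): {"read"},
}

AUDIT_LOG = []  # append-only in spirit; a real log must resist alteration


def log_event(user, operation, obj, allowed):
    """Record every security-relevant operation, successful or not."""
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "operation": operation,
        "object": obj,
        "allowed": allowed,
    })


def access(user, operation, obj):
    """Permit the operation only if the user holds the corresponding right."""
    allowed = operation in PERMISSIONS.get((user, obj), set())
    log_event(user, operation, obj, allowed)  # improper attempts are logged too
    return allowed


assert access("alice", "write", "payroll.db")    # authorized, and logged
assert not access("bob", "write", "payroll.db")  # improper attempt, also logged
```

Note how even this toy version logs refusals as well as grants, reflecting the minimum logging requirement above.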

Comprehensive GSSP must reflect the needs of the widest possible spectrum of computer users. Although some groups with particular responsibilities (e.g., in banking) might be tempted to reject GSSP in favor of defining practices specific to their sectors, the committee believes that this would be unfortunate. Base-level security requirements of the sort outlined above are broadly applicable and ought to be defined in common (see Chapter 2), so that the features required to support GSSP can become a part of general-purpose computing. Only as a part of mainstream computing products will they become available at reasonable cost.

In order to serve a wide range of users, GSSP must allow variation with circumstances. The committee concludes (see Chapter 5) that GSSP should be organized in a somewhat more unbundled manner than is the Orange Book.

The process of motivating the adoption of GSSP could and probably should differ across sectors. For example, where computers are used to help manage assets, cooperation with the American Institute of Certified Public Accountants or the Financial Accounting Standards Board might lead to incorporation of GSSP into the larger body of standard practice for accounting. In systems used for health care, GSSP might become a part of the Food and Drug Administration's regulations governing medical equipment. GSSP could also be directly incorporated into government requests for proposals (RFPs) and other procurement actions. During the development of GSSP it would be necessary to consider mechanisms and options for motivating adoption of GSSP.

The committee expects natural forces, such as customers' expectations, requirements for purchasing insurance, vendors' concerns about liability, industry associations, and advertising advantage, to instill GSSP in the marketplace. Nevertheless it is possible to imagine that in some circumstances, such as for life-critical systems, certain aspects of GSSP might become mandatory. Serious consideration of regulation or other mechanisms for enforcement is both premature and beyond the scope of this report. However, the process implied by the committee's set of recommendations could force such consideration in a few years. That process entails establishing a new organization, developing GSSP, and beginning the dissemination of GSSP through voluntary means.

1b. Consider the system requirements specified by the Orange Book for the C2 and B1 levels as a short-term definition of Generally Accepted System Security Principles and a starting point for more extensive definitions. To date and by default, the principal vehicle in the United States for raising the level of practice in computer and communications security has been the National Computer Security Center's Orange Book and its various interpretations. Although the Orange Book is not a full set of GSSP (see Appendix A), it is a major step that is currently molding the market and is clearly consonant with GSSP.

The C2 and B1 ratings describe systems that provide base-line levels of acceptable discretionary security (C2) and systems that provide minimal levels of acceptable mandatory multilevel security (B1).20 However, the Orange Book is not adequate to meet the public's long-term needs, largely because it is incomplete. GSSP would provide fuller treatment of integrity, availability, and advanced techniques for assurance and software development.21 It must address distributed systems and evolving architectures (as well as change in the underlying technologies generally), which means that it should go beyond trusted computing bases as currently defined.

1c. Establish methods, guidelines and facilities for evaluating products for conformance to GSSP. A mechanism for checking conformance to GSSP is required for GSSP to have its fullest impact and to protect both vendors and consumers. As with technical standards, it is possible to claim conformance, but conformance must be genuine for benefits, such as interoperability, to be realized. Conformance evaluation is already becoming a prominent issue across the industry because of the proliferation of standards.22 Evaluation of security and safety properties is generally recognized as more difficult than evaluation of conformance to interoperability standards. Therefore, methods for evaluating conformance should be considered for each element of GSSP.

It will also be necessary both to train evaluators and to establish the extent and timing of independent evaluation. The details of the evaluation process affect costs to vendors and users as well as the confidence of both in the performance or quality of a system. In Chapter 5 the committee recommends that the minimal GSSP evaluation include two parts, an explicit design evaluation performed by an outside team, and a coordinated process of tracking field experience with the product and tracking and reporting security faults. This process ought to be less costly and time-consuming than the current NCSC process, thus improving the chances of its widespread acceptance.

Experience with the current NCSC evaluation process suggests that individual products can be evaluated somewhat formally and objectively. However, a system composed of evaluated components may not provide the security implied by component ratings. Achieving overall system security requires more objective, uniform, and rigorous standards for system certification. The committee recommends that GSSP include guidelines for system certification, again building on existing methodology.

1d. Use GSSP as a basis for resolving differences between U.S. and foreign criteria for trustworthy systems and as a vehicle for shaping inputs to international discussions of security and safety standards. With the current emergence of national evaluation criteria and the proposed harmonized Information Technology Security Evaluation Criteria (ITSEC; Federal Republic of Germany, 1990) developed by the United Kingdom, France, Germany, and the Netherlands, the Orange Book is no longer the only game in town. Just as GSSP would serve to extend the Orange Book criteria to cover integrity and availability and advanced system development and assurance techniques, it should also serve as the basis for resolving the differences between the Orange Book and international criteria such as the ITSEC. In the ongoing process of reconciling international criteria and evaluations, U.S. interests may be inadequately served if the comparatively narrowly focused Orange Book is the sole basis for U.S. positions.

The committee supports a move already under discussion to conduct simultaneous evaluations of products against the Orange Book and international criteria to improve the understanding of the relationships among different criteria and to enhance reciprocity. A concerted effort to simultaneously evaluate a series of trusted products can, over a reasonable period of time, bring the criteria (eventually including GSSP) to a common level of understanding and promote the development of reciprocity in ratings.

Similar concerns pertain to U.S. participation in international standards-setting committees. U.S. participation is often constrained by concerns about international technology transfer or by limited technical support from industry. The cost of weak participation may be the imposition on the marketplace of standards that do not fully reflect U.S. national or industrial interests.

Recommendation 2 Take Specific Short-term Actions that Build on Readily Available Capabilities

System users and vendors can take a number of actions that will immediately improve the security of computing systems.

2a. Develop security policies. Computer system users should think through their security needs, establish appropriate policies and associated procedures, and ensure that everyone in a given organization knows those policies and procedures and has some understanding of security risks and safe computing practices. Many organizations have taken these common-sense steps; many others have not or could do so more effectively.23 At the highest level, these policies provide directions for programs that affect physical security, contingency planning, electronic access, networking, security awareness, and so on. Within each of these general security areas, policies should be developed to identify the specific controls or mechanisms needed to satisfy organizational objectives.

It should be understood that planning and setting policies and procedures need not result in wholesale changes to installed systems. Many of the most effective management controls relate to system operation rather than to functional changes to system design, both because operational changes can be accomplished quickly and because operational weaknesses in computer systems are among the most severe practical problems today. Such changes may not decrease vulnerabilities, but they can reduce a potential threat by imposing controls on potential abusers. Two obvious techniques are upgrading the quality of security administration (e.g., password management, audit analysis, and configuration management) and educating individual users about the risks of importing software (e.g., contamination by viruses).
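
As a hedged illustration of the operational controls meant here, the short sketch below scans a hypothetical login log for repeated authentication failures, the kind of audit analysis a security administrator can institute without any change to system design. The log format and the alert threshold are invented for the example.

```python
# Sketch of simple audit analysis: flag accounts with repeated failed logins.
# The "user outcome" log format and the threshold are hypothetical.

from collections import Counter

SAMPLE_LOG = [
    "alice success",
    "mallory failure",
    "mallory failure",
    "mallory failure",
    "bob success",
]

THRESHOLD = 3  # failure count that should trigger administrator review


def suspicious_accounts(log_lines, threshold=THRESHOLD):
    """Return users whose failed-login count meets or exceeds the threshold."""
    failures = Counter(
        line.split()[0] for line in log_lines if line.split()[1] == "failure"
    )
    return [user for user, count in failures.items() if count >= threshold]


print(suspicious_accounts(SAMPLE_LOG))  # ['mallory']
```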

2b. Form computer emergency response teams. The committee recommends that all organizations dependent on proper operation of computer systems form or obtain access to computer emergency response teams (CERTs) trained to deal with security violations (see Appendix C). These teams should be prepared to limit the impact of successful attacks, provide guidance in recovering from attacks, and take measures to prevent repetition of successful attacks.

For security problems arising from basic design faults, such as the lack of security in MS/DOS, little remedy can be expected in the short term. However, for problems resulting from implementation flaws, a CERT can help by informing the vendor of the fault, ensuring that the fault receives sufficient attention, and helping to ensure that upgraded software is distributed and installed. DARPA's CERT and other, smaller efforts have demonstrated the potential of emergency response teams.

2c. Use as a first step the Orange Book's C2 and B1 criteria. Until GSSP can be articulated and put in place, industry needs some guidance for raising the security floor in the marketplace. The Orange Book's C2 and B1 criteria provide such guidance, which should be valuable not only to conventional computer system vendors (hardware and software) but also to vendors of computer-based medical systems, specialized database management systems, and other computer-based products. Vendors who have not already done so should move to meet C2 and B1 criteria as a conservative step toward instituting GSSP.

2d. Use sound methodology and modern technology to develop high-quality software. The committee recommends that developers of security-relevant software use current-generation tools for software engineering. The development of high-quality software, clearly a paramount goal for any project, often is not achieved because of various real-world pressures and constraints (e.g., competitive need for fast release, or customer demand for enhanced performance). Although the development of more trustworthy systems in general is a concern, security in particular can suffer if systems are not constructed in a methodical and controlled way.

Poor development practices can have several consequences. First, they may lead to a system with vulnerabilities that result directly from undetected errors in the software. (Although objective evidence is hard to gather, it seems that technical attacks on systems are targeted more to implementation faults than to design faults.) Second, such a system may be much harder to evaluate, since it is very difficult for an independent evaluator to understand or review the implementation. Third, the system may be harder to maintain or evolve, which means that with time, the security of the system may get worse, not better.

Conventional wisdom about sound development practices applies with special force where security is involved (see Box 1.7).

BOX 1.7 SOUND DEVELOPMENT METHODOLOGY FOR SECURE SOFTWARE AND SYSTEMS

  • Strive for simplicity and smallness where feasible.

  • Use software configuration management and control systems for all source and object code, specifications, documents, test plans and results, version control, and release tracking.

  • Reduce exposure to failure of security. For example, validated copies of vital data should be kept off-line, and contingency plans for extended computer outages should be in place.

  • Restrict general access to software development tools and products, and to the physical environment.

  • Develop generally available components with well-documented program-level interfaces that can be incorporated into secure software. Among these should be standardized interfaces to security services (e.g., cryptography) that may have hardware implementations.

  • Provide excess memory and computing capacity relative to the intended functionality. This reduces the need to solve performance problems by introducing complexity into the software.

  • Use higher-level languages. (This suggestion may not apply to intelligence threats.)

  • Aim for building secure software by extending existing secure software. Furthermore, use mature product or development technology.

  • Couple development of secure software with regular evaluation. If system evaluation is to be done by an outside organization, that organization should be involved in the project from its inception.

  • Schedule more time and resources for assurance than are typical today.

  • Design software to limit the need for secrecy. When a project attempts to maintain secrecy, it must take extraordinary measures (e.g., cleared "inspectors general") to ensure that secrecy is not abused (e.g., to conceal poor-quality work).

2e. Implement emerging security standards and participate actively in their design. The committee urges vendors to incorporate emerging security standards into their product planning and to participate more actively in the design of such standards. In particular, vendors should develop distributed system architectures compatible with evolving security standards.24 Further, vendors and large-system users should make the setting of security standards a higher priority.

Current attempts to set standards raise two concerns. First, standards-setting committees should strive to make security standards simple, since complexity is associated with a greater potential for security problems. Achieving consensus typically results in a standard that combines the interests of diverse parties, a process that promotes complexity. Second, because there are hundreds of computing-related standards groups, setting security standards gets relatively limited attention and participation. Although NIST has supported the setting of such standards, emphasis in this country on standards development by the private sector makes active industry participation essential. Therefore, vendors should be encouraged to assign representatives to U.S. standards efforts to ensure that (1) the impact of standards that affect security is fully understood and (2) security standards can be implemented effectively.
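
One practice from Box 1.7, well-documented program-level interfaces to security services, can be sketched as follows. The interface and the software stand-in below are hypothetical; the point is only that a hardware-backed or standards-conformant implementation could be substituted without changing any calling code. The "cipher" shown is a deliberately trivial placeholder, not a real algorithm.

```python
# Sketch of a standardized interface to an encryption service (Box 1.7).
# The interface is hypothetical; a hardware implementation could replace
# the software one without changing callers.

from abc import ABC, abstractmethod


class EncryptionService(ABC):
    """Documented, standardized interface to an encryption service."""

    @abstractmethod
    def encrypt(self, key: bytes, plaintext: bytes) -> bytes: ...

    @abstractmethod
    def decrypt(self, key: bytes, ciphertext: bytes) -> bytes: ...


class XorDemoService(EncryptionService):
    """Software stand-in for demonstration only; NOT a real cipher."""

    def encrypt(self, key, plaintext):
        return bytes(p ^ key[i % len(key)] for i, p in enumerate(plaintext))

    decrypt = encrypt  # XOR is its own inverse


svc: EncryptionService = XorDemoService()
ct = svc.encrypt(b"k3y", b"attack at dawn")
assert svc.decrypt(b"k3y", ct) == b"attack at dawn"
```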

2f. Use technical aids to foster secure operations. The committee recommends that vendors take technical steps that will help diminish the impact of user ignorance and carelessness and make it easier to administer systems in a secure manner. For example, systems should be shipped with security features turned on, so that explicit action is needed to disable them, and with default identifications and passwords turned off, so that a conscious effort is required to enable them. More efforts are needed to develop and market tools that could examine the state of a system and report on its security.25 Such audit tools (e.g., MIT's Kuang tool (Baldwin, 1988), Digital Equipment Corporation's Inspect, Clyde Digital's Cubic, DEMAX's Securepack, and AT&T's Quest) have proved useful in assuring the continued operational security of running systems.
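
By way of a minimal sketch of what such a tool does (the real tools named above are far more thorough), the fragment below reports world-writable files under a directory tree, one common check on the security state of a running system. The check shown is one of many and is chosen only for illustration.

```python
# Minimal sketch of an operational audit check: report world-writable
# files, one of many conditions tools of this kind examine.

import os
import stat
import sys


def world_writable(root="."):
    """Yield paths under `root` that any user on the system may modify."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.stat(path).st_mode
            except OSError:
                continue  # unreadable entries are skipped, not reported
            if mode & stat.S_IWOTH:
                yield path


if __name__ == "__main__":
    for path in world_writable(sys.argv[1] if len(sys.argv) > 1 else "."):
        print("world-writable:", path)
```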

Recommendation 3 Gather Information and Provide Education

3a. Build a repository of incident data. The committee recommends that a repository of incident information be established for use in research, to increase public awareness of successful penetrations and existing vulnerabilities, and to assist security practitioners, who often have difficulty persuading managers to invest in security. This database should categorize, report, and track pertinent instances of system security-related threats, risks, and failures. Because of the need for secrecy and confidentiality about specific system flaws and actual penetrations, this information must be collected and disseminated in a controlled manner. One possible model for data collection is the incident reporting system administered by the National Transportation Safety Board; two directly relevant efforts are the incident tracking begun by DARPA's computer emergency response team and NIST's announced plans to begin to track incidents.
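
A minimal sketch of what such a repository might store appears below; the field names and categories are hypothetical, chosen only to show how incidents could be categorized and counted while specifics remain under controlled disclosure.

```python
# Hypothetical schema for a controlled incident repository; all field
# names and categories are invented for illustration.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Incident:
    category: str        # e.g., "virus", "penetration", "insider abuse"
    system_class: str    # generic description, never a named victim site
    vulnerability: str   # flaw exploited, disclosed in controlled form
    impact: str          # disclosure, integrity loss, denial of service
    year: int


@dataclass
class Repository:
    incidents: List[Incident] = field(default_factory=list)

    def report(self, incident: Incident):
        self.incidents.append(incident)

    def count_by_category(self):
        counts = {}
        for inc in self.incidents:
            counts[inc.category] = counts.get(inc.category, 0) + 1
        return counts


repo = Repository()
repo.report(Incident("virus", "personal computer", "boot-sector infection",
                     "integrity loss", 1990))
print(repo.count_by_category())  # {'virus': 1}
```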

3b. Foster education in engineering secure systems. There is a dramatic shortage of people qualified to build secure software. Universities should establish software engineering programs that emphasize development of critical and secure software; major system users should likewise provide for continuing education that promotes expertise in setting requirements for, specifying, and building critical software. Effective work on critical software requires specialized knowledge of what can go wrong in the application domain. Competence in software that controls a nuclear reactor, for example, does not qualify one to work on flight-control software. Working on secure software requires yet more skills, including understanding the potential for attack, for software in general and for the application domain in particular.

Especially needed is a university-based program aimed at returning, graduate-level students who are already somewhat familiar with at least one application area. In addition to covering conventional software engineering, such a program would give special emphasis to topics related to critical software and security26 and could best be developed at universities with strong graduate engineering and business programs. The committee envisions as an initial step approximately three such programs, each turning out perhaps 20 people a year.

Given the current shortage of qualified people and the time needed for universities to establish appropriate programs, those undertaking large security-related development efforts should deal explicitly with the need to educate project members. Both time and money for this should appear in project budgets.

3c. Provide early training in security practices and ethics. The committee recommends that security practices and ethics be integrated into the general process of learning about and using computers. Awareness of the importance of security measures should be integrated into early education about computing. Lessons about socially acceptable and unacceptable behavior (e.g., stealing passwords is not acceptable) should also be taught when students first begin to use computers, just as library etiquette (e.g., writing in library books is not acceptable) is taught to young readers—with the recognition, of course, that security is a more complex subject. This recommendation is aimed at teachers, especially those at the primary and secondary levels. Implementing it would require that organizations and professionals concerned with security get the word out, to organizations that customarily serve and inform teachers and directly to teachers in communities.

Recommendation 4 Clarify Export Control Criteria, and Set Up a Forum for Arbitration

The market for computer and communications security, like the computer market overall, is international. If the United States does not allow vendors of commercial systems to export security products and products with relatively effective security features, large multinational firms as well as foreign consumers will simply purchase equivalent systems from foreign manufacturers. At issue is the ability to export two types of products: (1) trusted systems and (2) encryption.

4a. Clarify export controls on trusted systems and differentiate them from Orange Book ratings. Industry has complained for some time about current export controls on trusted systems. The requirement for case-by-case review of export licenses for trusted systems with Orange Book ratings of B3 and above adds to the cost of such systems, because sales may be restricted and extra time is needed to apply for and receive export approval. These prospects discourage industry from developing more secure systems; vendors do not want to jeopardize the exportability of their mainline commercial offerings.27

The committee recommends that Orange Book ratings not be used as export control criteria. It also recommends that the Department of Commerce, in conjunction with the Departments of Defense and State, clarify for industry the content of the regulations and the process by which they are implemented. Removal of Orange Book ratings as control parameters would also help to alleviate potential problems associated with multiple, national rating schemes (see Chapter 5).

The crux of the problem appears to be confusion among Orange Book ratings, dual-use (military and civilian) technology, and military-critical technology. Security technology intended to counter an intelligence-grade threat is considered military critical and not dual use—it is not aimed at commercial as well as military uses. Security technology intended to counter a lower, criminal-grade threat is of use to both defense and commercial entities, but it is not military critical. Since an Orange Book rating per se is not proof against an intelligence-grade threat, it does not alone signal military-critical technology that should be tightly controlled. Industry needs to know which features of a product might trigger export restrictions.

4b. Review export controls on implementations of the Data Encryption Standard. The growth of networked and distributed systems has created needs for encryption in the private sector. Some of that pressure has been seen in the push for greater exportability of products using the Data Encryption Standard (DES) and its deployment in foreign offices of U.S. companies.28

In principle, any widely available internationally usable encryption algorithm should be adequate. NIST, working with NSA, is currently trying to develop such algorithms. However, the committee notes that this effort may not solve industry's problems, for several reasons. The growing installed base of DES products cannot be easily retrofitted with the new products. The foreign supply of DES products may increase the appeal of foreign products. Finally, NSA-influenced alternatives may be unacceptable to foreign or even U.S. buyers, as evidenced by the American Banking Association's opposition to the NSA's proposals to effectively restrict banks to encryption algorithms designed and developed by NSA when the DES was last recertified, in 1988.

The committee has been apprised that NSA, because of classified national security concerns, does not support the removal of remaining restrictions on export of DES. However, there is a growing lack of sympathy in the commercial community with the NSA position on this matter. The committee recommends that the Administration appoint an arbitration group consisting of appropriately cleared individuals from industry and the Department of Commerce as well as the Department of Defense to impartially evaluate if there are indeed valid reasons at this time for limiting the export of DES.29

Recommendation 5 Fund and Pursue Needed Research

The dramatic changes in the technology of computing make it necessary for the computer science and engineering communities to rethink some of the current technical approaches to achieving security. The most dramatic example of the problem is the confusion about how best to achieve security in networked environments and embedded systems.

At present, there is no vigorous program to meet this need. Particularly worrisome is the lack of academic research in computer security, notably research relevant to distributed systems and networks.30 Only in theoretical areas, such as number theory, zero-knowledge proofs, and cryptology, which are conducive to individual research efforts, has there been significant academic effort. Although it must be understood that many research topics could be pursued in industrial as well as academic research laboratories, the committee has focused on strengthening the comparatively weaker research effort in universities, since universities both generate technical talent and are traditionally the base for addressing relatively fundamental questions.

The committee recommends that government sponsors of computer science and technology research (in particular, DARPA and NSF) undertake well-defined and adequately funded programs of research and technology development in computer security. A key role for NSF (and perhaps DARPA), beyond specific funding of relevant projects, is to facilitate increased cross-coupling between security experts and researchers in related fields. The committee also recommends that NIST, in keeping with its interest in computer security and its charter to enhance security for sensitive unclassified data and systems, provide funding for research in areas of key concern to it, either internally or in collaboration with other agencies that support research.

The committee has identified several specific technical issues that justify research (see Box 1.8). Chapter 8 provides a fuller discussion; Chapters 3 and 4 address some underlying issues. The list, although by no means complete, shows the scope and importance of a possible research agenda.

The committee believes that greater university involvement in large-scale research-oriented system development projects (comparable to the old Arpanet and Multics programs) would be highly beneficial for security research. It is important that contemporary projects, both inside and outside universities, be encouraged to use state-of-the-art software development tools and security techniques, in order to evaluate these tools and to assess the expected gain in system security. Also, while academic computer security research traditionally has been performed in computer science departments, several study areas are clearly appropriate for researchers based in business schools, including assessing the actual value to an organization of information technology and of protecting privacy.

BOX 1.8 SECURITY RESEARCH AGENDA

  • Security modularity—How can a set of system components with known security properties be combined or composed to form a larger system with known security properties? How can a system be decomposed into building blocks, units that can be used independently in other systems?

  • Security policy models—Security requirements other than disclosure control, such as integrity, availability, and distributed authentication and authorization, are not easily modeled. There is also a need for better models that address protocols and other aspects of distributed systems.

  • Cost/benefit models for security—How much does security (including also privacy protection) really cost, and what are its real benefits?

  • New security mechanisms—As new requirements are proposed, as new threats are considered, and as new technologies become prevalent, new mechanisms are required to maintain effective security. Some current topics for research include mechanisms to support critical aspects of integrity (separation of duty, for example), distributed key management on low-security systems, multiway and transitive authentication, availability (especially in distributed systems and networks), privacy assurance, and access controllers in networks to permit interconnection of mutually suspicious organizations.

  • Increasing effectiveness of assurance techniques—More needs to be known about the spectrum of analysis techniques, both formal and informal, and to what aspects of security they best apply. Also, tools are needed to support the generation of assurance evidence.

  • Alternative representations and presentations—New representations of security properties may yield new analysis techniques. For example, graphics tools that allow system operators to set, explore, and analyze proposed policies (who should get access to what) and system configurations (who has access to what) may help identify weaknesses or unwanted restrictions as policies are instituted and deployed systems used.

  • Automated security procedures—Research is needed in automating critical aspects of system operation, to assist the system manager in avoiding security faults in this area. Examples include tools to check the security state of a system, models of operational requirements and desired controls, and threat assessment aids.

  • Nonrepudiation—To protect proprietary rights it may be necessary to record user actions so as to bar the user from later repudiating these actions. Doing this in a way that respects the privacy of users is difficult.

  • Resource control—Resource control is associated with the prevention of unauthorized use of proprietary software or databases legitimately installed in a computing system. It has attracted little research and implementation effort, but it poses some difficult technical problems and possibly problems related to privacy as well.

  • Systems with security perimeters—Network protocol design efforts have tended to assume that networks will provide general interconnection. However, as observed in Chapter 3, a common practical approach to achieving security in distributed systems is to partition the system into regions that are separated by a security perimeter. This may cause a loss of network functionality. If, for example, a network permits mail but not directory services (because of security concerns about directory searches), less mail may be sent because no capability exists to look up the address of a recipient.
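
To make the nonrepudiation item in Box 1.8 concrete: public-key digital signatures are the standard mechanism, since only the holder of the private key can produce a signature that anyone with the public key can verify. The sketch below uses a present-day third-party library (pyca/cryptography) purely for illustration; nothing here is prescribed by the report, and the recorded action is invented.

```python
# Sketch of nonrepudiation via RSA digital signatures, using the
# third-party pyca/cryptography package (illustrative only).

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
action = b"user alice approved payment 42 on 1991-05-01"

# Only the holder of the private key can produce this signature...
signature = private_key.sign(action, pss, hashes.SHA256())

# ...but anyone holding the public key can verify it later, so the
# signer cannot plausibly repudiate the recorded action.
private_key.public_key().verify(signature, action, pss, hashes.SHA256())
print("signature verified; action is nonrepudiable")
```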

DARPA has a tradition of funding significant system development projects of the kind that can be highly beneficial for security research. Examples of valuable projects include:

  • Use of state-of-the-art software development techniques and tools to produce a secure system. The explicit goal of such an effort should be to evaluate the development process and to assess the expected gain in system quality. The difficulty of uncovering vulnerabilities through testing suggests that a marriage of traditional software engineering techniques with formal methods is needed.

  • Development of distributed systems with a variety of security properties. A project now under way, with DARPA funding, is the development of encryption-based private electronic mail. Another such project could focus on decentralized, peer-connected name servers.

  • Development of a system supporting some approach to data integrity. There are now some proposed models for integrity, but without worked examples it will be impossible to validate them. This represents an opportunity for DARPA-NIST cooperation.
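
As a hedged, toy illustration of one proposed integrity notion just mentioned—separation of duty, in the spirit of the Clark-Wilson model—the sketch below requires that a transaction be approved by a user other than its initiator before it may execute. All names and the two-person rule itself are invented for the example.

```python
# Toy illustration of separation of duty, one element of proposed
# integrity models: the initiator of a transaction may not approve it.

class Transaction:
    def __init__(self, initiator, description):
        self.initiator = initiator
        self.description = description
        self.approver = None

    def approve(self, approver):
        if approver == self.initiator:
            raise PermissionError("separation of duty: initiator may not approve")
        self.approver = approver

    def execute(self):
        if self.approver is None:
            raise PermissionError("unapproved transaction may not execute")
        print(f"executed: {self.description} "
              f"(initiated by {self.initiator}, approved by {self.approver})")


t = Transaction("alice", "transfer $500 to vendor")
try:
    t.approve("alice")  # rejected: same person initiated the transaction
except PermissionError as err:
    print(err)
t.approve("bob")        # a second person must concur
t.execute()
```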

In addition to funding specific relevant projects, both DARPA and NSF should encourage collaboration across research fields. Cross-disciplinary research in the following areas would strengthen system trustworthiness:

  • Safety: There is growing concern about and interest in the safety-related aspects of computer processing both in the United States and internationally.

  • Fault-tolerant computing: Much research has been directed at the problem of fault-tolerant computing, and an attempt should be made to extend this work to other aspects of security.

  • Code analysis: People working on optimizing and parallelizing compilers have extensive experience in analyzing both source and object code for a variety of properties. An attempt should be made to see if similar techniques can be used to analyze code for properties related to security.

  • Security interfaces: People working in the area of formal specification should be encouraged to specify standardized interfaces to security services and to apply their techniques to the specification and analysis of high-level security properties.

  • Theoretical research: Theoretical work needs to be properly integrated in actual systems. Often both theoreticians and system practitioners misunderstand the system aspects of security or the theoretical limitations of secure algorithms.

  • Programming language research: New paradigms require new security models, new design and analysis techniques, perhaps additional constructs, and persuasion of both researchers and users that security is important before too many tools proliferate.

  • Software development environments: Myriad tools (e.g., theorem provers, test coverage monitors, object managers, and interface packages) continue to be developed by researchers, sometimes in collaborative efforts such as Arcadia. Some strategy for integrating such tools is needed to drive the research toward more system-oriented solutions.31

Again, much of this research is appropriate for both commercial and academic entities, and it might require or benefit from industry-university collaboration. Certainly, joint industry-university efforts may facilitate the process of technology transfer. NSF and DARPA have a tradition of working with the broad science community and could obviously take on programs to facilitate needed collaboration. Some possible specific actions are suggested in Chapter 8.

Recommendation 6 Establish an Information Security Foundation

The public needs an institution that will accelerate the commercialization and adoption of safer and more secure computer and communications systems. To meet that need, the committee recommends the establishment of a new private organization—a consortium of computer users, vendors, and other interested parties (e.g., property and casualty insurers). This organization must not be, or even be perceived to be, a captive of government, system vendors, or individual segments of the user community.

The committee recommends a new institution because it concludes that pressing needs in the following areas are not likely to be met adequately by existing entities:

  • Establishment of Generally Accepted System Security Principles, or GSSP;

  • Research on computer system security, including evaluation techniques;

  • System evaluation;

  • Development and maintenance of an incident, threat, and vulnerability tracking system;

  • Education and training;

  • Brokering and enhancing communications between commercial and national security interests; and

  • Focused participation in international standardization and harmonization efforts for commercial security practice.

Why should these functions be combined in a single organization? Although the proposed organization would not have a monopoly on all of these functions, the committee believes that the functions are synergistic. For example, involvement in research would help the organization recruit technically talented staff; involvement in research and the development of GSSP would inform the evaluation effort; and involvement in GSSP development and evaluation would inform education, training, and contributions to international criteria-setting and evaluation schemes. Further, a new organization would have more flexibility than those currently focused on security to build strong bridges to other aspects of trust, notably safety.

In the short run, this organization, called the Information Security Foundation (ISF) in this report, would act to increase awareness and expectations regarding system security and safety. The pressure provided by organized tracking and reporting of faults would encourage vendors and users to pay greater attention to system quality; the development and promulgation of GSSP should cause users and vendors to focus on an accepted base of prudent practice.

In the longer term, a major activity of the ISF would be product evaluation. The complex and critical nature of security products makes independent evaluation essential. The only current official source of evaluations, the NCSC, has been criticized as poorly suited to meeting industry's needs, and changes in its charter and direction are reducing its role in this area. The process of evaluation described in Chapters 5 and 7 is intended to address directly industry's concerns with the current process and to define a program that can be a success in the commercial marketplace. The committee concludes that some form of system evaluation is a critical aspect of achieving any real improvement in computer security.

Also in the longer term, the ISF would work to bridge the security and safety arenas, using as vehicles GSSP and evaluation as well as the other activities. The ISF could play a critical role in improving the overall quality and trustworthiness of computer systems, using the need for better security as an initial target to motivate its activities.

The organization envisioned must be designed to interact closely with government, specifically the NCSC and NIST, so that its results can contribute to satisfying government needs. Similarly, it would coordinate with operational organizations such as DARPA's CERT, especially if the CERT proceeds with its plans to develop an emergency-incident tracking capability. The government may be the best vehicle to launch the ISF, but it should be an independent, private organization once functional.

As discussed in detail in Chapter 7, the committee concludes that the ISF would need the highest level of governmental support; the strongest expression of such support would be a special congressional charter. Such a charter would define ISF's role and its relation to the government. At the same time, the organization should be outside of the government to keep it separate from the focus on intragovernmental security needs, internecine political squabbles, and the hiring and resource limitations that constrain NCSC and NIST. Its major source of funds should be member subscriptions and fees for services such as evaluation. It must not depend on government funding for its viability.

Note that the mission outlined above is much more challenging than defining standards or providing evaluation of consumer durables (e.g., as done by Underwriters Laboratories, Inc.). The committee does not know of any existing private organization that could take on these tasks.

Although it recognizes that any proposal for establishing a new institution faces an uphill battle, the committee sees this proposal as a test of commitment for industry, which has complained loudly about the existing institutional infrastructure. Commitment to an organization like that proposed can facilitate self-regulation and greatly diminish the likelihood of explicit government regulation.

If a new organization is not established, or if the functions proposed for it are not pursued in an aggressive and well-funded manner, the most immediate consequences will be further discouragement of vendors' efforts to develop evaluated products, even though evaluation is vital to assuring that products are indeed trustworthy; continuation of a slow rate of progress in the market, leaving many system users unprotected and unaware of the risks they face; and the prospect that U.S. vendors will become less competitive in the international systems market. Without aggressive action to increase system trustworthiness, the national exposure to safety and security catastrophes will increase rapidly.

CONCLUSION

Getting widely deployed and more effective computer and communications security is essential if the United States is to fully achieve the promise of the Information Age. The technology base is changing, and the proliferation of networks and distributed systems has increased the risks of threats to security and safety. The computer and communications security problem is growing. Progress is needed on many fronts—including management, development, research, legal enforcement, and institutional support—to integrate security into the development and use of computer and communications technology and to make it a constructive and routine component of information systems.

NOTES

1. Losses from credit card and communications fraud alone investigated by the Secret Service range into the millions. See Box 1.1 for other examples.

2. This growth may be aided by recent political changes in Eastern Europe and the Soviet Union, which are believed to be freeing up intelligence resources that analysts suggest may be redirected toward economic and technological targets (Safire, 1990).

3. Voting systems present special challenges: First, the data is public property. Second, voting systems are information systems deployed to strange locations, handled by volunteers, abused by the media ("got to know the results by 8 p.m."), and offered by specialty vendors. Third, the openness issue can be evaded by vendors promoting proprietary approaches, in the absence of any organized screening or regulatory activity. Fourth, the security overhead in the system cannot get in the way of the operations of the system under what are always difficult conditions. Voting system technology makes an interesting case study because it is inherently system-oriented: ballot preparation, input sensing, data recording and transmission, pre-election testing, intrusion prevention, result preservation, and reporting. The variety of product responses is therefore immense, and each product must fit as wide a range of voting situations as possible, and be attractive and cost-effective. Anecdotal evidence suggests a range of security problems for this comparatively new application. (Hoffman, 1988; ECRI, 1988b; Saltman, 1988; miscellaneous issues of RISKS.)

4. Viruses can spread by means of or independently of networks (e.g., via contaminated diskettes).

5. The committee did not find evidence of significant Japanese activity in computer security, although viruses have begun to raise concern in Japan as evidenced by Japanese newspaper articles, and Japanese system development interests provide a foundation for possible eventual action. For competitive reasons, both Japanese and European developments should be closely monitored.

6. A new organization, the Electronic Frontier Foundation, has recently been launched to defend these free speech aspects.

7. For example, professional journals and meetings have held numerous debates over the interpretation of the Internet worm and the behavior of its perpetrator; the Internet worm also prompted the issuance or reissuance of codes of ethics by a variety of computer specialist organizations.

8. Two recent studies have pointed to the increased concern with security in networks: the congressional Office of Technology Assessment's Critical Connections: Communication for the Future (OTA, 1990) and the National Research Council's Growing Vulnerability of the Public Switched Networks (NRC, 1989b).

9. This evolution took roughly two centuries in the case of safecracking, a technology whose systems consist of a box, a door, and a lock.

10. This does not mean that the effort was wasted. In fact, some would argue that this is the height of success (Tzu, 1988).

11. For example, a California prosecutor recently observed that "We probably turn down more cases [involving computer break-ins] than we charge, because computer-system proprietors haven't made clear what is allowed and what isn't" (Stipp, 1990).

12. For example, a description of a magnetic door sensor that is highly selective about the magnetic field it will recognize as indicating "door closed" can indicate to attackers that less sophisticated sensors can be misled by placing a strong magnet near them before opening the door.

13. For example, the GAO recently noted in connection with the numerous penetrations of the Space Physics Analysis Network in the 1980s that, "Skillful, unauthorized users could enter and exit a computer without being detected. In such cases and even in those instances where NASA has detected illegal entry, data could have been copied, altered, or destroyed without NASA or anyone else knowing" (GAO, 1989e, p. 1).

14. "Programming" is to be understood in a general sense—anything that modifies or extends the capabilities of a system is programming. Modification of controls on access to a system, for example, is a type of programming with significant security implications. Even special-purpose systems with no access to programming languages, not even to a "shell" or command language, are usually programmable in this sense.

15. "Embeddedness" refers to the extent to which a computer system is embedded in a process, and it correlates with the degree to which the process is controlled by the computer. Computer-controlled X-ray machines and manufacturing systems, avionics systems, and missiles are examples of embedded systems. Higher degrees of embeddedness, generated by competitive pressures that drive the push for automation, shorten the link between information and action and increase the potential for irreversible actions taken without human intervention. By automating much of a process, embeddedness increases the leverage of an attacker.

16. However, sometimes there will be trade-offs between security or safety and other characteristics, like performance. Such trade-offs are not unique to computing, although they may be comparatively more recent.

17. It is worth noting, however, that "safety factors" play a role in security. Measures such as audit trails are included in security systems as a safety factor; they provide a backup mechanism for detection when something else breaks.

18. Even NSA is confronting budget cuts in the context of overall cuts in defense spending.

19. For example, the American Institute of Certified Public Accountants promulgates Statements on Auditing Standards (SAS), and the Financial Accounting Standards Board (FASB) promulgates what have been called Generally Accepted Accounting Principles (GAAP). Managers accept the importance of both the standards and their enforcement as a risk management tool. Adherence to these standards is also encouraged by laws and regulations that seek to protect investors and the public. (See Appendix D.)

20.  

B1 is also the highest level to which systems can effectively be retrofitted with security features.

21.  

An effort by several large commercial users to list desired computer and communications system security features demonstrates the importance of greater integrity protection and the emphasis on discretionary access control in that community. This effort appears to place relatively limited emphasis on assurance and evaluation, both of which the committee deem important to GSSP and to an ideal set of criteria. The seed for that effort was a project within American Express Travel Related Services to define a corporate security standard called C2-Plus and based, as the name suggests, on the Orange Book's C2 criteria (Cutler and Jones, 1990).

22. In the past decade, a number of organizations (e.g., Corporation for Open Systems and the formerly independent Manufacturing Automation Protocol/Technical Office Protocol Users Group) have emerged with the goal of influencing the development of industry standards for computing and communications technology and promoting the use of official standards, in part by facilitating conformance testing (Frenkel, 1990).

23. The Computer Security Act of 1987, for example, set in motion a process aimed at improving security planning in federal agencies. Experience showed that it was easier to achieve compliance on paper than to strengthen planning and management controls in practice (GAO, 1990c).

24. Examples include ISO 7498-2 (ISO, 1989), CCITT X.509 (CCITT, 1989b), and the NSA-launched Secure Data Network System (SDNS) standardization program.

25. The very availability of such tools puts an extra responsibility on management to eliminate the kinds of vulnerabilities the tools reveal.
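
The checks such tools automate are simple. The sketch below, loosely in the spirit of scanners of the period such as COPS (paths and checks illustrative, not drawn from any particular tool), flags world-writable files, the kind of vulnerability a manager can no longer plausibly claim is hard to find:

```python
# An illustrative vulnerability check: flag files that any user on the
# system may write to. Paths and scope are examples only.

import os
import stat

def world_writable(path: str) -> bool:
    """True if any other user on the system may write to this file."""
    return bool(os.stat(path).st_mode & stat.S_IWOTH)

def scan(directory: str) -> None:
    for root, _dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            try:
                if world_writable(path):
                    print("world-writable:", path)
            except OSError:
                pass  # unreadable entries are skipped, not fatal

scan("/etc")  # illustrative target; a real scan would cover far more
```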

26. For example, discussions of different project management structures would deal with their impact not only on productivity but also on security. Discussions of quality assurance would emphasize safety engineering more than might be expected in a traditional software engineering program.

27. It is expensive for vendors to maintain two versions of products, secure and regular. Thus, all else being equal, regular versions can be expected to be displaced by secure versions. But if sales of secure versions are restricted, then only the regular version will be marketed, to the detriment of security.

28. As this report goes to press, a case is under consideration at the Department of State that could result in liberalized export of DES chips, although such an outcome is considered unlikely.

29. As of this writing, similar actions may also be necessary in connection with the RSA public-key encryption system, which is already available overseas (without patent protection) because its principles were first published in an academic journal (Rivest et al., 1978).
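
That the principles are fully public is easy to demonstrate: the toy computation below (insecure, textbook-sized numbers chosen purely for illustration) reproduces the RSA scheme directly from the published mathematics:

```python
# A toy illustration of the RSA principle from Rivest et al. (1978).
# The numbers are absurdly small and offer no security whatsoever.

p, q = 61, 53                  # two small primes
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent: e*d = 1 (mod phi); Python 3.8+

message = 42                   # must be smaller than n
ciphertext = pow(message, e, n)    # encrypt with the public key (n, e)
recovered = pow(ciphertext, d, n)  # decrypt with the private key d
assert recovered == message

print("n =", n, "e =", e, "ciphertext =", ciphertext, "recovered =", recovered)
```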

30. The paucity of academic effort is reflected by the fact that only 5 to 10 percent of the attendees at recent IEEE Symposiums on Security and Privacy have been from universities.

31. For vendors, related topics would be trusted distribution and trusted configuration control over the product life cycle.
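
One ingredient of trusted distribution can be sketched as follows (filenames hypothetical, and the digest algorithm a modern stand-in): the vendor publishes a digest of the release over a channel separate from the software itself, and the recipient recomputes it to detect tampering in transit. A full scheme would also sign the digest and control the build configuration:

```python
# A minimal sketch of integrity checking for distributed software.
# Filenames and the example digest are illustrative only.

import hashlib

def digest(path: str) -> str:
    """SHA-256 digest of a file, computed in fixed-size chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, published_digest: str) -> bool:
    """True only if the received file matches the vendor's published digest."""
    return digest(path) == published_digest

# Illustrative use -- the digest string would arrive from the vendor over a
# separate, trusted channel:
# if not verify("product-release.tar", "9f86d081884c7d65..."):
#     raise SystemExit("distribution does not match vendor's published digest")
```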

