Computers at Risk: Safe Computing in the Information Age (1991)

3
Technology to Achieve Secure Computer Systems

This chapter provides a reasonably complete survey of the technology needed to protect information and other resources controlled by computer systems, and it discusses how that technology can be used to make systems secure. It explains the essential technical ideas, gives the major properties of relevant techniques currently known, and tells why they are important. By suggesting developments that may occur in the next few years, it also provides some of the rationale for the research agenda set forth in Chapter 8.

Appendix B of this report discusses in more detail several topics that are either fundamental to computer security technology or of special current interest—including how some important things (such as passwords) work and why they do not work perfectly.

This discussion of the technology of computer security addresses two major concerns:

  1. What do we mean by security?

  2. How do we get security, and how do we know when we have it?

The first involves specification of security and the services that computer systems provide to support security. The second involves implementation of security, and in particular the means of establishing confidence that a system will actually provide the security the specifications promise. Each topic is discussed according to its importance for the overall goal of providing computer security, and not according to how much work has already been done on that topic.

This chapter discusses many of the concepts introduced in Chapter 2, but in more detail. It examines the technical process of relating computer mechanisms to higher-level controls and policies, a process
that requires the development of abstract security models and supporting mechanisms. Although careful analysis of the kind carried out in this chapter may seem tedious, it is a necessary prerequisite to ensuring the security of something as complicated as a computer system. Ensuring security, like protecting the environment, requires a holistic approach; it is not enough to focus on the problem that caused trouble last month, because as soon as that difficulty is resolved, another will arise.

SPECIFICATION VS. IMPLEMENTATION

The distinction between what a system does and how it does it, between specification and implementation, is basic to the design and analysis of computer systems. A specification for a system is the meeting point between the customer and the builder. It says what the system is supposed to do. This is important to the builder, who must ensure that what the system actually does matches what it is supposed to do. It is equally important to the customer, who must be confident that what the system is supposed to do matches what he wants. It is especially critical to know exactly and completely how a system is supposed to support requirements for security, because any mistake can be exploited by a malicious adversary.

Specifications can be written at many levels of detail and with many degrees of formality. Broad and informal specifications of security are called security policies1 (see Chapter 2), examples of which include the following: (1) "Confidentiality: Information shall be disclosed only to people authorized to receive it." (2) "Integrity: Data shall be modified only according to established procedures and at the direction of properly authorized people."

It is possible to separate from the whole the part of a specification that is relevant to security. Usually a whole specification encompasses much more than the security-relevant part. For example, a whole specification usually says a good deal about price and performance. In systems for which confidentiality and integrity are the primary goals of security policies, performance is not relevant to security because a system can provide confidentiality and integrity regardless of how well or badly it performs. But for systems for which availability and integrity are paramount, performance specifications may be relevant to security. Since security is the focus of this discussion, "specification" as used here should be understood to describe only what is relevant to security.

A secure system is one that meets the particular specifications meant to ensure security. Since many different specifications are possible,
there cannot be any absolute notion of a secure system. An example from a related field clarifies this point. We say that an action is legal if it meets the requirements of the law. Since different jurisdictions can have different sets of laws, there cannot be any absolute notion of a legal action; what is legal under the laws of Britain may be illegal in the United States.

A system that is believed to be secure is called trusted. Of course, a trusted system must be trusted for something; in the context of this report it is trusted to meet security specifications. In some other context such a system might be trusted to control a shuttle launch or to retrieve all the 1988 court opinions dealing with civil rights.

Policies express a general intent. Of course, they can be more detailed than the very general ones given as examples above; for instance, the following is a refinement of the first policy: "Salary confidentiality: Individual salary information shall be disclosed only to the employee, his superiors, and authorized members of the personnel department."

But whether general or specific, policies contain terms that are not precisely defined, and so it is not possible to tell with absolute certainty whether a system satisfies a policy. Furthermore, policies specify the behavior of people and of the physical environment as well as the behavior of machines, so that it is not possible for a computer system alone to satisfy them. Technology for security addresses these problems by providing methods for the following:

  • Integrating a computer system into a larger system, comprising people and a physical environment as well as computers, that meets its security policies;

  • Giving a precise specification, called a security model, for the security-relevant behavior of the computer system;

  • Building, with components that provide and use security services, a system that meets the specifications; and

  • Establishing confidence, or assurance, that a system actually does meet its specifications.

This is a tall order that at the moment can be only partially filled. The first two actions are discussed in the section below titled "Specification," the last two in the following section titled "Implementation." Services are discussed in both sections to explain both the functions being provided and how they are implemented.

SPECIFICATION: POLICIES, MODELS, AND SERVICES

This section deals with the specification of security. It is based on the taxonomy of security policies given in Chapter 2. There are only a few highly developed security policies, and research is needed to
develop additional policies (see Chapter 8), especially in the areas of integrity and availability. Each of the highly developed policies has a corresponding (formal) security model, which is a precise specification of how a computer system should behave as part of a larger system that implements a policy. Implementing a security model requires mechanisms that provide particular security services. A small number of fundamental mechanisms have been identified that seem adequate to implement most of the highly developed security policies currently in use.

The simple example of a traffic light illustrates the concepts of policy and model; in this example, safety plays the role of security. The light is part of a system that includes roads, cars, and drivers. The safety policy for the complete system is that two cars should not collide. This is refined into a policy that traffic must not move in two conflicting directions through an intersection at the same time. This policy is translated into a safety model for the traffic light itself (which plays a role analogous to that of a computer system within a complete system): two green lights may never appear in conflicting traffic patterns simultaneously. This is a simple specification. Observe that the complete specification for a traffic light is much more complex; it provides for the ability to set the duration of the various cycles, to synchronize the light with other traffic lights, to display different combinations of arrows, and so forth. None of these details, however, is critical to the safety of the system, because they do not bear directly on whether or not cars will collide. Observe also that for the whole system to meet its safety policy, the light must be visible to the drivers, and they must understand and obey its rules. If the light remains red in all directions it will meet its specification, but the drivers will lose patience and start to ignore it, so that the entire system may not support a policy of ensuring safety.
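
The point of such a model is that it is a precise predicate that every state of the light either satisfies or violates. Purely as an illustration (written here in Python, with invented names for the signal directions), the traffic-light safety model might be expressed as follows:

    # Illustrative only: the safety model as a checkable predicate over the
    # light's state. The direction names are invented for this sketch.
    CONFLICTING = {("north_south", "east_west")}   # directions that may not both have green

    def satisfies_model(green_directions):
        """True if no two conflicting directions show green at the same time."""
        return not any(a in green_directions and b in green_directions
                       for a, b in CONFLICTING)

    assert satisfies_model({"north_south"})                    # one direction moving: allowed
    assert not satisfies_model({"north_south", "east_west"})   # conflicting greens: violation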

An ordinary library affords a more complete example (see Appendix B of this report) that illustrates several aspects of computer system security in a context that does not involve computers.

Policies

A security policy is an informal specification of the rules by which people are given access to a system to read and change information and to use resources. Policies naturally fall into a few major categories:

  1. Confidentiality: controlling who gets to read information;

  2. Integrity: assuring that information and programs are changed only in a specified and authorized manner; and

  3. Availability: assuring that authorized users have continued access to information and resources.

Two orthogonal categories can be added:

  1. Resource control: controlling who has access to computing, storage, or communication resources (exclusive of data); and

  2. Accountability: knowing who has had access to information or resources.

Chapter 2 describes these categories in detail and discusses how an organization that uses computers can formulate a security policy by drawing elements from all these categories. The discussion below summarizes this material and supplements it with some technical details.

Security policies for computer systems generally reflect long-standing policies for the security of systems that do not involve computers. In the case of national security these are embodied in the information classification and personnel clearance system; for commercial computing they come from established accounting and management control practices.

From a technical viewpoint, the most fully developed policies are those that have been developed to ensure confidentiality. They reflect the concerns of the national security community and are derived from Department of Defense (DOD) Directive 5200.1, the basic directive for protecting classified information.2

The DOD computer security policy is based on security levels. Given two levels, one may be lower than the other, or the two may not be comparable. The basic principle is that information can never be allowed to leak to a lower level, or even to a level that is not comparable. In particular, a program that has "read access" to data at a higher level cannot simultaneously have "write access" to lower-level data. This is a rigid policy motivated by a lack of trust in application programs. In contrast, a person can make an unclassified telephone call even though he may have classified documents on his desk, because he is trusted not to read the documents over the telephone. There is no strong basis for placing similar trust in an arbitrary computer program.

A security level or compartment consists of an access level (either top secret, secret, confidential, or unclassified) and a set of categories (e.g., Critical Nuclear Weapon Design Information (CNWDI), North Atlantic Treaty Organization (NATO), and so on). The access levels are ordered (top secret, highest; unclassified, lowest). The categories, which have unique access and protection requirements, are not ordered, but sets of categories are ordered by inclusion: one set is lower than another if every category in the first is included in the second. One
security level is lower than another, different level if it has an equal or lower access level and an equal or lower set of categories. Thus [confidential; NATO] is lower than both [confidential; CNWDI, NATO] and [secret; NATO]. Given two levels, it is possible that neither is lower than the other. Thus [secret; CNWDI] and [confidential; NATO] are not comparable.

Every piece of information has a security level (often called its label). Normally information is not permitted to flow downward: information at one level can be derived only from information at equal or lower levels, never from information that is at a higher level or is not comparable. If information is computed from several inputs, it has a level that is at least as high as any of the inputs. This rule ensures that if information is stored in a system, anything computed from it will have an equal or higher level. Thus the classification never decreases.
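
The ordering on levels and the rule for labeling derived information can be made concrete in a short sketch (written in Python purely for illustration; the level and category names mirror the examples above):

    # Illustrative sketch of security levels and their ordering.
    ACCESS_ORDER = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

    def dominates(level_a, level_b):
        """True if level_a is equal to or higher than level_b."""
        access_a, categories_a = level_a
        access_b, categories_b = level_b
        return (ACCESS_ORDER[access_a] >= ACCESS_ORDER[access_b]
                and categories_b <= categories_a)         # category-set inclusion

    def join(level_a, level_b):
        """The lowest level dominating both inputs; derived data carries at least this label."""
        access_a, categories_a = level_a
        access_b, categories_b = level_b
        higher = max(access_a, access_b, key=ACCESS_ORDER.get)
        return (higher, categories_a | categories_b)

    # The examples from the text: [confidential; NATO] is lower than both
    # [confidential; CNWDI, NATO] and [secret; NATO] ...
    assert dominates(("confidential", {"CNWDI", "NATO"}), ("confidential", {"NATO"}))
    assert dominates(("secret", {"NATO"}), ("confidential", {"NATO"}))
    # ... while [secret; CNWDI] and [confidential; NATO] are not comparable.
    assert not dominates(("secret", {"CNWDI"}), ("confidential", {"NATO"}))
    assert not dominates(("confidential", {"NATO"}), ("secret", {"CNWDI"}))
    # Information computed from both of those inputs is labeled at least their join.
    assert join(("secret", {"CNWDI"}), ("confidential", {"NATO"})) == ("secret", {"CNWDI", "NATO"})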

The DOD computer security policy specifies that a person is cleared to a particular security level and can see information only at that, or a lower, level. Since anything seen can be derived only from other information categorized as being at that level or lower, the result is that what a person sees can depend only on information in the system at his level or lower. This policy is mandatory: except for certain carefully controlled downgrading or declassification procedures, neither users nor programs in the system can break the rules or change the security levels. As Chapter 2 explains, both this and other confidentiality policies can also be applied in other settings.

Integrity policies have not been studied as carefully as confidentiality policies, even though some sort of integrity policy governs the operation of every commercial data-processing system. Work in this area (Clark and Wilson, 1987) lags work on confidentiality by about 15 years. Nonetheless, interest is growing in workable integrity policies and corresponding mechanisms, especially since such mechanisms provide a sound basis for limiting the damage caused by viruses, self-replicating software that can carry hidden instructions to alter or destroy data.

The most highly developed policies to support integrity reflect the concerns of the accounting and auditing community for preventing fraud. The essential notions are individual accountability, auditability, separation of duty, and standard procedures. Another kind of integrity policy is derived from the information-flow policy for confidentiality applied in reverse, so that information can be derived only from other information of the same or a higher integrity level (Biba, 1975). This particular policy is extremely restrictive and thus has not been applied in practice.

Policies categorized under accountability have usually been formulated
as part of confidentiality or integrity policies. Accountability has not received independent attention.

In addition, very little work has been done on security policies related to availability. Absent this work, the focus has been on the practical aspects of contingency planning and recoverability.

Models

To engineer a computer system that can be used as part of a larger system that implements a security policy, and to decide unambiguously whether such a computer system meets its specification, an informal, broadly stated policy must be translated into a precise model. A model differs from a policy in two ways:

  1. It describes the desired behavior of a computer system's mechanisms, not that of the larger system that includes people.

  2. It is precisely stated in formal language that resolves the ambiguities of English and makes it possible, at least in principle, to give a mathematical proof that a system satisfies the model.

Two models are in wide use. One, based on the DOD computer security policy, is the flow model; it supports a certain kind of confidentiality policy. The other, based on the familiar idea of stationing a guard at an entrance, is the access control model; it supports a variety of confidentiality, integrity, and accountability policies. There are no models that support availability policies.

Flow Model

The flow model is derived from the DOD computer security policy described above. In this model (Denning, 1976) each piece of data in the system visible to a user or an application program is held in a container called an object. Each object has an associated security level. An object's level indicates the security level of the data it contains. Data in one object is allowed to affect another object only if the source object's level is lower than or equal to the destination object's level. All the data within a single object have the same level and hence can be manipulated freely.

The flow model ensures that information at a given security level flows only to an equal or higher level. Data is not the same as information; for example, an encrypted message contains data, but it conveys no information unless one knows the encryption key or can break the encryption system. Unfortunately, data is all the computer can understand. By preventing an object at one level from being
affected in any way by data that is not at an equal or lower level, the flow model ensures that information can flow only to an equal or higher level inside the computer system. It does this very conservatively and thus forbids many actions that would not in fact cause any information to flow improperly.

A more complicated version of the flow model (which is actually the basis of the rules in the Orange Book) separates objects into active subjects that can initiate operations and passive objects that simply contain data, such as a file, a piece of paper, or a display screen. Data can flow only between an object and a subject; flow from object to subject is called a read operation, and flow from subject to object is called a write operation. Now the rules are that a subject can only read from an object at an equal or lower level, and can only write to an object at an equal or higher level.
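
These two rules can be checked mechanically. The fragment below is purely illustrative; levels are reduced to ordered numbers here, though the fuller ordering with categories from the earlier sketch could be substituted:

    # Illustrative only: the read and write rules of the flow model.
    def may_read(subject_level, object_level):
        """Flow from object to subject: allowed only from an equal or lower level."""
        return object_level <= subject_level

    def may_write(subject_level, object_level):
        """Flow from subject to object: allowed only to an equal or higher level."""
        return subject_level <= object_level

    CONFIDENTIAL, SECRET = 1, 2
    # A secret subject may read a confidential object but may not write to it,
    # which is what keeps high-level information from flowing downward.
    assert may_read(SECRET, CONFIDENTIAL) and not may_write(SECRET, CONFIDENTIAL)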

Not all possible flows in a system look like read and write operations. Because the system is sharing resources among objects at different levels, it is possible for information to flow on what are known as covert channels (Lampson, 1973; IEEE, 1990a). For example, a high-level subject might be able to send a little information to a low-level subject by using up all the disk space if it learns that a surprise attack is scheduled for next week. When the low-level subject finds itself unable to write a file, it has learned about the attack (or at least received a hint). To fully realize the intended purpose of a flow model, it is necessary to identify and attempt to close all the covert channels, although total avoidance of covert channels is generally impossible due to the need to share resources.

To fit this model of a computer system into the real world, it is necessary to account for people. A person is cleared to some level of permitted access. When he identifies himself to the system as a user present at some terminal, he can set the terminal's level to any equal or lower level. This ensures that the user will never see information at a higher level than his clearance allows. If the user sets the terminal level lower than the level of his clearance, he is trusted not to take high-level information out of his head and introduce it into the system.

Although not logically required, the flow model policy has generally been viewed as mandatory; neither users nor programs in a system can break the flow rule or change levels. No real system can strictly follow this rule, since procedures are always needed for declassifying data, allocating resources, and introducing new users, for example. The access control model is used for these purposes, among others.

Access Control Model

The access control model is based on the idea of stationing a guard
in front of a valuable resource to control who has access to it. This model organizes the system into

  • Objects: entities that respond to operations by changing their state, providing information about their state, or both;

  • Subjects: active objects that can perform operations on objects; and

  • Operations: the way that subjects interact with objects.

The objects are the resources being protected; an object might be a document, a terminal, or a rocket. A set of rules specifies, for each object and each subject, what operations that subject is allowed to perform on that object. A reference monitor acts as the guard to ensure that the rules are followed (Lampson, 1985). An example of a set of access rules follows:

Subject          Operation             Object
Smith            Read file             "1990 pay raises"
White            Send "Hello"          Terminal 23
Process 1274     Rewind                Tape unit 7
Black            Fire three rounds     Bow gun
Jones            Pay invoice 432567    Account Q34

There are many ways to express the access rules. The two most popular are to attach to each subject a list of the objects it can access (a capability list), or to attach to each object a list of the subjects that can access it (an access control list). Each list also identifies the operations that are allowed. Most systems use some combination of these approaches.

Usually the access rules do not mention each operation separately. Instead they define a smaller number of "rights" (often called permissions)—for example, read, write, and search—and grant some set of rights to each (subject, object) pair. Each operation in turn requires some set of rights. In this way a number of different operations, all requiring the right to read, can read information from an object. For example, if the object is a text file, the right to read may be required for such operations as reading a line, counting the number of words, and listing all the misspelled words.
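
A minimal sketch of this arrangement (in Python; the rights, operations, and access control list entries are hypothetical) might look as follows:

    # Each operation requires some set of rights; the ACL grants rights to subjects.
    OPERATION_RIGHTS = {
        "read line": {"read"},
        "count words": {"read"},
        "append line": {"write"},
    }

    acl = {"Smith": {"read", "write"}, "White": {"read"}}   # hypothetical ACL for one text file

    def allowed(subject, operation):
        """True if the subject holds every right the operation requires."""
        return OPERATION_RIGHTS[operation] <= acl.get(subject, set())

    assert allowed("White", "count words")       # any operation needing only "read" succeeds
    assert not allowed("White", "append line")   # White has not been granted "write"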

One operation that can be done on an object is to change which subjects can access the object. There are many ways to exercise this control, depending on what a particular policy is. When a discretionary policy applies, for each object an "owner" or principal is identified who can decide without any restrictions who can do what to the object. When a mandatory policy applies, the owner can make these
decisions only within certain limits. For example, a mandatory flow policy allows only a security officer to change the security level of an object, and the flow model rules limit access. The principal controlling the object can usually apply further limits at his discretion.

The access control model leaves open what the subjects are. Most commonly, subjects are users, and any active entity in the system is treated as acting on behalf of some user. In some systems a program can be a subject in its own right. This adds a great deal of flexibility, because the program can implement new objects using existing ones to which it has access. Such a program is called a protected subsystem; it runs as a subject different from the principal invoking it, usually one that can access more objects. The security services used to support creation of protected subsystems also may be used to confine suspected Trojan horses or viruses, thus limiting the potential for damage from such programs. This can be done by running a suspect program as a subject that is different from the principal invoking it, in this case a subject that can access fewer objects. Unfortunately, such facilities have not been available in most operating systems.

The access control model can be used to realize both secrecy and integrity policies, the former by controlling read operations and the latter by controlling write operations, and others that change the state. This model supports accountability, using the simple notion that every time an operation is invoked, the identity of the subject and the object as well as the operation should be recorded in an audit trail that can later be examined. Difficulties in making practical use of such information may arise owing to the large size of an audit trail.

Services

Basic security services are used to build systems satisfying the policies discussed above. These services directly support the access control model, which in turn can be used to support nearly all the policies discussed. They are as follows:

  • Authentication: determining who is responsible for a given request or statement,3 whether it is "The loan rate is 10.3 percent," or "Read file 'Memo to Mike,'" or "Launch the rocket."

  • Authorization: determining who is trusted for a given purpose, whether it is establishing a loan rate, reading a file, or launching a rocket.

  • Auditing: recording each operation that is invoked along with the identity of the subject and object, and later examining these records.

Given these services, it is easy to implement the access control
model. Whenever an operation is invoked, the reference monitor uses authentication to find out who is requesting the operation and then uses authorization to find out whether the requester is trusted for that operation. If so, the reference monitor allows the operation to proceed; otherwise, it cancels the operation. In either case, it uses auditing to record the event.
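
Taken together, the three services yield the reference-monitor check just described. The following schematic sketch (in Python, with the authenticate, authorize, audit, and perform functions standing in for whatever mechanisms a particular system supplies) is illustrative rather than a prescription:

    from dataclasses import dataclass

    @dataclass
    class Request:
        channel: str      # the channel on which the request arrived
        operation: str
        obj: str

    def reference_monitor(request, authenticate, authorize, audit, perform):
        """Mediate one request: identify the requester, check the rules, record the event."""
        principal = authenticate(request.channel)                         # authentication
        permitted = authorize(principal, request.operation, request.obj)  # authorization
        audit(principal, request.operation, request.obj, permitted)       # auditing, either way
        if not permitted:
            raise PermissionError(f"{principal} may not {request.operation} {request.obj}")
        return perform(request)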

Authentication

To answer the question, Who is responsible for this statement?, it is necessary to know what sort of entities can be responsible for statements; we call these entities principals. It is also necessary to have a way of naming the principals that is consistent between authentication and authorization, so that the result of authenticating a statement is meaningful for authorization.

A principal is a (human) user or a (computer) system. A user is a person, but a system requires some explanation. A system comprises hardware (e.g., a computer) and perhaps software (e.g., an operating system). A system can depend on another system; for example, a user-query process depends on a database management system, which depends on an operating system, which depends on a computer. As part of authenticating a system, it may be necessary to verify that the systems it depends on are trusted.

In order to express trust in a principal (e.g., to specify who can launch the rocket), one must be able to give the principal a name. The name must be independent of any information (such as passwords or encryption keys) that may change without any change in the principal itself. Also, it must be meaningful, both when access is granted and later when the trust being granted is reviewed to see whether that trust is still warranted. A naming system must be:

  • Complete: every principal has a name; it is difficult or impossible to express trust in a nameless principal.

  • Unambiguous: the same name does not refer to two different principals; otherwise it is impossible to know who is being trusted.

  • Secure: it is easy to tell which other principals must be trusted in order to authenticate a statement from a named principal.

In a large system, naming must be decentralized to be manageable. Furthermore, it is neither possible nor wise to rely on a single principal that is trusted by every part of the system. Since systems as well as users can be principals, systems as well as users must be able to have names.

One way to organize a decentralized naming system is as a hierarchy,
following the model of a tree-structured file system like the one in Unix or MS-DOS, two popular operating systems. The Consultative Committee on International Telephony and Telegraphy (CCITT) X.500 standard for naming defines such a hierarchy (CCITT, 1989b); it is meant to be suitable for naming every principal in the world. In this scheme an individual can have a name like "US/GOV/State/James_Baker." Such a naming system can be complete; there is no shortage of names, and registration can be made as convenient as desired. It is unambiguous provided each directory is unambiguous.

The CCITT also defines a standard (X.509) for authenticating a principal with an X.500 name; the section on authentication techniques below discusses how this is done (CCITT, 1989b). Note that an X.509 authentication may involve more than one agent. For example, agent A may authenticate agent B, who in turn authenticates the principal.

A remaining issue is exactly who should be trusted to authenticate a given name. In the X.509 authentication framework, typically, principals trust agents close to them in the hierarchy. A principal is less likely to trust agents farther from it in the hierarchy, whether those agents are above, below, or in entirely different branches of the tree. If a system at one point in the tree wants to authenticate a principal elsewhere, and if there is no one agent that can authenticate both, then the system must establish a chain of trust through multiple agents.4
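
One simple way to see where such a chain must meet is to look at the names themselves: the chain of agents runs through the directories between each party and the point at which the two names' paths converge. The sketch below is a toy illustration (the Defense-branch name is invented), not a description of the X.509 mechanism:

    def nearest_common_directory(name_a, name_b):
        """The directory closest to both hierarchical names; trust chains meet at or above it."""
        parts_a, parts_b = name_a.split("/"), name_b.split("/")
        prefix = []
        for a, b in zip(parts_a, parts_b):
            if a != b:
                break
            prefix.append(a)
        return "/".join(prefix)

    # A host in a (hypothetical) Defense branch authenticating a State Department
    # principal must rely on agents whose chain meets at the shared directory US/GOV.
    assert nearest_common_directory("US/GOV/Defense/Host17",
                                    "US/GOV/State/James_Baker") == "US/GOV"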

Often a principal wants to act with less than its full authority, in order to reduce the damage that can be done in case of a mistake. For this purpose it is convenient to define additional principals, called roles, to provide a way of authorizing a principal to play a role, and to allow the principal to make a statement using any role for which it is authorized. For example, a system administrator might have a "normal" role and a "powerful" role. The authentication service then reports that a statement was made by a role rather than by the original principal, after verifying both that the statement came from the original principal and that he was authorized to play that role. (It is critical to ensure that the use of such roles does not prevent auditing measures from identifying the individual who is ultimately responsible for actions.)
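
In such a scheme a role is simply another principal, together with a check that the requesting principal is permitted to assume it. A minimal sketch (the role names are invented) follows:

    # Hypothetical role assignments; a role acts as a principal in its own right.
    ROLE_MEMBERS = {"administrator-normal": {"Smith"}, "administrator-powerful": {"Smith"}}

    def authenticate_role(principal, requested_role):
        """Report the role as the responsible principal after checking the assignment."""
        if principal in ROLE_MEMBERS.get(requested_role, set()):
            return requested_role       # audit records should still identify `principal`
        raise PermissionError(f"{principal} is not authorized to assume {requested_role}")

    assert authenticate_role("Smith", "administrator-normal") == "administrator-normal"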

In general, trust is not simply a matter of trusting a single user or system principal. It is necessary to trust the (hardware and software) systems through which that user is communicating. For example, suppose that a user Alice running on a workstation B is entering a transaction on a transaction server C, which in turn makes a network access to a database machine D. D's authorization decision may need to take account not just of Alice, but also of the fact that B and C are involved and must be trusted. Some of these issues do not arise in a centralized system, where a single authority is responsible for all the
authentication and provides the resources for all the applications, but even in a centralized system an operation on a file, for example, is often invoked through an application, such as a word-processing program, which is not part of the base system and perhaps should not be trusted in the same way.

Such rules may be expressed by introducing new, compound principals, such as "Smith ON Workstation 4," to represent the user acting through intermediaries. Then it becomes possible to express trust in the compound principal exactly as in any other. The name "Workstation 4" identifies the intermediate system, just as the name "Smith" identifies the user.

How do we authenticate such principals? When Workstation 4 says, "Smith wants to read the file 'pay raises,'" how do we know (1) that the request is really from that workstation and not somewhere else and (2) that it is really Smith acting through Workstation 4, and not Jones or someone else?

We answer the first question by authenticating the intermediate systems as well as the users. If the resource and the intermediate are on the same machine, the operating system can authenticate the intermediate to the resource. If not, we use the cryptographic methods discussed in the section below titled "Secure Channels."

To answer the second question, we need some evidence that Smith has delegated to Workstation 4 the authority to act on his behalf. We cannot ask for direct evidence that Smith asked to read the file—if we could have that, then he would not be acting through the workstation. We certainly cannot take the workstation's word for it; then it could act for Smith no matter who is really there. But we can demand a statement that we believe is from Smith, asserting that Workstation 4 can speak for him (probably for some limited time, and perhaps only for some limited purposes). Given that Smith says, "Workstation 4 can act for me," and Workstation 4 says, "Smith says to read the file 'pay raises,'" then we can believe that Smith on Workstation 4 says, "Read the file 'pay raises.'"
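
The same reasoning can be rendered schematically (the records and names below are invented for illustration and do not describe any particular system's protocol):

    # Delegations the system currently believes, i.e., statements of the form
    # "<user> says <intermediary> can act for me."
    DELEGATIONS = {("Smith", "Workstation 4")}

    def attribute(intermediary, claimed_user, request):
        """Decide which principal a relayed request should be attributed to."""
        if (claimed_user, intermediary) in DELEGATIONS:
            return f"{claimed_user} ON {intermediary}", request    # a compound principal
        raise PermissionError(f"{intermediary} has no authority to speak for {claimed_user}")

    # Workstation 4 says: "Smith says to read the file 'pay raises'."
    principal, request = attribute("Workstation 4", "Smith", "read 'pay raises'")
    assert principal == "Smith ON Workstation 4"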

There is another authentication question lurking here, namely how do we know that the software in the workstation is correctly representing Smith's intended action? Unless the application program that Smith is using is itself trusted, it is possible that the action Smith has requested has been transformed by this program into another action that Smith is authorized to execute. Such might be the case if a virus were to infect the application Smith is running on his workstation. This aspect of the authentication problem can be addressed through the use of trusted application software and through integrity mechanisms as discussed in the section "Secure Channels" below.

To authenticate the delegation statement from Smith, "Workstation
4 can act for me," we need to employ the cryptographic methods described below.

The basic service provided by authentication is information that a statement was made by some principal. An aggressive form of authentication, called nonrepudiation, can be accomplished by a digital analog of notarizing, in which a trusted authority records the signature and the time it was made (see "Digital Signatures" in Appendix B).

Authorization

Authorization determines who is trusted for a given purpose, usually for doing some operation on an object. More precisely, it determines whether a particular principal, who has been authenticated as the source of a request to do an operation on an object, is trusted for that operation on that object. (This object-oriented view of authorization also encompasses the more traditional implementations of file protection, and so forth.)

Authorization is customarily implemented by associating with the object an access control list (ACL) that tells which principals are authorized for which operations. The ACL also may refer to attributes of the principals, such as security clearances. The authorization service takes a principal, an ACL, and an operation or a set of rights, and returns "yes" or "no." This way of providing the service leaves the object free to store the ACL in any convenient place and to make its own decisions about how different parts of the object are protected. A database object, for instance, may wish to use different ACLs for different fields, so that salary information is protected by one ACL and address information by another, less restrictive one.

Often several principals have the same rights to access a number of objects. It is both expensive and unreliable to repeat the entire set of principals for each object. Instead, it is convenient to define a group of principals, give it a name, and give the group access to each of the objects. For instance, a company might define the group "executive committee." The group thus acts as a principal for the purpose of authorization, but the authorization service is responsible for verifying that the principal actually making the request is a member of the group.
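
In terms of the earlier authorization sketch, a group adds only a membership check; the fragment below is illustrative, with invented names:

    GROUPS = {"executive committee": {"Smith", "Jones", "White"}}   # hypothetical membership list

    def authorized(principal, acl_entries):
        """True if the principal, or a group to which it belongs, appears on the ACL."""
        if principal in acl_entries:
            return True
        return any(principal in GROUPS.get(entry, set()) for entry in acl_entries)

    salary_acl = {"executive committee"}         # hypothetical ACL protecting salary fields
    assert authorized("Jones", salary_acl)       # a member of the executive committee
    assert not authorized("Black", salary_acl)   # neither named directly nor a group member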

In this section authorization has been discussed mainly from the viewpoint of an object, which must decide whether a principal is authorized to invoke a certain operation. In general, however, the subject doing the operation may also need to verify that the system implementing the object is authorized to do so. For instance, when logging in over a telephone line, a user may want to be sure that he
has actually reached the intended system and not some other, hostile system that may try to spoof him. This process is usually called mutual authentication, although it actually involves authorization as well: statements from the object must be authenticated as coming from the system that implements the object, and the subject must have access rules to decide whether that system is authorized to do so.

Auditing

Given the reality that every computer system can be compromised from within, and that many systems can also be compromised if surreptitious access can be gained, accountability is a vital last resort. Accountability policies were discussed above—and the point was made that, for example, all significant events should be recorded and the recording mechanisms should be nonsubvertible. Auditing services support these policies. Usually they are closely tied to authentication and authorization, so that every authentication is recorded, as is every attempted access, whether authorized or not.

In addition to establishing accountability, an audit trail may also reveal suspicious patterns of access and so enable detection of improper behavior by both legitimate users and masqueraders. However, limitations to this use of audit information often restrict its use to detecting unsophisticated intruders. In practice, sophisticated intruders have been able to circumvent audit trails in the course of penetrating systems. Techniques such as the use of write-once optical disks, cryptographic protection, and remote storage of audit trails can help counter some of these attacks on the audit database itself, but these measures do not address all the vulnerabilities of audit mechanisms. Even in circumstances where audit trail information could be used to detect penetration attempts, a problem arises in processing and interpreting the audit data. Both statistical and expert-system approaches are currently being tried, but their utility is as yet unproven (Lunt, 1988).

IMPLEMENTATION: THE TRUSTED COMPUTING BASE

This section explores how to build a system that meets the kind of security specifications discussed earlier, and how to establish confidence that it does meet them. Systems are built of components; a system also depends on its components. This means that the components have to work (i.e., meet their specifications) for the system to work
(i.e., meet its specification). Note, however, that not all components of a system have to work properly in order for a given aspect of the system to function properly. Thus security properties need not depend on all components of a system working correctly; rather, only the security-relevant components must function properly.

Each component is itself a system with specifications and implementation, and so the concept of a system applies at all levels. For example, a distributed system depends on a network, workstations, servers, mainframes, printers, and so forth. A workstation depends on a display, keyboard, disk, processor, network interface, operating system, and, for example, a spreadsheet application. A processor depends on integrated circuit chips, wires, circuit boards, and connectors. A spreadsheet depends on display routines, an arithmetic library, and a macro language processor, and so it goes down to the basic operations of the programming language, which in turn depend on the basic operations of the machine, which in turn depend on changes in the state of the chips and wires, for example. A chip depends on adders and memory cells, and so it goes down to the electrons and photons, whose behavior is described by quantum electrodynamics.

A component must be trusted if it has to work for the system to meet its security specification. The set of trusted hardware and software components is called the trusted computing base (TCB). If a component is in the TCB, so is every component that it depends on, because if they do not work, it is not guaranteed to work either. As was established previously, the concern in this discussion is security, and so the trusted components need to be trusted only to support security in this context.

Note that a system depends on more than its hardware and software. The physical environment and the people who use, operate, and manage it are also components of the system. Some of them must also be trusted. For example, if the power fails, a system may stop providing service; thus the power source must be trusted for availability. Another example: every system has security officers who set security levels, authorize users, and so on; they must be trusted to do this properly. Yet another: the system may disclose information only to authorized users, and they must be trusted not to publish the information in the newspaper. Thus when trust is assessed, the security of the entire system must be evaluated, using the basic principles of analyzing dependencies, minimizing the number and complexity of trusted components, and carefully analyzing each one.

From a TCB perspective, three key aspects of implementing a secure system are the following (derived from Anderson, 1972):

  1. Keeping the TCB as small and simple as possible to make it amenable to detailed analysis;

  2. Ensuring that the TCB mediates all accesses to data and programs that are to be protected; that is, it must not be possible to bypass the TCB; and

  3. Making certain that the TCB itself cannot be tampered with, that is, that programs outside the TCB cannot maliciously modify the TCB software or data structures.

The basic method for keeping the TCB small is to separate out all the nonsecurity functions into untrusted components. For example, an elevator has a very simple braking mechanism whose only job is to stop the elevator if it starts to move at a speed faster than a fixed maximum, no matter what else goes wrong. The rest of the elevator control mechanism may be very complex, involving scheduling of several elevators or responding to requests from various floors, but none of this must be trusted for safety, because the braking mechanism does not depend on anything else. In this case, the braking mechanism is called the safety kernel.

A purchasing system may also be used to illustrate the relative smallness of a TCB. A large and complicated word processor may be used to prepare orders, but the TCB can be limited to a simple program that displays the completed order and asks the user to confirm it. An even more complicated database system may be used to find the order that corresponds to an arriving shipment, but the TCB can be limited to a simple program that displays the received order and a proposed payment authorization and asks the user to confirm them. If the order and authorization can be digitally signed (using methods described below), even the components that store them need not be in the TCB.

The basic method for finding dependencies, relevant to ensuring TCB access to protected data and programs and to making the TCB tamperproof, is careful analysis of how each step in building and executing a system is carried out. Ideally assurance for each system is given by a formal mathematical proof that the system satisfies its specification provided all its components do. In practice such proofs are only sometimes feasible, because it is hard to formalize the specifications and to carry out the proofs. Moreover, every such proof is conditioned on the assumption that the components work and have not been tampered with. (See the Chapter 4 section "Formal Specification and Verification" for a description of the state of the art.) In practice, assurance is also garnered by relying on components that have worked for many people, trusting implementors not to be malicious, carefully writing specifications for components, and carefully examining implementations for dependencies and errors. Because there are so
many bases to cover, and because every base is critical to assurance, there are bound to be mistakes.

Hence two other important aspects of assurance are redundant checks like the security perimeters discussed below, and methods, such as audit trails and backup databases, for recovering from failures.

The main components of a TCB are discussed below in the sections headed "Computing" and "Communications." This division reflects the fact that a modern distributed system is made up of computers that can be analyzed individually but that must communicate with each other quite differently from the way each communicates internally.

Computing

The computing part of the TCB includes the application programs, the operating system that they depend on, and the hardware (processing and storage) that both depend on.

Hardware

Since software consists of instructions that must be executed by hardware, the hardware must be part of the TCB. The hardware is depended on to isolate the TCB from the untrusted parts of the system. To do this, it suffices for the hardware to provide for a "user state" in which a program can access only the ordinary computing instructions and restricted portions of the memory, as well as a "supervisor state" in which a program can access every part of the hardware. Most contemporary computers above the level of personal computers tend to incorporate these facilities. There is no strict requirement for fancier hardware features, although they may improve performance in some architectures.

The only essential, then, is to have simple hardware that is trustworthy. For most purposes the ordinary care that competent engineers take to make the hardware work is good enough. It is possible to get higher assurance by using formal methods to design and verify the hardware; this has been done in several projects, of which the VIPER verified microprocessor chip (for a detailed description see Appendix B) is an example (Cullyer, 1989). There is a mechanically checked proof to show that the VIPER chip's gate-level design implements its specification. VIPER pays the usual price for high assurance: it is several times slower than ordinary microprocessors built at the same time.

Another approach to using hardware to support high assurance is to provide a separate, simple processor with specialized software to implement the basic access control services. If this hardware controls
the computer's memory access mechanism and forces all input/output data to be encrypted, that is enough to keep the rest of the hardware and software out of the TCB. (This requires that components upstream of the security hardware do not share information across security classes.) This approach has been pursued in the LOCK project, which is described in detail in Appendix B.

Unlike the other components of a computing system, hardware is physical and has physical interactions with the environment. For instance, someone can open a cabinet containing a computer and replace one of the circuit boards. If this is done with malicious intent, obviously all bets are off about the security of the computer. It follows that physical security of the hardware must be assured. There are less obvious physical threats. In particular, computer hardware involves changing electric and magnetic fields, and it therefore generates electromagnetic radiation (often called emanations)5 as a byproduct of normal operation. Because this radiation can be a way for information to be disclosed, ensuring confidentiality may require that it be controlled. Similarly, radiation from the environment can affect the hardware.

Operating System

The job of an operating system is to share the hardware among application programs and to provide generic security services so that most applications do not need to be part of the TCB. This layering of security services is useful because it keeps the TCB small, since there is only one operating system for many applications. Within the operating system itself the idea of layering or partitioning can be used to divide the operating system into a kernel that is part of the TCB and into other components that are not (Gasser, 1988). How to do this is well known.

The operating system provides an authorization service by controlling subjects' (processes) accesses to objects (files and communication devices such as terminals). The operating system can enforce various security models for these objects, which may be enough to satisfy the security policy. In particular it can enforce a flow model, which is sufficient for the DOD confidentiality policy, as long as it is able to keep track of security levels at the coarse granularity of whole files.

To enforce an integrity policy like the purchasing system policy described above, there must be some trusted applications to handle functions like approving orders. The operating system must be able to treat these applications as principals, so that they can access objects that the untrusted applications running on behalf of the same user cannot access. Such applications are protected subsystems.

Applications and the Problem of Malicious Code

Ideally applications should not be part of the TCB, since they are numerous, are often large and complicated, and tend to come from a variety of sources that are difficult to police. Unfortunately, attempts to build applications, such as electronic mail or databases that can handle multiple levels of classified information, on top of an operating system that enforces flow have had limited success. It is necessary to use a different operating system object for information at each security level, and often these objects are large and expensive. And to implement an integrity policy, it is always necessary to trust some application code. Again, it seems best to apply the kernel method, putting the code that must be trusted into separate components that are protected subsystems. The operating system must support this approach (Honeywell, 1985–1988).

In most systems any application program running on behalf of a user has full access to all that the user can access. This is considered acceptable on the assumption that the program, although it may not be trusted to always do the right thing, is unlikely to do an intolerable amount of damage. But suppose that the program does not just do the wrong thing, but is actively malicious? Such a program, which appears to do something useful but has hidden within it the ability to cause serious damage, is called a Trojan horse. When a Trojan horse runs, it can do a great deal of damage: delete files, corrupt data, send a message with the user's secrets to another machine, disrupt the operation of the host, waste machine resources, and so forth. There are many places to hide a Trojan horse: the operating system, an executable program, a shell command file, or a macro in a spreadsheet or word-processing program are only a few of the possibilities. Moreover, a compiler or other program development tool with a Trojan horse can insert secondary Trojan horses into the programs it generates.

The danger is even greater if the Trojan horse can also make copies of itself. Such a program is called a virus. Because it can spread quickly in a computer network or by copying disks, a virus can be a serious threat ("Viruses," in Appendix B, gives more details and describes countermeasures). Several examples of viruses have infected thousands of machines.

Communications

Methods for dealing with communications and security for distributed systems are less well developed than those for stand-alone centralized systems; distributed systems are both newer and more complex. There
is no consensus about methods to provide security for distributed systems, but a TCB for a distributed system can be built out of suitable trusted elements running on the various machines that the system comprises. The committee believes that distributed systems are now well enough understood that this approach to securing such systems should also become recognized as effective and appropriate in achieving security.

A TCB for communications has two important aspects: secure channels for facilitating communication among the various parts of a system, and security perimeters for restricting communication between one part of a system and the rest.

Secure Channels

The access control model describes the working of a system in terms of requests for operations from a subject to an object and corresponding responses, whether the system is a single computer or a distributed system. It is useful to explore the topic of secure communication separately from the discussions above of computers, subjects, or objects so as to better delineate the fundamental concerns that underlie secure channels in a broad range of computing contexts.

A channel is a path by which two or more principals communicate. A secure channel may be a physically protected path (e.g., a physical wire, a disk drive and associated disk, or memory protected by hardware and an operating system) or a logical path secured by encryption. A channel need not operate in real time: a message sent on a channel may be read much later, for instance, if it is stored on a disk. A secure channel provides integrity (the receiver can know who originally created a received message and that it arrived unmodified), confidentiality (the sender can know who can read a message that is sent), or both.6 The process of finding out who can send or receive on a secure channel is called authenticating the channel; once a channel has been authenticated, statements and requests arriving on it are also authenticated.

Typically the secure channels between subjects and objects inside a computer are physically protected: the wires in the computer are assumed to be secure, and the operating system protects the paths by which programs communicate with each other, using methods described above for implementing TCBs. This is one aspect of a broader point: every component of a physically protected channel is part of the TCB and must meet a security specification. If a wire connects two computers, it may be difficult to secure physically, especially if the computers are in different buildings.

To keep wires out of the TCB we resort to encryption, which makes it possible to have a channel whose security does not depend on the security of any wires or intermediate systems through which the messages are passed. Encryption works by computing from the data of the original message, called the clear text or plaintext, some different data, called the ciphertext, which is actually transmitted. A corresponding decryption operation at the receiver takes the ciphertext and computes the original plaintext. In a good encryption scheme the rules for encryption and decryption are simple, but computing the plaintext from the ciphertext, or vice versa, without knowing the rules is too difficult to be practical. This should be true even for one who already knows a great deal of other plaintext and its corresponding ciphertext.

Encryption thus provides a channel with confidentiality and integrity. All the parties that know the encryption rules are possible senders, and those that know the decryption rules are possible receivers. To obtain many secure channels, the rules are divided into two parts, the algorithm and the key. The algorithm is fixed, and everyone knows it. The key can be expressed as a reasonably short sequence of characters, a few hundred at most; it is different for each secure channel and is known only to the possible senders or receivers. It must be fairly easy to generate new keys that cannot be easily guessed.
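
By way of illustration, the following sketch (in Python, which of course postdates this report) shows the interface this algorithm/key split implies: the algorithm is public and fixed, each channel gets its own freshly generated key, and a key for one channel is useless on another. It assumes the third-party `cryptography` package and its Fernet symmetric scheme purely for convenience; any symmetric cipher would serve.

```python
# A minimal sketch of the algorithm/key split, assuming the third-party
# "cryptography" package is available; Fernet stands in for any symmetric cipher.
from cryptography.fernet import Fernet

# The algorithm (Fernet) is fixed and public; only the keys are secret.
# Each secure channel gets its own freshly generated key.
key_channel_1 = Fernet.generate_key()
key_channel_2 = Fernet.generate_key()

channel_1 = Fernet(key_channel_1)

ciphertext = channel_1.encrypt(b"plaintext for channel 1")   # sender side
plaintext = channel_1.decrypt(ciphertext)                     # receiver side
assert plaintext == b"plaintext for channel 1"

# A key generated for one channel is useless on another:
# Fernet(key_channel_2).decrypt(ciphertext) would raise InvalidToken.
```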

The two kinds of encryption algorithms are described below. It is important to have some understanding of the technical issues involved in order to appreciate the policy debate about controls that limit the export of popular forms of encryption (Chapter 6) and influence what is actually available on the market.7

  1. Symmetric (secret or private) key encryption, in which the same key is used to send and receive (i.e., to encrypt and decrypt). The key must be known only to the possible senders and receivers. Decryption of a message using the secret key shared by a receiver and a sender can provide integrity for the receiver, assuming the use of suitable error-detection measures. The Data Encryption Standard (DES) is the most widely used, published symmetric encryption algorithm (NBS, 1977).

  2. Asymmetric (public) key encryption, in which different keys are used to encrypt and decrypt. The key used to encrypt a message for confidentiality in asymmetric encryption is a key made publicly known by the intended receiver and identified as being associated with him, but the corresponding key used to decrypt the message is known only to that receiver. Conversely, a key used to encrypt a message for integrity (to digitally sign the message) in asymmetric encryption is known only to the sender, but the corresponding key used to decrypt the message (validate the signature) must be publicly known and associated with that sender. Thus the security services to ensure confidentiality and integrity are provided by different keys in asymmetric encryption. The Rivest-Shamir-Adleman (RSA) algorithm is the most widely used form of public-key encryption (Rivest et al., 1978).

Known algorithms for asymmetric encryption run at relatively slow rates (a few thousand bits per second at most), whereas it is possible to buy hardware that implements DES at rates of up to 45 megabits per second, and an implementation at a rate of 1 gigabit per second is feasible with current technology. A practical design therefore uses symmetric encryption for handling bulk data and uses asymmetric encryption only for distributing symmetric keys and for a few other special purposes. Appendix B's "Cryptography" section gives details on encryption.
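
A hybrid design of this kind might look like the sketch below, which uses modern algorithms from the third-party Python `cryptography` package (RSA-OAEP and the Fernet symmetric scheme) as stand-ins for the RSA/DES combination discussed here: the asymmetric keys move only a short symmetric key, and the much faster symmetric cipher carries the bulk data.

```python
# A sketch of the hybrid design: asymmetric encryption distributes a symmetric
# key; the symmetric cipher carries the bulk data. Assumes the third-party
# "cryptography" package; the algorithms are modern stand-ins, not the report's.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Receiver publishes an asymmetric (public) key; the private half stays secret.
receiver_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
receiver_public = receiver_private.public_key()

# Sender: generate a fresh symmetric key, encrypt the bulk data with it, and
# encrypt ("wrap") only the short symmetric key with the receiver's public key.
bulk_key = Fernet.generate_key()
bulk_ciphertext = Fernet(bulk_key).encrypt(b"a long message ... " * 1000)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = receiver_public.encrypt(bulk_key, oaep)

# Receiver: unwrap the symmetric key, then decrypt the bulk data.
recovered_key = receiver_private.decrypt(wrapped_key, oaep)
bulk_plaintext = Fernet(recovered_key).decrypt(bulk_ciphertext)
```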

A digital signature provides a secure channel for sending a message to many receivers who may see the message long after it is sent and who are not necessarily known to the sender. Digital signatures may have many important applications in making a TCB smaller. For instance, in the purchasing system described above, if an approved order is signed digitally, it can be stored outside the TCB, and the payment component can still trust it. See the Appendix B section headed "Digital Signatures" for a more careful definition and some discussion of how to implement digital signatures.
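
As a rough illustration of the purchasing example, the sketch below signs a hypothetical order with RSA-PSS (again from the third-party `cryptography` package, standing in for whatever signature scheme a real system would use); the signed order could then be stored outside the TCB and verified later by the payment component.

```python
# A sketch of the purchasing example: the approval component signs an order,
# which can be stored outside the TCB; the payment component verifies it later.
# The order contents and component names are hypothetical.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

approver_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
approver_public = approver_private.public_key()   # known to the payment component

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

order = b"order 1234: 10 widgets, approved"       # hypothetical order contents
signature = approver_private.sign(order, pss, hashes.SHA256())

# Later, possibly on another machine, the payment component checks the order.
try:
    approver_public.verify(signature, order, pss, hashes.SHA256())
    print("order is authentic and unmodified")
except InvalidSignature:
    print("order has been forged or tampered with")
```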

Authenticating Channels

Given a secure channel, it is still necessary to find out who is at the other end, that is, to authenticate it. The first step is to authenticate a channel from one computer system to another. The simplest way to do this is to ask for a password. Then if there is a way to match up the password with a principal, authentication is complete. The trouble with a password is that the receiver can misrepresent himself as the sender to anyone else who trusts the same password. As with symmetric encryption, this means that a separate password is needed for every system that one trusts differently. Furthermore, anyone who can read (or eavesdrop on) the channel also can impersonate the sender. Popular computer network media such as Ethernet or token rings are vulnerable to such abuses.

The need for a principal to use a unique symmetric key to authenticate himself to every different system can be addressed by using a trusted third party to act as an intermediary in the cryptographic authentication process, a concept that has been understood for some time (Branstad, 1973; Kent, 1976; Needham and Schroeder, 1978). This approach, using symmetric encryption to achieve authentication, is now embodied in the Kerberos authentication system (Miller et al., 1987; Steiner et al., 1988). However, the requirement that this technology imposes, namely the need to trust a third party with keys that may be used (directly or indirectly) to encrypt the principal's data, may have hampered its widespread adoption.
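
The core of the trusted-third-party idea can be sketched as follows. This is a deliberately stripped-down illustration, with hypothetical principal names and the Fernet cipher (from the third-party `cryptography` package) standing in for the symmetric algorithm; real systems such as Kerberos add nonces, timestamps, and ticket lifetimes that are omitted here.

```python
# A highly simplified sketch of the trusted third party: it shares a long-term
# key with each principal and hands out fresh session keys on request.
from cryptography.fernet import Fernet

# Long-term keys, each shared between the server and one principal (hypothetical names).
long_term = {"alice": Fernet.generate_key(), "fileserver": Fernet.generate_key()}

def issue_session_key(client, service):
    """Trusted third party: return the session key encrypted for the client,
    plus a 'ticket' (the same session key encrypted for the service)."""
    session_key = Fernet.generate_key()
    for_client = Fernet(long_term[client]).encrypt(session_key)
    ticket = Fernet(long_term[service]).encrypt(session_key)
    return for_client, ticket

# Alice asks the server for a key to talk to the file server.
for_client, ticket = issue_session_key("alice", "fileserver")

# Alice recovers the session key with her long-term key and forwards the ticket.
alice_session = Fernet(long_term["alice"]).decrypt(for_client)

# The file server recovers the same session key from the ticket.
server_session = Fernet(long_term["fileserver"]).decrypt(ticket)
assert alice_session == server_session   # both ends now share a channel key
```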

Both of these problems can be overcome by challenge-response authentication schemes. These schemes make it possible to prove that a secret is known without disclosing it to an eavesdropper. The simplest scheme to explain as an example is based on asymmetric encryption, although schemes based on the use of symmetric encryption (Kent et al., 1982) have been developed, and zero-knowledge techniques have been proposed (Chaum, 1983). The challenger finds out the public key of the principal being authenticated, chooses a random number, and sends it to the principal encrypted using both the challenger's private key and the principal's public key. The principal decrypts the challenge using his private key and the public key of the challenger, extracts the random number, and encrypts the number with his private key and the challenger's public key and sends back the result. The challenger decrypts the result using his private key and the principal's public key; if he gets back the original number, he knows that the principal must have done the encrypting.8
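
A toy version of such an exchange is sketched below using textbook RSA with tiny hard-coded primes. It is simplified relative to the scheme just described: the challenge is protected only with the principal's public key, and the extra layer in which the challenger also encrypts with its own private key is omitted. It is meant only to show how possession of a private key can be proved without revealing it; the numbers are far too small for real security.

```python
# A toy challenge-response sketch using textbook RSA (illustration only).
import secrets

def toy_rsa_keypair(p, q, e=17):
    """Return ((e, n) public, (d, n) private) from two small primes."""
    n, phi = p * q, (p - 1) * (q - 1)
    d = pow(e, -1, phi)            # modular inverse (Python 3.8+)
    return (e, n), (d, n)

def apply_key(key, m):
    """Textbook RSA: raise m to the key's exponent modulo n."""
    exp, n = key
    return pow(m, exp, n)

challenger_pub, challenger_priv = toy_rsa_keypair(61, 53)   # n = 3233
principal_pub, principal_priv = toy_rsa_keypair(89, 97)     # n = 8633

# Challenger: pick a random number and encrypt it with the principal's public key.
nonce = secrets.randbelow(3000)
challenge = apply_key(principal_pub, nonce)

# Principal: recover the nonce with the private key and return it
# encrypted under the challenger's public key.
recovered = apply_key(principal_priv, challenge)
response = apply_key(challenger_pub, recovered)

# Challenger: decrypting the response and finding the original number proves
# the principal holds the private key, without the secret crossing the channel.
assert apply_key(challenger_priv, response) == nonce
```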

How does the challenger learn the principal's public key? The CCITT X.509 standard defines a framework for authenticating a secure channel to a principal with an X.500 name; this is done by authenticating the principal's public key using certificates that are digitally signed. Such a certificate, signed by a trusted authority, gives a public key, K, and asserts that a message signed by K can be trusted to come from the principal. The standard does not define how other channels to the principal can be authenticated, but technology for doing this is well understood. An X.509 authentication may involve more than one agent. For example, agent A may authenticate agent B, who in turn authenticates the principal. (For a more thorough discussion of this sort of authentication, see X.509 (CCITT, 1989b) and subsequent papers that identify and correct a flaw in the X.509 three-way authentication protocol (e.g., Burrows et al., 1989).)

Challenge-response schemes solve the problem of authenticating one computer system to another. Authenticating a user is more difficult, since users are not good at doing encryption or remembering large, secret quantities. One can be authenticated by what one knows (a password), what one is (as characterized by biometrics), or what one has (a "smart card" or token).

The use of a password is the traditional method. Its drawbacks have already been explained and are discussed in more detail in the section titled "Passwords" in Appendix B.

Biometrics involves measuring some physical characteristics of a person—handwriting, fingerprints, or retinal patterns, for example—and transmitting this information to the system that is authenticating the person (Holmes et al., 1990). The problems are forgery and compromise. It may be easy to substitute a mold of someone else's finger, especially if the impersonator is not being watched. Alternatively, anyone who can bypass the physical reader and simply inject the bits derived from the biometric scanning can impersonate the person, a critical concern in a distributed system environment. Perhaps the greatest problem associated with biometric authentication technology to date has been the cost of equipping terminals and workstations with the input devices necessary for most of these techniques.9

By providing the user with a tiny computer that can be carried around and will act as an agent of authentication, a smart card or token reduces the problem of authenticating a user to the problem of authenticating a computer (NIST, 1988). A smart card fits into a special reader and communicates electrically with a system; a token has a keypad and display, and the user keys in a challenge, reads the response, and types it back to the system (see, for example, the product Racal Watchword). (At least one token authentication system (Security Dynamics' SecurID) relies on time as an implicit challenge, and thus the token used with this system requires no keypad.) A smart card or token is usually combined with a password to keep it from being easily used if it is lost or stolen; automatic teller machines require a card and a personal identification number (PIN) for the same reason.
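
One plausible way a token that uses time as an implicit challenge might work is sketched below: the token and the host share a secret, and each computes a short code from the current time window. This is an illustration of the general idea only, not the algorithm of any particular commercial product, and the shared secret shown is hypothetical.

```python
# A sketch of time as an implicit challenge: token and host share a secret and
# derive a short code from the current time window (illustration only).
import hmac, hashlib, time

shared_secret = b"secret provisioned into this user's token"   # hypothetical
WINDOW = 30   # seconds per code

def token_code(secret, t):
    window = int(t // WINDOW)
    digest = hmac.new(secret, str(window).encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big") % 1_000_000   # six-digit code

# The user reads the code off the token and types it in; the host recomputes it.
now = time.time()
typed = token_code(shared_secret, now)
assert typed == token_code(shared_secret, now)   # a real host also tolerates clock skew
```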

Security Perimeters

A distributed system can become very large; systems with 50,000 computers exist today, and they are growing rapidly. In a large system no single agent will be trusted by everyone; security must take account of this fact. Security is only as strong as its weakest link. To control the amount of damage that a security breach can do and to limit the scope of attacks, a large system may be divided into parts, each surrounded by a security perimeter. The methods described above can in principle provide a high level of security even in a very large system that is accessible to many malicious principals. But implementing these methods throughout a system is sure to be difficult and time-consuming, and ensuring that they are used correctly is likely to be even more difficult. The principle of "divide and conquer" suggests that it may be wiser to divide a large system into smaller parts and to restrict severely the ways in which these parts can interact with each other.

The idea is to establish a security perimeter around part of a system and to disallow fully general communication across the perimeter. Instead, carefully managed and audited gates in the perimeter allow only certain limited kinds of traffic (e.g., electronic mail, but not file transfers). A gate may also restrict the pairs of source and destination systems that can communicate through it.
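
A gate of this kind can be thought of as a small piece of trusted code that checks each attempted crossing against an explicit policy and audits the result, as in the following sketch (the traffic kinds and host names are hypothetical).

```python
# A sketch of a perimeter gate: only certain kinds of traffic, between certain
# source/destination pairs, may cross; every decision is audited.
ALLOWED_KINDS = {"mail"}                                   # e.g., no file transfers
ALLOWED_PAIRS = {("inside-mailhub", "outside-relay"),
                 ("outside-relay", "inside-mailhub")}

audit_log = []

def gate(kind, source, destination):
    """Return True if the traffic may cross the perimeter; audit the decision."""
    allowed = kind in ALLOWED_KINDS and (source, destination) in ALLOWED_PAIRS
    audit_log.append((kind, source, destination, "allowed" if allowed else "refused"))
    return allowed

assert gate("mail", "inside-mailhub", "outside-relay")            # mail may pass
assert not gate("file-transfer", "inside-host", "outside-host")   # file transfer may not
```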

It is important to understand that a security perimeter is not foolproof. If it allows the passing of electronic mail, then users can encode arbitrary programs or data in the mail and get them across the perimeter. But this is unlikely to happen by mistake, for it requires much more deliberate planning than do the more direct ways of communicating inside the perimeter using terminal connections. Furthermore, a mail-only perimeter is an important reminder of system security concerns. Users and managers will come to understand that it is dangerous to implement automated services that accept electronic mail requests from outside and treat them in the same fashion as communications originating inside the perimeter.

As with any security measure, a price is paid in convenience and flexibility for a security perimeter: it is difficult to do things across the perimeter. Users and managers must decide on the proper balance between security and convenience. See Appendix B's "Security Perimeters" section for more details.

Methodology

An essential part of establishing trust in a computing system is ensuring that it was built according to proper methods. This important subject is discussed in detail in Chapter 4.

CONCLUSION

The technical means for achieving greater system security and trust are a function of the policies and models that have been articulated and developed to date. Because most work to date has focused on confidentiality policies and models, the most highly developed services and the most effective implementations support requirements for confidentiality. What is currently on the market and known to users thus reflects only some of the need for trust technology. Research topics described in Chapter 8 provide some direction for redressing this imbalance, as does the process of articulating GSSP described in Chapter 1, which would both nourish and draw from efforts to develop a richer set of policies and models. As noted in Chapter 6, elements of public policy may also affect what technology is available to protect information and other resources controlled by computer systems—negatively, in the case of export controls, or positively, in the case of federal procurement goals and regulations.

NOTES

1. Terminology is not always used consistently in the security field. Policies are often called "requirements"; sometimes the word "policy" is reserved for a broad statement and "requirement" is used for a more detailed statement.

2. DOD Directive 5200.28, "Security Requirements for Automatic Data Processing (ADP) Systems," is the interpretation of this policy for computer security (encompassing requirements for personnel, physical, and system security). The Trusted Computer System Evaluation Criteria (TCSEC, or Orange Book, also known as DOD 5200.28-STD; U.S. DOD, 1985d) specifies security evaluation criteria for computers that are used to protect classified (or unclassified) data.

3. That is, who caused it to be made, in the context of the computer system; legal responsibility is a different matter.

4. The simplest such chain involves all the agents in the path, from the system up through the hierarchy to the first ancestor that is common to both the system and the principal, and then down to the principal. Such a chain will always exist if each agent is prepared to authenticate its parent and children. This scheme is simple to explain; it can be modified to deal with renaming and to allow for shorter authentication paths between cooperating pairs of principals.

5. The government's Tempest (Transient Electromagnetic Pulse Emanations Standard) program is concerned with reduction of such emanations. Tempest requirements can be met by using Tempest products or by shielding whole rooms where unprotected products may be used. NSA has evaluated and approved a variety of Tempest products, although nonapproved products are also available.

6. In some circumstances a third secure channel property, availability, might be added to this list. If a channel exhibits secure availability, a sender can, with high probability, be confident that his message will be received, even in the face of malicious attack. Most communication channels incorporate some facilities designed to ensure availability, but most do so only under the assumptions of benign error, not in the context of malicious attack. At this time there is relatively little understanding of practical, generic methods of providing communication channels that offer availability in the face of attack (other than those approaches provided to deal with natural disasters or those provided for certain military communication systems).

7. For example, the Digital Equipment Corporation's development of an architecture for distributed system security was reportedly constrained by the availability of specific algorithms:

The most popular algorithm for symmetric key encryption is the DES (Data Encryption Standard). … However, the DES algorithm is not specified by the architecture and, for export reasons, ability to use other algorithms is a requirement. The preferred algorithm for asymmetric key cryptography, and the only known algorithm with the properties required by the architecture, is RSA. … (Gasser et al., 1989, p. 308)

8. This procedure proves the presence of the principal but gives no assurance that the principal is actually at the other end of the channel; it is possible that an adversary controls the channel and is relaying messages from the principal. To provide this assurance, the principal should encrypt some unambiguous identification of the channel with his private key as well, thus certifying that he is at one end. If the channel is secured by encryption, the encryption key identifies it. Since the key itself must not be disclosed, a one-way hash (see Appendix B) of the key should be used instead.

9. Another problem with retina scans is that individuals concerned about potential health effects sometimes object to use of the technology.
