Criteria to Evaluate Computer and Network Security
Pages 124-142



From page 124...
... of France, Germany, the Netherlands, and the United Kingdom provide standards against which computer and network systems can be evaluated with respect to security characteristics. As described below in "Comparing National Criteria Sets," these documents embody different approaches to security evaluation, and the differences are a result of other, perhaps less obvious purposes that security evaluation criteria can serve.
From page 125...
... Product Evaluation." Before discussing in more detail the goals for product criteria, it is useful to examine the nature of the security characteristics addressed in evaluation criteria. Security Characteristics Most evaluation criteria reflect two potentially independent aspects of security: functionality and assurance.
From page 126...
... But an identical product developed by uncleared individuals in a nonsecured environment, and not accompanied by equivalent documentation, would probably receive a much lower assurance rating. Although the second product in this example is not necessarily less secure than the first, an evaluator would probably have less confidence in the security of the second product, due to the lack of supporting evidence provided by its implementors and, perhaps, less confidence in the trustworthiness of the implementors themselves. Somewhat analogous is the contrast between buying a picnic table from a well-known manufacturer with a reputation for quality (a member of the "Picnic Table Manufacturers of America")
From page 127...
... . Assurance Evaluation There are actually two stages of assurance evaluation: design evaluation and implementation evaluation.
From page 128...
... For the level of assurance generally required in the commercial market, it may be sufficient to carry out a minimal implementation evaluation (as part of overall system quality assurance procedures, including initial operational or beta testing) prior to system release, provided a good design evaluation has been performed. Moreover, if the incident reporting and tracking system proposed in Chapters 1 and 6 is instituted, implementation flaws can be identified and fixed in the normal course of system releases.
From page 129...
... Thus the committee has linked its recommendation for the establishment of a broad set of criteria, GSSP, with a recommendation to establish methods, guidelines, and facilities for evaluating products with respect to GSSP. The committee believes that the way to achieve a system evaluation process supported by vendors and users alike is to begin with a design evaluation, based on GSSP itself, and to follow up with an implementation evaluation, focusing on field experience and incident reporting and tracking.
From page 130...
... Comparisons with the successor harmonized criteria, the ITSEC, which builds on both the ZSI and DTI schemes, are amplified in the section below titled "Comparing National Criteria Sets." One argument in favor of bundling criteria is that it makes life easier for evaluators, users, and vendors. When a product is submitted for evaluation, a claim is made that it implements a set of security functions with the requisite level of assurance for a given rating.
From page 131...
... When completely unbundled criteria are used (e.g., the proposed DTI set), the evaluators may have to examine anew the collection of security features claimed for each product, since there may not have been previously evaluated products with the same set of features.
From page 132...
... A system that did qualify for an Orange Book rating and had added functions for integrity to support the Clark-Wilson model would receive no special recognition for the added functionality, since that functionality, notably relating to integrity, is outside the scope of the Orange Book.3 The government-funded LOCK project (see Appendix B), for example, is one attempt to provide both security functionality and assurance beyond that called for by the highest rating (A1)
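
The Clark-Wilson model itself is only named above, but its core mechanism is easy to sketch. In rough and entirely hypothetical terms, not drawn from the report: constrained data items (CDIs) may be modified only through registered transformation procedures (TPs), and only by users who hold an explicit (user, TP, CDI) authorization triple. A minimal Python sketch, with all identifiers invented for illustration:

    # Hypothetical illustration of Clark-Wilson integrity; names are invented.
    cdis = {"ledger": 0}                                   # constrained data items (CDIs)
    access_triples = {("carol", "post_credit", "ledger")}  # authorized (user, TP, CDI) triples

    def post_credit(cdi: str, amount: int) -> None:
        # A "well-formed transaction" (TP): it preserves the CDI's
        # integrity by rejecting changes that violate the policy.
        if amount <= 0:
            raise ValueError("credits must be positive")
        cdis[cdi] += amount

    tps = {"post_credit": post_credit}                     # registered TPs

    def run_tp(user: str, tp_name: str, cdi: str, *args) -> None:
        # A CDI may be changed only via a registered TP that this user
        # is explicitly authorized to apply to this CDI.
        if (user, tp_name, cdi) not in access_triples:
            raise PermissionError(f"{user} may not run {tp_name} on {cdi}")
        tps[tp_name](cdi, *args)

    run_tp("carol", "post_credit", "ledger", 100)          # permitted
    # run_tp("mallory", "post_credit", "ledger", 100)      # raises PermissionError

The point of the sketch is the contrast drawn above: these are integrity controls, and a rating scheme scoped to confidentiality gives no credit for them.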
From page 133...
... There are significant security functionality distinctions between division-C and division-B systems. In particular, the C division provides for discretionary access control, while the B division adds mandatory access control.
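
That distinction can be made concrete with a short sketch. The code below is purely illustrative (the class and function names are invented; the labels follow the familiar military hierarchy): under discretionary access control the owner of an object decides who may access it, while mandatory access control adds a system-enforced label check the owner cannot override.

    # Hypothetical illustration of DAC vs. MAC; identifiers are invented.
    from dataclasses import dataclass, field

    LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

    @dataclass
    class Resource:
        owner: str
        label: str                               # sensitivity label, set by the system
        acl: set = field(default_factory=set)    # discretionary ACL, managed by the owner

    def dac_allows(user: str, obj: Resource) -> bool:
        # Division C (discretionary): the owner grants access at will.
        return user == obj.owner or user in obj.acl

    def mac_allows(clearance: str, obj: Resource) -> bool:
        # Division B (mandatory): the system enforces "no read up,"
        # regardless of what the owner has put on the ACL.
        return LEVELS[clearance] >= LEVELS[obj.label]

    report = Resource(owner="alice", label="SECRET", acl={"bob"})
    print(dac_allows("bob", report))             # True: bob is on the ACL
    print(mac_allows("CONFIDENTIAL", report))    # False: insufficient clearance

In a B-division system both checks must pass; being on the owner's ACL is not by itself sufficient.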
From page 134...
... This represents a highly "bundled" approach to criteria in that each rating, for example, B2, is a combination of a set of security functions and security assurance attributes. The Information Technology Security Evaluation Criteria (ITSEC)
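
The bundling contrast can be illustrated with data rather than prose. A TCSEC rating names a single point on a fixed ladder, packaging functions and assurance together, whereas the ITSEC lets a product claim a functionality class and an assurance level separately. A schematic sketch (the claim shown is hypothetical):

    # TCSEC (bundled): one rating fixes both functionality and assurance.
    tcsec_ratings = ["D", "C1", "C2", "B1", "B2", "B3", "A1"]

    # ITSEC (less bundled): functionality classes and assurance levels
    # are claimed independently, e.g. F-B2 functionality at E4 assurance.
    itsec_functionality = ["F-C1", "F-C2", "F-B1", "F-B2", "F-B3"]
    itsec_assurance = ["E0", "E1", "E2", "E3", "E4", "E5", "E6"]

    claim = ("F-B2", "E4")    # a hypothetical ITSEC evaluation claim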
From page 135...
... Because the European initiatives are based in part on a reaction to the narrowness of the TCSEC, and because NIST's resources are severely constrained, the committee recommends that GSSP and a new organization to spearhead GSSP, the Information Security Foundation, provide a focus for future U.S. participation in international criteria and evaluation initiatives.
From page 136...
... computer system vendors derive a significant fraction of their revenue from foreign sales and thus are especially vulnerable to proliferating foreign evaluation criteria. At the same time, the NCSC has interpreted its charter as not encompassing evaluation of systems submitted by foreign vendors.
From page 137...
... SYSTEM CERTIFICATION VS. PRODUCT EVALUATION The discussion above has addressed security evaluation criteria that focus on computer and network products.
From page 138...
... After all, certifiers must be competent in more security disciplines and be able to understand the security implications of combining various evaluated and unevaluated components to construct a system. A user attempting to characterize the security requirements for a system he is to acquire will find that applying system certification methodology is, a priori, a much more complex process than specifying a concise product rating based on a reading of the TCSEC environment guidelines (Yellow Book; U.S.
From page 139...
... This reality argues against any recommendation that would undercut that investment or undermine industry confidence in the stability of security evaluation criteria. Yet there are compelling arguments in favor of establishing less-bundled criteria to address some of the shortcomings cited above.
From page 140...
... There is also a need to address broader system security concerns in a manner that recognizes the heterogeneity of integrated or conglomerate systems. This is a matter more akin to certification than to product evaluation.
From page 141...
... Design evaluation should be performed by an independent team of evaluators. Implementation evaluation should include a combination of explicit system audit, field experience, and organized reporting of security faults.
From page 142...
... The Design Analysis Phase takes an in-depth look at the design and implementation of a product using analytic tools. During this phase the Initial Product Analysis Report (IPAR)

