
APPENDIX A

The Organizational Structure of Defense Acquisition

This appendix describes the salient features of defense acquisition, with emphasis on operational testing. The procurement of a major weapon system follows a series of event-based decisions called milestones. At each milestone, a set of criteria must be met if the next phase of the acquisition is to proceed.

In response to a perceived threat, the relevant military service will undertake a Mission Area Analysis and Mission Needs Analysis. For example, because of a change in threat, it may be determined that there is a need to impede the advance of large armored formations well forward of the front lines. In light of this analysis, the relevant service, or possibly a unified or specified command, prepares a Mission Needs Statement. This is a conceptual document that is supposed to identify a broadly stated operational need, not a specific solution to counter the perceived threat. In practice, however, the military service will sometimes try to write the Mission Needs Statement so that only a particular preconceived solution can satisfy the mission need. The mission need may be satisfied by either materiel or nonmateriel innovations. In the context of the above example, the Mission Needs Statement could eventually result in a new acquisition program (e.g., a new aircraft-delivered weapon), a new concept of operations for existing forces and equipment, or some combination of the two.

If the Mission Needs Statement results in a determination that a new materiel solution is required, a concept formulation effort is begun. If that effort justifies a particular approach, and the decision makers believe the new approach has enough merit to warrant further resource commitment, a new acquisition program is begun. At this stage in the process, the program is assigned to an acquisition category (ACAT). The ACAT designation is important because it determines both the level of review required by law and the level at which milestone decision authority rests in DoD. The ACAT assignment is made primarily according to the program's projected costs, but other considerations may result in the program's receiving a higher level of review within DoD. Of the four acquisition categories, ACAT I through ACAT IV, ACAT I contains the highest-cost systems. This appendix focuses on ACAT I programs. The relevant authority for these “major defense acquisition programs” is the Under Secretary of Defense for Acquisition and Technology.

Mission Needs Statements for needs that could or are expected to result in major programs are forwarded to the Joint Requirements Oversight Council for review, evaluation, and prioritization. The council, chaired by the Vice Chairman of the Joint Chiefs of Staff, validates those Mission Needs Statements it believes require a materiel solution and forwards its recommendations to the Under Secretary of Defense for Acquisition and Technology. If the under secretary concurs with the council's recommendation, a meeting of the Defense Acquisition Board is convened for a milestone 0 review.

Upon milestone 0 approval from the Under Secretary of Defense for Acquisition and Technology, a concept exploration study is undertaken. The purpose of this phase 0 of the acquisition process is to determine the most promising solution(s) to meet the broadly stated mission need. At milestone I, when a concept is approved by the under secretary and the Deputy Secretary of Defense agrees that it warrants funding in the DoD programming and budget process, the new acquisition program is formally begun: the relevant service establishes a program office, and a program manager is assigned. It is at this stage that DoD is sufficiently committed to the program to request budget authority from Congress to begin development. Congress must appropriate funding each year thereafter for the acquisition program to continue.

Once an acquisition program has formally begun, the program office creates an acquisition strategy that provides an overview of the planning, direction, and management approach to be used during the multiyear development and procurement process. Other important documents supporting the program and requiring approval are the Operational Requirements Document and the Acquisition Program Baseline.

The Operational Requirements Document describes in some detail the translation from the broadly stated mission need to the system performance parameters that the users and the program manager believe the system must have to justify its eventual procurement. In the context of the Operational Requirements Document, performance parameters or requirements may be characteristics such as radar cross-section (visibility to the enemy), probability of kill, range, and minimum rate of fire.

The Acquisition Program Baseline is like a contract between the milestone decision authority and the relevant service, including the program manager and his or her immediate superiors. It has three sections: one dealing with performance characteristics, a second with projected costs for the various phases of the acquisition, and a third with the projected schedule. In each case there are objectives and less stringent thresholds, the thresholds being the minimum acceptable performance standards or the maximum acceptable costs or time periods for achieving certain objectives. If any of these thresholds is violated, or is projected not to be met upon program completion, the program is supposed to be reexamined and possibly terminated. The Acquisition Program Baseline is a summary document that is supposed to include only those parameters of performance, cost, and schedule that are critical to the success of the program and to the acquisition decision authorities. Clearly, the program manager and the contractors must manage and control many more parameters than are contained in this document.
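The objective/threshold logic of the baseline can be made concrete with a brief sketch. The following Python fragment is purely illustrative: the parameter names and values are hypothetical, not drawn from the report or from any actual program baseline. It models each baseline entry as an objective paired with a less stringent threshold, and applies the rule that a threshold breach subjects the program to reexamination.

```python
from dataclasses import dataclass

@dataclass
class BaselineParameter:
    """One Acquisition Program Baseline entry: an objective plus a less
    stringent threshold (a minimum for performance, a maximum for cost or time)."""
    name: str
    objective: float
    threshold: float
    higher_is_better: bool  # True for performance parameters; False for cost and schedule

    def breached(self, actual: float) -> bool:
        """True when the actual value violates the threshold, the condition
        that is supposed to trigger reexamination of the program."""
        return actual < self.threshold if self.higher_is_better else actual > self.threshold

# Hypothetical entries for illustration only; values are invented,
# not taken from any actual program baseline.
baseline = [
    BaselineParameter("probability of kill", 0.90, 0.80, higher_is_better=True),
    BaselineParameter("unit cost ($ millions)", 1.2, 1.5, higher_is_better=False),
    BaselineParameter("months to milestone III", 48, 60, higher_is_better=False),
]

observed = {"probability of kill": 0.78,
            "unit cost ($ millions)": 1.4,
            "months to milestone III": 52}

for p in baseline:
    if p.breached(observed[p.name]):
        print(f"{p.name}: threshold breached; program subject to reexamination")
```

In this sketch only the first entry is breached (0.78 falls below the 0.80 performance threshold); the cost and schedule values exceed their objectives but remain within their thresholds, mirroring the report's distinction between objectives and the less stringent thresholds.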

A problem that sometimes surfaces during operational testing is that the tests are evaluated against all the parameters specified in the Operational Requirements Document, which may be more demanding than the thresholds specified in the Acquisition Program Baseline. At that point, the military service may believe that the system is still good enough to procure even if it does not meet all of the requirements in the Operational Requirements Document. However, when the acquisition process was in its early stages, with the program competing for resources against other programs, the service may have believed that if it did not specify parameters that later proved difficult if not impossible to meet, it would not get approval for the program. Also, in the early stages of a program, the service and others in the acquisition community may have believed they could achieve better performance than later proved to be the case. The difficulty this presents is that the credibility of the service and of DoD is damaged with Congress if the acquisition decision authorities attempt to change performance goals near the end of the process, when operational testing is imminent or in progress. Related to this issue is the tendency to view operational testing as a pass/fail process, rather than as part of an overall process for managing risk and balancing cost and performance tradeoffs, with the objective of acquiring quality, cost-effective systems and getting them into the hands of the forces who may have to use them in combat.

The primary purposes of DoD test and evaluation are to (1) provide essential information for assessing acquisition risk, (2) verify attainment of technical performance specifications and objectives, (3) verify that systems are operationally effective and suitable for their intended use, and (4) provide essential information in support of decision making. In technical terms and in the management of acquisition programs, the boundaries between developmental and operational testing are not always clear or distinct. Over the years, however, organizational boundaries have developed between the two within DoD. This separation grew out of congressional concerns, which eventually resulted in a law establishing the separation and reporting requirements for the Office of the Director of Operational Test and Evaluation. In many respects, these boundaries make the management of testing and evaluation in DoD more complex than it would otherwise be, even though the two test communities communicate well with each other.

For all acquisition programs in DoD, test planning is supposed to begin very early in the acquisition process and to involve both the developmental and operational testers. Both are involved in preparing the Test and Evaluation Master Plan, which is a requirement for all acquisition programs. For all ACAT I programs, this plan must be approved by the Director, Operational Test and Evaluation, and the Deputy Director, Defense Research and Engineering (Test and Evaluation), the analogous head of developmental testing.

The Test and Evaluation Master Plan documents the overall structure and objectives of the test and evaluation program, provides a framework for generating detailed test and evaluation plans, and records the associated schedule and resource implications. It relates program schedule, test management strategy and structure, and required resources to (1) critical operational issues, (2) critical technical parameters, (3) minimum acceptable operational performance requirements, (4) evaluation criteria, and (5) milestone decision points. It is prepared by the program office, with input from system testers in both developmental and operational testing, service representatives, and other technical experts. (In the Army, those in charge of requirements assist the program manager in preparing the operational testing portion of the Test and Evaluation Master Plan, while in the Air Force and Navy, those in charge of requirements do not formally participate in writing the plan.)

The Test and Evaluation Master Plan has five components. First, it contains a statement of requirements for the system, which is an interpretation of the Operational Requirements Document from the viewpoint of the testing community. Second, it contains an integrated test program summary and schedule, including identification of which testing agencies will provide information to the program manager and when that information will be provided. Third, it contains detailed information about the criteria for the developmental tests, in which each component of the system will be evaluated. Fourth, it contains the operational test master plan, in which the critical operational issues for operational testing are described and divided into two groups, one for effectiveness and one for suitability. Finally, it identifies the resources projected to be available for testing the system, including personnel, test ranges, models, and funding. The Test and Evaluation Master Plan is updated during the various phases of the acquisition process for the program. It is reported that congressional staff members sometimes become involved in reviewing the plan revisions.

At milestone I, the Defense Acquisition Board reviews the acquisition strategy, the Operational Requirements Document, the Test and Evaluation Master Plan, the initial Acquisition Program Baseline, and an independent cost evaluation report from the Cost Analysis Improvement Group. If the Under Secretary of Defense for Acquisition and Technology approves the program from an acquisition perspective, and the Deputy Secretary of Defense approves the necessary near- and long-term funding in the context of the overall defense program, the new acquisition program is formally begun, and the demonstration and validation phase begins.

In phase I of the acquisition process, the demonstration and validation phase that occurs between milestone I and milestone II, the objectives are to provide confidence that the technologies critical to the concept can be incorporated into a system design and to define more fully the expected capabilities of the system. This is the first stage of development in which tradeoffs can be addressed based on developmental data rather than just analytical models. Thus, this phase provides an opportunity to obtain some confidence that the parameters specified in the Operational Requirements Document will be achieved as development progresses, or to recommend that changes or tradeoffs be made because the original objectives appear to be too stringent. One of the minimum required accomplishments for this phase of the acquisition process is to identify the major cost, schedule, and performance tradeoff opportunities.

At the completion of the demonstration and validation phase, the acquisition program comes to the milestone II decision point for development approval. The milestone decision authority must rigorously assess the affordability of the program at this point and establish a development baseline (a refinement or revision of the initial Acquisition Program Baseline approved at milestone I). The low-rate initial production quantity to be procured before completion of initial operational testing is also determined by the milestone decision authority at milestone II, in consultation with the Director, Operational Test and Evaluation. The quantities of articles required for operational testing are also specified at this point by the testing community, and specifically, for ACAT I programs, by the Director, Operational Test and Evaluation. (A major challenge for DoD is to balance the desire to perform operational testing on production versions of the system against the need to complete operational testing before entering full-rate production. This is especially difficult when there are large costs associated with continuing low-rate production, or with halting it temporarily to complete some testing and evaluation or to fix problems found in initial operational testing.)

Engineering and manufacturing development is phase II of the acquisition process, which follows a successful milestone II decision point. The objectives in this phase are to translate the system approach into a stable, producible, and cost-effective system design; validate the manufacturing or production process; and demonstrate through testing that system capabilities meet contract specification requirements, satisfy the mission need, and meet minimum operational performance requirements. Thus, both further developmental and operational testing are accomplished in this phase before full-rate production is approved at milestone III (for systems successfully completing the engineering and manufacturing development phase). Moreover, DoD Instruction 5000.2, “Defense Acquisition Management Policies and Procedures,” specifies that, “when possible, developmental testing should support and provide data for operational assessment prior to the beginning of formal initial operational test and evaluation by the operational test activity.”

Ideally, developmental testing is conducted prior to final operational testing, and the system is required to pass an operational test readiness review, which is certified by the program executive officer, the program manager's direct supervisor. However, developmental and operational testing often overlap. The result may be that contractors have little opportunity to make fixes or improve the system based on lessons learned from developmental test results. In addition, as mentioned above, low-rate initial production begins during the engineering and manufacturing development phase, well before all the operational test results are known, because of the desire to use production versions in operational testing and the desire to avoid the increased costs associated with stopping low-rate production while awaiting operational test results.

When a system is scheduled for operational testing, the exact details of the tests are prepared by the testers and evaluators per the Test and Evaluation Master Plan. However, resource constraints may prevent certain characteristics of the system from being ascertained. In such cases, the testers identify what they can accomplish given the constraints. The amount of control the program manager has over the testing budget for the system varies from service to service.

In theory, the operational testers are meant to be wholly independent of the test result evaluators. This separation is preferred so the evaluators will not be tempted to design tests that are relatively easy to evaluate, rather than ones that are more difficult to evaluate but will produce the most informative results. In practice, the testers and evaluators work together in designing the tests. In all the services, the testing agencies are independent of the program office and any of its direct supervisory management.

The results of operational testing are interpreted by several separate agencies, including the relevant service's operational test agency and the Director, Operational Test and Evaluation. The Office of the Director, Operational Test and Evaluation, is a congressionally created oversight office within DoD, reporting to the Secretary of Defense (rather than to the Under Secretary of Defense for Acquisition and Technology). It prepares independent reports on operational testing, which are provided to the Defense Acquisition Board, the Secretary of Defense, and Congress. Before a report on a specific system is published, the program manager has the opportunity to comment on it and to ask for revisions. If the Director, Operational Test and Evaluation, refuses the revisions, the service may append the program manager's comments to the report. In addition, each of the services has its own agency to interpret the test results, with input from the program manager. The reports from these organizations are sent to the Defense Acquisition Board for its milestone III consideration after review by the service acquisition board (e.g., the Army Acquisition Review Board for the Army).

In making the milestone III recommendation to initiate full-rate production, the Defense Acquisition Board considers the developmental test results and the reports of the Director, Operational Test and Evaluation, and the service test and evaluation organizations. If approval for full-rate production is granted by the Under Secretary of Defense for Acquisition and Technology, the procurement request is included in the DoD budget request submitted by the Secretary of Defense to Congress (or the dollars included in the prior budget request are approved for obligation by the service). The full-rate production contracts are then awarded, consistent with any DoD or congressional restrictions. Follow-on operational testing is performed during the early stages of the production phase to monitor system performance and quality.

In the entire acquisition process for a specific system, there will be a number of program managers because of the multiyear length of the development and procurement. These program managers are the individuals most affected by the success or failure of the program. Specifically, if the program has major problems or is terminated, the career of the program manager at that time may be significantly damaged. The program manager should be focused on overseeing the management of the program in all phases of the acquisition; particularly in the early stages of the program, however, he or she is under pressure to act as a salesman or advocate of the program rather than as an independent manager. As a result, the program manager is encouraged to have a “can do” attitude rather than to consider the possibility that meeting the original objectives may not be feasible and that some tradeoffs must be made before proceeding. There is also a strong tendency or pressure for program managers not to bring problems forward without solutions, even if those problems were not a result of their action (or lack of action). Thus, test results indicating that a system needs further development or fixes before the program proceeds may adversely affect the career of the program manager, even if such results are in no way tied to that individual's performance. These pressures on program managers can lead to unnecessary and unproductive tensions in the overall acquisition process, and in the test and evaluation portions of the process in particular.
