Peer Review in Environmental Technology Development Programs (1998)


7
Improving the Effectiveness of OST's Peer Review Program

The committee is encouraged that OST continues to act on its new goal of implementing peer review of its activities. As stated earlier in this report, the committee finds that the foundation of the peer review program is sound but the process is still in the early stages of development, or "maturity." Despite the marked improvements in the procedures for conducting peer reviews over the past year, OST's peer review program still has not fully achieved its stated objective of providing high-quality technical input to assist in decision making. The first step for OST leadership is to ensure that peer reviews are effectively linked to OST decision making. To continue to develop and achieve a more effective peer review program, OST leadership also will have to commit to a process of continuous assessment and improvement involving cycles of planning, execution, and evaluation. This will require incorporating process improvement into daily activities, identifying and eliminating problems at their source, and implementing solutions to problems as they are identified. The basis for evaluating the efficiency and effectiveness of the peer review program (i.e., the use of peer review results in better management decisions and in program improvement) should be as quantitative as possible, based on fact, and oriented toward obtaining results. In this chapter, the committee outlines a number of approaches that OST leadership should consider as it works to improve the effectiveness of the peer review program. The committee also discusses some of the institutional factors that will have to be addressed to implement a credible, efficient, and effective peer review program.


Linkage of Peer Reviews to Management Decisions

One of the most significant problems with the current peer review program is that many peer reviews are not linked to decision points, in spite of previous and current policy guidance. The committee stresses that a peer review report finding a project to be technically sound, based on good science, and capable of practical realization should be a necessary but not sufficient condition for passing certain TIDM gates. A project could fail to pass the gate for programmatic reasons even if it were technically sound, but a report stating that a project is not technically sound should be a sufficient reason to reject a project at any gate. The project need not necessarily be terminated (e.g., it might need more development and re-review before moving to the next stage, or the review might be "appealed" and reconsidered, especially if the panel was divided in its conclusions), but it should not pass a gate in the face of an adverse peer review.

To have any effect on programmatic decision making, peer reviews must occur well before project decisions are made. The timeliness of reviews has been one of the most significant problems with the program. A number of FY97 peer reviews were completed after project decisions had been made, and in these instances, few benefits of peer review were realized. For example, the decision to fund the next stage of development of the In Situ Redox Manipulation project had been made prior to peer review of the project. In another peer review (the Large-Scale Demonstration project at Fernald Plant I), a major portion of the facility had already been decommissioned when the peer review was conducted. Although retrospective reviews can provide some guidance to other projects, if OST's new peer review program is to be effective, peer reviews must occur prior to key points in the technology development decision process. Because the FA/CC program managers selected the projects to be reviewed for FY98, as well as the times for review and the technology-specific review criteria, the committee expects the results of the peer reviews to fit more logically with the decision-making process. The ASME Peer Review Committee also has identified the lack of a clear relationship between the peer review results and OST decision making as a recurring problem with the peer review program.1

To address this issue, the committee recommended in its interim report that OST develop a targeted plan for the peer review program. The plan should consider factors such as how many of OST's technology projects can be peer reviewed, realistic schedules for the reviews, and the peer review program budget. To be effective, this plan also should ensure that peer reviews are conducted early enough in the budget cycle to allow their results to be used as an input into meaningful funding decisions. In developing its plan for the peer review program, OST should consider expanding its practice of consolidating reviews of related projects into a single review (i.e., a Type I review) or several overlapping reviews in order to increase the number of projects that can be reviewed with the resources available. Another value of reviewing multiple projects during a single peer review is that it tends to ensure that the projects reviewed together are judged by the same standard; thus, it normalizes the results (Kostoff, 1997b). To the committee's knowledge, a targeted plan has not yet been developed, but the Implementation Guidance gives the Peer Review Coordinator responsibility for developing such a plan (DOE, 1998, p. 19).

1 Discussions at ASME's PRC meeting, January 26, 1998.

The linkage between peer review results and OST's decision-making process also could be improved by explicitly identifying, before a review is conducted, where and how its results will be used. Therefore, the committee recommends that, as part of the documentation provided to peer review program management during the process of selecting projects for review, OST program managers be required to clearly identify the upcoming decision or milestone for which the results of the peer review will be used. This information also should be provided to peer reviewers as part of the documentation that they receive in preparation for the review.

Evaluation and Improvement Mechanisms

Benchmarking

One approach for guiding the development of an internal evaluation procedure for the peer review program would be for OST peer review program managers to proactively seek out and learn from other organizations that have more mature peer review processes. This process of learning from the practices of other organizations is called "benchmarking." Benchmarking is a process-oriented, systematic investigation in which an organization measures its performance against that of the "best in class" (i.e., other organizations with renowned peer review programs) to determine what should be improved. Benchmarking involves searching for new ideas, methods, practices, and processes; adopting the practices or adapting the good features to the specific needs of the organization; and implementing them to improve the effectiveness of the program.

There are usually four elements to the benchmarking process:

  1. Planning: identify internal targets for benchmarking (specific opportunities for improvement),
  2. Analysis and preparation: understand the current process, form an evaluation team, and identify the external organizations whose processes constitute the benchmark,
  3. Integration: understand the process in the external organization and establish new process goals (prioritize, plan and test proposed solutions), and
  4. Implementation: develop action plans, implement changes, monitor performance, and recalibrate the benchmark.

Benchmarking enables learning from the leadership and experience of others. It should challenge current internal paradigms of process performance, provide an understanding of opportunities and methods for improvement, and identify strengths within the organization. It also should result in establishing goals driven by results and in providing insights that will assist in prioritizing and allocating internal resources. One strength of benchmarking is that it provides options and ideas with proven performance, rather than relying on "new" ideas developed from within the organization. It is a proactive method for developing measures of effectiveness because it is based on objective evaluation rather than "gut feel" or perception.

Peer review programs in some organizations that could be used by OST in such a benchmarking process are described in the boxes throughout Chapter 3. Based on its analysis of the development of OST's peer review process since its inception in October 1996, the committee believes that it would have been extremely beneficial if OST had used such a benchmarking process to help design its new peer review program before it was established.

To encourage the development of a more effective peer review program, OST management should support benchmarking by focusing on the processes that are critical to the peer review program, by being open to new ideas, by being willing to admit that its current process is not the best, and by committing to provide resources for change and to overcome resistance to change. The benefits of improving the peer review process through benchmarking must be weighed against the possible negative effects of constantly changing procedures, however. Benchmarking efforts should be targeted to specific weaknesses in the peer review process and should be initiated at a logical time in OST's annual peer review cycle when specific areas of improvement have been identified (e.g., after the release of this report, or after the annual ASME Peer Review Committee meeting), that is, when the first two steps of the benchmarking process (i.e., planning and analysis) already have been completed. The overall goal of the evaluation and improvement process should be a high-quality, relatively stable peer review process. The benchmarking process also should involve the development of metrics to quantify the efficiency and effectiveness of the peer review program.

Metrics

A world-class peer review program has to be built on a foundation of measurement, data, and analysis. Measurements should be directly related to the goals, or objectives, of the program. Metrics are performance indicators: measurable characteristics of services, processes, and operations that the organization uses to track and improve performance. Useful metrics share a number of important characteristics: (1) they encompass the key outputs and results of the process steps, such as performance, program impact, and cost; (2) they are based on systematically collected data rather than anecdotal observations; and (3) they are selected to represent factors that lead to the best customer, operational, and financial performance. A system of metrics tied to organizational performance requirements represents a clear and objective basis for aligning activities with the organization's goals. The metrics themselves can be evaluated and revised when analysis of data from the tracking process indicates a change is warranted. Benchmarking against other peer review programs (see the previous section) can be used to determine whether performance on these metrics is above or below the norm.

Metrics can be used to assist in the measurement of effectiveness and can help evaluate the success of a program in realizing its objectives. Two types of metrics can be considered: activity metrics and performance metrics. Activity metrics are an indication of the efficiency of the process, whereas performance metrics are an indication of the effectiveness of the program (i.e., achieving the desired results). Although OST has not yet established metrics or a benchmarking process for its peer review program, it has begun to develop performance metrics for its technology programs as part of its annual performance planning. The DOE Environmental Management Advisory Board's Technology Development and Transfer Committee also recently held a workshop to develop guidelines for formulating performance measures for research, technology development, and deployment. This workshop served as a form of benchmarking by involving panelists from other federal agencies with well-established systems of performance measures.

Properly chosen and clearly defined metrics could be a powerful management tool to help OST improve the efficiency and effectiveness of this program. Activity metrics that could be chosen include the following:

  • the percentage of projects reviewed at each gate,
  • the percentage of reviewed projects that were not funded in the next gate review decision,
  • the percentage of adequate DOE written responses to peer reviews that were received within the required 30 days,
  • the degree of follow-up to recommendations of peer review panels,
  • the number of peer reviewed articles published on OST-funded technologies, and
  • the percentage of peer reviews conducted in which satisfactory program documentation was provided to the peer review panel at least two weeks prior to the review.

Because OST is in the early stages of the peer review process, it is not clear whether the current activity level would yield statistically significant measurements; over time, however, the data should become significant.
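
To make the bookkeeping behind such metrics concrete, the following minimal sketch (in Python) shows one way the percentages listed above might be computed from a set of review records. It is an illustration only, not a description of any actual OST or ASME data system; every field and function name is hypothetical, and only the 30-day response window and the two-week documentation lead time come from the list above.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record of a single peer review; all field names are
# illustrative, not drawn from any actual OST or ASME data system.
@dataclass
class ReviewRecord:
    project_id: str
    gate: int                           # TIDM gate at which the review occurred
    funded_at_next_gate: bool           # outcome of the next gate review decision
    doe_response_days: Optional[int]    # days until DOE's written response (None if none)
    docs_lead_time_days: Optional[int]  # days before the review that documentation arrived

def activity_metrics(records: list[ReviewRecord],
                     projects_at_gate: dict[int, int]) -> dict:
    """Compute example activity metrics from the bulleted list above."""
    if not records:
        return {}
    n = len(records)
    reviewed_at_gate: dict[int, int] = {}
    for r in records:
        reviewed_at_gate[r.gate] = reviewed_at_gate.get(r.gate, 0) + 1
    return {
        # percentage of projects reviewed at each gate
        "pct_reviewed_at_gate": {
            gate: 100.0 * reviewed_at_gate.get(gate, 0) / total
            for gate, total in projects_at_gate.items()
        },
        # percentage of reviewed projects not funded at the next gate decision
        "pct_not_funded": 100.0 * sum(not r.funded_at_next_gate for r in records) / n,
        # percentage of adequate DOE written responses received within the required 30 days
        "pct_timely_doe_response": 100.0 * sum(
            r.doe_response_days is not None and r.doe_response_days <= 30
            for r in records) / n,
        # percentage of reviews with documentation provided at least two weeks early
        "pct_docs_two_weeks_early": 100.0 * sum(
            r.docs_lead_time_days is not None and r.docs_lead_time_days >= 14
            for r in records) / n,
    }
```

As the committee notes above, such percentages become informative only after enough reviews have accumulated for the counts to be meaningful.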

Performance metrics should be based on OST's success criteria. Proper use of well-chosen and clearly articulated metrics can result in decisions that improve the strategic alignment of the project with the needs of customers or end users. Paladino and Longsworth (1995) provide a broad description of proposed decision gate criteria for the TIDM, and OST recently has provided an updated list of review criteria for the peer review program (DOE, 1998b). OST also should clarify what would constitute successful performance relative to the criteria, however. For example, project impact and the percentage of resources going to successful projects could be appropriate performance metrics. Other metrics could include

  • the number of recommendations in the peer review report that have been implemented;
  • the number of decisions affected by the peer review results (i.e., not whether the decision is positive or negative, but whether the decision is affected by the peer review report);
  • the amount of funding reallocated as a result of the report; and
  • the number of changes in research paths (midcourse corrections) that occur as a result of peer reviews.

These activity and performance metrics are provided as examples; ultimately, OST management will have to establish its own set of metrics based on the success criteria it sets for the technology development program. The committee recommends that OST management develop an effective evaluation and improvement process for the peer review program that includes regular benchmarking against other peer review programs and the collection of activity and performance metrics.
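
As a companion to the sketch above, a similarly hedged fragment could aggregate the example performance metrics across completed reviews; again, the record fields and names are assumptions made for illustration, not an actual OST data model.

```python
from dataclasses import dataclass

# Hypothetical outcome record for one completed peer review; the fields
# mirror the example performance metrics listed above and are illustrative only.
@dataclass
class ReviewOutcome:
    project_id: str
    recommendations_implemented: int  # report recommendations acted upon
    decision_affected: bool           # whether the report affected the decision at all
    funding_reallocated: float        # dollars moved as a result of the report
    midcourse_correction: bool        # whether the project changed research path

def performance_metrics(outcomes: list[ReviewOutcome]) -> dict:
    """Tally the example performance metrics across completed reviews."""
    return {
        "recommendations_implemented": sum(o.recommendations_implemented for o in outcomes),
        "decisions_affected": sum(o.decision_affected for o in outcomes),
        "funding_reallocated": sum(o.funding_reallocated for o in outcomes),
        "midcourse_corrections": sum(o.midcourse_correction for o in outcomes),
    }

# Example use with invented outcomes for two reviewed projects:
print(performance_metrics([
    ReviewOutcome("project-A", 3, True, 250_000.0, True),
    ReviewOutcome("project-B", 1, False, 0.0, False),
]))
```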

Potential Application of Peer Review Within OST

Because OST has chosen to focus its new peer review program on the review of individual projects at various stages of development, the committee also has focused this report on the peer review of projects. As discussed in Chapter 3, however, the fundamental principles of peer review (i.e., independent, external, expert, technical) also can be applied to programs, subsets of programs, or technical needs. One potential application of these types of peer reviews within OST would be to evaluate the research and development efforts that are needed to address the environmental problems at contaminated sites. Another potential application within OST would be to assess the technical balance of OST programs in the context of other programs, both within DOE (and OST in particular) and outside the DOE complex.

Although these and other applications of peer review within OST are possible, simply adding to the number of reviews will not solve OST's problems. OST already has an extensive system of internal and external reviews that it uses to assist staff in making programmatic decisions (see Appendix B). The development of a credible peer review system to evaluate the technical merit of proposals, individual projects, and programs, however, might allow OST to discontinue some of these other types of reviews, if they serve the same or similar objectives. The committee recommends that OST carefully evaluate the objectives and roles of all of its existing reviews, and then determine the most effective use of peer reviews (of various types) in meeting its overall objectives.

For any type of peer review that OST chooses to implement, however, the "expert, independent, external, and technical" criteria should be applied to achieve the objectives of the review. That is, the reviewers should have the technical background required to make the judgments called for, and they should have no conflict of interest. For peer reviews of programs, the collective expertise of reviewers also should be appropriate to the projects within the program, and each panel member should have a broad knowledge of the area covered by the range of projects. For specific reviews with clear objectives that involve technical matters such as economics, risk assessment, or other socioeconomic issues, supplementing the technologists with experts in these areas (e.g., alternative dispute resolution) may be appropriate.

If OST decides to implement a peer review process for evaluating proposals, the technical merit and balance of its programs, or its program "needs," such peer reviews need not be arranged under the same ASME program as project reviews. In fact, some reviews might be conducted by a standing group parallel to the Environmental Management Advisory Board (EMAB) but composed of world-class engineers and scientists (another example of such a standing group, the NRC Board on Assessment of NIST, has been described in Box 3.5).

OST's "Organizational" Culture and Leadership

One characteristic of organizations that effectively use peer review as a tool for managing their research and development portfolios is a peer review process that is ingrained in their organizational culture2; in other words, for these organizations, peer review is "standard operating procedure" for providing input to their decision-making process. Even after deciding on a peer review process that seems adequate on paper, OST still will have to change its organizational culture so that it embraces peer review as an essential part of its decision-making process.

The peer review culture is not yet ingrained within some parts of the Department of Energy, especially the EM program (NRC, 1995b,c, 1996; GAO, 1996). This may derive from a time when DOE laboratories were working in fields where the national expertise resided predominantly within the DOE organization. Today, programs like those of OST are not unique to DOE but are shared by other organizations, and a broad range of expertise is available outside the DOE "family." OST is just beginning to turn outside for technical advice, however. The benchmarking process described above would reinforce this positive trend. In addition, the committee is encouraged that the EM Science Program, which funds basic environmental research of relevance to EM, has embraced peer review for assessing the scientific merit of proposals.

A change in the organizational culture of OST will require leadership. Kostoff (1997a) recently pointed out that the commitment of an organization's senior management to high-quality reviews is one of the most important factors for high-quality peer review programs. OST management has begun this process of change by prescribing the use of peer review of projects at several stages of development. Although such leadership pressure provides a strong motivating force for the use of peer review, it will not by itself result in a change of organizational culture, which must pervade all levels of the organization.

2 Organizational culture can be defined as the normative values of an organization's members.


Corporations that introduce and maintain effective safety and quality assurance programs provide an example of how such a change in value systems can be accomplished. When these organizations adopt such a program, corporate managers typically employ a strategy that involves a steady flow of information to staff at all levels of the organization over a period of time. Such communications include educational materials about the program itself, case histories of how the program addressed specific issues or problems, and specific data on how effective the program was in achieving its objectives (i.e., performance metrics).

In the case of peer review in OST, individual members of the organization will come to value peer review when they see beneficial results (which might be disseminated through case histories, for example), when management communicates clear messages about the value of peer review, and/or when they have incentives to use it or disincentives not to use it (Kostoff, 1997b). The committee recommends that OST leadership develop an explicit strategy to accomplish a change in its organizational culture by distributing (1) educational materials that summarize the basic principles and benefits of peer review as a tool for decision making, (2) case histories illustrating how peer review input served to improve specific projects, and (3) summaries of key performance metrics that demonstrate how peer reviews are used to meet the overall objectives of OST's program.
