CHAPTER ONE

INTRODUCTION

BACKGROUND AND PURPOSE OF STUDY

State product evaluation programs exist to help state departments of transportation (DOTs) respond to and analyze proposed changes that affect highway and transportation practices and operations. Typically, these programs evaluate new products that are available for application in the industry. These products can include materials, equipment, processes, devices, and other new technologies, and are often proposed to individual agencies by outside vendors and commercial manufacturers. Sometimes new products are proposed by internal sources such as agency staff. The goal of these evaluation programs is to establish a method of responding to the volume of products submitted for review and to provide a formal process for incorporating new and innovative products into practice.

Product evaluation programs traditionally have been developed independently by DOTs and require varying degrees of administrative resources. Some programs are well established, efficiently managed, and effective, whereas others are still being developed. Differences exist in how DOTs approach the issue and contend with the processes associated with such programs. These processes, including performance testing, have also been developed independently, resulting in a wide range of effectiveness. Often program scopes are not well defined, and inconsistencies exist that could create risk for claims of bias or unfair evaluation conclusions. Some DOT programs track their evaluations through the application and implementation of the product, whereas others do not. Some DOTs effectively share their evaluation results and activities, whereas others do not. DOTs have recognized these inconsistencies and have generated several mechanisms and programs, primarily through AASHTO, to communicate evaluation processes, practices, and results (1).
These programs include ⢠AASHTOâs National Transportation Product Evaluation Program (NTPEP), which was created in 1994 as a way to coordinate specific product testing among states. ⢠The Highway Innovation Technology Evaluation Center (HITEC), a service center of the Civil Engineering Re- search Foundation, was established in 1994. ⢠AASHTOâs Lead State Program on the Strategic Highway Research Program (SHRP), established in 1996. ⢠The AASHTO Product Evaluation List (APEL), es- tablished in 1997 to communicate evaluation and testing activities nationwide. The purpose of NTPEP is to pool the professional and physical resources of the individual AASHTO member de- partments and to focus those resources on testing materials of common interest to improve their cost-effectiveness (2). This program is now financially supported by more than 95% of the 52 member organizations and tests products that have been identified and prioritized by the members. Most of the products tested in this national program have been in the categories of durable pavement markings, geo- textiles, sign sheeting, and miscellaneous traffic control products. Although the NTPEP was evaluated for effectiveness in a 2001 study conducted for AASHTO by TransTech Man- agement, Inc. (3), neither the economic impacts nor the cost savings of the program have yet been studied. It is an- ticipated that the economic impacts of NTPEP on the re- spective state agencies and on the greater transportation re- search community will be addressed in a separate upcom- ing AASHTO study. HITEC is a collaborative program established to be- come a national service center for implementing highway technologies (4). HITECâs primary goal is to facilitate the evaluation of new, innovative technologies and to expedite their transfer into practice. As of February 2003, the pro- gram had facilitated 46 evaluations of âhigh techâ and âlow techâ products. 
AASHTO's Lead State Program was established in 1996 to provide models and assistance to agencies dealing with the implementation of the SHRP technologies and practices. This program involved more than 30 state agencies in 7 technology focus areas. Although the program did not directly conduct tests, it did help to move innovative products into acceptance and ultimately into practice, as do the respective state evaluation programs. The participants in the Lead State Program used key states for each of the seven technical areas to create implementation tools, strategies, and best practice examples to help nonparticipating agencies in adopting the new technologies and products without independent testing. The program was focused and targeted, with implementation tasks that concluded in December 2000.
AASHTO's APEL is a database that was established in 1997 to share information regarding product evaluations, performance, and acceptability (5). This database was preceded by AASHTO's Special Products Evaluation List. The APEL does not focus on reporting approved products; rather, it reports and records information on what products have been evaluated by a state, whether ultimately approved or disapproved. To that end, APEL serves as a communications tool on active product evaluations that have been or are being conducted by states. APEL provides the opportunity for a state to check on a specific product under consideration and can also serve as a reference on the type and nature of an evaluation that was conducted by a state.

The APEL database also serves as a new product information source that states may search for products of interest. Users can simply search on a product category and review what products have been used by other states to solve a particular problem. APEL not only has information on products, it also has contact information for the product manufacturers and for the state personnel responsible for product evaluations.

All of these programs were created to share experiences and technical information regarding the adoption of new highway and transportation products. However, as is often the case with sharing information and technology transfer, the success in sharing these pooled resources of information is contingent on the respective agency's receptiveness to accepting someone else's experience and on the relative value or applicability of the information to that agency (6). It should also be noted that NTPEP, HITEC, and the Lead State Program are or have been limited to specific technologies that may not be of equal interest to all agencies. The APEL database includes only information that has been contributed by a respective state and is fluid in that new information is added as states have the time and resources.
For these reasons, most states have established their own, independent product evaluation programs. Some states, such as Maryland (7) and Oregon (8,9), have established their own databases to track product evaluations. AASHTO staff reports that some member organizations participate fully and actively in the above-mentioned national programs to complement their own programs, whereas others do not.

As a part of this synthesis, the organization and funding mechanisms of these independent state evaluation programs will be considered. Evaluation procedures and the implementation of the evaluation results will be summarized. Various outside acceptance criteria will be considered, and a discussion as to how these criteria affect internal DOT programs will be included. Because many options exist for acceptance criteria, whether formally established or informally applied as "rules of thumb," these criteria will be discussed, particularly as to how they contribute to implementation applications.

REPORT FOCUS AND OBJECTIVES

DOTs recognize the importance of dealing with new product evaluations in an efficient, fair, and expeditious manner. This synthesis addresses the magnitude of the problem, the types of criteria used to evaluate products, the strategies employed to communicate acceptance of an approved product, and the mechanisms used to apply an approved product in practice. This synthesis also draws some parallels to the above-mentioned national product evaluation programs. One key issue surrounding this synthesis is the duplication of effort that exists in the respective evaluation processes. This synthesis identifies the products and technologies of most common interest to the states so that some future duplication may be avoided.

Transportation agencies have provided information, primarily in response to the electronic questionnaire, as to how these programs contribute to agency operations and product implementation.
Additional information has been gathered through the survey and in follow-up interviews regarding the extent to which agencies support efforts to respond to requests for evaluation.

This synthesis discusses state product evaluation programs as they exist today. It addresses the issue of creating and administering programs that encourage innovation and improvement in practices. The synthesis is based on a review of the literature, an electronic questionnaire distributed to U.S. state and Canadian provincial transportation agencies, follow-up interviews and queries to selected public agencies, and information from private practice and academic organizations.

REPORT ORGANIZATION

This synthesis is divided into five chapters.

• Chapter one introduces the subject of product evaluation programs and sets the stage by presenting the background and existing state of evaluation programs.
• Chapter two provides a discussion of the critical issues affecting DOTs as they relate to new product evaluations and the implementation and application of those products into practice.
• Chapter three specifically addresses how DOTs conduct their respective programs. Individual mechanisms, funding alternatives, staffing and resource management, best practices, and various models are discussed.
5 ⢠Chapter four discusses how DOTs measure the effec- tiveness of their programs. The benefits of imple- menting new products are addressed and some exam- ples of how these benefits have affected the overall operation of the DOTs are discussed. ⢠Chapter five provides a summary of the findings and conclusions. Recommendations for future activities on the subject are also presented. Finally, references and appendixes are provided that in- dicate the sources of information, including the question- naires and tabulated responses as noted in the text.