
Performance Specifications for Rapid Highway Renewal (2014)

Chapter: Appendix E - Assessing the Value of Performance Specifications

Suggested Citation:"Appendix E - Assessing the Value of Performance Specifications." National Academies of Sciences, Engineering, and Medicine. 2014. Performance Specifications for Rapid Highway Renewal. Washington, DC: The National Academies Press. doi: 10.17226/22560.


Appendix E: Assessing the Value of Performance Specifications

Data Collection Approach

The Delphi technique was used to assess the actual or potential value of performance specifications. The method is particularly useful when empirical means are not suitable and study results must rely heavily on the subjective opinions of experts. In this study, the Delphi technique relied on the experience of experts in different fields within the highway industry to determine the added or lost value of using performance specifications under a number of different scenarios (which considered different delivery methods and project characteristics).

Delphi Technique Background

Research shows that the Delphi technique is very useful when the judgment of individuals must be tapped and combined to address a lack of agreement or an incomplete state of knowledge (Delbecq et al. 1975). Delphi is particularly valued for its ability to structure and organize group communication (Powell 2003). One of its major advantages is that it documents the facts and opinions of the panelists while avoiding the pitfalls of face-to-face interaction, such as group conflict and individual dominance (Gupta and Clarke 1996). It is an inexpensive research methodology that involves experts without physically bringing them together. "Controlled feedback and anonymity through planned, rather than reactionary responses from experts helps panelists to revise their views without publicly admitting that they have done so, thus encouraging them to take up a more personal viewpoint rather than a cautious institutional position" (Masser and Foley 1987). The Delphi approach offers an additional advantage for situations in which defining areas of uncertainty or disagreement among experts is important. In those instances, Delphi can highlight topics of concern and evaluate uncertainty in a quantitative manner. Group evaluation of belief statements made by panel members is an explicit part of Delphi (Robinson 1991).
Linstone and Turoff (2002) documented some of the research attributes that warrant the use of Delphi in the data collection process of any given study. For example, they note that the use of Delphi is beneficial when the research problem does not lend itself to precise analytical techniques but can benefit from subjective judgments on a collective basis. Another condition for using Delphi is when the individuals needed to contribute to the examination of a broad or complex problem have no history of adequate communication and may represent diverse backgrounds with respect to experience or expertise. Delphi can also prove valuable when time and cost constraints make frequent group meetings infeasible, or when more individuals are needed than can effectively interact in a face-to-face exchange. Furthermore, Delphi helps preserve the heterogeneity of the participants to assure the validity of the results, that is, avoidance of domination by quantity or by strength of personality (the "bandwagon effect"). All of these Delphi attributes were found to correspond with this research endeavor, which led the team to use the Delphi approach in the data collection process.

The number of survey rounds included in the data collection process is a critical aspect of the Delphi technique. Studies show that some applications of the Delphi process have been accomplished in three rounds and others in more. The iterative nature of the procedure generates new information for panelists in each round, enabling them to modify their assessments and project them beyond their own subjective opinions. It can represent the best forecast available from a consensus of experts (Corotis et al. 1981). Typically three rounds of surveys are sent to a preselected expert panel, although the decision regarding the number of rounds is largely pragmatic (Jones et al. 1992).
The Delphi method requires a minimum of two rounds, beyond which the appropriate number of rounds is disputed (Thangaratinam and Redman 2005). "Repeated rounds may lead to fatigue by respondents and increased attrition" (Walker and Selfe 1996). For this research, the experts participated in three rounds of surveys. Details on each round and its results are discussed in the Survey Results section of this appendix.

Delphi Experts

The success of a Delphi study largely rests on the combined expertise of the participants who make up the expert panel (Powell 2003). Rowe (1994) suggests that experts be selected from varied backgrounds to guarantee a wide base of knowledge; Murphy et al. (1998) suggest that diversity in expert panel membership leads to better performance, as it may allow for the consideration of different perspectives and a wider range of alternatives.

However, considerable variation exists in the suggested panel size. One recommendation is five to 20 experts with disparate knowledge (Rowe and Wright 2001). In a Delphi study related to thermal and transport science, reference was made to Clayton's rule of thumb that 15 to 30 people are an adequate panel size; 31 of 35 people agreed to be on that panel (Streveler et al. 2003). Guidance suggests that the number of participants will vary according to the scope of the problem and the resources available (Delbecq et al. 1975). Some believe the more participants, the better the results. However, there is very little empirical evidence on the effect of the number of participants on the reliability or validity of consensus processes as long as the opinions of all stakeholders are taken into account (Murphy et al. 1998). Indeed, what is more important is the expertise of the panel members themselves.

The R07 study involved 11 participants, representing both public and private institutions and possessing a range of experience in delivery methods, use of warranties, and the development of specifications.
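The iterative logic described above, in which panelists revise their ratings over repeated rounds until opinions stabilize, can be sketched in a few lines of Python. This is purely an illustrative sketch, not part of the study's tooling: the stopping rule (interquartile range below a threshold), the three-round cap, and the toy revision rule are all assumptions for demonstration.

```python
from statistics import median, quantiles

def iqr(values):
    """Interquartile range of a list of numeric ratings (spread = dissent)."""
    q1, _, q3 = quantiles(values, n=4)
    return q3 - q1

def run_delphi(ratings, revise, max_rounds=3, consensus_iqr=1.0):
    """Iterate Delphi rounds: after each round, panelists see the group
    median and may revise their rating anonymously.  Stop when the spread
    (IQR) falls below the consensus threshold, or after max_rounds.

    revise(own_rating, group_median) -> new rating for one panelist.
    """
    for round_no in range(1, max_rounds + 1):
        if iqr(ratings) <= consensus_iqr:
            return round_no, ratings          # consensus reached early
        med = median(ratings)
        ratings = [revise(r, med) for r in ratings]
    return max_rounds, ratings

# Toy revision rule: each expert moves halfway toward the group median.
rounds_used, final = run_delphi(
    [2.0, 4.0, 6.0, 9.0], lambda r, m: r + 0.5 * (m - r))
```

With this toy rule the spread shrinks each round, mirroring the convergence toward consensus that the three survey rounds are intended to produce.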
The team was satisfied that the Delphi panel had the requisite expertise (relevant knowledge and years of experience with respect to the subject) to assess the perceived value of performance specifications under a number of different scenarios.

Experts were identified and considered for potential participation on the Delphi panel on the basis of three criteria:

• Affiliation (e.g., agency, contractor, consultant, academia);
• Experience with alternate project delivery approaches, warranties, and/or performance specifications; and
• Potential interest, as determined by previous participation in research and other activities related to the application of performance specifications.

Twenty-eight agencies and organizations were identified, providing a list of 34 potential participants. Several of the potential participants were affiliated with the same agency or organization; only one survey request was made per agency or organization even if it had multiple potential participants. Of the 24 requests for participation in the Delphi effort, 13 went to departments of transportation (DOTs), five to the construction industry, three to firms engaged in public-private partnerships (P3s), two to consultants, and one to academia. Of the 24 requests, 14 agencies and organizations agreed to participate. Seven survey responses to the first round of the Delphi study were received. However, 11 professionals representing 10 agencies and organizations participated in the workshop and in Round 2 and Round 3 of the Delphi process. Table E.1 summarizes the participation of agencies and organizations in the Delphi study.

Survey Development

Once experts were identified, the next step in the Delphi process entailed preparing a survey to assess the perceived value of implementation of performance specifications by state highway agencies.
The survey was divided into three steps to guide the participants in entering the necessary data in an organized and sequenced fashion. Step 1 of the survey asked respondents to describe a project in terms of the following six characteristics:

• Road class (local, state highway, interstate, toll);
• Type of construction (preservation, reconstruction, new construction);
• Traffic (low, moderate, or high average annual daily traffic, AADT);
• Location (urban, rural);
• Complexity (based on project phasing, right-of-way requirements, utilities, environmental issues, etc.); and
• Climate (based on moisture and temperature by region).

The objective of this step was to determine if certain project characteristics were likely to impart a larger influence than others on the perceived value added (or lost) of using performance specifications.

Given the project characteristics identified in Step 1, respondents were asked in Step 2 to evaluate the relative impact of moving from a method specification under a design-bid-build delivery approach (the benchmark case) to performance specifications under a variety of different delivery approaches (design-build, design-build-warrant, design-build-maintain, etc.). The value added or lost was to be evaluated (from the perspective of the owner) against the following criteria:

• First cost (FC). The relative percentage increase or decrease in the total costs incurred by the owner to complete the design and construction of a given project relative to the implementation of prescriptive (method) specifications.

Table E.1. Summary of the Participation of Agencies and Organizations in the Delphi Study

Columns (left to right): relevant experience (alternate project delivery; warranties; performance specifications) and data collection (survey request; survey response; workshop participation, with number of participants in parentheses).

Department of transportation (DOT):
  Florida DOT: √ √ √ √ √ (1)
  Virginia DOT: √ √ √
  Washington State DOT: √ √
  Michigan DOT: √ √ √
  Ohio DOT: √ √ √
  Mississippi DOT: √ √
  Wisconsin DOT: √ √ √
  North Carolina DOT: √ √ √ √
  Missouri DOT: √ √ √
  Caltrans: √ √
  Texas DOT: √ √ √ √ √ (1)
  Minnesota DOT: √ √
  Oklahoma DOT: √ √
Contractor:
  Rieth-Riley: √ √ √ √
  Kokosing: √ √ √ √ √ (1)
  Kiewit: √ √ √ √
  Flatiron: √ √ √ √
  Wagman: √ √ √
P3:
  Colas: √ √ √ (1)
  Halcrow: √ √ √ √ (1)
  Cintra: √ √ √ √ (1)
  World Bank: √
Consultant:
  HNTB: √ √ √
  Transtec: √ √ √ √ (1)
  Trauner: √ √ √ √ (2)
Academia:
  University of Oklahoma: √ √ √ √ √ (1)
  Texas A&M: √ √ √ √ (1)
TOTAL: 28 22 7 11

• Life-cycle costs (LCC). The relative percentage increase or decrease in the costs encountered during the life span of the project, resulting from implementation of performance specifications under the given delivery method, relative to implementation of prescriptive (method) specifications. A change in the LCC of a given project is an indication of project quality: a decrease in LCC reflects high quality, while an increase in LCC can be attributed to lower quality.
• Construction inspection and administration costs. The relative percentage increase or decrease in agency construction inspection and administration costs, resulting from implementation of performance specifications under the given delivery method, compared with implementation of prescriptive (method) specifications.
• Innovation opportunity. The opportunity for performance specifications, as implemented under the given delivery method, to create an incentive for innovation in executing design and/or construction works.
• Schedule. The relative percentage increase or decrease in total project duration, resulting from implementation of performance specifications under the given delivery method, compared with implementation of prescriptive (method) specifications.
• Traffic disruption. The relative decrease or increase in traffic disruption, resulting from implementation of performance specifications under the given delivery method, compared with implementation of prescriptive (method) specifications.

Finally, Step 3 of the survey provided a summary of the participants' input. In that step, participants were required to review a generated summary of their results for consistency and to make any necessary adjustments.

Survey Results

Round 1

The data presented below are the results from Round 1 of the Delphi survey. For this round, the research team chose 24 experts and requested their participation in the Delphi study through a formal e-mail invitation.
Thirteen of the 24 agreed to participate; however, only seven completed the survey. Initial survey responses were received from the following owner and industry representatives:

Owners
• Florida DOT (Dave Sadler)
• Texas DOT (Jeff Seiders)

Industry
• Ferrovial Agroman US Corporation (Cintra) (Fidel Saenz de Ormijana)
• Halcrow (Joe Graff)
• Kokosing Construction Company (John Householder)
• Rieth-Riley Construction Company (Pete Capon)

Academia
• Douglas Gransberg, University of Oklahoma

Thus, for Round 1, data were collected from a total of seven participants. One of the seven submitted an incomplete survey, completing it for the design-build-maintain (DBM) delivery method only. For consistency, the team did not consider that participant's input in the Round 1 data analysis. Thus, for Round 1, data were analyzed from a total of six surveys.

The aggregate results for each comparison criterion are presented in the form of bar charts. The combinations of delivery methods and performance specifications under consideration included the following:

• Prescriptive (method) specifications, or benchmark;
• Design-bid-build, with some performance requirements, no warranty (DBB+P);
• Design-bid-build, with short-term warranty (DBB+STW);
• Design-build, no warranty (DB);
• Design-build, with short-term warranty (DB+STW); and
• Design-build-maintain (DBM).

First Cost

The data in Figure E.1 show that the participants felt that implementing performance specifications would result in an overall increase in first costs, although one or two participants thought that a decrease in first cost was possible. They also believed that DBM leads to higher increases in first cost (5% to 15%) and that DBB+P mostly shows no impact or a small FC increase (0% to 5%). Furthermore, they expected FC increases for DBB+STW in the 0% to 5% and 5% to 10% ranges.

Life-Cycle Costs

The data show that there is a general decrease in LCC.
DBM, DB+STW, and DBB+STW show a greater decrease in LCC compared with DBB+P. Meanwhile, DB seems to have no impact or to cause only a minor increase in LCC (see Figure E.2).

Construction Inspection and Administration Costs

The data show an overall trend of no impact or minimal decreases and increases in construction inspection and administration costs. However, DBM and DB+STW show greater decreases (see Figure E.3).

Schedule

The data show a general trend toward a decrease in schedule (see Figure E.4). The greatest decrease in project duration is under DBM (decrease 10% to 5%). A relatively smaller decrease in project duration occurs under DB and DB+STW (decrease 5% to 10%), and little impact on schedule occurs under DBB+P and DBB+STW.

Innovation Opportunity

The data show an overall greater incentive to innovate when using performance specifications. DBB+P is the only delivery method with no additional incentive to innovate. DBM, DB+STW, and DB each indicate a greater incentive to innovate; for DBB+STW the incentive is almost the same (Figure E.5).

Traffic Disruption

The data show that all participants expected traffic disruption either to decrease or to stay the same. None felt that traffic disruption would increase under any delivery method. DBB+P and DBB+STW generally show no impact on traffic disruption. However, traffic disruption decreases with DBM, DB+STW, and DB (see Figure E.6).

Workshop

As a follow-up to the initial survey effort, the team conducted a face-to-face workshop with the survey respondents. The objective of the workshop was to reach a consensus on the survey results through subsequent rounds of the Delphi process and to further identify the benefits and risks associated with implementing performance specifications.

The experts involved in the workshop had a variety of different backgrounds and experiences in applying performance specifications in the highway industry. A total of 11 experts participated in the workshop (four more than completed the Round 1 survey). A few of the experts represented academia and the consulting industry, while the others were from state highway agencies and contractors or developers.

Figure E.1. Survey data related to first cost.
Figure E.2. Survey data related to life-cycle costs.

Figure E.3. Survey data related to construction inspection and administration costs.
Figure E.4. Survey data related to schedule.
Figure E.5. Survey data related to incentive to innovate.

The team began the workshop with a summary and discussion of the initial survey results. Note that for the initial survey, respondents based their answers on an actual project experience, which they described in terms of the six project characteristics identified in the survey tool (road class, location, etc.). Some general conclusions can be drawn from participants' initial responses:

• The use of performance specifications, particularly in conjunction with warranty or maintenance agreements, increases FCs and reduces LCCs. However, the use of DB without a warranty or maintenance option has no impact on FCs and slightly increases LCCs. These results indicate that quality (LCC) tends to be enhanced when using performance specifications with postconstruction warranty or maintenance options, but FC may increase slightly.
• The data related to construction inspection and administration costs show mixed results: no impact or minimal increases or decreases. But DB with warranties or maintenance agreements shows a greater decrease in those costs.
• In general, the use of performance specifications under all delivery methods results in a general decrease in schedule or project duration.
• Use of performance specifications under a DBB scenario has no impact on traffic, while the use of DB delivery by itself or in conjunction with warranties or maintenance options decreases traffic disruption.

Presentation of these results prompted discussion among the participants regarding any assumptions that were (or, on reconsideration, should have been) built into their value assessments. These assumptions included the following:

• First cost refers to all costs incurred during the project development and delivery phases.
• First cost excludes any learning curve associated with the initial implementation of the specification.
• Life-cycle cost refers to all costs incurred during facility operations, including resurfacing, rehabilitation, and reconstruction during the analysis period.
• The analysis period is considered to be 50 years.

The participants also discussed whether the project characteristics and value assessment criteria were appropriate and comprehensive. They agreed that the size (dollar value) of the project would have an effect on the results and should be added to the project characteristics; road class and weather could be eliminated, as the effect of these parameters would be minimal compared with the other characteristics. Also dropped was the use of innovation opportunity as a value comparison criterion. The participants felt that the ability to innovate was perhaps the leading driver behind the savings reflected in the other criteria; therefore, to evaluate savings associated with innovation itself would create double-counting problems.

The participants also discussed the applicability of the survey results (which focused on pavements) to other research areas such as bridges and geotechnical systems. The general consensus was that the results would be similar, with the possibility that the savings might be even more pronounced, particularly for the longer-term agreements.

Delphi Round 2

The group reconvened on Day 2 of the workshop to reevaluate their initial survey responses in light of the Day 1 discussions. This evaluation formed the second round of the Delphi process. In contrast to the initial survey, in this formal Delphi round the participants were asked to evaluate the perceived value of implementing performance specifications under each project scenario that could be generated by combining the various project characteristics (i.e., project size, type, traffic, complexity, and location). After adjusting for the comments received on the first day, this meant evaluating 32 different project scenarios.
Figure E.7 shows an example of one of the scenarios.

Figure E.6. Survey data related to traffic disruption.
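The count of 32 scenarios follows directly from treating each of the five project characteristics as a two-level factor (2^5 = 32). A minimal sketch of enumerating them; the specific level labels are drawn from the characteristics discussed above, but the exact wording and the dictionary layout are assumptions for illustration:

```python
from itertools import product

# Two levels per characteristic, as revised at the workshop
# (size was added; road class and climate were dropped).
characteristics = {
    "size":       ["large", "small"],
    "type":       ["new construction", "reconstruction"],
    "traffic":    ["high AADT", "low AADT"],
    "complexity": ["high", "low"],
    "location":   ["urban", "rural"],
}

# Cartesian product of the five two-level characteristics.
scenarios = [
    dict(zip(characteristics, levels))
    for levels in product(*characteristics.values())
]
```

The first element of `scenarios` matches the report's Project Combination 1 (large, new construction, high AADT, high complexity, urban).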

The Round 2 Delphi survey distributed to the participants contained 32 tables similar to the one shown in Figure E.7. Each participant completed the Round 2 survey independently during the second day of the workshop. The results from this round were then analyzed in two different ways to better understand the data collected. The main objective of the data analysis was to better capture the participants' viewpoints regarding the application of various delivery methods and the effect on the defined performance criteria.

Aggregate Results

The number of participants for the second Delphi round was 11. Given the 11 participants and 32 project combinations, the number of responses collected from the survey totaled 352. The responses were first combined to present the overall impact of each of the performance criteria, without considering different project scenarios. Appendix G presents the survey results in graphical form, in full color. The first section of that appendix presents the aggregated data, with five graphs each showing the combined data of all the participants for the five performance (comparison) criteria: first costs, life-cycle costs, inspection and administration costs, schedule, and traffic disruption.

Individual Project Combinations

The data collected from each participant in Delphi Round 2 were combined for each performance criterion (first costs, life-cycle costs, inspection and administration costs, schedule, and traffic disruption) according to project characteristic (traffic, complexity, size, type, and location). The second section of Appendix G presents in graphical form the combined results for each project characteristic when considered with each performance criterion. The third section of Appendix G presents the results for each of the 32 project combinations. Each combination includes five bar charts, each representing the results when one of the five performance criteria is considered.
An example of the data included in this section of the appendix is presented in Table E.2 and in Figures E.8 to E.12. Table E.2 indicates that Project Combination 1 is a large, new construction project with high AADT traffic; it is highly complex to construct and located in an urban area. Figures E.8 to E.12 show the 11 participants' input for this project combination for the five comparison criteria: first cost, life-cycle cost, inspection and administration cost, schedule, and traffic disruption.

Table E.2. Project Criteria for Project Combination 1
Size: Large; Type: New Construction; Traffic: High AADT; Complexity: High; Location: Urban

Figure E.7. Example of project scenario tables for Delphi Rounds 2 and 3.

Delphi Round 3

For Round 3 of the Delphi effort, the participants were given the aggregate results of Delphi Round 2, as included in Appendix G, with the expectation that they would review their initial assessment and make changes as they saw fit given the aggregate group response and workshop discussions. The participants were also provided with a list of clarifications regarding some issues related to the data collection approach that were discussed at the workshop:

1. Life-cycle cost refers to all costs incurred during facility operations, including resurfacing, rehabilitation, and reconstruction during the analysis period.
2. The analysis period is considered to be 50 years.
3. First cost refers to all costs incurred during the project development and delivery phases.
4. The impacts (first cost, schedule, life-cycle costs, etc.) should be assessed independently of the respondent's point of view; in other words, what will the impact be, not what would I want to see.
5. Design-build implies that the contractor has some level of flexibility in deciding on design parameters.
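The two Round 2 analyses described earlier, the pooled "aggregate results" view and the per-combination view, amount to tallying the same responses under different keys. A sketch of how the 352 response sheets (11 participants x 32 combinations, each rating five criteria) might be tallied; the flat record layout and the placeholder "no impact" ratings are assumptions for illustration:

```python
from collections import Counter

criteria = ["first cost", "life-cycle cost", "inspection/admin cost",
            "schedule", "traffic disruption"]

# Hypothetical flat records: (participant, combination, criterion, rating_bin).
# Real ratings would be percentage-range bins such as "increase 0-5%".
responses = [(p, combo, crit, "no impact")
             for p in range(11)           # 11 panelists
             for combo in range(1, 33)    # 32 project combinations
             for crit in criteria]        # 5 comparison criteria

# Aggregate view: counts per (criterion, rating bin), pooling all scenarios.
aggregate = Counter((crit, rating) for _p, _c, crit, rating in responses)

# Per-combination view: counts for one scenario (Project Combination 1).
combo1 = Counter((crit, rating)
                 for _p, combo, crit, rating in responses if combo == 1)
```

Each `(criterion, bin)` count in `aggregate` corresponds to one bar in the pooled charts (352 ratings per criterion), while `combo1` corresponds to the bars in Figures E.8 to E.12 (11 ratings per criterion).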

Figure E.8. First cost comparison results under Project Combination 1.
Figure E.9. Life-cycle cost comparison results under Project Combination 1.
Figure E.10. Inspection and administration cost comparison results under Project Combination 1.

The participants were asked to consider these clarifications when completing the third round of the Delphi effort. Six of the 11 experts sent back a reply to the e-mail message with their changes; the remaining five did not wish to make any changes to their Round 2 inputs. The team aggregated the data provided by the experts in Round 3 and assessed how the responses changed from Round 2. The assessment shows that, apart from some minor changes, the aggregate results generated in Round 3 are almost identical to those in Round 2.

The aggregate first cost results show an expected trend, as shown in Figure E.13. DBB+P may result in a slight FC increase, while DBB+STW will most likely increase first costs. The results for the other three levels of performance specifications are less conclusive. DB shows significant variance across the responses, which results from a lack of clarity about the objective of using DB: is it used simply to accelerate project delivery, or does it give contractors the design flexibility to maximize project efficiency as well? DBM is another method that shows significant variance; in fact, the results are contradictory. As this method is quite new, such results are not surprising. The experts who have experience with DBM typically report great FC savings.

The results for life-cycle cost value gains are intuitive: as the level of performance specifications increases, the LCCs decrease. Again, DB shows significant variation in responses. Because DB gives contractors no contractual responsibility for performance, many experts believed that this could jeopardize quality, which is reflected in the results (see Figure E.14).

Much as for life-cycle cost, inspection and administrative costs decrease with higher levels of performance specification (see Figure E.15). DBM shows the largest decrease, while DBB+P and DBB+STW show no impact.

The results related to schedule impact are largely split (see Figure E.16). DBM, DB+STW, and DB show decreases, while DBB methods show no impact or an increase. These results are again expected, as DB and its variants integrate project delivery across different delivery phases (e.g., design, construction, maintenance).

Finally, the results presented in Figure E.17 show that the effect of the various approaches on traffic is largely the same, except for higher levels of performance specifications. As contractors receive incentives to manage traffic under performance criteria, shortened schedules are expected.

Figure E.11. Schedule comparison results under Project Combination 1.
Figure E.12. Traffic disruption comparison results under Project Combination 1.
Figure E.13. Round 3 first cost comparison histogram.
Figure E.14. Round 3 life-cycle cost histogram.
Figure E.15. Round 3 inspection and administration cost histogram.

Summary of Results

The general trends derived from the Round 3 surveys concerning added or lost value are presented in Table E.3. The table presents the general trends seen in the histograms with respect to the delivery methods and the comparison criteria included in the Round 3 survey. Performance specifications implemented under DBB delivery (i.e., the DBB+P delivery scheme) show the least added value of all the delivery methods. No impact was evident on most of the comparison criteria except for some decrease in life-cycle cost.

The results show that implementing performance specifications under the DBB+STW delivery method will have minor impacts on the project. That approach will likely lead to small decreases in life-cycle cost and inspection and administration costs, and a slight increase in first cost.

Performance specifications implemented under the DB delivery method show the greatest variability. DB shows consistent decreases in both schedule and inspection and administration costs but mixed results for both first cost and life-cycle cost. The surveyed experts responded that DB leads to an increase or a decrease of 0% to 5% in first cost and an increase or decrease of 0% to 10% in life-cycle cost. This variability can be attributed in part to how DB is implemented by agencies and the extent to which performance specifications are used.

For DB+STW, the results show an effect on the life-cycle cost, inspection and administration cost, and schedule comparison criteria similar to DB. In addition, DB+STW decreases traffic disruption, an added value attributed only to DB+STW and DBM (see Table E.3). However, the DB+STW delivery method increases first cost by as much as 10%.
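The between-round assessment described above, checking how responses moved from Round 2 to Round 3, reduces to comparing the two rounds' bin counts per criterion and delivery method. An illustrative sketch; the bin labels and counts below are hypothetical, not the study's data:

```python
from collections import Counter

def round_shift(round2, round3):
    """Net change in response counts per rating bin between two Delphi
    rounds (positive = more responses in that bin in the later round)."""
    bins = set(round2) | set(round3)
    return {b: round3.get(b, 0) - round2.get(b, 0) for b in bins}

# Hypothetical first cost bin counts for one delivery method,
# one counter per round, 11 panelists each.
r2 = Counter({"no impact": 3, "increase 0-5%": 6, "increase 5-10%": 2})
r3 = Counter({"no impact": 2, "increase 0-5%": 7, "increase 5-10%": 2})

shift = round_shift(r2, r3)
```

Small shift values across all bins are what "almost identical" aggregate results between Rounds 2 and 3 would look like in this representation.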
Finally, performance specifications implemented under the DBM delivery method show the most consistent decreases in life-cycle cost, inspection and administration cost, schedule, and traffic disruption relative to the other delivery methods. However, these decreases come with an increase of up to 10% in the highway project's first cost (see Table E.3).

Figure E.16. Round 3 schedule histogram.
Figure E.17. Round 3 traffic disruption histogram.

Table E.3. General Trends Extracted from Round 3 Results

Delivery Method | First Cost | Life-Cycle Cost | Inspection and Administration Cost | Schedule | Traffic Disruption
DBB+P | No impact | No impact to decrease 0–5% | No impact | No impact | No impact
DBB+STW | No impact to increase 0–5% | Decrease 0–5% | No impact to decrease 0–5% | No impact | No impact
DB | No impact to increase 0–5% (a) | No impact to increase 0–10% (b) | No impact to decrease 0–5% | Decrease 0–10% | No impact
DB+STW | Increase 0–10% (c) | No impact to decrease 0–10% | No impact to decrease 0–10% | Decrease 0–10% | Decrease
DBM | Increase 0–10% | Decrease 0–15% | Decrease 5–10% | Decrease 0–10% | Decrease

a. Some respondents indicated that DB leads to a decrease of 0% to 5% in first cost.
b. Some respondents indicated that DB leads to a decrease of 0% to 10% in life-cycle cost.
c. Some respondents indicated that DB+STW leads to a decrease of 0% to 5% in first cost.

