
In-Service Performance Evaluation: Guidelines for the Assembly and Analysis of Data (2022)

Chapter: Chapter 5 - Assessing, Interpreting, and Implementing Results

Suggested Citation:"Chapter 5 - Assessing, Interpreting, and Implementing Results." National Academies of Sciences, Engineering, and Medicine. 2022. In-Service Performance Evaluation: Guidelines for the Assembly and Analysis of Data. Washington, DC: The National Academies Press. doi: 10.17226/26751.


The evaluation measures outlined in Chapter 4 assess in-service performance through consideration of

• The structural adequacy of the SFUE;
• The occupant risk, through consideration of the crash severity; and
• The postimpact vehicle trajectory and impact orientation, through consideration of the crash sequence of events.

Each of these evaluation measures capitalizes on the data already available within the jurisdiction. For each lettered evaluation measure, three conditions are evaluated:

• Condition 1. The proportion of known values to unknown values available for evaluation (to assess the statistical power and minimize sample bias),
• Condition 2. The proportion of verified installations of the SFUE, and
• Condition 3. The proportion of cases in which inspection occurs as part of a standard operating procedure to provide a reasonable degree of assurance that the safety feature was in crash-ready condition prior to the crash.

After Conditions 1, 2, and 3 are evaluated, the performance of the SFUE is assessed and conclusions are formed through comparison with the performance goals (PGs) established by the jurisdiction. Four performance assessment levels are considered for each evaluation measure, as follows:

1. The performance assessment is not limited by design vehicle or speed.
2. The performance assessment is limited to the design vehicles for which the safety feature was crash tested.
3. The performance assessment is limited to the design speed for which the safety feature was crash tested.
4. The performance assessment is limited to both the design vehicles and the design speed for which the safety feature was crash tested.

5.1 Conditional Assessment of ISPE Data Set

Conditions 1, 2, and 3 consider the amount of data available for the ISPE and the degree of confidence that the roadside safety feature was crash ready.
5.1.1 Condition 1: Statistical Power and Sample Bias

An ISPE first considers Condition 1, which is met when the sampled data are not biased and the statistical power of the study is maximized. If Condition 1 is not met for a particular evaluation measure, then further conditions are not considered until or unless an investigative ISPE is undertaken to obtain more cases or reduce the number of unknown values such that Condition 1 is met. Unknown or missing values in the data set are indicated with a value of 99 entered in a row for the crash corresponding to the field for which the value is unknown. Condition 1 considers the specific fields used in each evaluation measure.

Some amount of unknown data is expected, but attention should be paid to the amount of unknown data in the analysis of each evaluation measure. Analysts should seek to understand the reasons for the missing data and, if possible, make corrections for future evaluations. A threshold for evaluating (or not) each evaluation measure with consideration of unknown (99) entries has been established and is shown in the far-right column of Table 8. When the unknown entries (entry value of 99) for the field shown in the column labeled "Distribution" exceed the threshold shown in the column labeled "Percentage Unknown," Condition 1 for that evaluation measure is not met. For example, if more than 30% of the PRS values for Evaluation Measure C in Table 8 are unknown, a conclusion should not be made for Evaluation Measure C. An investigative ISPE may be undertaken to obtain more data with known values prior to proceeding with that evaluation measure.

Table 8. Condition 1: Consider statistical power and potential data bias.

Performance Outcome                  Evaluation Measure   Distribution(a)      Percentage Unknown (entry value = 99)
Structural adequacy                  A                    BREACH(1)            30
                                     B                    BREAK(2)             30
                                     C                    PRS(3)               25
Occupant risk                        D                    PEN(4)               NA
                                     F                    PostHE(4)            20
                                     H                    MAX_SEV(4,5,6,7),
                                                          VEH_TYPE(4,5,6,7)    NA
Vehicle trajectory and orientation   J                    PostHE(4)            25
                                     K                    PostHE(8)            20
                                     L                    ICP(9)               10
                                     M                    ICP(10)              10

(a) Find the distribution of the field where
1. AHE = 1, TOTAL_UNITS ≥ 1, and SFUE = 1.
2. AHE = 1, TOTAL_UNITS ≥ 1, and SFUE = 4.
3. AHE = 1, TOTAL_UNITS ≥ 1, and SFUE = or (2, 3).
4. FHE = 1 and TOTAL_UNITS = 1.
5. AHE = 1 and TOTAL_UNITS = 1.
6. MHE = 1 and TOTAL_UNITS = 1.
7. FOHE = 1 and TOTAL_UNITS = 1.
8. FHE = 1 and TOTAL_UNITS ≥ 1.
9. FHE = 1, TOTAL_UNITS = 1, and SFUE = or (2, 3, 4).
10. FHE = 1, TOTAL_UNITS = 1, and SFUE = 1.

These thresholds for unknown data limit the introduction of bias in the estimation of ISPE evaluation measures. Consideration was given to the potential reasons for the unknown data. When data are missing at random, the unknown values will not bias the conclusions, given that the Condition 1 threshold is met. There are, however, some values that are difficult to obtain during a routine ISPE. For example, the variable PEN indicates that the occupant compartment was penetrated by the safety feature. If this is determined by reading the police report narrative, there will likely be situations in which there is penetration, but it is not indicated in the narrative. This is particularly true if the penetration did not interact with a vehicle occupant. In this situation, the variable PEN may be biased by unknown information due to the way the information was collected. Further, it is believed that this outcome will often not be noted even when it does occur because of the lack of this field on most state crash forms. For these reasons, a threshold has not been provided for Evaluation Measure D. Unless the state undertaking the ISPE includes this field on its crash form, Evaluation Measure D should be limited to investigative ISPEs, to ensure that this field can be consistently collected.

However, some assumptions can be made about unknown crash severity and vehicle type without introducing bias into the data. A vehicle may not be at the scene when the police officer arrives, in which case the crash severity would be unknown. The unknown crash severity, however, will not bias the sample, as the crash severity can reasonably be assumed to be below the dichotomized value (i.e., less severe than a K or A level). A similar situation occurs when a driver of a crash-involved vehicle flees the scene. The crash severity is not known, but it is unlikely to be a KA crash, so the data remain unbiased.

In summary, the assembled ISPE data set is evaluated under Condition 1 for sample bias and to ensure the statistical power of the study is maximized. It is suggested that analysts try to understand the reasons for any unknown data. Consideration should be given to different options for reducing unknown values. These options may include improved communication with police officers if the fields are already available on the police report but are not being completed reliably or correctly. The routine collection of these fields may also be accomplished through the development and maintenance of a safety feature asset inventory.
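As an illustration of the Condition 1 check, the sketch below computes the share of unknown (99) entries for an evaluation measure's field and compares it with the Table 8 threshold. The measure-to-field-to-threshold mapping mirrors an excerpt of Table 8; the per-crash dictionary layout is an assumed, simplified stand-in for the ISPE data set.

```python
# Condition 1 sketch: an evaluation measure fails Condition 1 when the share
# of unknown (99) entries in its field exceeds the Table 8 threshold.
# Record layout is an illustrative assumption, not the template's format.

UNKNOWN = 99  # convention from the guidelines: 99 marks an unknown value

# Table 8 excerpt: evaluation measure -> (field, max percent unknown).
TABLE_8 = {"A": ("BREACH", 30), "B": ("BREAK", 30), "C": ("PRS", 25),
           "F": ("PostHE", 20), "J": ("PostHE", 25), "K": ("PostHE", 20),
           "L": ("ICP", 10), "M": ("ICP", 10)}

def condition_1_met(records, measure):
    """Return (percent_unknown, met) for one evaluation measure."""
    field, threshold = TABLE_8[measure]
    values = [r[field] for r in records if field in r]
    if not values:
        return 100.0, False
    pct = 100.0 * sum(v == UNKNOWN for v in values) / len(values)
    return pct, pct <= threshold  # met unless the threshold is exceeded

# Illustrative: 10 crashes, 2 with unknown PRS -> 20% unknown, under the 25% cap.
crashes = [{"PRS": 1}] * 6 + [{"PRS": 2}] * 2 + [{"PRS": UNKNOWN}] * 2
print(condition_1_met(crashes, "C"))  # (20.0, True)
```

A real implementation would also apply the footnote filters from Table 8 (e.g., AHE = 1 and TOTAL_UNITS ≥ 1) before computing the distribution.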
Provided there is no option to routinely collect these data fields, proceeding with an investigative ISPE that includes collection of field data for the ISPE evaluation measure of interest may be desirable. The data cross-tabulation generated by using the routine data for the particular evaluation measure can be used to predict the investigative study period and/or area, as discussed in Chapter 3.

5.1.2 Conditions 2 and 3: Construction and Maintenance Practices

Conditions 2 and 3 consider the crash readiness of the SFUE. A jurisdiction may have a policy for inspecting installations that evaluates them for correctness and thereby provides evidence for the crash readiness of the safety features under evaluation in the jurisdiction. Similarly, a jurisdiction may have a maintenance practice that systematically inspects safety features to ensure their continued crash readiness. The ISPE data set includes fields for INSTALL and for MAINT, which indicate whether the SFUE has been inspected and found to be crash ready.

The proportion of safety features inspected for crash readiness during installation is assessed under Condition 2. When the proportion of known inspections is equal to or greater than the threshold established by the jurisdiction, Condition 2 is met. If a jurisdiction routinely inspects new installations and records the results either in an asset management database or the construction logs, analysis may show, for example, that 98% were known to be in crash-ready condition after being installed. Knowing that the percentage is high provides confidence that the ISPE results reflect hardware that was initially installed correctly.

Likewise, the proportion of safety features subject to routine maintenance activities (e.g., inspections) that document crash readiness over the life of the safety feature is assessed under Condition 3. For example, a jurisdiction may schedule a periodic drive-by inspection of all safety features to provide such assurance. Such activities, however, may be limited to particular roadways within a jurisdiction. When the proportion of inspected and crash-ready features within the MAINT field is equal to or greater than the threshold established by the jurisdiction, Condition 3 is met.

Condition 2 considers the proportion of installations of the SFUE that can be verified to have been installed correctly. Condition 3 considers the proportion of routinely inspected and crash-ready hardware to provide a reasonable degree of assurance that the safety feature is in crash-ready condition (Table 9). If the value is below the threshold established by the jurisdiction, the agency might consider an investigative ISPE to determine the true proportion of crash-ready hardware. If, however, the performance of the safety feature for the considered evaluation measure was found to meet the jurisdiction's PGs, then an investigative ISPE would not be recommended, since the performance of the feature already meets the agency's PG.

Conditions 2 and 3 simply provide some assurance that there is a procedure in place to maximize the chance of the safety features being in crash-ready condition before a crash occurs. Ideally, all safety features will be inspected and verified to be correctly installed at the time of installation. It would also be ideal for all safety features to be routinely inspected through maintenance activities to provide a reasonable degree of assurance that the need for maintaining the safety feature in crash-ready condition is addressed. While both of these conditions represent the ideal, neither is always practical. Each jurisdiction establishes its own PGs. Some jurisdictions may have a more robust and active construction team, while others may already have maintenance inspections programmed. For these reasons, the threshold for both INSTALL and MAINT is established by the jurisdiction.

5.2 Performance Assessment of Safety Feature Under Evaluation

If Condition 1 is met, the ISPE may continue to this next phase, performance assessment. If Conditions 2 and 3 are also met, the results of the performance assessment can be used with greater confidence, in the knowledge that the safety features are generally crash ready.
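The Condition 2 and 3 checks described in Section 5.1.2 can be sketched as below. The value encoding (1 = inspected and found crash ready, 99 = unknown) and the example thresholds are illustrative assumptions, since each jurisdiction sets its own INSTALL and MAINT thresholds.

```python
# Sketch of the Condition 2/3 checks: the share of SFUE records whose INSTALL
# (Condition 2) or MAINT (Condition 3) field documents a crash-ready
# inspection, compared against jurisdiction-set thresholds. Encodings and
# thresholds here are illustrative assumptions.

def inspected_share(records, field, crash_ready=1):
    """Proportion of records (unknowns included) documenting crash readiness."""
    if not records:
        return 0.0
    return sum(r.get(field) == crash_ready for r in records) / len(records)

def conditions_2_and_3(records, install_threshold, maint_threshold):
    """A condition is met when the inspected share reaches its threshold."""
    return (inspected_share(records, "INSTALL") >= install_threshold,
            inspected_share(records, "MAINT") >= maint_threshold)

# Illustrative: 98% of installations inspected, 60% under routine maintenance.
features = ([{"INSTALL": 1, "MAINT": 1}] * 60
            + [{"INSTALL": 1, "MAINT": 99}] * 38
            + [{"INSTALL": 99, "MAINT": 99}] * 2)
print(conditions_2_and_3(features, 0.95, 0.80))  # (True, False)
```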
That is, Condition 1 must be met to proceed to performance assessment, regardless of whether Conditions 2 and 3 are also met, but meeting all three conditions provides the strongest evidence that the performance reflects correctly installed and well-maintained roadside safety features. Conditions 1, 2, and 3 are automatically assessed when the ISPE Data Set and Analysis Template is used. After Conditions 1, 2, and 3 are evaluated, the performance of the SFUE is assessed and conclusions are formed through comparison with the PGs established by the jurisdiction. The performance assessment levels for each evaluation measure are discussed here.

Table 9. Conditions 2 and 3: Construction and maintenance practices by jurisdiction.

Condition   Performance Outcome   Evaluation Measure             Proportion(1)
2           Installation          A, B, C, D, F, H, J, K, L, M   INSTALL
3           Maintenance           A, B, C, D, F, H, J, K, L, M   MAINT

(1) Find the distribution of the field, including values of 99, where AHE = 1 and TOTAL_UNITS ≥ 1, limited to the SFUE.

5.2.1 Performance Assessment Level 1: Analysis with No Limitations

The performance of hardware is first assessed through consideration of all of the crashes within the compiled ISPE data set (Performance Assessment Level 1). There are no limitations on the type of impacting vehicle or posted speed during this first level of assessment. This first assessment progresses with the calculation of p̂ and ES for each appropriate ISPE evaluation measure, as outlined with pseudocode in Table 6. Corresponding CIs are also determined for both p̂ and ES.
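As a hedged illustration of these Level 1 calculations (the report's Table 6 pseudocode and Section 4.4 are authoritative), p̂ and a risk-ratio ES with normal-approximation CIs might be computed as follows. The 85% confidence level matches the CI level mentioned in Section 5.3; the crash counts are invented.

```python
import math

# Sketch: p-hat (KA proportion), ES as a risk ratio of the unexpected to the
# expected outcome, and normal-approximation CIs at an assumed 85% level.
Z85 = 1.4395  # two-sided 85% normal quantile

def p_hat_ci(ka, n):
    """KA proportion with a Wald confidence interval."""
    p = ka / n
    half = Z85 * math.sqrt(p * (1 - p) / n)
    return p, (max(0.0, p - half), min(1.0, p + half))

def effect_size_ci(ka_unexp, n_unexp, ka_exp, n_exp):
    """Risk ratio (unexpected vs. expected outcome) with a log-scale CI."""
    rr = (ka_unexp / n_unexp) / (ka_exp / n_exp)
    se = math.sqrt(1 / ka_unexp - 1 / n_unexp + 1 / ka_exp - 1 / n_exp)
    return rr, (rr * math.exp(-Z85 * se), rr * math.exp(Z85 * se))

# Illustrative counts: 12 KA in 60 breaches vs. 20 KA in 400 containments.
es, (lo, hi) = effect_size_ci(12, 60, 20, 400)
print(round(es, 2), round(lo, 2), round(hi, 2))  # 4.0 2.46 6.51
```

Because the interval (2.46, 6.51) excludes 1, this invented example would count as a statistically significant elevated risk for the unexpected outcome.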

A jurisdiction may wish to benchmark the performance of the safety feature so that improvements can be tracked and quantified. If the SFUE has met or exceeded the PG established by the jurisdiction across the range of vehicles and posted speed limits it is exposed to, then the choices made regarding where and when to install the safety feature would appear to be appropriate. A jurisdiction may define the PG by any means it determines acceptable (e.g., review of past data, policy decision, engineering judgment, existing literature), and a jurisdiction need not establish a PG. With or without a PG, the calculations allow a jurisdiction to track hardware performance over time.

During the conduct of the research for NCHRP Project 22-33, which developed the guidelines in this report, it was suggested that, in general, jurisdictions are satisfied with the field performance of most of their safety features. If that is the case, a jurisdiction may consider establishing PGs after performing its first ISPE. The PGs could be set equal to the observed p̂ for each evaluation measure at Performance Assessment Level 1. When future ISPEs are conducted for new safety features, the jurisdiction could consider the future observed values and compare them with the existing PGs (i.e., the PGs found during the initial ISPE). Other methods are acceptable; the method used to establish the PG should be included in the ISPE documentation.

Any value of p̂ that meets the jurisdiction's PGs in this assessment completes the ISPE process, and there is no need to continue with the limited assessments in Sections 5.2.2 through 5.2.4. The purpose of those later assessments is to focus on possible reasons for not meeting the PG or to identify ways to improve performance.
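The benchmarking workflow above can be sketched as a simple comparison of a later ISPE's observed p̂ values against PGs set from the first ISPE. The measure letters and p̂ values here are invented for illustration.

```python
# Sketch of PG benchmarking: PGs are set equal to the first ISPE's observed
# p-hat per evaluation measure; later ISPEs are checked against them.
# Measures meeting their PG complete the process; the rest continue to the
# limited assessments of Sections 5.2.2-5.2.4. Values are illustrative.

def check_against_pgs(p_hats, pgs):
    """Split evaluation measures into those meeting the PG and those not."""
    met = {m for m, p in p_hats.items() if p <= pgs[m]}
    return met, set(p_hats) - met

pgs = {"A": 0.12, "C": 0.08}    # benchmarked from the first ISPE
later = {"A": 0.10, "C": 0.09}  # a later ISPE's observed p-hat values
met, needs_review = check_against_pgs(later, pgs)
print(sorted(met), sorted(needs_review))  # ['A'] ['C']
```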
5.2.2 Performance Assessment Level 2: Analysis Limited by Vehicle Type

The determination of p̂, ES, and CI discussed in Section 4.4 progresses in the same manner for this next level of performance assessment; however, the ISPE data set is limited to the vehicle types that are similar to those used in the design and testing of the safety feature. When the data are filtered, the VEH_TYPE field is limited in this performance assessment as follows:

• Safety features designed and tested to satisfy the MASH criteria (AASHTO 2016) or NCHRP Report 350 (Ross et al. 1993) Test Levels 1, 2, and 3 should include PC and PU from the VEH_TYPE field.
• Safety features designed and tested to satisfy the MASH criteria or NCHRP Report 350 Test Level 4 should include PC, PU, and SUT from the VEH_TYPE field.
• Safety features designed and tested to satisfy the MASH criteria or NCHRP Report 350 Test Level 5 should include PC, PU, SUT, and TT from the VEH_TYPE field.

The performance assessment limited by vehicle type proceeds as outlined earlier in Table 6 with the calculation of p̂ and ES appropriate for the SFUE.

5.2.3 Performance Assessment Level 3: Analysis Limited by Posted Speed Limit

The determination of p̂, ES, and CI discussed in Section 4.4 progresses in the same manner as the previous performance assessment, with the exception that the ISPE data set is limited to cases with SPEED_LIMIT values that are comparable to the impact speeds used in the design and crash testing of the safety feature. In the tabulation of this assessment, the following limitations are applied:

• Safety features developed to meet MASH criteria (AASHTO 2016) or NCHRP Report 350 (Ross et al. 1993) Test Level 1 include SPEED_LIMIT ≤ 35.

• Safety features developed to meet MASH criteria or NCHRP Report 350 Test Level 2 include SPEED_LIMIT ≤ 45.
• Safety features developed to meet MASH criteria or NCHRP Report 350 Test Levels 3–5 include SPEED_LIMIT ≤ 65.

The performance assessment limited by posted speed limit proceeds as outlined earlier in Table 6 with the calculation of p̂ and ES appropriate for the SFUE.

Several jurisdictions participated in a pilot test of a beta version of these guidelines. Two pilot states have statewide posted speed limits that are greater than 65 mph on their Interstate systems. When Performance Assessment Level 3 was evaluated and the ISPE data set was limited to crashes occurring on roadways with posted speed limits of less than or equal to 65 mph, their Interstate highways were filtered out of the data. It was found that the risk of a fatal or serious injury increased in these jurisdictions. Further study was undertaken to confirm that the hardware installed on roadways with lower functional classifications dominated the data. These jurisdictions concluded that the hardware on their Interstates was performing better than that on their roadways with lower functional classifications. While not an anticipated finding, this is nonetheless a useful one.

5.2.4 Performance Assessment Level 4: Analysis Limited by Vehicle Type and Posted Speed Limit

The determination of p̂, ES, and CI discussed in Section 4.4 progresses in the same manner as the performance assessments described in Sections 5.2.2 and 5.2.3, with the exception that the ISPE data set is limited by both vehicle type and posted speed limit. This final filter considers assumptions in safety feature design most similar to the crash test and evaluation guidelines, to the extent practical. In the tabulation of this assessment, the following limitations are applied:

• Safety features developed to meet MASH criteria (AASHTO 2016) or NCHRP Report 350 (Ross et al. 1993) Test Level 1 include SPEED_LIMIT ≤ 35 and PC and PU from the VEH_TYPE field.
• Safety features developed to meet MASH criteria or NCHRP Report 350 Test Level 2 include SPEED_LIMIT ≤ 45 and PC and PU from the VEH_TYPE field.
• Safety features developed to meet MASH criteria or NCHRP Report 350 Test Level 3 include SPEED_LIMIT ≤ 65 and PC and PU from the VEH_TYPE field.
• Safety features developed to meet MASH criteria or NCHRP Report 350 Test Level 4 include SPEED_LIMIT ≤ 65 and PC, PU, and SUT from the VEH_TYPE field.
• Safety features developed to meet MASH criteria or NCHRP Report 350 Test Levels 5 and 6 include SPEED_LIMIT ≤ 65 and PC, PU, SUT, and TT from the VEH_TYPE field.

The performance assessment limited by posted speed limit and vehicle type proceeds as outlined earlier in Table 6 with the calculation of p̂ and ES appropriate for the SFUE.

5.3 Interpreting Results

Each evaluation measure compares the proportion of fatal and serious-injury (KA) crashes when an unexpected outcome occurs with the proportion of KA crashes when the expected outcome occurs. The expected outcomes conform to the design objectives of a crash test. For example, Evaluation Measure A compares the crash severity when a vehicle is contained and redirected in a crash (i.e., the expected outcome) with the crash severity when the vehicle breaches the longitudinal barrier (i.e., the unexpected outcome). If the ratio of KA crashes in which the barrier was breached to KA crashes in which the barrier was not breached (i.e., contained or redirected) is

greater than 1, then breaching is shown to be a harmful outcome. Similarly, Evaluation Measure B examines whether a support structure breaks away in an impact (i.e., the expected outcome) or does not break away (i.e., the unexpected outcome). If the ES is greater than 1 for the breakaway hardware, then not breaking away is a riskier outcome for support structures.

When the ES of an evaluation measure is greater than 1, the risk of a KA crash for the unexpected outcome is increased as compared with that of the expected outcome. For example, if the ES for Evaluation Measure A equals 2, then there is twice the risk of observing a KA crash when the safety feature is breached. When the ES is exactly 1, the risk of observing a KA crash is the same, regardless of whether the vehicle breaches the barrier or is contained and redirected by it. If the CI for the ES is found to include the value 1, the findings are not statistically significant. If, for example, the ES for Evaluation Measure A was found to be between 1.51 and 2.48, the risk of a KA crash when a breach is observed is between 1.51 and 2.48 times higher than when a breach is not observed. When the 85% CI does not include the null value, as is true in this example, the finding is statistically significant. On the other hand, if the ES for Evaluation Measure A had a CI of 0.91 to 1.51, there is some likelihood that there is no difference between a breach and a no-breach outcome.

The objective of examining the ES values is to determine whether unexpected crash outcomes (e.g., breaching a longitudinal barrier) result in a greater risk than expected outcomes (e.g., redirection and containment for a longitudinal barrier). The CI is an indication of the confidence that the result is a true effect. Conclusions for each level of NAME are made through consideration of the R2 point estimate values and the CIs.
For example, when the R2 value for a particular value of NAME = a is 0.02 (0.01, 0.03) and the R2 value for a second value of NAME = b is 0.05 (0.04, 0.06), it can be said that NAME = a has superior performance under that evaluation measure because the unexpected outcome (e.g., breach) is less probable. The CI ranges do not overlap for NAME = a and NAME = b, so the findings are statistically significant. An ES of greater than 1 for Evaluation Measure A (breaching the longitudinal barrier) indicates that the risk of a KA crash is lower when the longitudinal barrier is not breached. Safety features for which breaches occur less frequently would therefore reduce the overall risk of KA crashes. However, if R2 for NAME = a was found to equal 0.02 (0.01, 0.03) and R2 for NAME = b was 0.03 (0.02, 0.04), then a statistically significant difference in performance was not established between NAME = a and NAME = b, even though the point estimate for NAME = a is smaller.

The ability to obtain significant results should be interpreted in much the same way a jury verdict is interpreted in the United States: "Just because a person has been declared 'not guilty,' it does not mean that he is innocent" (Taylor 2015). Results that are not statistically significant can still be used to identify a trend in the data. Even when an effect is found to be not statistically significant, one may have more confidence in the direction of the effect than before the research was conducted; however, the support may be weak and the data may be inconclusive. Collecting additional cases will narrow the CIs and clarify whether a trend is real.

The ISPE performance assessment can stop for any evaluation measure where the ES is less than 1 and is statistically significant. Similarly, there is no need to continue when the point estimate for a particular evaluation measure is equal to or less than the PG for that point estimate.
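The interpretation rules above reduce to two interval checks, sketched here with the section's own worked numbers. Treating non-overlapping CIs as the criterion for a significant difference between two NAME values follows the text's example; it is a simplification, not the report's full procedure.

```python
# Sketch of the interpretation rules: an ES is statistically significant when
# its CI excludes the null value 1, and two NAME-level R2 estimates are
# distinguishable when their CIs do not overlap.

def es_significant(ci):
    """CI excluding the null value 1 means a statistically significant ES."""
    lo, hi = ci
    return not (lo <= 1 <= hi)

def cis_overlap(ci_a, ci_b):
    """Overlapping CIs: no statistically significant difference shown."""
    return ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]

# Section 5.3's worked numbers:
print(es_significant((1.51, 2.48)))             # True  -> significant ES
print(es_significant((0.91, 1.51)))             # False -> includes 1
print(cis_overlap((0.01, 0.03), (0.04, 0.06)))  # False -> a beats b
print(cis_overlap((0.01, 0.03), (0.02, 0.04)))  # True  -> inconclusive
```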
In other words, if the PG is met for all vehicle types, on roadways with the full range of speed limits, then the ISPE has demonstrated good performance of the studied safety feature. If the point estimates for a particular evaluation measure do not meet the PG established by the agency, then the analyses described in Sections 5.2.2 through 5.2.4 may help isolate the source of the difficulty. For example, if a longitudinal barrier does not meet the PG for Evaluation Measure A for the general assessment in Section 5.2.1, but does meet the PG for Evaluation Measure A when

limited to passenger vehicles in Section 5.2.2, then the analyst can conclude that nonpassenger vehicles may be experiencing a higher risk with the longitudinal barrier being evaluated.

As with all analyses, each of the evaluation measures depends on the quality of the data used in the analysis. The analyst is expected to understand that not every evaluation measure can be considered within every jurisdiction with the data routinely collected. When the analyst is not comfortable drawing conclusions from the routinely collected data, the reasons should be stated in the ISPE report.

5.4 Implementing Results

The performance assessments in Sections 5.2.1 through 5.2.4 are broad assessments of the AASHTO RDG (AASHTO 2011), the jurisdiction's selection and placement procedures, and the crash test guidelines used to develop safety hardware. In many jurisdictions, the practice for designing the site and locating the safety features follows the AASHTO RDG. Some jurisdictions modify the RDG guidelines and incorporate that modified version into their own design guidelines to address regional needs and priorities.

If statistically significant results are found for an evaluation measure ES (i.e., the CI does not include 1) and Conditions 1, 2, and 3 are met, the results can be relied upon to make inferences. For example:

• If ES is greater than 1 and the CI does not include 1 (i.e., the result is statistically significant), then the unexpected outcome (e.g., breach, penetration) results in a higher risk of observing a KA crash. That is, the risk of a KA crash is higher for the unexpected event than for the crash-tested expected outcome.
• If an assessment is made by NAME and a statistically significant difference is found for R2 between two different values of NAME (i.e., two different types of the SFUE) that are used interchangeably, the jurisdiction may consider removing the safety feature with the higher value of R2 from its standards. That is, the jurisdiction can stop installing the poorer-performing safety feature in favor of the hardware with documented superior performance.
• If an assessment is made by NAME and a statistically significant difference is found between the performance of two identical safety features installed in two different situations (e.g., different offsets from the traveled way), the jurisdiction may consider an alternative to the placement with the higher risk.
• If comparisons with previous jurisdictional point estimates are made,
  – Current practices may be continued if improvements are observed (i.e., p̂ is equal to or lower than the PG).
  – The changes that may have led to degraded performance (i.e., p̂ has increased) may be evaluated.

The performance assessment results of an ISPE allow an agency to make decisions about what hardware to use and where to place the hardware on the basis of quantifiable observed data. Performing ISPEs continuously or periodically will allow agencies to focus on safety features that demonstrably perform as expected and, over time, to eliminate safety features with less-favorable performance. ISPEs allow for the setting of reasonable and achievable goals and for tracking progress toward achieving those goals.

An "in-service performance evaluation" (ISPE) examines roadside safety features while roads are in service. A database of crashes is generally the minimum resource necessary for conducting an ISPE.

The TRB National Cooperative Highway Research Program's NCHRP Research Report 1010: In-Service Performance Evaluation: Guidelines for the Assembly and Analysis of Data presents uniform criteria for conducting ISPEs of both permanent and temporary safety features.

Supplemental to the report are NCHRP Web-Only Document 332: Multi-State In-Service Performance Evaluations of Roadside Safety Hardware, which documents the development of the guidelines and the entire research effort; the ISPE Data Set and Analysis Template, a spreadsheet tool to aid in the calculations of each evaluation measure shown in this report and to support the documentation of an ISPE; Implementation of Research Findings and Products, a plan that identifies mechanisms and channels for implementing this research; and a presentation that summarizes the project.
