From page 35...
Chapter 3. Phase II: Developing and Vetting Target Setting Methods

Research Approach

Phase II of this research involved four components: Task 5: Develop and Select Methods for Target Setting. Task 5 involved developing methods for target setting that state DOTs can adopt for performance management, considering data requirements, appropriate analytic methods, and how to account for uncertainty, practicality, and practitioner capacity.
From page 36...
... the flyer and had discussions with several states to ultimately identify pilot agencies that were interested in testing different target setting methods. The pilot agencies also assessed how easily each method could be employed, how the resulting targets compared to the targets set by each state DOT's previous method, and any potential issues that could result from using the method.
From page 37...
Target Setting Methods Pilot and Results for PM1 Safety Measures

Pilot Participants

Three state DOTs participated in the piloting of safety target setting methods: Washington State, South Carolina, and Minnesota. Participants from these states had experience with a range of methods for safety target setting and held differing perspectives on how aspirational the targets should be.
From page 38...
The approach to comparing the different forecast methods was to build several forms of regression models using identified explanatory variables, train them on a subset of historical fatalities data, validate the models against known data for the remaining data points, and compare the error metrics and forecasted values associated with each model against the same error metrics for a linear trend forecast. The research team first ran each statistical model on years 2000-2014, "blinded" the models to the safety results from 2015-2018, and predicted "future" performance for these years.
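A minimal sketch of this blinded-validation workflow, using hypothetical fatality counts rather than the pilot states' actual data: fit a linear trend baseline to 2000-2014, then score it against the held-out 2015-2018 values.

```python
import numpy as np

# Hypothetical annual statewide fatality counts, 2000-2018 (illustrative only).
years = np.arange(2000, 2019)
fatalities = np.array([650, 640, 655, 630, 615, 600, 590, 580, 560, 520,
                       510, 505, 500, 495, 490, 505, 515, 510, 500], dtype=float)

# "Blind" the models to 2015-2018: train on 2000-2014, hold out the rest.
train = years <= 2014
train_years, train_fatal = years[train], fatalities[train]
test_years, test_fatal = years[~train], fatalities[~train]

# Linear trend baseline (ordinary least squares on year).
slope, intercept = np.polyfit(train_years, train_fatal, deg=1)
trend_forecast = intercept + slope * test_years

# Error metric on the held-out "future" years; each candidate regression
# model's forecasts are scored against the same actuals in the same way.
mape = np.mean(np.abs((test_fatal - trend_forecast) / test_fatal)) * 100
print(f"Linear trend MAPE, 2015-2018: {mape:.1f}%")
```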
From page 39...
... model. Poisson regression is a related form of generalized linear model that project panel members and the team's statistical expert suggested exploring.
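A minimal sketch of such a Poisson regression using statsmodels, with a single hypothetical explanatory variable (statewide VMT, chosen here purely for illustration; the pilot's actual variable set is not shown in this excerpt):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical training data (illustrative only): annual fatalities and VMT.
fatalities = np.array([650, 640, 630, 610, 600, 590, 575, 560, 550, 540])
vmt_billions = np.array([55.0, 55.8, 56.5, 57.1, 57.6, 58.2, 58.9, 59.4, 60.0, 60.5])

# Poisson generalized linear model: counts regressed on the explanatory variable.
X = sm.add_constant(vmt_billions)
poisson_fit = sm.GLM(fatalities, X, family=sm.families.Poisson()).fit()

# Forecast fatalities for assumed future VMT levels.
future_X = sm.add_constant(np.array([61.0, 61.5]), has_constant="add")
print(poisson_fit.predict(future_X))
```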
From page 40...
... points (2016-2020), and calculating MAPE (mean absolute percentage error) metrics based on the comparison to actual results.
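MAPE is simply the mean of the absolute percentage errors between actual and forecasted values; a minimal sketch with illustrative numbers:

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent: mean of |actual - forecast| / |actual|."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return np.mean(np.abs((actual - forecast) / actual)) * 100.0

# Example: hypothetical actual 2016-2020 results vs. one model's forecasts.
print(mape([515, 510, 500, 490, 470], [520, 512, 505, 498, 492]))
```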
From page 41...
Serious Injury Ratio Method

The team examined historical fatal and serious injury counts to assess the consistency of the relationship between fatal and serious injuries among states and over time, based on Michigan DOT's approach of setting serious injury targets as a ratio of fatality targets. To do this, the research team collected fatal and serious injury counts for each state plus Puerto Rico and the District of Columbia between 2012 and 2018 from FHWA's Transportation Performance Reporting website.
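A minimal sketch of a ratio-based serious injury target of this kind, using hypothetical counts rather than the actual FHWA reporting data:

```python
import numpy as np

# Hypothetical annual counts for one state, 2012-2018 (illustrative only).
fatalities = np.array([900, 880, 870, 910, 940, 930, 920], dtype=float)
serious_injuries = np.array([5300, 5200, 5150, 5400, 5500, 5450, 5380], dtype=float)

# Historical ratio of serious injuries to fatalities, averaged across years.
ratio = np.mean(serious_injuries / fatalities)

# Apply the ratio to an already-established fatality target to derive a
# serious injury target (the fatality target value here is assumed).
fatality_target = 900.0
serious_injury_target = ratio * fatality_target
print(f"Average ratio: {ratio:.2f}  Serious injury target: {serious_injury_target:.0f}")
```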
From page 42...
Minnesota's participants expressed an interest in and willingness to try the ARIMA version, given its simplicity, but also acknowledged that they were likely to continue setting aspirational targets as long as they are able to. (At the time of the pilot, discussions were underway with the state's public safety office about pushback from NHTSA on these aspirational targets, and the approach could change in the future.) Washington State's participants, who both have PhDs in fields related to building models for safety analysis, are committed to aspirational targets.
From page 43...
Results

The results of the pilots span several factors that inform the ultimate decision of which method to employ:

• Performance – How well each method statistically fits historical data and how closely each method predicted future performance.
• Complexity – How technically difficult each method is.
• Agency Preference – Despite the factors outlined above, each agency in the pilot expressed preferences around target setting philosophy, comfort based on previously used methods, and stakeholder preferences that ultimately hold more sway over final decisions than forecast accuracy.

In terms of performance, the ARIMA model with unemployment included as an explanatory variable showed the most accurate forecast results.
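A minimal sketch of an ARIMA forecast with unemployment included as an explanatory (exogenous) variable, using statsmodels; the (1, 1, 1) order and all input values are assumptions for illustration, not the specification used in the pilot:

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical annual fatalities and statewide unemployment rate (illustrative only).
fatalities = np.array([650, 640, 630, 615, 600, 590, 580, 565, 555, 545,
                       540, 530, 525, 520, 515], dtype=float)
unemployment = np.array([4.8, 5.0, 5.3, 5.1, 4.9, 5.5, 7.9, 9.1, 8.6, 7.8,
                         6.9, 6.0, 5.4, 4.9, 4.6])

# ARIMA(1, 1, 1) with unemployment as an exogenous regressor.
results = SARIMAX(fatalities, exog=unemployment, order=(1, 1, 1)).fit(disp=False)

# Forecast the next two years, supplying assumed future unemployment rates.
future_unemployment = np.array([[4.5], [4.4]])
print(results.forecast(steps=2, exog=future_unemployment))
```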
From page 44...
Target Setting Methods Pilot and Results for PM2 Infrastructure Condition Measures

Pilot Participants

Two state DOTs, Oklahoma and New Jersey, agreed to participate in the pilot testing for the infrastructure performance measures.

Summary of Methods Tested

Both agencies used some version of time-series analysis to set their current national performance targets, and both agencies requested to pilot the use of scenario analysis.
From page 45...
Data Type | NJDOT PMS | Oklahoma PMS
Unit Costs | Contains up-to-date unit costs for all treatments | Contains up-to-date unit costs for all treatments
Available Funding | Available funding can be input as part of defining a scenario, or "run" | Available funding can be input as part of defining a scenario, or "run"
Prioritization Algorithm | Uses dTIMS incremental benefit calculation | Uses dTIMS incremental benefit calculation

Both agencies possessed the ability to reliably forecast pavement performance for state-owned highways. Oklahoma DOT (ODOT) ...
From page 46...
Figure 3. Relationship between pavement management databases, systems, and outputs

The issue of network segmentation is critical to the calculation of the national pavement performance measures.
From page 47...
Table 13. Example of Segmentation Influencing Pavement Measure Calculations

Segmentation | Percent Good | Percent Poor
0.1 mile | 27.5 | 27.5
Project-Length | 30.0 | 32.5

New Jersey

The research team met with NJDOT on July 2, 2021, to assess the agency's PMS capabilities related to forecasting the national pavement performance measures.
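Table 13 above shows segmentation alone shifting the computed shares. A minimal sketch of the mechanism, using hypothetical 0.1-mile IRI values and illustrative thresholds (the actual federal measure combines several distress metrics, which is not reproduced here):

```python
# Hypothetical 0.1-mile segments grouped into two projects, with IRI (in/mi).
# Illustrative thresholds: Good if IRI < 95, Poor if IRI > 170, otherwise Fair.
segments = [
    # (project_id, length_miles, iri)
    ("A", 0.1, 80), ("A", 0.1, 90), ("A", 0.1, 180), ("A", 0.1, 100),
    ("B", 0.1, 60), ("B", 0.1, 200), ("B", 0.1, 85), ("B", 0.1, 90),
]

def rating(iri):
    return "Good" if iri < 95 else ("Poor" if iri > 170 else "Fair")

def percent(segs, label):
    total = sum(length for _, length, _ in segs)
    matched = sum(length for _, length, iri in segs if rating(iri) == label)
    return 100.0 * matched / total

# 0.1-mile segmentation: rate each short segment on its own.
print("0.1-mile:", percent(segments, "Good"), percent(segments, "Poor"))

# Project-length segmentation: average IRI over each project, then assign a
# single rating to the whole project before computing the shares.
projects = {}
for pid, length, iri in segments:
    projects.setdefault(pid, []).append((length, iri))

project_segments = []
for pid, items in projects.items():
    total_len = sum(length for length, _ in items)
    avg_iri = sum(length * iri for length, iri in items) / total_len
    project_segments.append((pid, total_len, avg_iri))

print("Project-length:", percent(project_segments, "Good"), percent(project_segments, "Poor"))
```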
From page 48...
Figure 6. NJDOT and PM2 Performance Measure Comparison (Source: New Jersey TAMP)
From page 49...
Table 15. 2018 NJDOT to National Pavement Rating Correlation

  | National Good | National Fair | National Poor
NJDOT Good | 89.18% | 14.89% | 0.00%
NJDOT Fair | 22.01% | 77.97% | 0.02%
NJDOT Poor | 4.78% | 87.29% | 7.93%

Table 16. ...
From page 50...
... national performance measures. The same process was being followed to calculate the conditions for non-Interstate NHS pavements and will be followed for multiple scenarios.
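One way a rating crosswalk like Table 15 could be used to translate agency-rated shares into the national Good/Fair/Poor shares is a simple matrix product, followed by the renormalization to 100% noted on the following page. This is an illustrative assumption, not a confirmed description of NJDOT's calculation:

```python
import numpy as np

# Hypothetical forecast shares of NJDOT-rated pavement for one year
# (fractions of network mileage in Good, Fair, Poor).
njdot_shares = np.array([0.55, 0.35, 0.10])

# Crosswalk from NJDOT ratings to national ratings, taken from Table 15
# (rows: NJDOT Good/Fair/Poor; columns: National Good/Fair/Poor).
crosswalk = np.array([
    [0.8918, 0.1489, 0.0000],
    [0.2201, 0.7797, 0.0002],
    [0.0478, 0.8729, 0.0793],
])

# Weight each NJDOT class by how its mileage historically mapped onto the
# national classes, then renormalize so the three shares sum to exactly 100%.
national_shares = njdot_shares @ crosswalk
national_shares = 100.0 * national_shares / national_shares.sum()
print(dict(zip(["Good", "Fair", "Poor"], national_shares.round(1))))
```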
From page 51...
... governments' pavements. A final minor adjustment was made to ensure that the totals for each forecasted year sum to 100%.
From page 52...
... Once the different segmentation approaches were mapped, ODOT provided the results of four different scenarios.
From page 53...
Figure 7. Forecast of NJDOT Pavement Conditions in terms of Condition Status for Budget Scenario of $320 million

Figure 8. ...
From page 54...
These results are encouraging, suggesting that even states with limited data can develop procedures to effectively forecast the national pavement measures. The pilot demonstrates that such states can still use their PMS to inform target setting, as long as the agency has confidence in the PMS results.
From page 55...
Figure 10. Forecasted Percent Poor Non-Interstate NHS Pavement under Different Scenarios

Observations and Applicability

The results of the pilots suggest that agencies with a functioning PMS can implement a scenario analysis or system/model-based approach to target setting, even if the PMS does not calculate the national performance measures or model performance based on the HPMS 0.1-mile segmentation.
From page 56...
Table 24. Challenges to Forecasting National Pavement Performance Measures in Standard PMS

Challenges | Possible Resolutions
Differences in segmentation between HPMS and PMS data. | ...
From page 57...
3. Reduced level of effort.
From page 58...
Table 25. Proposed Methods for PM3 Target Setting

Method | Interstate and Non-Interstate NHS Reliability | Freight Reliability | Annual hours of PHED per capita | Non-SOV Mode Share
Building off Baseline, with Assumptions | 0 | 0 | 0 | 0
Time-Series Trend Analysis | X | X | X | X
Trend plus other Factors | X | X | X | X
Performance Risk Analysis | X | X | NA | NA
Segment Risk Analysis | X | X | NA | NA
Segment Level Statistical Model | 0 | 0 | NA | NA
Travel Demand Forecasting Model | 0* | ...
From page 59...
Reliability Methods Tested by the Pilot States and Results

The pilot states tested the following methods for target setting for the NHS and freight reliability performance measures.

Time Series Trend Analysis

All the state DOTs tried applying the method for one of the performance measures related to NHS or Interstate travel time reliability.
From page 60...
... forecast data could be collected during the pilot. Based on the analysis of how the explanatory factors have influenced the historical performance of the Interstate reliability measure, UDOT plans to adjust the targets for future years.
From page 61...
Figure 12. Yearly Interstate travel time reliability measure values mapped with statewide VMT, GDP, population, and employment estimates

WSDOT compared the historical trend line of TTRI (2017-2020) ...
From page 62...
... data did not show any correlation with VMT or with economic indicators such as GDP, employment, or population totals. WSDOT also found that the 2019 TTRI data provided by RITIS NPMRDS showed some quality issues, as the 2019 values indicated a significant improvement in TTRI over the 2018 values.
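A minimal sketch of this kind of correlation check, using hypothetical yearly values for the reliability measure and the candidate explanatory factors:

```python
import pandas as pd

# Hypothetical yearly values (illustrative only).
df = pd.DataFrame(
    {
        "ttri": [1.20, 1.22, 1.18, 1.10],           # travel time reliability index
        "vmt_billions": [62.1, 63.0, 63.8, 55.2],
        "gdp_billions": [560, 580, 600, 590],
        "employment_millions": [3.4, 3.5, 3.6, 3.3],
        "population_millions": [7.4, 7.5, 7.6, 7.7],
    },
    index=[2017, 2018, 2019, 2020],
)

# Pearson correlation of the reliability measure against each candidate factor.
print(df.corr()["ttri"].drop("ttri"))
```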
From page 63...
MnDOT used 2015 to 2019 LOTTR data to perform trend analysis and forecast for year 2024. The 2024 forecasts were based on the TREND function in Excel, which fits a linear trend using the method of least squares.
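Excel's TREND function fits an ordinary least-squares line and evaluates it at new x values; a minimal equivalent, using hypothetical LOTTR values for 2015-2019:

```python
import numpy as np

years = np.array([2015, 2016, 2017, 2018, 2019])
percent_reliable = np.array([84.0, 84.5, 85.1, 85.0, 85.6])  # hypothetical values

# Least-squares linear trend (what Excel's TREND function computes), evaluated
# at the forecast year.
slope, intercept = np.polyfit(years, percent_reliable, deg=1)
print(f"2024 forecast: {intercept + slope * 2024:.1f}% reliable")
```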
From page 64...
Figure 14. Box and Whisker Plot for Monthly Interstate Reliability measure for Minnesota (Data Source: RITIS NPMRDS)
From page 65...
Monthly Interstate Reliability Measure (% reliable) | 2017 | 2018 | 2019 | 2020 | 2021
Third Quartile | 82.8 | 83.7 | 82.1 | 99.5 | 99.0
Maximum Value | 89.4 | 88.8 | 90.2 | 100 | 99.7
Mean | 81.5 | 82 | 81.1 | 97.5 | 96.5
Range | 14.1 | 11.7 | 16.2 | 10.7 | 6.8
Standard Deviation | 4.41 | 3.64 | 4.07 | 3.84 | 2.97

Piloting Process: Non-SOV Mode Share

The target for the non-SOV mode share performance measure is developed and reported at the UZA scale.
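The table on the following page shows linear and exponential forecasts of non-SOV mode share. A minimal sketch of producing both kinds of forecast, using hypothetical historical mode share values rather than the actual ACS inputs:

```python
import numpy as np

years = np.array([2012, 2013, 2014, 2015, 2016])
nonsov_share = np.array([31.0, 31.2, 31.5, 31.7, 32.0])  # hypothetical %, one UZA
future_years = np.arange(2017, 2027)

# Linear forecast: least-squares straight line through the historical points.
lin_slope, lin_intercept = np.polyfit(years, nonsov_share, deg=1)
linear_forecast = lin_intercept + lin_slope * future_years

# Exponential forecast: fit a line to log(share), then exponentiate.
exp_slope, exp_intercept = np.polyfit(years, np.log(nonsov_share), deg=1)
exponential_forecast = np.exp(exp_intercept + exp_slope * future_years)

for year, lin, exp in zip(future_years, linear_forecast, exponential_forecast):
    print(year, f"linear {lin:.1f}%", f"exponential {exp:.1f}%")
```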
From page 66...
Non-SOV Mode Share (%) | Linear Forecasts: Washington State 5-year | Linear Forecasts: Seattle 5-year | Exponential Forecasts: Washington State 5-year | Exponential Forecasts: Seattle 5-year
2017 | 27.6% | 32.0% | 27.6% | 32.1%
2018 | 27.6% | 32.2% | 27.7% | 32.3%
2019 | 27.6% | 32.4% | 27.7% | 32.5%
2020 | 27.6% | 32.6% | 27.7% | 32.7%
2021 | 27.7% | 32.8% | 27.8% | 32.9%
2022 | 27.7% | 33.0% | 27.8% | 33.1%
2023 | 27.7% | 33.2% | 27.9% | 33.4%
2024 | 27.8% | 33.4% | 27.9% | 33.6%
2025 | 27.8% | 33.6% | 28.0% | 33.8%
2026 | 27.8% | 33.8% | 28.0% | 34.0%

Piloting Process: Annual Hours of Peak Excessive Delay (PHED)
From page 67...
PHED per capita Methods Tested by the Pilot States and Results

Not many state DOTs chose to set targets for the PHED performance measure as part of this pilot, as these targets are expected to be set by the regional MPOs covering the UZA with support from the state DOTs. WSDOT tried the time series trend analysis method and used the annual hours of PHED per capita data from 2013 to 2020 to forecast 2022 and 2026.
From page 68...
... true, given that the COVID-19 pandemic could not have been predicted. It is generally accepted that the COVID-19 pandemic could have long-term effects on travel behavior and economic factors, but the extent and duration of the impact are unknown.
From page 69...
• For the PM2 pavement condition measures, NJDOT faced challenges in collecting all of the pavement inventory data. The pilot effort at NJDOT was led by the pavement management unit.
From page 70...
While the sample of agencies participating in the pilot was small and may not be representative of all state DOTs, the participating agencies generally felt that simpler methods would be more efficient, would serve the purpose of supporting target setting, and were preferable to investing in more complex regression model development. Developing statistical models requires considerable data on explanatory variables, and even when historical data are available for these variables, forecasts of those variables may not be available and may carry high levels of uncertainty.
From page 71...
... underlying causes of performance results, ease of implementation, or helpfulness in engaging with stakeholders, for instance. The methods tested differed in relation to these attributes (although the accuracy of future forecasts cannot yet be discerned) ...
From page 72...
... subsequent actions. Therefore, the team set up a virtual focus group discussion with six practitioners to hear what they have found successful in making their targets a meaningful part of the performance management process.
