CHAPTER 2
Phase I: Foundational Research

Research Approach

Phase I of this research involved two components:

Task 1: Review Available Literature. Task 1 involved a review of target setting methods as documented in various sources. The research team began with a review of information provided by the state DOTs in the 2018 submittal to the FHWA via the Performance Management Form (PMF), as presented in the FHWA Transportation Performance Measure (TPM) Dashboard. In addition to quantitative targets, the Dashboard includes states' qualitative descriptions of the methods used for target setting, covering all 50 states plus the District of Columbia and Puerto Rico. The research team requested available data from FHWA following the October 2020 submittals by states of the mid-performance period Progress Report but was informed that the information had not been made public and could not yet be released; as a result, the methods presented in the Phase I report reflected the initial target setting methods used by states. The research team also reviewed additional documents available online presenting target setting approaches for the TPM measures. In addition, the research team conducted a review of non-TPM measures. Based on feedback from the project panel, a set of five non-required TPM performance topics was selected for further analysis in this report based on available literature:

• Accessibility,
• Greenhouse gas emissions (GHGs),
• Active transportation,
• Transit ridership, and
• Customer satisfaction.

Task 2: Focus Groups and Interviews. Based on the results of the literature review, and with feedback from the project panel, the research team identified agencies for discussions to gain more detailed information about the methods used for target setting, the challenges faced, and issues associated with re-evaluation of targets as part of the mid-performance period Progress Report.
These discussions took the form of small web-based focus group meetings and, in some cases, individual interviews. The agencies participating in the discussions consisted mainly of state DOTs, but also included some MPOs for the CMAQ measures, as well as agency consultants invited by the state DOTs in a few cases. To identify a group of states that would provide the most useful insight into both relevant examples of common approaches and the breadth of approaches agencies have used, the research team proposed selecting states for further research based on the following criteria:

1. Representation of the range of methods – While not all will end up as recommended approaches, it is worthwhile to understand the various methods fully, as well as the rationale for their use.
2. Level of sophistication – The selection skewed toward more sophisticated approaches, while still seeking to understand the circumstances and rationales when less sophisticated approaches were employed.

3. Ambitious versus conservative targets – How ambitiously the end targets are set is another important facet of attitudes and approaches to target setting. A foundational philosophical question is: should targets be more ambitious than the agency is on track to achieve (and by how much), or should they reflect realistic, data-driven assessments of the agency's current trajectory?

4. Size and geographic diversity – This factor was least significant; it was used to ensure at least some diversity in the states and to serve as a deciding factor among a handful of states with similar approaches.

In addition, the project panel suggested considering whether the agencies have processes in place both for actions to achieve the targets set and for evaluating failures to achieve past targets. It was difficult to discern this information, but the research team considered preliminary results from related studies such as NCHRP 02-27, Making Targets Matter: Managing Performance to Enhance Decision-Making, and ongoing work being conducted for FHWA to assess performance-based planning and programming practices of states and MPOs. Based on these criteria, the research team recommended a set of states for panel review and used panel input to select agencies for discussions. The resulting focus groups and interviews for the PM1, PM2, and PM3 measures included representatives of 11 state DOTs for the PM1 (Safety) measures, 4 state DOTs for the PM2 (Infrastructure Condition) measures, and 8 state DOTs and 4 MPOs for the PM3 (Reliability, Freight, and Congestion) measures. In addition, the research team reviewed other resources provided by the participants, as well as additional information made available to the team.
Much of the information gathered from the literature was ultimately incorporated into the Guide and so is not repeated in this report (the reader is encouraged to review the resulting NCHRP Research Report 1035: Guide to Effective Methods for Setting Transportation Performance Targets). Presented below are summary findings from the 2018 TPM submittals and from the focus group and interview discussions, which informed the development of the Guide but were not directly incorporated into it.

Initial Identification of Target Setting Methods

The initial research identified a range of methods used to support target setting and classified the methods into four broad categories:

1. Very simple, with limited historical data analysis – These methods involve limited analysis of trends and factors influencing performance, including policy-based approaches. The initial analysis suggested that this type of approach was most common for safety measures (PM1), where some states set a target in the form of a set annual decrease (e.g., a 2% reduction per year), often based on a long-term goal that is annualized. For instance, California set its fatalities performance targets for 2019 based on an annual decrease of 3.0%, consistent with its Strategic Highway Safety Plan (SHSP) goal of reducing fatalities by 3% annually. In the case of the infrastructure condition (PM2) measures, some states selected targets that matched their baseline conditions, or set conservative targets (at levels below current or forecasted conditions), even at or near minimum acceptable levels, to ensure that the targets would be met. In the case of the reliability and freight measures (a subset of the PM3 measures), several states set targets at their baseline values, reflecting limited historical data to assess trends and uncertainty about future directions. This approach was consistent with FHWA guidance, which noted that due to differences between
versions 1 and 2 of the National Performance Management Research Data Set (NPMRDS), setting achievable targets based on baseline (2017) figures would be a prudent approach.

2. Using historic trend analysis – States typically conducted analysis of historic performance trends, to the extent data were available, and in many cases used the historic trend line to forecast future performance in order to set a target. The initial research found this was a common approach for the safety measures, non-SOV mode share, and annual hours of peak hour excessive delay per capita, in particular. Trend line analysis is generally clear and straightforward, although agencies often conduct statistical analyses that explore different ways to fit a trend beyond a simple straight line. In many cases, the states calculated the anticipated trend value and then made adjustments to consider other factors, such as economic indicators, travel projections, or transportation projects or program efforts that might influence performance. These adjustments were generally made based on judgment of what is reasonable and in some cases accounted for policy considerations. For example, Minnesota projected the number of fatalities forward from historic data but made slight adjustments based on local knowledge gathered from stakeholders.

3. Developing forecasting models – The most sophisticated approaches involved using statistical techniques to account for various factors that influence performance. These approaches go beyond trend line analysis of performance data and integrate the factors that may influence performance into the forecasting itself, rather than as a "back-of-the-envelope" consideration used to adjust the trend analysis result.
The initial analysis found that these approaches typically involved collecting data on different factors that influence performance in order to develop a regression equation that best fits the data and functions as a forecasting model. This approach was used by some states to support development of safety, reliability, and freight targets. For instance, for safety targets, Virginia DOT used monthly data at the district level to develop a model that accounts for the impacts of 14 different factors on safety outcomes. For reliability and freight targets, this type of model development often focused on data at the segment level. For instance, New Mexico DOT developed log-linear regression models that associated Level of Travel Time Reliability (LOTTR) and Truck Travel Time Reliability (TTTR) for each segment with volume, capacity, and roadway attributes. It then updated future volumes based on estimated growth rates (from the Highway Performance Monitoring System), updated future capacity based on planned projects, and applied the models to the updated volumes and capacities to forecast future LOTTR and TTTR, producing updated segment-level figures for the performance measure calculation.

4. Using other available models and tools – Finally, the analysis found that other models and tools are used to support target setting, notably for the infrastructure and congestion measures. States, for instance, commonly use pavement management systems that encompass detailed data on pavement conditions along with algorithms that account for deterioration rates and performance prediction, as well as life cycle cost analysis. For the congestion measures (non-single occupancy vehicle mode share and peak-hour excessive delay per capita), some regions applied their regional travel demand models to forecast mode shares and congestion levels, and combined these forecasts with collected data on mode share and excessive delay to calculate targets.
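The first two categories can be sketched in a few lines of code. The example below, with hypothetical fatality counts, computes a flat-percentage-reduction target in the style of the annualized SHSP goals described above (category 1) and a straight-line trend projection (category 2), both measured against a five-year rolling-average baseline; the data and the 3% reduction rate are illustrative only and are not taken from any state's submittal.

```python
# Sketch of two simple target-setting methods for a safety measure
# (annual fatalities, reported as a 5-year rolling average).
# All figures are hypothetical.

def rolling_average(values, window=5):
    """Trailing average of the last `window` values."""
    return sum(values[-window:]) / window

def flat_reduction_target(baseline, annual_cut, years_ahead):
    """Category 1: compound a fixed annual percentage reduction
    (e.g., 3% per year, in the style of an SHSP goal)."""
    return baseline * (1 - annual_cut) ** years_ahead

def trend_target(history, years_ahead):
    """Category 2: fit a straight trend line to annual data by
    ordinary least squares and project it forward."""
    n = len(history)
    xs = list(range(n))
    x_bar = sum(xs) / n
    y_bar = sum(history) / n
    sxx = sum((x - x_bar) ** 2 for x in xs)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, history)) / sxx
    intercept = y_bar - slope * x_bar
    return intercept + slope * (n - 1 + years_ahead)

fatalities = [410, 398, 405, 388, 380]   # hypothetical annual counts
baseline = rolling_average(fatalities)    # 5-year rolling-average baseline

target_flat = flat_reduction_target(baseline, annual_cut=0.03, years_ahead=2)
target_trend = trend_target(fatalities, years_ahead=2)
```

In practice, states layering adjustments onto a trend (the "trend with adjustment" approach) would modify `target_trend` further based on judgment about VMT, economic conditions, or planned programs.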
In addition, the Phase I research found some variants to these general categories. For instance, some unique approaches were used for the reliability and freight measures, drawing on LOTTR and TTTR metrics at the road segment level to assess the risk that individual segments would change performance (i.e., shift from being considered reliable to unreliable, or yield a higher TTTR) in order to assess possible impacts on the overall measures (the share of person-miles traveled on reliable segments and the TTTR index). Moreover, the research found that some state DOTs and MPOs used multiple approaches to
forecast anticipated performance or combined multiple approaches (sometimes by averaging results) in order to select a target. The Phase I research also found that while data availability, data quality, and access to other tools appeared to be key factors in selecting an approach, the research and discussions in this phase of work made clear that the approach to target setting is also influenced by the target philosophy of the agencies. State DOTs and MPOs were found to hold several different views on the purpose of targets: to accurately forecast anticipated future performance (which may be referred to as a realistic target), to reflect a policy perspective (a more aggressive or even aspirational target), or to ensure that the target is attainable (a conservative target). The definition of what makes an "effective" target depends on which philosophy an agency subscribes to, and this information was documented in a Phase I report for panel review.

Analysis of 2018 Target Setting Methods

As part of the literature review, the research team attempted to classify the target setting methods used by all states for the 2018 submittals, based on analysis of the FHWA TPM Dashboard and subsequent interviews. It is important to note that in some cases documentation in the TPM Dashboard was limited, and it was difficult to discern what method was used to establish a target. Summary tables are provided for each of the following performance areas: 1) safety (PM1) targets; 2) pavement condition targets; 3) bridge condition targets; 4) travel time and freight reliability targets; 5) non-single occupancy vehicle (SOV) targets; and 6) peak hour excessive delay (PHED) per capita targets. For each of these performance areas, the research team classified the approach into appropriate types of methods.

Table 1.
Overview of Analysis Approaches for Establishing Safety Targets, 2018 Submittals

Method counts across the 50 states, the District of Columbia, and Puerto Rico:

Targeted Reduction: 22
Trend Analysis: 11
Trend with Adjustment: 11
Model: 4
Model with Adjustment: 1
Group Discussion/Unknown: 3

For the safety targets, methods were defined as:

• Targeted reduction – A defined decrease from the baseline, often based on policy;
• Trend analysis – A forecast based on historical performance trend;
• Trend with adjustment – A trend analysis with an adjustment to account for other factors;
• Model – A regression analysis or tool developed to account for various factors to predict performance;
• Model with adjustment – A model approach with an external adjustment to account for other factors; and
• Group discussion/unknown – Group discussion with a multidisciplinary working group and/or other stakeholders, or another undefined method.

Table 2. Overview of Analysis Approaches for Establishing Pavement Condition Targets, 2018 Submittals
Method counts across the 50 states, the District of Columbia, and Puerto Rico:

Trend Analysis: 3
Trend Plus Future Funding: 9
Model: 31
Scenario Analysis: 2
Other/Unknown: 6

For the pavement condition targets, methods were defined as:

• Trend analysis – A forecast based on historical performance trend;
• Trend plus future funding – A time-series trend analysis that also accounts for anticipated funding levels;
• Model or system-based – Using an asset management-based system (e.g., a pavement management system);
• Scenario analysis – Using an asset management system to predict conditions while analyzing multiple funding levels or strategies for prioritizing funding; and
• Other/unknown – Using the baseline, making an adjustment off the baseline, adopting minimum standards, or another undefined method.
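As a rough illustration of the scenario-analysis idea (not of any actual pavement management system), the sketch below projects the share of pavement in good condition under several funding levels; the deterioration rate, unit costs, and funding figures are all invented for the example.

```python
# Simplified stand-in for scenario analysis with a pavement management
# system: project the share of the network in Good condition under
# several funding levels. All rates and dollar figures are invented.

def project_good_share(good_share, years, deterioration=0.04,
                       funding_m=40.0, cost_per_point_m=10.0):
    """Each year a fixed share of the network drops out of Good
    condition; funding buys condition back at an assumed unit cost
    (one percentage point of Good per $10M, here)."""
    for _ in range(years):
        good_share -= deterioration                        # network ages
        good_share += funding_m / cost_per_point_m / 100   # treatments applied
        good_share = min(max(good_share, 0.0), 1.0)        # clamp to [0, 1]
    return good_share

# Compare funding scenarios against a 55% Good baseline over 4 years.
scenarios = {name: project_good_share(0.55, years=4, funding_m=f)
             for name, f in {"status quo": 40.0,
                             "increased": 60.0,
                             "reduced": 20.0}.items()}
```

An agency using this style of analysis would then pick a target consistent with the funding scenario it considers most likely (or most defensible).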
Table 3. Overview of Analysis Approaches for Establishing Bridge Condition Targets, 2018 Submittals
Method counts across the 50 states, the District of Columbia, and Puerto Rico:

Trend Analysis: 6
Trend Plus Future Funding: 6
Model: 32
Scenario Analysis: 3
Other/Unknown: 5

For the bridge condition measures, methods were defined the same way as for the pavement condition measures.
Table 4. Overview of Analysis Approaches for Establishing Travel Time Reliability and Freight Reliability Targets, 2018 Submittals
Method counts across the 50 states, the District of Columbia, and Puerto Rico:

Baseline with Assumptions: 13
Trend Analysis: 7
Trend Plus Other Factors: 19
Performance Risk Analysis: 2
Segment Risk Analysis: 3
Model: 5
Group Discussion + Analysis/Unknown: 2

For the travel time reliability and freight reliability measures, methods were defined as follows:

• Building off baseline, with assumptions – Maintaining the baseline level as the target or making an adjustment based on judgment;
• Trend analysis – A forecast based on historical performance trend;
• Trend plus other factors – A trend analysis with adjustments to account for other factors that may affect future performance;
• Performance risk analysis – Using monthly performance data to calculate a standard deviation and then using that deviation to assess a confidence level in order to set a target;
• Segment risk analysis – Using segment-level data to identify segments that are at risk of shifting across the threshold of a "reliable" segment;
• Model – A regression analysis or tool developed to account for various factors to predict performance, typically applied at the segment level; and
• Group discussion and analysis, or unknown – Engagement with stakeholders, which can rely upon data and analysis using any of the other approaches but was not clearly defined.
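The segment risk analysis described above can be sketched as follows. The sketch assumes the federal convention that a segment is considered reliable when its LOTTR is below 1.5; the segment data and the width of the "at risk" band are hypothetical.

```python
# Sketch of a segment-level risk assessment for the LOTTR measure:
# compute the share of person-miles traveled on reliable segments
# (LOTTR below the 1.5 threshold) and flag segments close enough to
# the threshold to flip. Data are hypothetical.

THRESHOLD = 1.5  # federal reliability threshold for LOTTR

def reliable_share(segments):
    """segments: list of (lottr, person_miles) tuples."""
    total = sum(pm for _, pm in segments)
    reliable = sum(pm for lottr, pm in segments if lottr < THRESHOLD)
    return reliable / total

def at_risk(segments, band=0.1):
    """Segments whose LOTTR sits within `band` below the threshold:
    reliable today, but at risk of turning unreliable."""
    return [(lottr, pm) for lottr, pm in segments
            if THRESHOLD - band <= lottr < THRESHOLD]

network = [(1.10, 500.0), (1.45, 200.0), (1.48, 100.0), (1.80, 200.0)]

share = reliable_share(network)   # current measure value
risky = at_risk(network)          # segments that could pull the measure down
```

If the flagged segments did flip, the measure would drop by their combined person-miles share, which gives a state a sense of how conservative its target should be.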
Table 5. Overview of Analysis Approaches for Establishing Non-SOV Targets, 2018 Submittals
Method counts across the applicable urbanized areas (UZAs):

Trend Analysis: 12
Trend Plus Other Factors: 11
Policy-based: 2
Model: 2
Other/Unknown: 6

For the non-SOV mode share measure, methods were defined as follows:

• Trend analysis – A forecast based on historical performance trend;
• Trend plus other factors – A trend analysis with adjustments to account for other factors that may affect future performance;
• Policy-based – While an analysis of trends may have been conducted, the target itself was set based on a policy direction to increase non-SOV mode share;
• Model – Using a regional travel model to forecast future mode share, often with the anticipated change applied to the baseline mode share; and
• Other/Unknown – Undefined methods.
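The model-based approach for the non-SOV measure can be sketched minimally: the change forecast by a regional travel demand model between its base and future years is applied to the observed baseline share. All values below are invented, and the names are illustrative only.

```python
# Sketch of the model-based non-SOV method: apply the change forecast
# by a regional travel demand model to the observed baseline share
# (e.g., a survey-based mode share). All values are hypothetical.

def non_sov_target(observed_baseline, model_base, model_future):
    """Shift the observed baseline by the model's forecast change."""
    return observed_baseline + (model_future - model_base)

target = non_sov_target(observed_baseline=0.262,
                        model_base=0.250,
                        model_future=0.258)
```

Applying the modeled change, rather than the modeled level, keeps the target anchored to observed data while still reflecting the model's forecast direction.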
Table 6. Overview of Analysis Approaches for Establishing PHED per Capita Targets, 2018 Submittals
Method counts across the applicable urbanized areas (UZAs):

Baseline with Assumptions: 2
Trend Analysis: 8
Trend Plus Other Factors: 13
Model: 2
Other/Unknown: 7

For the PHED per capita measure, methods were defined as follows:

• Building off baseline with assumptions – Maintaining the baseline level as the target or making an adjustment based on judgment;
• Trend analysis – A forecast based on historical performance trend;
• Trend plus other factors – A trend analysis with adjustments to account for other factors that may affect future performance;
• Model – Using a regional travel model to forecast future congestion, often with the anticipated change applied to the baseline PHED; and
• Other/Unknown – Undefined methods.

Summary of Phase I Focus Groups and Interviews

Overview

In order to supplement the literature review, the research team conducted focus groups and one-on-one interviews with state DOTs and MPOs to learn more about the data analysis and methods used to set targets. During these focus groups and interviews, the research team presented findings from the initial scan in Task 1, asked for additional information about target-setting practices, and facilitated discussions regarding practices and challenges with the methodologies used.
The research team conducted the following focus groups and interviews:

• PM1: Safety Measures (state DOTs)
  o Group 1: Louisiana, Delaware, Montana, Florida
  o Group 2: Texas, New Mexico, Virginia, Iowa
  o Group 3: Michigan, South Carolina, Colorado
• PM2: Infrastructure Condition Measures (state DOTs)
  o Arizona
  o Pennsylvania
  o South Dakota
  o Vermont
• PM3: Reliability, Freight, Congestion, and Congestion Mitigation and Air Quality Improvement (CMAQ) Program Emissions Reduction Measures (state DOTs and MPOs)
  o Reliability and Freight Measures Focus Group 1
    - Iowa
    - New Mexico
    - South Carolina
    - Texas
  o Reliability and Freight Measures Focus Group 2
    - Alabama
    - Maryland
  o Reliability and Freight Measures Email Exchange
    - California
  o Non-SOV and PHED Measures Focus Group
    - Memphis, TN-MS-AR UZA (Memphis MPO)
    - Philadelphia, PA-NJ-DE-MD UZA (Delaware Valley Regional Planning Commission)
    - Washington, DC-VA-MD UZA (Metropolitan Washington Council of Governments)
  o Non-SOV and PHED Measures Interview
    - Charlotte, NC-SC UZA (North Carolina DOT)
  o Non-SOV and PHED Measures Interview
    - Milwaukee, WI UZA (Southeastern Wisconsin Regional Planning Commission)

A description of each focus group and its findings is provided below.

PM1: Safety Measures

Three focus group discussions took place for the PM1 safety targets, loosely differentiated by how technical each state's method was, as described in the "basis for targets" rationales submitted to FHWA in the performance reporting process. The initial intent was to separate the discussions into three categories: states with non-technical approaches, semi-technical states that relied on data but lacked a full modeling approach, and states that had developed a statistical model. Due to scheduling logistics, there was some intermixing of these agencies across the discussions, but agencies in all categories were represented.
The focus groups included the following states:

• Group 1 (non-technical): Louisiana, Delaware, Montana, Florida (10/26/2020)
• Group 2 (mixed semi- and high-technical): Texas, New Mexico, Virginia, Iowa (10/28/2020)
• Group 3 (high-technical): Michigan, South Carolina, Colorado (11/04/2020)

All three of the discussions touched on the same general themes:
• The "philosophy" of whether targets should be ambitious or realistic, and alignment with other targets
• Communication challenges around setting increasing targets and around not meeting ambitious targets
• Coordination on target setting
• Specific internal or external influences the targets consider
• Use of data for analysis of crash causes and the actions the agency will take to address them.

Summary of Focus Group 1: Non-Technical

Philosophy. The participants in this focus group set flat reduction targets. While all of the participating agencies have "toward zero deaths" or "destination zero" long-term goals, all except Florida selected more realistic targets for the annual federal measures. However, all agencies agreed that the targets should be aspirational to convey to the public and stakeholders that they take the issue seriously and consider safety a top priority. Most participants believed that their agencies would not want to set targets showing worsening fatalities and serious injuries, regardless of what trend lines show.

Communications. All participants preferred the simpler flat reduction targets for aiding communication with stakeholders and the public. For Louisiana and Florida this was a major driver of their target decision, while Montana and Delaware mostly kept the federal targets in the background during discussions with stakeholders, focusing instead on their toward zero deaths goals. One reason for this is that many find the concept of five-year rolling averages more confusing than helpful. Communication of safety targets poses a dual set of challenges: the difficulty of communicating worsening performance for data-driven targets, or having to explain not meeting aspirational targets year after year. The participants in this group generally preferred having the latter discussion rather than the former. One even "wished stakeholders cared more" about the agency not meeting its targets.
Overall, there has been a positive reception from stakeholders on always striving to meet aspirational targets.

Coordination. Coordination with MPOs and other stakeholders varies, with some agencies incorporating their input into the actual target decision, and others reaching the target decision internally or with other state offices and communicating the result to MPOs. In the latter case, any concerns or pushback from the MPOs would be considered, but this was not part of the initial process and to date has not occurred.

Internal & External Influences. Most agencies neither used models other than trend lines nor formally incorporated external influences. Influences such as the impact of education campaigns, enforcement efforts, and other agency activity are "in the background" during target setting, but not formally or quantitatively incorporated.

Data, Analysis, & Decision-Making. All participants spoke of conducting data-driven analyses for internal purposes and decision-making but noted that this activity was almost completely separate from target setting considerations.

Description of Safety Target Setting Methods

Delaware. Delaware DOT (DelDOT) conducted a number of data and trend line analyses before its first round of federal target setting in 2018, but the offices leading the target setting effort decided to support the 2015 Strategic Highway Safety Plan (SHSP) goal of halving fatalities and serious injuries by 2035. Each year's targets are therefore a set reduction (3 fatalities and 15 serious injuries) in the annual numbers needed to reach that longer-term goal. The same method was applied to all five safety measures.
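The arithmetic behind a set annual reduction tied to a halving goal can be sketched as follows; the baseline values below are hypothetical and are not DelDOT's actual figures.

```python
# Arithmetic behind a set-annual-reduction target tied to a long-term
# goal of halving a measure over a fixed horizon (here, 2015 to 2035).
# Baseline values are hypothetical, not any state's actual figures.

def annual_decrement(baseline, goal_year=2035, base_year=2015,
                     goal_fraction=0.5):
    """Fixed yearly reduction needed to reach goal_fraction * baseline
    by goal_year, spread evenly over the horizon."""
    return baseline * (1 - goal_fraction) / (goal_year - base_year)

def target_for_year(baseline, year, base_year=2015, decrement=None):
    """Straight-line target for a given year under that decrement."""
    if decrement is None:
        decrement = annual_decrement(baseline)
    return baseline - decrement * (year - base_year)

# A hypothetical baseline of 120 fatalities implies a decrement of
# 3 per year, the same style of fixed reduction described above.
dec = annual_decrement(120)
t2020 = target_for_year(120, 2020)
```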
Florida. Florida DOT (FDOT) has a target of zero for both fatalities and serious injuries across all measures. Although this target was not set based on realistic trends and expectations, it does not mean the agency is not using data and making forecasts about where trends are realistically heading. This work is being done and discussed internally in relation to which interventions the agency should take. FDOT also continues to have regular discussions with MPOs around both data-driven forecasts and the annual target setting process. FDOT reports that it sometimes has to explain why a letter has come from FHWA about not meeting its target, but that this discussion is preferable to one about why the agency is comfortable setting increasing targets on such a serious issue.

Louisiana. The Louisiana Department of Transportation and Development (LaDOTD) views its safety targets as more of a communications tool than a forecast and has identified a simple 1% annual reduction as the most useful way to communicate on safety matters. The agency is committed to "making the most" of the federal requirements and the annual target setting process by hosting regular meetings with MPOs and local governments around safety. The simple 1% reduction target has been useful in these discussions, as LaDOTD can see which regions, emphasis areas, or sectors are doing better or worse relative to the target and discuss the difference easily with stakeholders. Additional data analysis occurs internally to identify notable trends and effective interventions, but this activity is separate from the target setting process. For example, LaDOTD tracks emphasis areas (roadway departures, etc.) in terms of how much progress they are making toward long-term goals. Current data show an increase, but the agency did not want to make this increase part of its formal targets.

Montana.
The Montana Department of Transportation (MDT) sets its federal safety targets separately from its longer-term Vision Zero goal and its interim goal of halving fatalities from the 2007 level by 2030. Federal targets were set by establishing historical trends and projecting them out to the target year.

Summary of Focus Group 2: Mixed Semi- and High-Technical

Philosophy. The participants included semi-technical approaches (Texas and New Mexico), Iowa's risk management approach, and Virginia's highly technical modeling approach. Most agencies preferred data-driven approaches, except for Texas, which switched from a trend-based approach to selecting a more ambitious reduction for its targets. Two states have methodologies that either sometimes or always adjust targets to be more conservative than trends project: Iowa's overall target setting strategy is to set targets higher than the trend indicates to accommodate risk, and in one year New Mexico increased its target from the projected trend to account for expected increases in driver behaviors that may lead to fatalities or serious injuries. These examples contrast with the more common occurrence of states adjusting forecasted trends downward in light of high targets being "unacceptable."

Communications. Most participants agreed that there are some challenges in talking with some parties, particularly politicians, about worsening targets, but that on the whole the concern is not so great that it would dissuade them from setting realistic, data-driven targets. One participant wished there were more pushback on not meeting targets, as an indication that stakeholders are taking the targets seriously. Texas was the only state with the opposite experience: members of a target working group decided it was better to communicate an aspirational target in line with the new "zero fatalities" statewide message. Most agreed that bicycle and pedestrian results and targets get the most attention.

Internal & External Influences.
Most DOTs in the group indirectly consider the effects of education and enforcement campaigns, and there was discussion of trying to break down silos in the future to more closely connect these activities to targets. Targets are adjusted by most agencies for vehicle miles traveled (VMT), either after VMT trends are projected or incorporated into the fatalities or serious injuries projection
itself. Virginia was the clear outlier on considering internal and external influences, with a model that incorporates 14 different factors.
Data, Analysis, & Decision-Making. All states in the group rely on historical trend data as a beginning reference point, but additional data collection, analysis, and decision-making activity varied. Iowa's and New Mexico's data analysis and decision-making are completely separate from target setting, though New Mexico does use performance relative to targets as a criterion for Highway Safety Improvement Program funding. Texas breaks targets and performance relative to targets down to the county level and uses the projected fatality rate difference to help determine funding allocation. Virginia uses the most data and tightly connects analysis and subsequent decision making with the target setting process. The model used to forecast the targets considers 14 factors that might influence outcomes and is also used to guide investment and policy decisions for safety spending.
Description of Safety Target Setting Methods
Iowa. Iowa DOT uses a "risk-based" approach that considers uncertainty and is specifically designed to get the agency thinking about likelihood and consequence while setting targets, and to avoid trying to "guess" the impacts of changes. There were two drivers behind this decision: the performance outcomes have inherent randomness, and the whole process is very short term. Since there is not enough money or time to move the needle on outcomes in the next year or two, Iowa instead asks, "how likely are we to meet our short-term targets?" The agency sets a 75% confidence interval around the projected trend and sets the target at the upper bound, meaning the agency is 75% sure it will meet its target. This process allows management to set the risk and willingness to accept consequences.
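Iowa's risk-based bound can be illustrated with a short sketch. This is not Iowa DOT's actual model; the fatality counts here are invented, and a normal approximation (z ≈ 0.674 for a one-sided 75% bound) stands in for whatever distributional assumptions the agency actually uses:

```python
import numpy as np

# Illustrative historical fatality counts (made-up numbers).
years = np.array([2013, 2014, 2015, 2016, 2017])
fatalities = np.array([345.0, 338.0, 351.0, 347.0, 356.0])

# Project the linear trend to the target year.
slope, intercept = np.polyfit(years, fatalities, 1)
target_year = 2019
point_forecast = slope * target_year + intercept

# Forecast standard error from the regression residuals (prediction interval).
resid = fatalities - (slope * years + intercept)
n = len(years)
s = np.sqrt((resid ** 2).sum() / (n - 2))
x_bar = years.mean()
pred_se = s * np.sqrt(1 + 1 / n + (target_year - x_bar) ** 2 / ((years - x_bar) ** 2).sum())

# One-sided 75% upper bound (normal approximation, z ~ 0.674): the target
# sits above the trend, so the agency is ~75% likely to come in at or below it.
target = point_forecast + 0.674 * pred_se
```

Setting the target at the upper bound rather than the point forecast is what converts a statistical forecast into a statement of risk tolerance; raising the confidence level raises the target and lowers the chance of missing it.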
More Details: https://iowadot.gov/systems_planning/planning/federal-performance-management-and-asset-management#494851853-iowa-dot-federal-performance-targets
New Mexico. New Mexico DOT (NMDOT) conducts straight-line trend analysis and sets targets where the trend indicates performance is heading. Because the state has such a high rate of SUVs and impaired driving, in some years NMDOT has even set the target above the projected trend to account for trends in these areas. Further analysis on the causes of crashes and interventions is separate from target setting.
Texas. Texas DOT (TxDOT) initially had a more data-driven approach based on a 2% reduction from trends (rather than a reduction from baseline), which was based on the SHSP target. Last year, the agency established a target working group that would approve all future targets. When TxDOT shared this with the working group, they asked why the agency was setting higher targets: "We already set these through the SHSP. If we are going to change, [we] need to talk with FHWA." Since Texas had set a target of zero fatalities by 2050, there were concerns that the national targets were not aligned with these more ambitious targets.
Virginia. Virginia DOT's (VDOT) target setting philosophy changed in 2017, when the agency was seeing increases in fatalities and serious injuries and there was no way to hit its ambitious five-year projections. In contrast to Texas's switch, VDOT's assessment was that the targets were too optimistic to be useful and the agency needed more detailed predictions from which to make decisions. Its current approach is a very data-heavy model that accounts for 14 different factors' impact on safety outcomes.
More Details: https://journals.sagepub.com/eprint/HY7SGYAVUKKNGAA6G4TD/full
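The simpler methods above reduce to a few lines of arithmetic: Montana-style trend projection and Louisiana-style fixed annual percentage reduction. A sketch with made-up fatality-rate figures:

```python
import numpy as np

# Made-up 5-year rolling-average fatality rates (per 100M VMT).
years = np.array([2014, 2015, 2016, 2017, 2018])
rate = np.array([1.42, 1.38, 1.41, 1.36, 1.33])

# Trend-based (Montana-style): project the fitted line to the target year.
slope, intercept = np.polyfit(years, rate, 1)
trend_target = slope * 2020 + intercept      # two years past the last observation

# Fixed-reduction (Louisiana-style): 1% per year from the latest value.
reduction_target = rate[-1] * 0.99 ** 2
```

The two methods can diverge noticeably when recent data move against the long-run trend, which is exactly the situation in which states report having to choose between a "realistic" and a "communicable" target.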
Summary of Focus Group 3: Highly Technical Approaches
Philosophy. All the states in this group mentioned the use of models in setting safety targets, and in line with this the participants all expressed the importance of a "data-informed" approach to setting their targets, even if their states have a Vision Zero target more broadly. Both Michigan and South Carolina DOTs used to set a straight-line reduction target, due to influence from other, more politically driven state agencies, but over time the agencies felt that this simpler target setting approach did not allow them to measure and highlight the impact of their programming. The switch to data-informed targets allowed them to better incorporate these and other considerations into their targets and planning, and to communicate realistically with partners. All three states in this focus group allow for setting worsening targets.
Communications. The participants acknowledged the challenge of communicating an increasing target, but all felt that this conversation was better than one about why the agency never met its targets, since it is more firmly based in the reality of what is happening on the roads and can more effectively lead to solutions. South Carolina DOT (SCDOT) has found that one way to communicate increasing targets effectively is to show stakeholders the trend line under the assumption that no agency action takes place and compare that to the trend projection that includes the actions the agency will take to reduce crashes. This comparison is effective for showing improvement even in worsening conditions.
Coordination. Coordination practices on targets were mixed. MPOs were not directly involved in setting the target level in South Carolina, though SCDOT maintains open communication and supports opportunities for MPOs to select projects with safety in mind.
Both Michigan and Colorado DOTs have regular monthly meetings with their MPOs, where they share safety data and discuss target setting options before targets are formally set. These are presented as "discussions, not directives" from the DOTs, and MPO input is included in the final decisions.
Internal & External Influences. All three agencies attempted to incorporate both exogenous factors and the impacts of programs and policies within agency control. VMT trends and agency countermeasures were used by all in forecasting targets. This includes initiatives such as educational and enforcement campaigns, but all participants agreed that these elements are much harder to quantify and there is still work to be done to realistically incorporate them into forecasts. Colorado and Michigan DOTs accounted for regulatory and policy changes as well as numerous economic factors, which Michigan's model showed to have some of the largest effects on outcomes. Michigan DOT went the furthest by also including factors to account for personal risk proclivity and changes in vehicle technology.
Data, Analysis, & Decision-Making. Analysis of causal factors and decisions on how to counter crashes are tied together for all three agencies, since all targets are driven by the analytical models that drive these decisions. These participants report that this involves significant data gathering and therefore significant groundwork before targets can be set. They also speak of the significant amount of work after setting targets to communicate results to stakeholders, answer questions, align plans, track progress, and make decisions. There was some split on the usefulness of targets for subsequent actions.
Participants indicated that the targets are too short-term to try to make State Transportation Improvement Program (STIP) changes to meet them, but Colorado DOT reported trying to better incorporate the targets into long-range planning and to consider what they mean for later STIP decisions. SCDOT will shift funding between focus areas depending on short-term performance relative to targets and has also found that the targets are a helpful tool to convince other stakeholders to support changes needed to improve performance.
Description of Safety Target Setting Methods
Colorado. Colorado had notable increases in fatalities and serious injuries between 2013 and 2018, when the trend reversed slightly; the increases were driven by factors such as population and VMT growth, the legalization of marijuana, and a thriving economy. Colorado DOT used these factors, along with funding and policy changes, as part of its models for setting targets. The agency ran several different models to predict its measures, applied best-fit curves, and selected final targets based on expert examination of the different predicted values. This was done through collaborative statistical analysis by Colorado DOT's Highway Safety Office and Traffic and Safety Engineering Branch. All models showed a flattening of fatalities and serious injuries. The federal targets are separate from the agency's more ambitious Moving Toward Zero Deaths goal in the SHSP.
Michigan. Michigan DOT relies on a model developed by the University of Michigan Transportation Research Institute, based on research from NCHRP Report 928, Identification of Factors Contributing to the Decline of Traffic Fatalities in the United States. The model relies on the correlation between traffic crashes, VMT, and road users' personal risk behavior. Four categories of factors were used to predict outcomes: the economy, safety and capital expenditures, vehicle safety, and safety regulations. Within the model, economic factors such as the Gross Domestic Product per capita, median annual income, the unemployment rate among 16- to 24-year-olds, and alcohol consumption had the greatest impact, at approximately 85%. Preliminary findings indicate that individual acceptance of risk seems to have a greater impact than changes in VMT. The final model for fatalities is a log-change regression model that shows a slow and steady decline, while Michigan DOT used a linear model with eight years of data for serious injuries.
South Carolina.
SCDOT used to set safety targets using a straight 1% or 2% reduction from baseline, but in the last four years the agency has moved toward a more data-informed approach. This change was prompted in part by a large increase in safety funding from the state, after which SCDOT knew it would need to be able to show it used the funds as effectively as possible. The agency quantified the historical relationship between VMT and fatalities, then applied this factor to a base trendline projected out as a forecast. It then applied estimates of the effect of SCDOT's programs and countermeasures. This process used crash modification factors that are customized for South Carolina's higher rates of serious crashes. In the future, SCDOT would like to consider additional factors, such as economic factors and other considerations used in the University of Michigan's model, and incorporate MPO projects.
PM2: Infrastructure Conditions Measures
One focus group was held for the PM2 infrastructure conditions measures on November 2, 2020. The focus group included the following states: Arizona, Pennsylvania, South Dakota, and Vermont. The conversation focused on the following topics:
• Technical concerns or challenges that shaped the agency's approach to target setting.
• Nontechnical issues that impacted target setting.
• Recommended good practices for setting infrastructure condition targets.
• Planned improvements to the target setting process.
Summary of PM2 Focus Group
The focus group identified several similarities and differences between the agencies' approaches to target setting. The following bullet list highlights similarities between the agencies. Additional specifics of each agency's approach are highlighted in the following section.
25 ⢠All participating agencies reported that technical challenges related to data quality, and discontinuities between historical data formats and the national highway performance (NHP) measures for pavement and bridge conditions, contributed to them to establishing relatively conservative targets. ⢠All agencies agreed that establishing short-term targets for infrastructure conditions does not provide adequate direction to investment decisions related to long-term improvement of those conditions. ⢠Each of the agencies base their investment decisions on factors that influence longer-term performance and use the short-term TPM targets for reporting and monitoring of progress towards longer-term goals. ⢠None of the agencies in the interview account for the anticipated addition of new assets in forecasting conditions. However, they all recognize that new assets impact actual conditions reported and thus the agencyâs ability to achieve their established targets. None of the agencies have plans to change this approach. ⢠All agencies identified significant challenges to establishing targets in terms of the PM2 measures, including: o National metrics for pavement differ from historic metrics used by the agencies. o The national metric for cracking is not well defined. o The national pavement metrics are calculated based on 1/10-mile segments, but pavement management systems (PMS) are configured to model longer segments. o The transition to element-level data for bridge inspections can create a disconnect between performance models configured using element-level data and component-level ratings used for the national bridge measures. ⢠Moving forward, the agencies are focused on improving data quality and developing models that can directly forecast the national measures as opposed to estimating the national measures based on forecasts of the agencyâs historic condition metrics. 
⢠Even with these planned improvements to modeling the national measures, none of the agencies intended to replace their historic condition metrics for purposes of establishing long-term strategies or programming priorities. Description of Infrastructure Conditions Target Setting Methods Arizona. Arizona DOT established their targets at the same time the agency was changing its investment strategies and implementing new managements systems. The targets were established using trend analysis of historic data outside of the pavement and bridge management systems. Through this exercise the agency realized that the short-term targets do not allow enough time to see the impacts of changes in investments. Moving to more preservation activities has a long-term benefit but it is not reflected in short-term performance. This is particularly true regarding impacting the percent of good pavements and bridges. The agency is working to better configure its bridge management system to account for this, leading Arizona DOT to establish conservative targets. Arizona has historically managed bridges on a worst-first basis, which has kept the percentage of poor bridges low, but this approach is becoming unaffordable. Since bridges deteriorate very slowly it is difficult to demonstrate the benefit and importance of bridge preservation. The recent reduction in revenues as a result of COVID-19 has led to a disproportionate cut in bridge funding versus pavement funding. Arizona DOT is working to reflect the anticipated impacts of this funding change in the next Transportation Asset Management Plan (TAMP). One way to do this is to subdivide the fair category for bridges and identify critical bridges that are expected to transition from fair to poor due to the delay of needed preservation. These efforts are intended to better communicate the liability of reducing preservation investments.
To establish bridge management strategies, Arizona DOT looks at impacts 10 or more years out. The slow deterioration rates of bridges make changes in the near term very small. Also, the large range of conditions that fit into the fair category masks some of the state's potential liabilities. Right now, many of the "fair" bridges in Arizona are at the low end of fair and at risk of becoming poor without proper investments. This potential liability cannot be seen through the current national measure definitions, particularly within a 2- or 4-year time horizon. Arizona DOT identified an issue related to the collection of element-level and component-level bridge data. In 2016, bridge data collection transitioned to element-level data, but the national measures for bridges are based on component-level data. While both types of data are collected during inspections, Arizona DOT identified a discontinuity between the two sets of ratings on many bridges. This has been addressed through procedures and training, but since bridge inspections are performed on a 2-year cycle, it will take time to evaluate the effectiveness of the changes and the need for further improvement. The agency is moving to more performance-based programming, so project selection will be more focused on how to drive long-term conditions. The PMS has been in place since 2018, and the bridge management system has received significant refinements as well. These improvements should help the agency improve trend analysis and forecasting. Pavement data quality is improving every year. The agency has incorporated all National Highway System (NHS) pavements in the PMS, including 1,800 lane miles of locally owned pavements. Bridge data are also being improved at the element level.
Pennsylvania. Pennsylvania DOT (PennDOT) used the agency's pavement and bridge management systems to develop forecasts in support of target setting.
At the time of target setting, PennDOT was in the process of changing its planning and programming processes to focus on lowest life-cycle cost, in alignment with the agency's asset management plan. However, changes to the programmed projects could not be implemented immediately. In establishing the targets, PennDOT knew the existing program would be implemented as is. PennDOT used its pavement and bridge management systems but did not include toll authority or locally owned assets. The Pennsylvania Turnpike Authority was using a different method to collect pavement distress prior to 2017. As a result, PennDOT needed to assume that the deterioration of turnpike pavements would be the same as the deterioration of PennDOT pavements. Because of all these factors, PennDOT established conservative targets that did not reflect the agency's current preservation-first philosophy, since that philosophy was not yet reflected in the program of projects. As a result, PennDOT overperformed on some of its targets. The agency is not changing any of the targets for 2021. PennDOT established some declining targets, and these declines in performance were difficult to communicate to partners such as MPOs. MPOs needed to either adopt the state targets or establish their own. However, several MPOs were reluctant to "sign off" on targets that showed conditions declining. For example, PennDOT's statewide targets indicated interstate pavements could show declining condition levels, from 0.5% poor to 2.5% poor. PennDOT staff met with each MPO to discuss the target setting approach. PennDOT staff noted that the numbers were based on projections using forecasted revenue, and without additional revenue there was not a way to justify setting higher targets. Since MPOs lacked the ability to forecast pavement and bridge conditions, they eventually agreed to support the statewide targets.
The bridge management system has been improved over the past 2 years to provide much better long-term forecasts of overall conditions. Future improvements are planned to allow the PMS to forecast the national measures, but this will not be done soon. The challenge in forecasting pavement measures is due to both the metrics used and the approach to segmentation of the network (1/10-mile vs. project segment length).
South Dakota. South Dakota DOT (SDDOT) used the agency's pavement and bridge management systems to develop forecasts that supported target setting. For pavements, SDDOT is unable to model the national measures. Instead, the state measure, the surface condition index (SCI), was forecast and then correlated to the federal pavement condition measures. Establishing the targets in this way required several assumptions, the largest of which was that funding and conditions would be relatively stable over the target-setting period. Through this approach, SDDOT developed targets that have turned out to be conservative. For example, the target for the percentage of interstate pavement in poor condition is 2.5%, but actual conditions have shown nearly 0% of interstate pavements in poor condition. Bridge targets were established in the same general manner using projections from the bridge management system. However, the consistency in bridge measures led to a more straightforward process. To address the uncertainty related to targets, SDDOT prefers to establish target ranges. This is reflected in the South Dakota TAMP, which shows minimum and goal targets in addition to the current and 10-year projected condition levels for pavements and bridges using the agency's performance measures. The TAMP shows the national performance measures in separate tables.
Vermont. Vermont Agency of Transportation (VTrans) based its targets for pavement conditions on forecasts of conditions from the agency's PMS. VTrans has traditionally used four condition indices to manage its pavements. These indices measure condition based on ride quality, rutting, longitudinal cracking, and transverse cracking. However, the cracking metric that VTrans used historically did not align with the pavement performance metric established in the federal regulations. The federal regulation focuses on wheel path cracking, but the VTrans metric is more comprehensive.
That difference made it difficult to forecast conditions, since the performance models in place used a different cracking calculation. This was a larger issue for forecasting the amount of pavement in Good condition. Because of this uncertainty, VTrans established conservative targets. The agency's bridge management system is not as mature as its pavement management system, which led to the use of historic trends to establish bridge targets. Data quality and consistency were more of a concern for pavements than bridges. The component-level data for bridges was consistent between historic and current approaches. However, the national measures for pavement condition represent a different approach to measuring distress than VTrans' historic pavement data. As a result, VTrans used projected pavement condition trends in terms of the historic measures and applied that trend to the national measure. The agency also had an issue with data quality related to the rutting metric, which led to an overmeasurement of pavement in Good condition in the baseline year for target setting. This combination of data concerns led the agency to establish conservative targets for pavement condition. VTrans is working to better align performance management and asset management. It is expected that the next version of the TAMP will be organized to more clearly convey how life cycle strategies and investments impact the performance of interstate pavements, non-interstate NHS pavements, and NHS bridges. This can be done by addressing each of these asset classes and networks in each section of the TAMP, allowing the reader to better understand how the agency's practices translate to performance and performance expectations.
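The translation step that SDDOT and VTrans describe, forecasting the agency's own condition metric and then mapping it to the national measure, can be sketched as follows. The numbers and the simple linear mapping are purely illustrative; the agencies use their management systems and richer correlations:

```python
import numpy as np

# Made-up history: an agency surface condition index (SCI, 0-100 scale)
# alongside the federal "% poor" measure for the same network.
sci = np.array([82.0, 81.0, 79.5, 78.0])   # state metric
pct_poor = np.array([1.8, 2.0, 2.3, 2.6])  # federal measure, percent

# Fit a simple linear mapping between the two measures.
a, b = np.polyfit(sci, pct_poor, 1)

# Forecast the state metric with its own trend, then translate.
years = np.arange(len(sci))
s_slope, s_int = np.polyfit(years, sci, 1)
sci_forecast = s_slope * (len(sci) + 1) + s_int  # two years past the last observation
target_pct_poor = a * sci_forecast + b
```

The largest assumption, as SDDOT notes, is that the historical relationship between the two measures (and the funding behind it) stays stable over the target-setting period.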
PM3: Reliability, Freight, Congestion, and CMAQ Emissions Reductions Measures
Separate focus groups/interviews were held for the reliability and freight measures and for the Non-SOV and PHED measures. To accommodate participants' schedules, the PM3 focus groups were broken down as follows, including some individual interviews and e-mail exchanges:
• Reliability and Freight Measures Focus Group 1 (10/26/2020): Iowa DOT, NMDOT, SCDOT, TxDOT
• Reliability and Freight Measures Focus Group 2 (10/27/2020): Alabama DOT, Maryland DOT
• Reliability and Freight Measures Email Exchange (10/28/2020): California DOT
• Non-SOV and PHED Measures Focus Group (10/23/2020): Memphis, TN-MS-AR UZA (Memphis MPO); Philadelphia, PA-NJ-DE-MD UZA (Delaware Valley Regional Planning Commission); Washington, DC-VA-MD UZA (Metropolitan Washington Council of Governments)
• Non-SOV and PHED Measures Interview (11/03/2020): Charlotte, NC-SC UZA (North Carolina DOT)
• Non-SOV and PHED Measures Interview (11/03/2020): Milwaukee, WI UZA (Southeastern Wisconsin Regional Planning Commission)
All PM3 discussions covered the following topics:
• Details on the target setting methods used, including data, tools, and process
• Challenges faced in setting targets
• Future plans for target setting
• Resources or information that would help improve the process in the future
Travel Time Reliability and Freight Reliability
Summary of Travel Time Reliability and Freight Reliability Measures Interviews
A handful of themes emerged during our interviews on the travel time and freight reliability measures:
• Many agencies found no real incentive to set aspirational targets, and therefore chose to set more conservative targets that would be achievable based on the data available.
⢠Within many agencies, there is a disconnect between the planning process and target setting timelines: planning and programming has already been done for the next two and four years, so target setting is happening afterwards, and therefore lacks ability to influence planning. ⢠Limited historical data is a significant barrier to establishing a robust target setting process. ⢠COVID-19 has made setting targets even more challenging, and many agencies consequently opted not to adjust targets for the 2020 mid-performance period. ⢠For states like Iowa and New Mexico that have 100% or nearly 100% reliability, these targets are not especially useful. ⢠State DOTs need to understand how FHWA uses targets because that affects how states approach the process of target setting. ⢠Many states noted the significance of regional differences and proposed varying ways to account for those differences and integrate them into statewide targets. ⢠Regarding what effective target setting means, many seem to be figuring that out â whether it is effective to set a target and then achieve it, or whether it is effective to set a target that motivates certain actions in an attempt to achieve the target.
Description of Travel Time Reliability and Freight Reliability Target Setting Methods
Alabama. Alabama DOT (ALDOT) calculated a traffic growth rate for the state and used a weighted growth rate for major counties to adjust for their relative impacts. The agency originally developed a statistical model for level of travel time reliability (LOTTR) and then revised it with an analysis of current capacity and growth rates to forecast future LOTTR. For the mid-performance period, ALDOT reviewed the programmed projects used in the 2018 analysis and found some did not get authorized or completed according to plans. The targets were adjusted accordingly in 2020, and its already conservative approach became even more conservative.
California. In preparation for the PM3 2018 target-setting effort, coordination between the California Department of Transportation (Caltrans) and California MPOs occurred via guidance from Technical Advisory Group (TAG) meetings, which included members from MPOs and Caltrans, in-person/webcast workshops in 2017 and 2018, and other key stakeholder meetings. The information provided by the MPOs via these workshops and meetings was used to collaboratively establish targets for four of the performance measures. A key tool used for setting the initial targets was the "National Performance Management Research Data Set (NPMRDS) Analytics" web-based tool provided by the Regional Integrated Transportation Information System (RITIS). Caltrans was provided access to this tool as a participant in the Transportation Performance Management Pooled-Fund Study. This tool was vital in establishing four of the six initial performance measure targets because it provided a simple, easy-to-use analysis of the NPMRDS data. Several innovative collaborative tools were used in working with stakeholders to establish the initial targets.
For instance, at the December 2017 Target Setting Workshop, held in Los Angeles, participants used an interactive text-based polling tool called "Poll Everywhere." Workshop participants were given draft baseline numbers for each of the performance measures and then given three target scenarios: 1) setting targets above the existing baseline number; 2) maintaining the existing baseline number; and 3) setting targets below the existing baseline number. Next, the participants were provided a text number to vote on which target-setting direction they supported. Finally, these results were used to prepare draft targets for future discussions with the TAG, Caltrans management, and MPOs. (Note: Individual discussions were held with each MPO with a UZA over one million to establish targets for the two congestion performance measures that are reported for UZAs.)
Iowa. A data-driven/data-supported approach to target setting was Iowa DOT's primary focus. Iowa DOT used a trend analysis where sufficient historical data were available. For PM3, because only one year of data was available, a statistical analysis with the Center for Advanced Transportation Technology Laboratory (CATT Lab) tool was used. For the 2020 mid-performance period, Iowa re-evaluated its approach, as it had missed two of its three targets. It then changed its statistical analysis models and adjusted targets accordingly. This meant changing its 2018 assumption that reliability data were normally distributed. An analysis of monthly data was used to analyze deviation, in the absence of more than one year of annual data, to establish a distribution. Because of its preference for a data-driven approach to target setting in 2018 and 2020, exogenous qualitative variables were not incorporated. However, the DOT indicated those variables will likely be considered in 2022, when more data are available.
Iowa also indicated its system is nearly 100% reliable, which makes a statewide target for reliability of limited value and meaning. The state is therefore focusing attention on its few unreliable segments. It also noted that because projects are already programmed for the next few years, it sees the targets more as a check on whether the system is progressing in the right direction in the long term.
More Details: https://iowadot.gov/systems_planning/fpmam/Methodology-for-PM3-target-adjustments.pdf
Maryland. Maryland DOT used a consultant to fit statistical models linking LOTTR and Truck Travel Time Reliability (TTTR) on individual roadway segments, as reported by the NPMRDS, to segment attributes, including volume, capacity, and roadway characteristics from the Highway Performance Monitoring System (HPMS) and NPMRDS. While the models have limited overall explanatory power (explaining about 25% of the total variation in segment-level scores), they yielded highly significant coefficient estimates for forecasting. The approach to setting targets then considered current and planned roadway investments. The agency looked at each current and planned project through 2021 from its STIP and identified those associated with capacity improvements. Forecasted future volumes based on traffic growth rates and anticipated future capacity accounting for planned projects were then applied in the model to develop a forecast of LOTTR and TTTR. In 2020, for the mid-performance period analysis, Maryland DOT did not meet its interstate reliability target. In adjusting the target, it looked at COVID recovery scenarios, and all scenarios projected missing the target. Even though the data pointed to a number below the baseline, a policy decision was made not to set a target below the baseline, so the target was revised to equate to the baseline.
Although Maryland missed its targets, Maryland DOT feels the methodology it used to set them was sound and is therefore not currently planning to change its approach. The agency noted that part of its effective approach to target setting for reliability and freight is related to its new TSMO division, which is focusing in part on improving reliability. Our interviewee noted the importance of the messaging that accompanies the targets and of communicating their significance, both to enhance the target setting process and to enact related actions to achieve those targets.

More Details: https://www.baltometro.org/sites/default/files/bmc_documents/committee/presentations/tc/TC180807pres_MDOT0Reliability-Forecasting.pdf

New Mexico. To establish its reliability and freight targets, NMDOT used model fitting through log-linear regression analysis. It used volume-based growth and a travel demand model to forecast growth. The targets chosen were conservative, in part because the agency found no incentive to set aspirational targets. It focused more on a data-driven approach that would establish achievable targets for the agency. The agency opted not to make adjustments for the mid-performance period due to uncertainties around COVID-19 and its effects on reliability. In terms of tools that would facilitate an improved target setting process in the future, NMDOT noted that resources on strategies to improve travel time reliability would be helpful both in framing its approach to target setting and in its efforts to achieve those targets.

South Carolina. To establish its reliability and freight targets in 2018, SCDOT tested several different statistical techniques to understand the relationship between several variables and reliability, but the
differing statistical models did not make a significant difference. One variable it considered was the impact of paving projects, but its analysis did not yield a conclusive result, likely due in part to a limited dataset; the agency will continue to analyze this relationship as more data become available. For 2020, SCDOT changed its approach. It is making adjustments based on new data, focusing on analyzing isolated unreliable hotspots, and conducting customized analyses for different regions to account for the differing factors and variables unique to each region (for example, coastal regions are constricted by peninsulas and other geographic features). It is also looking at scenario analyses for future target setting. SCDOT noted issues with the level of clarity from FHWA on how the targets are being used or should be used; coupled with the fact that South Carolina already had an existing performance management system, the agency did not devote many resources or much capacity to setting the TPM targets. The agency is also unclear on the relationship between concrete actions (like building a certain project) and the impacts of those actions on the targets and is working to better understand those relationships for future target setting.

Texas. The Texas Department of Transportation (TxDOT) worked with the Texas Transportation Institute (TTI) to establish targets. TxDOT ran a linear regression to project future years' traffic volumes using data from the state's STARS II traffic database and NPMRDS data. The analysis estimated performance for segments and then rolled those estimates up to the state level to set statewide targets. More than in the other states interviewed, a large part of TxDOT's analysis relied on the state's large MPOs and their individual travel demand models, as those MPOs account for 60% of the state's total VMT. This approach was taken due to the sophistication of the MPOs' models and the differing conditions between regions.
Texas ultimately chose to set conservative targets and stressed the importance of coordinating with MPOs and other stakeholders. For the mid-performance period, TxDOT is using 2018-2019 data to reexamine its targets but is still discussing the right methodology due to COVID-19 uncertainties.

Non-SOV Mode Share and Annual Hours of PHED per Capita

Summary of Non-SOV Mode Share and PHED Interviews

• Approaches to setting targets for non-SOV travel were relatively consistent and simple. All five focus group participants and interviewees used American Community Survey (ACS) data and conducted a trendline analysis to support setting targets for the non-SOV mode share measure.
• For the Charlotte, NC-SC UZA, the Milwaukee UZA, and the Washington, DC-VA-MD UZA, a combination of the ACS data trendline plus travel demand model forecasting was used. The Philadelphia UZA partners discussed supplementing the ACS data with other data types like transit ridership but ultimately chose not to due to incomplete or unreliable data from other modes.
• A theme that emerged in conversations was that the non-SOV share changes by a very small amount each year (and sometimes not at all), so targets were being considered in tenths of a point.
• Another theme was the lack of resources and attention the target setting process was given at many agencies during the mid-performance period, which stemmed from a combination of competing priorities and timelines and staffing limitations during the COVID-19 pandemic.
• Uncertainties related to COVID-19 have made setting targets even more challenging, and many agencies consequently opted not to adjust targets for the 2020 Mid-Performance Period.
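The ACS trendline analysis that all five participants described amounts to fitting a straight line through recent annual estimates and extending it to the target year. A minimal sketch follows; the years, mode-share values, and function name are hypothetical, not any agency's actual figures.

```python
def trendline_target(years, shares, target_year):
    """Project non-SOV mode share with a simple linear trend through
    annual ACS estimates (least-squares fit, then extrapolation)."""
    n = len(years)
    my, ms = sum(years) / n, sum(shares) / n
    slope = (sum((y - my) * (s - ms) for y, s in zip(years, shares))
             / sum((y - my) ** 2 for y in years))
    intercept = ms - slope * my
    # Targets were discussed in tenths of a point, per the interviews
    return round(intercept + slope * target_year, 1)

# Hypothetical non-SOV mode share (%) from five annual ACS estimates
years  = [2013, 2014, 2015, 2016, 2017]
shares = [22.0, 22.1, 22.1, 22.3, 22.4]
target = trendline_target(years, shares, 2021)
```

Because the year-over-year movement is so small, the extrapolated target here moves only a few tenths of a point above the last observation, which matches the interviewees' experience with this measure.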
Description of Urbanized Area (Non-SOV and PHED per Capita) Target Setting Methods

Memphis MPO (Memphis, TN-MS-AR UZA). The Memphis MPO led the target setting for CMAQ measures for the Memphis, TN-MS-AR UZA, coordinating among multiple agencies (three states and two MPOs). The Memphis MPO tested multiple regression analyses and set conservative targets. For 2018, Tennessee DOT developed a model for PHED targets using an existing contract with the University of Tennessee. For the mid-performance period in 2020, more INRIX data was available, and the agencies therefore opted not to use the 2018 model. They ran a regression analysis with the INRIX data and chose the lowest modeled value, which they found to be a reasonable estimate based on planned construction and other known variables. For non-SOV travel, Memphis used ACS data to conduct a basic trendline analysis to set its target. The team of agencies discussed using a model to set this target but ultimately opted to use the ACS trendline data alone for this initial round, as the partners agreed it was the most reliable data source and the most straightforward method for setting this target. For both 2018 and 2020, the Memphis MPO characterized the CMAQ target setting process as consensus/committee driven. The interviewees found this to be a sensible approach, given the number of agencies involved in the UZA and the lack of experience among all in setting targets for these measures. The Memphis regional partners are considering how and whether to consider capital improvements as they set targets going forward but noted the challenge in attributing impacts on these measures to specific investments.

More Details: https://www.fhwa.dot.gov/tpm/reporting/state/uza.cfm?uacc=56116

Delaware Valley Regional Planning Commission (Philadelphia, PA-NJ-DE-MD UZA). The Delaware Valley Regional Planning Commission (DVRPC) led the target setting process for the CMAQ measures for the Philadelphia, PA-NJ-DE-MD UZA.
For the non-SOV travel measure, DVRPC did not want to use overlapping 5-year ACS data, as the overlapping data would change the trendline, but was interested in how other agencies handled this and whether they opted to use overlapping data. It ultimately used a linear trend and established a baseline. DVRPC looked at household travel survey data to compare to the ACS data and found a difference, but because the survey data was older than the ACS data, it opted to rely on the ACS. The agency also discussed using other data sources, such as transit ridership data, to supplement and validate the ACS data, but ultimately chose to use ACS data alone, as data from other modes was not comparable and was of varying quality and completeness. DVRPC expects that as more data becomes available (for example, the region is doing more sidewalk traffic counts than ever before) it will use that data to inform its target setting approach in the future. The DVRPC representative indicated that more direction on whether using overlapping 5-year ACS data is the best approach would be useful for future target setting rounds. For PHED, DVRPC used its travel demand model. Its analysis included data on population and VMT, including data from PennDOT and New Jersey DOT (NJDOT), and National Transit Database data to establish a forecast. DVRPC noted the special challenge of messaging PHED, since it reflects only excessive delay, a concept with which many are unfamiliar.
For the mid-performance period, the Philadelphia regional partners opted not to adjust targets, mainly due to the pandemic. DVRPC suggested the forecasts from its travel demand model are no longer valid because travel patterns have significantly changed. It also experienced staffing capacity limitations due to the pandemic and therefore had fewer resources to devote to reexamining its targets and approach.

More Details: https://www.dvrpc.org/TIP/PA/pdf/PBPP.pdf

Metropolitan Washington Council of Governments (Washington, DC-VA-MD UZA). The Metropolitan Washington Council of Governments (MWCOG) led the target setting effort for the Washington, DC-VA-MD UZA. The regional partners analyzed non-SOV mode share using two approaches: one using the region's travel demand model to produce outputs and the other using the ACS trendline. MWCOG selected the midpoint of the results from its travel demand model forecast and the ACS trendline as the target. The ACS trend was essentially flat, and the model showed a 0.1 to 0.2% increase each year, resulting in a midpoint of about a 0.1% increase. MWCOG stated that analyzing each project planned in the TIP or long-range transportation plan (LRTP) for its influence on this measure would yield little value, mainly due to the minimal relative impact of each project. MWCOG staff did express interest, though, in learning whether and how other MPOs might be considering project-level impacts. MWCOG's travel demand model does, however, account for planned improvements such as new tolled express lanes. For the PHED measure, Maryland DOT and VDOT purchased INRIX data, which helped provide consistent data for the region. MWCOG used its travel demand model and applied the anticipated growth in congestion, utilizing the output AM peak hour VMT, to the trendline. Its mid-performance period analysis showed PHED levels fluctuating, and the agency is considering how to identify the cause.
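The midpoint reconciliation MWCOG described, splitting the difference between a flat ACS trend and a modestly growing model forecast, can be sketched in a few lines. All numbers below are illustrative placeholders, not the region's actual figures.

```python
def midpoint_target(acs_trend_value, model_value):
    """Average the ACS trendline projection and the travel demand model
    forecast, the reconciliation rule described in the interview."""
    return round((acs_trend_value + model_value) / 2, 2)

# Hypothetical non-SOV share (%): a flat ACS trend vs. a model
# projecting roughly 0.15 percentage points of growth per year over
# the 4-year performance period
baseline = 37.0
acs_projection = baseline               # essentially flat trend
model_projection = baseline + 0.15 * 4  # more optimistic model output
target = midpoint_target(acs_projection, model_projection)
```

Averaging the two sources is a simple way to hedge between an observed trend and a forecast that embeds planned improvements, without having to adjudicate which source is more credible.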
It is also working to better understand the data and tools from CATT Lab and INRIX, how they are changing, and how to integrate them with its own data and tools. This is where MWCOG is currently focusing its efforts, and it expects to change its target setting approach in the future. For 2022, MWCOG is hopeful that better data and a deconflicted timeline for developing and adopting a revised LRTP will allow for a more robust target setting process. In terms of process, for 2018, MWCOG's board was most interested in the PM1 measures, and therefore much of the attention was given to those at the expense of the other measures.

More Details: https://www.mwcog.org/assets/1/6/TPB_CMAQ_Performance_Plan_October_2018.pdf

North Carolina DOT (Charlotte, NC-SC UZA). North Carolina DOT took the lead on pulling together the relevant agencies for the Charlotte UZA, which included four MPOs (Cabarrus Rowan, Charlotte, Gaston-Cleveland-Lincoln, Rock Hill-Fort Mill) and two DOTs (North Carolina and South Carolina). The team of agencies relied on ACS trend data for non-SOV mode share and established a range based on the trend. The team also considered planned projects in the region that might have an impact on non-SOV travel but ultimately determined the impacts could not be adequately estimated to factor into the target. The chosen target represented a number at the lower end of the range, reflecting a desire to set a more conservative target for this first round. For PHED, the agencies looked at the two to three years of trend data available (although differences between NPMRDS v1 and v2 created issues). North Carolina DOT used the RITIS tool for this analysis and used it to establish a range of potential performance targets for the 4-year period. The team also considered planned
capital improvements and their impact on delay but ultimately decided none would have a significant impact on PHED. The agencies decided to be conservative for the first round of target setting, with the knowledge they could revisit the measures in 2020. For 2020, the target was not adjusted because the data was showing a decrease in delay that seemed to be an anomaly, and the COVID-19 pandemic had reduced participating agencies' capacity to devote more resources to analyzing that anomaly.

More Details: https://crtpo.org/PDFs/PerformanceBasedPlanning/NCDOT_Baseline_Performance_Period_Report.pdf

Southeastern Wisconsin Regional Planning Commission (Milwaukee, WI UZA). The Southeastern Wisconsin Regional Planning Commission (SEWRPC) holds regular meetings with the Wisconsin DOT (WisDOT) to coordinate on target setting. For this process, the University of Wisconsin-Madison's TOPS lab pulled in NPMRDS data, and WisDOT and SEWRPC conducted the analyses. For non-SOV mode share, SEWRPC used a combination of two analyses to set its targets. As with the other agencies interviewed, SEWRPC used ACS data to establish a trendline and projected it forward. It then conducted an analysis using the MPO's travel demand model, which accounts for planned improvements among other factors. The halfway point between the model result (a more optimistic number) and the ACS trend projection was chosen. In addition to this analysis, SEWRPC looked at projects in its fiscally constrained plan and identified those likely to be funded during the relevant period to establish an estimate, which it considered alongside the results of the ACS trendline and the model. Although the Milwaukee UZA did not hit its target, it fell within the margin of error, and the target was therefore found to be acceptable to maintain. For PHED, the travel demand model was used again. In this case, the model results seemed too optimistic for SEWRPC's and WisDOT's comfort.
For this reason, no adjustments were made for the 2020 Mid-Performance Period. SEWRPC and WisDOT continue to monitor the new data as it becomes available and are actively considering how to enhance the target setting approach going forward. The biggest challenge SEWRPC and its partners faced in setting targets for the CMAQ measures was the NPMRDS dataset. Modeling was used to address the gaps in the data. FHWA also provided recommendations on how to address the data limitations. Although SEWRPC used an optional data tool add-on, the agency indicated it may not do so in the future now that it has been through the process and understands how to set targets.