
Pilot Testing of SHRP 2 Reliability Data and Analytical Products: Southern California (2014)

Chapter: 5.2 Limitations of the C11 Reliability Analysis Tool

Figure 5.2. Project C11 Reliability Analysis Tool result summary screen.

5.2 Limitations of the C11 Reliability Analysis Tool

In the course of tool testing, the study team observed a number of tool limitations that affect both the accuracy of results and ease of use. This section discusses the tool limitations to give the SHRP 2 Reliability program a sense of the tool fixes needed to support SHRP 2 implementation. The results of the tool testing are presented after the tool limitations.

Input Limitations

Difficult to Calibrate

The tool and its associated user's guide (Cambridge Systematics et al. 2013a) do not provide instructions on how to calibrate the tool to real-world conditions. Such instructions are necessary for tool implementation, since tool calibration is the first step of any real-world application. As shown in an example in the "Baseline Condition Estimation of the C11 Reliability Analysis Tool" section (Section 5.3), the study team eventually discovered that the tool can be calibrated to the observed conditions on the facility by adjusting the peak capacity and the hourly distribution of demand.

ADJUSTING PEAK CAPACITY

Although the study team was able to calibrate the tool by adjusting the peak capacity for both the I-210 and I-5 facilities, the peak capacities used were unrealistically low. For example, in order to calibrate the tool, capacities as low as 1,300 vehicles per hour per lane were used, which is well below the known flow rate at capacity for these two facilities (but probably indicative of throughput during congestion). For purposes of technical integrity, the tool should provide a clear and technically sound method of calibration that does not rely on unrealistic peak capacities.

ADJUSTING HOURLY DISTRIBUTION OF DEMAND

The tool's interface does not allow the user to input the hourly distribution of demand. Only after going into a hidden password-protected tab was the study team able to discover the default distribution assumed in the tool and adjust it to match the actual volumes found on the facilities. The default distribution in the tool assumes bidirectional demand with an a.m. and a p.m. peak. Neither the default distribution nor the assumption of bidirectional demand was accurate for the facilities tested by the team. As described in Chapter 3, both facilities exhibit congestion during both peak periods, with some directions reporting higher congestion and unreliability. For purposes of accuracy, the tool should allow users to easily view and adjust the hourly distribution of demand as needed, in a user input section that does not require a password.

DIFFICULT TO UNDERSTAND SOME INPUT FIELDS

The tool's user interface is generally easy to understand, but some input fields may confuse users because some values must be entered for only one direction of travel while others must be entered for both directions. For example, Figure 5.3 shows that the No. of Lanes and Peak Capacity fields specify that one-way values should be entered. Through trial and error, the team discovered that the Current AADT field, for which the tool provides no instructions on the number of directions, requires an aggregated average annual daily traffic (AADT) for both directions. For ease of use and accuracy in input data, the tool should include clear instructions on whether fields require one-directional or bidirectional data.
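As a concrete illustration of the two issues just described, the bidirectional Current AADT input and the hourly demand distribution hidden on the protected tab, the sketch below shows one way the inputs might be prepared from directional counts. It is illustrative only: the counts, function names, and direction labels are placeholders, not observed I-210 or I-5 data and not part of the C11 workbook itself.

```python
# Illustrative preparation of two C11 inputs discussed above. The counts and
# function names are placeholders, not observed I-210 or I-5 data and not part
# of the C11 workbook itself.

def bidirectional_aadt(aadt_by_direction: dict[str, float]) -> float:
    """Current AADT field: the tool expects the sum over both directions."""
    return sum(aadt_by_direction.values())

def hourly_demand_shares(hourly_counts: list[float]) -> list[float]:
    """Normalize 24 observed hourly counts into shares of daily demand,
    which could replace the tool's default bidirectional profile."""
    if len(hourly_counts) != 24:
        raise ValueError("expected one count per hour of the day")
    total = sum(hourly_counts)
    return [count / total for count in hourly_counts]

print(bidirectional_aadt({"EB": 98_000, "WB": 102_000}))   # -> 200000

# Placeholder counts for a facility with both a.m. and p.m. peaks.
observed = [400, 300, 250, 300, 900, 2500, 4200, 4500, 3800, 3200, 3000, 3100,
            3200, 3300, 3600, 4200, 4600, 4700, 3900, 2900, 2200, 1700, 1100, 700]
print(round(sum(hourly_demand_shares(observed)), 6))       # -> 1.0
```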

Figure 5.3. Unclear input directions in the C11 Reliability Analysis Tool.

INFLEXIBLE ANALYSIS PERIODS

The tool's user interface includes preset analysis periods from which to choose, but users may need to analyze a different time period based on facility characteristics, organizational standards, or other factors. For example, the a.m. or p.m. peak periods along a particular facility may not conform to the analysis periods provided in the tool. Alternatively, a user may want to adjust the analysis period to be consistent with other analyses. For the Southern California pilot site, the study team tried to match analyses to ones previously conducted in microsimulation models for the CSMPs, which used specific time periods for each facility. The C11 tool should allow users to choose their own start and end times for analysis.

CONFUSION IN SETTING TIME HORIZON

The user sets the time horizon for the analysis as part of the scenario inputs. The time horizon is entered as the number of years between the current year and the future year. For example, if the current year is 2014, entering 20 years for the time horizon results in a future year of 2034. This terminology is inconsistent with the number of years in a benefit-cost life cycle: the previous example has a time horizon of 20 years but produces beginning and ending years that span a 21-year life cycle.
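To make the year arithmetic concrete, the short sketch below reproduces the 2014/2034 example from the text; the function name and return structure are illustrative and not part of the C11 workbook.

```python
# Illustrative only: shows why a 20-year time horizon spans a 21-year life cycle
# when both the current (base) year and the future year are counted.

def analysis_years(current_year: int, time_horizon: int) -> tuple[int, int, int]:
    """Return (base year, future year, number of years in the life cycle)."""
    future_year = current_year + time_horizon
    life_cycle_years = future_year - current_year + 1   # both endpoints counted
    return current_year, future_year, life_cycle_years

print(analysis_years(2014, 20))   # -> (2014, 2034, 21)
```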

The C11 tool allows the user to enter different time horizons for each scenario in a C11 workbook. The tool appears to calculate the benefits correctly, but the future year is labeled according to the time horizon of the last scenario. This mismatch between calculations and labeling can be a source of confusion. A best practice would be to select a single time horizon and use it for all scenarios in a C11 workbook. The calculation page, which is typically hidden from the user, includes a global setting for the time horizon; however, changing the global setting does not change the time horizon in an analysis.

The C11 tool automatically selects the current year according to the computer's clock, and the user is unable to change this setting. As a result, a user who wants to analyze a project to be constructed in a future year would need to enter the scenario input data for that future year while ignoring the labeling on the output page.

TRAVEL TIME UNIT COSTS AND AVERAGE VEHICLE OCCUPANCY

The travel time unit costs appear to be on a per vehicle basis. Neither the C11 user's guide (Cambridge Systematics et al. 2013a) nor the technical documentation (Cambridge Systematics et al. 2013b) refers to average vehicle occupancy (AVO), the average number of people per vehicle, and the C11 tool does not provide an input for the average number of occupants in personal vehicles on the facility. As a result, the travel time cost entered into the tool should already account for AVO: if an agency estimates travel time costs on a per person basis, those costs should be multiplied by the AVO before being entered as the travel time costs in the C11 tool. In the future, the C11 tool should include an AVO input.

VALUE OF TIME LABELING

The C11 tool refers to the value of time associated with trucks as the "commercial value of time." This nomenclature could be confused with on-the-clock travel, which includes automobiles used for business purposes. At the top of the detailed results page, the "commercial value of time" is inconsistently called the "truck unit cost." The tool should refer consistently to the "truck value of time."

DESCENDING MILE MARKERS

The C11 tool assumes that the user enters a beginning milepoint smaller than the ending milepoint. If the user enters a beginning milepoint larger than the ending milepoint, the tool produces negative results. Because agencies number mile markers sequentially in one direction, the markers are in descending order in the opposite direction. The tool should automatically take the absolute value of the difference between the milepoints entered.
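The two workarounds described above, folding AVO into the travel time cost and taking the absolute value of the milepoint difference, amount to simple arithmetic. The sketch below illustrates both; the unit values, occupancy, and function names are assumptions for illustration, not values from the C11 tool or its documentation.

```python
# Illustrative only: the unit values and function names below are assumptions,
# not values from the C11 tool or its documentation.

def travel_time_cost_per_vehicle(cost_per_person_hour: float, avo: float) -> float:
    """Convert a per-person value of time into the per-vehicle cost the tool expects."""
    return cost_per_person_hour * avo

def facility_length(begin_milepoint: float, end_milepoint: float) -> float:
    """Section length that stays positive even when mile markers descend."""
    return abs(end_milepoint - begin_milepoint)

# Example: $15.00/person-hour at an AVO of 1.3 persons/vehicle -> $19.50/vehicle-hour
print(travel_time_cost_per_vehicle(15.00, 1.3))
# Example: milepoints entered in descending order still give a positive length
print(facility_length(32.5, 24.0))   # -> 8.5 miles
```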

Output Limitations

Difficult to Correlate Benefit Results to TTI

Although the results are generally easy to understand, the tool does not specify which set of reliability data is used to calculate the benefits. The study team eventually discovered, after reviewing the C11 technical documentation (Cambridge Systematics et al. 2013b), that the benefits are based on the 50th and 80th percentile travel time indices (TTIs). The version of the C11 tool tested by the study team did not report 50th percentile TTI results; only the mean, 80th percentile, and 95th percentile TTI values were reported, which did not allow the team to correlate TTI values to the benefit values reported by the tool. An updated version of the tool (not tested by the pilot study team) does include the 50th percentile, allowing users to calculate their own monetized reliability benefits from the tool's reliability estimates and helping ensure that the calculations are consistent with their own benefit-cost framework.

Multiple Definitions of Recurring Delay

The results summary (see Figure 5.2) in the version of the C11 tool tested by the pilot study team used inconsistent definitions of recurring delay. When reporting the recurring delay associated with "Total Annual Weekday Delay (veh-hrs)," the C11 Reliability Analysis Tool defined recurring delay as the travel time (estimated using speed-volume planning relationships) greater than the free-flow travel time. This recurring delay ignores the travel times associated with incidents or other sources of travel time variability. The relationship is shown on page 15 of the C11 technical documentation (Cambridge Systematics et al. 2013b) in the following equations:

Equation 1: Recurring delay rate = t − (1 / FreeFlowSpeed)

Equation 2: t = [1 + 0.1225 (v/c)^2] / FreeFlowSpeed, for v/c ≤ 1.40,

where
t = travel rate (hours per mile)
v = hourly volume
c = capacity (for an hour)
Note: v/c should be capped at 1.40.

In comparison, the recurring delay reported in the "Total Annual Weekday Congestion Costs ($)" included more delay than reported earlier in the vehicle-hours of delay and likely included a portion of the incident-related delay. The C11 tool calculates the cost of unreliability as the monetized delay associated with the difference between the 50th and 80th percentile TTIs; for this calculation, the recurring delay is the 50th percentile TTI compared to the free-flow travel time. The C11 tool estimates the 50th percentile TTI using the data-poor equations from the SHRP 2 L03 project, and these estimates include incidents.
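To show how the two definitions differ in practice, the following is a minimal sketch of the documented recurring-delay relationship (Equations 1 and 2 above) alongside a rendering of the unreliability-cost definition described in the text. The monetization step, input values, and function names are illustrative assumptions, not the tool's internal calculations.

```python
# Minimal sketch of the recurring-delay relationship from the C11 technical
# documentation (Equations 1 and 2 above). The unreliability-cost function is
# an illustrative rendering of the definition described in the text (delay
# between the 50th and 80th percentile TTIs, monetized), not the tool's own code.

def travel_rate(v: float, c: float, free_flow_speed: float) -> float:
    """Equation 2: travel rate t in hours per mile; v/c is capped at 1.40."""
    vc = min(v / c, 1.40)
    return (1.0 + 0.1225 * vc ** 2) / free_flow_speed

def recurring_delay_rate(v: float, c: float, free_flow_speed: float) -> float:
    """Equation 1: delay rate (hours per mile) above the free-flow travel rate."""
    return travel_rate(v, c, free_flow_speed) - 1.0 / free_flow_speed

def unreliability_cost(tti_50: float, tti_80: float, free_flow_hours: float,
                       annual_vehicles: float, value_of_time: float) -> float:
    """Monetized delay between the 50th and 80th percentile TTIs (illustrative)."""
    extra_hours_per_vehicle = (tti_80 - tti_50) * free_flow_hours
    return extra_hours_per_vehicle * annual_vehicles * value_of_time

# Example with assumed values: 1,800 veh/hr against a 2,000 veh/hr capacity
# at a 60 mph free-flow speed.
print(recurring_delay_rate(1800, 2000, 60))        # hours of delay per mile
print(unreliability_cost(1.15, 1.45, 0.25, 5.0e6, 20.0))
```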

