Suggested Citation:"4 APPLICATIONS." National Academies of Sciences, Engineering, and Medicine. 2014. Guide to Establishing Monitoring Programs for Travel Time Reliability. Washington, DC: The National Academies Press. doi: 10.17226/22614.


4 APPLICATIONS

This chapter presents a series of case studies that illustrate the application of many aspects of this guide. In particular, the case studies illustrate real-world examples of using a travel time reliability monitoring system (TTRMS) to quantify the effects of various sources of nonrecurrent congestion. The chapter also provides an overview of a range of use cases that further illustrate the potential applications of a TTRMS. Two appendices provide more detail: Appendix C, Case Studies, and Appendix D, Use Case Analyses.

CASE STUDIES

This section describes how the functional use cases, data collection and management procedures, and computational methodologies detailed in the previous chapters have been applied to data from various transportation systems across the United States. The chapter includes five case studies performed by the research team to demonstrate the approaches to travel time reliability monitoring described in this guide. The case studies were performed in San Diego, California; Northern Virginia; Sacramento–Lake Tahoe, California; Atlanta, Georgia; and New York/New Jersey. Figure 4.1 shows the case study locations. The five main case studies are followed by additional applications in other locations.

The goal of each case study is to illustrate how agencies apply best practices for monitoring system deployment, travel time reliability calculation methodology, and agency use and analysis of the system. To accomplish this goal, the team implemented

a prototype TTRMS at each of the five sites. These systems take in sensor data in real time from a variety of transportation networks, process these data inside a large data warehouse, and generate reports on travel time reliability to help agencies better operate and plan their transportation systems.

To complete the case studies, the research team needed to employ an existing monitoring system to facilitate data collection and analysis. The TTRMS in each case study is based on the existing Performance Measurement System (PeMS), a web-based software system developed for the state of California that collects traffic data from over 30,000 loop detectors every 30 seconds, filters and cleans the raw data, computes performance measures, and aggregates and archives them to enable detailed analysis. PeMS is a traffic data collection, processing, and analysis tool that extracts information from real-time intelligent transportation systems data, saves it permanently in a data warehouse, and presents it in various forms to users via the web. The use of PeMS for this project does not imply that PeMS is the only data management and measurement product that could be employed to complete this work, but it is the one the team chose to use for this project.

To complete the case studies outside California, the research team linked PeMS to various existing monitoring systems. Because PeMS can calculate many different performance measures, the requirements for linking it with an existing system depend on the features being used. PeMS needs to acquire both the roadway network information and equipment configuration metadata before traffic data can be stored in the database. PeMS has a strict equipment configuration framework that is described in the Travel Time Reliability Monitoring System resource document.

Figure 4.1. Case study locations. Map data © 2012 Google.
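The filter-clean-aggregate step that such a system performs on raw detector reports can be illustrated at toy scale. This is a hedged sketch, not PeMS code: the sample values, plausibility thresholds, and field names below are illustrative assumptions.

```python
from statistics import mean

# Hypothetical 30-second loop-detector reports: (vehicle count, occupancy fraction).
# None marks a missing report; the (300, 0.95) reading is an implausible spike.
raw_samples = [(12, 0.08), (14, 0.09), None, (300, 0.95), (13, 0.10), (11, 0.07)]

def clean(samples, max_count=50, max_occ=0.5):
    """Drop missing reports and physically implausible readings."""
    return [s for s in samples
            if s is not None and 0 <= s[0] <= max_count and 0.0 <= s[1] <= max_occ]

def aggregate(samples):
    """Collapse cleaned 30-second samples into one interval-level record."""
    if not samples:
        return None
    return {"flow": sum(c for c, _ in samples),            # total vehicles observed
            "avg_occupancy": mean(o for _, o in samples)}  # mean occupancy fraction

record = aggregate(clean(raw_samples))
```

In a real archive this aggregation would run per detector and per fixed interval, with the cleaned aggregates stored alongside flags recording how many raw samples were discarded.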

Different methodologies were applied and specific use cases were demonstrated in each case study based on each location's existing data and monitoring systems. Each case study consists of the following sections:

• Monitoring system
— Detection; and
— Management systems.
• Investigations
— System integration;
— Integration of sources of nonrecurrent congestion; and
— Other use cases.

System integration experiments relate to activities that occur before the development of a probability density function (PDF) for travel time reliability. System integration includes investigations into data integration considerations, comparison with probe data, and development of travel time reliability functions. The Northern Virginia, Sacramento–Lake Tahoe, Atlanta, and New York/New Jersey case studies include system integration experiments.

Integration of sources of nonrecurrent congestion experiments include both system integration aspects and use case demonstrations; they demonstrate specific use cases related to analyzing the seven sources of congestion. The San Diego, Sacramento–Lake Tahoe, Atlanta, and New York/New Jersey case studies include investigations of sources of nonrecurrent congestion.

Other use cases relate to the demonstration of specific use cases after a PDF has been created; these investigations demonstrate use cases for the various types of users described in Appendix D. The San Diego case study includes investigations of use cases that implement planning-based reliability tools.

SAN DIEGO, CALIFORNIA

This case study focused on using a mature reliability monitoring system in San Diego to illustrate the state of the art for existing practice.
Led by the San Diego Association of Governments (the metropolitan planning organization) and the California Department of Transportation (Caltrans), the San Diego region has developed one of the most sophisticated regional travel time monitoring systems in the United States. This system is based on an extensive network of sensors on freeways, arterials, and transit vehicles. It includes a data warehouse and software system for calculating travel times automatically. Regional agencies use these data in sophisticated ways to make operations and planning decisions. Figure 4.2 shows the study area for the San Diego case study.

Because this technical and institutional infrastructure was already in place, the team focused on generating a sophisticated reliability use case analysis. The rich, multimodal nature of the San Diego data presented numerous opportunities for

state-of-the-art reliability monitoring, as well as challenges in implementing guide methodologies on real data. The purpose of this case study was as follows:

• Assemble regimes and travel time PDFs (TT-PDFs) from individual vehicle travel times.
• Explore methods to analyze transit data from automated vehicle location and automated passenger count equipment.
• Demonstrate high-level use cases encompassing freeways, transit, and freight systems.
• Relate travel time variability to the seven sources of congestion.

Monitoring System

Detection

Caltrans District 11 encompasses San Diego and Imperial counties and the metropolitan area of San Diego. A variety of detection systems are used in the study area to monitor freeways, arterials, and the transit fleet. District 11 has 3,592 sensors, which are a mix of loop detectors and radar detectors, located at 1,210 locations on its freeways. District 11 also has 17 wireless vehicle sensors deployed to monitor intersection approaches on its arterials.

Figure 4.2. Study area for San Diego case study. Map data © 2012 Google.
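The first purpose listed above, assembling TT-PDFs from individual vehicle travel times, amounts in its simplest form to estimating an empirical distribution and reading percentiles from it. The sketch below uses invented travel times, a plain histogram, and the nearest-rank percentile method; it is a minimal illustration, not the regime-based methodology the case study itself applied.

```python
import math

# Hypothetical individual vehicle travel times (minutes) on one route.
travel_times = [12.1, 12.5, 12.3, 13.0, 12.8, 14.2, 12.4, 19.5, 12.6, 12.9,
                13.1, 12.2, 15.0, 12.7, 13.3]

def empirical_pdf(times, bin_width=1.0):
    """Histogram-based estimate of the travel time PDF (bin start -> probability)."""
    lo = min(times)
    bins = {}
    for t in times:
        b = int((t - lo) // bin_width)
        bins[b] = bins.get(b, 0) + 1
    n = len(times)
    return {lo + b * bin_width: count / n for b, count in sorted(bins.items())}

def percentile(times, p):
    """p-th percentile by the nearest-rank method."""
    s = sorted(times)
    k = max(1, math.ceil(p / 100 * len(s)))
    return s[k - 1]

pdf = empirical_pdf(travel_times)
planning_time = percentile(travel_times, 95)  # 95th-percentile "planning" time
```

The gap between the 95th-percentile and median travel times is one simple way such a distribution translates into a reliability statement for travelers.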

On the transit side, the San Diego Metropolitan Transit System is supplying data from its real-time computer-aided dispatch system into an archived data user service. To monitor its transit fleet, the transit system has equipped more than one-third of its bus fleet with automated vehicle location transponders and more than one-half of its fleet with automated passenger count equipment.

Management Systems

All Caltrans districts use PeMS for data and performance measure archiving and reporting. District 11 uses an arterial extension of PeMS (A-PeMS) to collect and store its arterial data. District 11 also uses a transit extension of PeMS (T-PeMS) to obtain schedule, automated vehicle location, and automated passenger count data from its existing real-time transit management system; compute performance measures based on these data; and aggregate and store them for further analysis.

Caltrans uses other management systems in conjunction with PeMS to operate its transportation network. For example, the California Highway Patrol's computer-aided dispatch system provides an automated incident data feed that is fed into PeMS in real time. Caltrans also keeps a nonautomated database of incidents through its Traffic Accident Surveillance and Analysis System (TASAS). TASAS data are incorporated into PeMS with a 2-year lag.

Investigations

System Integration of Transit Data

The biggest data challenge in this case study was processing the transit data, which are stored in a newly developed performance measurement system. This case study represented the first research effort to use these data and this system. The research team found that data quality is a major issue when processing transit data to compute travel times. Many of the records reported by equipped buses had errors that had to be programmatically filtered out.
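The kind of programmatic filtering described here can be sketched as a rule-based pass over stop-event records. The record layout, fault categories, and thresholds below are illustrative assumptions, not the actual San Diego rules.

```python
# Hypothetical bus stop-event records: stop identifier plus arrival and
# departure times in seconds. Faults mimicked here: a missing stop ID,
# a departure before arrival, and a timestamp that runs backward.
records = [
    {"stop": "A", "arrive": 100, "depart": 130},
    {"stop": None, "arrive": 200, "depart": 210},   # missing stop ID
    {"stop": "B", "arrive": 400, "depart": 395},    # departs before arriving
    {"stop": "B", "arrive": 420, "depart": 450},
    {"stop": "C", "arrive": 300, "depart": 320},    # clock ran backward
]

def filter_records(recs, max_dwell=600):
    """Keep only records that are internally and sequentially consistent."""
    good, last_depart = [], float("-inf")
    for r in recs:
        if r["stop"] is None:
            continue                      # unusable without a location
        if not (r["arrive"] <= r["depart"] <= r["arrive"] + max_dwell):
            continue                      # impossible dwell time
        if r["arrive"] < last_depart:
            continue                      # out of time order
        good.append(r)
        last_depart = r["depart"]
    return good

clean_records = filter_records(records)
```

As the text notes, a pass like this can discard a large share of the raw records, which is why a long historical timeline is needed before route-level reliability statistics become trustworthy.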
Assembling route-based reliability statistics from a drastically reduced subset of good data presented the next challenge. From this experience, the research team concluded that transit travel time reliability monitoring requires a robust data processing engine that can programmatically filter data to ensure that archived travel times are accurate. In addition, transit reliability analysis requires a long timeline of historical data, because typically only a subset of buses is monitored and a large percentage of the data points obtained will prove invalid.

Integration of Sources of Nonrecurrent Congestion

Freeways

The following use case relates to integration of sources of nonrecurrent congestion for users of the freeway system.

Freeway Use Case 1: Conducting offline analysis on the relationship between travel time variability and the seven sources of congestion. This use case is primarily for the system planner and roadway manager user types. To perform this analysis, methods were developed to create TT-PDFs from large data sets of travel times that occurred

under each congested condition. This use case analysis illustrates one potential method for linking travel time variability with the sources of congestion. In this case study, the research team opted to pursue a less sophisticated but more accessible approach than had previously been developed because it provides meaningful and actionable results without requiring agency staff to have advanced statistical knowledge. The application of the methodology to the two study corridors in San Diego revealed key insights into how this type of analysis should be performed, as detailed in the San Diego case study resource document.

Transit

The following use case relates to integration of sources of nonrecurrent congestion for users of the transit system.

Transit Use Case 1: Conducting offline analysis on the relationship between travel time variability and the seven sources of congestion. This use case serves a function primarily used by transit planners and operators. This use case analysis, described further in the San Diego case study resource document, illustrates one method for exploring the relationship between travel time variability and the seven sources of congestion. The application of the methodology to the three San Diego routes revealed key insights into how this type of analysis should be performed.

Other Use Cases

This case study demonstrated an additional five high-level use cases that broadly encompass reliability information of interest to various users of the transportation system. The specific use cases were developed to be well suited for demonstration using the San Diego data sources. The use cases apply to roadway, transit, and freight users.

Freeways

The following use cases related to freeway system users were demonstrated.

Freeway Use Case 2: Using planning-based reliability tools to determine departure time and travel time for a trip.
This use case represents a function that would be used by drivers. The use case demonstration showed that the route that is the fastest on average is not always the route that consistently gets travelers to their destination on time.

Freeway Use Case 3: Combining real-time and historical data to predict travel times in real time. This use case is primarily for the operations manager user type. This use case demonstration, which is described in the San Diego case study resource document, shows that it is possible to provide predictive travel time ranges and expected near-term travel times by combining real-time and archived travel time data. The travel time predictions for both study routes proved similar to the actual travel times measured on the sample day.

Transit

The following use cases related to transit system users were demonstrated.

Transit Use Case 2: Using planning-based reliability tools to determine departure times and travel times for a trip. This use case primarily serves the transit passenger user type. This use case demonstration resulted in departure times and corresponding planning times for two bus routes. The methodology is described in detail in the San

Diego case study resource document. The demonstration of this use case concluded that the most direct analysis would be achieved by restricting the date range to dates with identical schedules.

Transit Use Case 3: Analyzing the effects of transfers on the travel time reliability of transit trips. This use case primarily serves the transit operator user type. It was concluded that unusually long in-vehicle travel times can have a larger effect on traditional reliability measures than missed transfers, potentially hiding the existence of missed transfers on a route.

Freight

The following use case related to freight system users was demonstrated.

Freight Use Case: Using freight-specific data to study travel times and travel time variability across an international border crossing. This use case represents a functionality that would primarily be used by freight service providers. This use case demonstration represented an initial use of truck travel time data from the Otay Mesa border crossing to evaluate travel time reliability for different aspects of a border crossing. By understanding where the bottlenecks are in the border crossing process and how they affect travel times and reliability, managers can begin to take steps to improve operations.

NORTHERN VIRGINIA

This case study provides an example of a more traditional transportation data collection network operating in a mixture of urban and suburban environments. Northern Virginia was selected as a case study site because it provided an opportunity to integrate a reliability monitoring system into a preexisting, extensive data collection network. The focus of this case study was to describe the required steps and considerations for integrating a TTRMS into existing data collection systems.
The purpose of this case study was as follows:

• Describe the data acquisition and processing steps needed to transfer information between the existing system and PeMS.
• Demonstrate methods to ensure the data quality of infrastructure-based sensors by comparison with probe vehicle travel times, using the procedures described in Chapter 3.
• Develop multistate travel time reliability distributions from traffic data.

The study area for this investigation comprised I-66 from Manassas to Arlington, Virginia, and I-395 from Springfield to Arlington, Virginia. Figure 4.3 shows the study corridors for the Northern Virginia case study.

Monitoring System

The Northern Virginia (NOVA) District of the Virginia Department of Transportation (VDOT) includes over 4,000 miles of roadway in Fairfax, Arlington, Loudoun, and Prince William counties. Traffic operations in the district are managed from the Northern Virginia Traffic Operations Center, which manages more than 100 miles

of instrumented roadways, including high-occupancy vehicle (HOV) facilities on I-95/I-395, I-295, I-66, and the Dulles Toll Road. The Northern Virginia Traffic Operations Center has deployed a range of technologies to support its activities, including cameras, dynamic message signs, ramp meters, and lane control signals.

Figure 4.3. Study area for Northern Virginia case study. Map data © 2012 Google.

Detection

In Northern Virginia, VDOT has deployed an extensive network of point-based detectors (primarily inductive loops and radar-based detectors), which were described in Chapter 2, to facilitate real-time collection of volume, occupancy, and (limited) speed data on freeways. A key component of the case study is ensuring the data quality of infrastructure-based sensors, as described in Chapter 3. To monitor regional travel conditions, the NOVA District collects data from a range of sources on area freeways, including multiple types of traffic sensors and third parties such as INRIX, Trichord, and Traffic.com. The Northern Virginia case study resource document contains details about the types of traffic sensors and their specific locations.

Management Systems

Northern Virginia's freeway management system is operated by VDOT staff located at the Traffic Operations Center. Staff members use the freeway management system to monitor and manage traffic, respond to incidents, and disseminate traveler information. In addition to managing freeway-related operations, VDOT staff use the NOVA Smart Traffic Signal System to manage surface street and arterial systems in the region, monitoring, controlling, and maintaining over 1,000 traffic signals within their jurisdiction.
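Point detectors that report volume and occupancy but only limited speed data, as described under Detection above, are often supplemented by estimating speed from those two quantities using an assumed effective vehicle length (the "g-factor"). The function below is a generic sketch of that standard identity with illustrative numbers; it does not reproduce VDOT's actual processing or calibration values.

```python
def estimated_speed_mph(vehicles_per_interval, occupancy, interval_s=30,
                        effective_length_ft=20.0):
    """Speed (mph) ~ flow / density, with density derived from occupancy
    and an assumed effective vehicle length (the g-factor)."""
    if occupancy <= 0:
        return None                      # nothing observed; speed undefined
    flow_vph = vehicles_per_interval * 3600 / interval_s       # vehicles/hour
    density_vpm = occupancy * 5280 / effective_length_ft       # vehicles/mile
    return flow_vph / density_vpm

# Illustrative sample: 15 vehicles in a 30-second interval at 10% occupancy.
speed = estimated_speed_mph(vehicles_per_interval=15, occupancy=0.10)
```

Because the g-factor is an assumption about average vehicle length, poor calibration of it is one common reason single-loop speed estimates diverge from ground truth, which is relevant to the probe comparisons discussed below.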

Investigations

System Integration

PeMS Configuration. For the purposes of this case study, data from NOVA's data collection network and management system were integrated into a developed archived data user service and TTRMS. The steps and challenges encountered in enabling the information and data exchange between these two large and complex systems are described in detail in the Northern Virginia case study resource document. The goal of this experiment was to provide agencies with a real-world example of the resources needed to accomplish data collection and monitoring system integration, and the likely challenges that will be encountered when procuring a monitoring system.

NOVA equipment configuration information was obtained from an XML file posted on the Regional Integrated Transportation Information System (RITIS) website. The issues with fitting the data into the PeMS configuration related to conflicting terminologies, information required by PeMS that was missing from the configuration file, and equipment types not supported by PeMS. The Northern Virginia case study resource document describes these issues in more detail, as well as the metadata quality control steps used to insert NOVA configuration information into PeMS.

Configuring PeMS to receive NOVA data helped define the requirements for complex traffic systems integration and illustrate what agencies can do to facilitate the process of implementing reliability monitoring. The process of fully integrating the NOVA data with PeMS took several weeks. Agencies interested in acquiring PeMS or a similar system can take steps to make this integration go more smoothly and quickly. First, it is important that the implementation and maintenance of a traffic data collection system be carried out with a broad audience in mind.
Often, increasing access to data outside an organization can help to further agency goals; for example, providing data to mobile application developers can help agencies distribute information in a way that increases the efficiency of the transportation network. Another way that agencies can facilitate the distribution of data from their data collection system is by establishing one or more data feeds. Maintaining multiple data feeds can be a challenge. If agencies want to provide a feed of processed data, it will save resources in the long run to document the processing steps performed on the data. This will allow implementers of external systems to evaluate them and undo them, if needed.

Aside from the processing documentation, maintaining clear documentation on the format of data files and units of data will greatly facilitate the use of data outside of the agency. Documentation on the path of data from a detector through the agency's internal systems can also be of value to contractors and other external data users. Clearly explaining this information in a text file minimizes time-consuming back-and-forth communication between agency staff and contractors and prevents inaccurate assumptions from being made.
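As a sketch of what such feed documentation might look like in machine-readable form, the snippet below writes a small JSON descriptor recording the file format, units, and the ordered processing steps applied to the data. Every field name and rule shown is illustrative, not a prescribed standard.

```python
import json

# A minimal, hypothetical metadata descriptor to accompany a processed data feed.
# Documenting the processing steps lets external implementers evaluate them and,
# if needed, undo them; documenting format and units prevents wrong assumptions.
feed_metadata = {
    "feed": "freeway_speeds_5min",
    "format": "CSV: station_id, timestamp_utc, speed, volume",
    "units": {"speed": "mph", "volume": "vehicles per 5 minutes"},
    "processing_steps": [
        {"step": 1, "name": "quality_filter", "rule": "drop samples with occupancy > 100%"},
        {"step": 2, "name": "imputation", "rule": "fill gaps < 15 min by linear interpolation"},
        {"step": 3, "name": "aggregation", "rule": "average 30-s samples to 5-min values"},
    ],
}

print(json.dumps(feed_metadata, indent=2))
```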

Probe Vehicle Comparisons

The team performed a quality control procedure to better understand the implications of the data quality issues on travel times. In particular, the team wanted to know how well the probe data aligned with the traffic speed and travel time estimates provided by the sparsely deployed point-based detectors. Probe vehicle runs were conducted along I-66 to amass ground-truth data that could be compared with the sensor data. In addition to analyzing speed data, the team analyzed the differences between the travel times experienced by the probe vehicle during each trip versus the estimated travel times generated from the sensor speeds. It was determined that the steadiness of the travel time estimates from the sensors is not ideal for computing travel time reliability, which relies on the ability of the system to detect variability in traffic conditions over time. As a result, it is highly unlikely that these sensors would provide accurate travel times under most congested conditions.

The research team's analysis of the data available from these sensors has yielded findings of potential interest to a variety of agencies, particularly those agencies facing maintenance and calibration issues associated with older sensor systems, as well as those agencies with more sparsely spaced spot sensors. Overall, five primary factors were identified that accounted for differences between the probe vehicle data and speeds or estimated travel times generated based on VDOT sensor data. These factors are described in the Northern Virginia case study resource document. Public agency staff should consider these factors when making decisions concerning the deployment of new data collection infrastructure and the maintenance and expansion of existing systems.
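The comparison described above can be sketched as follows: a route travel time is estimated from spot-sensor speeds by assuming each sensor's speed holds over its whole segment, and the result is compared with the probe-measured time. The segment lengths, speeds, and probe time below are illustrative values, not measurements from the I-66 runs.

```python
def sensor_travel_time(segment_lengths_mi, spot_speeds_mph):
    """Estimate route travel time (minutes) from point-sensor speeds by assuming
    each sensor's speed applies over its entire segment."""
    return sum(60.0 * length / speed
               for length, speed in zip(segment_lengths_mi, spot_speeds_mph))

def percent_error(estimated, ground_truth):
    """Signed percentage difference of the estimate from the probe ground truth."""
    return 100.0 * (estimated - ground_truth) / ground_truth

lengths = [1.2, 0.8, 1.5]    # hypothetical segment lengths (mi)
speeds = [60.0, 30.0, 55.0]  # hypothetical speed at the sensor in each segment (mph)
probe_minutes = 4.5          # hypothetical travel time measured on a probe run

estimate = sensor_travel_time(lengths, speeds)
print(round(estimate, 2), round(percent_error(estimate, probe_minutes), 1))
```

Repeating this comparison over many probe runs is what exposes the kind of systematic disagreement between spot-sensor estimates and experienced travel times that the team observed.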
Travel Time Reliability Using Multiple Regimes

Because of the type of data available in this case study and previous investigations in the I-66 corridor, the research team elected to experiment with travel time reliability monitoring ideas that are being developed in SHRP 2 Project L10, Feasibility of Using In-Vehicle Video Data to Explore How to Modify Driver Behavior that Causes Nonrecurring Congestion. Project L10 researchers are experimenting with a multistate travel time reliability modeling framework using mixed-mode normal distributions to represent the PDFs of travel time data from a simulation model of eastbound I-66 in Northern Virginia. The team in this case study adopted that technique and applied it to the travel times calculated from the freeway loop detectors on eastbound I-66.

The goal of this study was to generate, for each hour of the day, two outputs: the percentage chance that the traveler would encounter a certain condition and the average and 95th percentile travel times for each condition. The methodology to answer these questions and the results of the analysis are described in the Northern Virginia case study resource document.

The methodological findings of this investigation are that multistate normal distribution models can approximate travel time distributions generated from loop detectors better than normal or lognormal distributions. During the peak hours on a congested facility, three states are generally sufficient to balance a good model (distribution) fit with the need to generate information that can be easily communicated to interested parties. During off-peak hours, two states typically provide a reasonable model or

distribution fit. The outputs of this method can inform travelers of the percentage chance that they will encounter moderate or severe congestion and, if they do, what their expected and 95th percentile travel times will be.

SACRAMENTO–LAKE TAHOE, CALIFORNIA

This case study illustrates an example of a rural transportation network with a fairly sparse data collection infrastructure. The purpose of this case study was as follows:

• Examine vehicle travel time calculation and reliability using Bluetooth and radio frequency identification (RFID) reidentification systems.
• Filter out travel times from trip times collected by Bluetooth and electronic toll collection (ETC) devices.
• Explore the following four aspects of the ETC and Bluetooth reader (BTR) networks used in the Lake Tahoe region:
— Detailed locations and mounting structures;
— Lanes and facilities monitored;
— Percentage of traffic sampled; and
— Percentage and number of vehicles reidentified between readers.
• Quantify the effects of adverse weather- and demand-related conditions on travel time reliability using data derived from Bluetooth and ETC systems.

The study area for this case study comprises I-5 through Sacramento and the two highways leading east to Lake Tahoe: I-80 and US-50. Figure 4.4 shows the study corridors for the Sacramento–Lake Tahoe case study.

Monitoring System

This case study is located in Caltrans District 3, which encompasses the Sacramento metropolitan area and the Sacramento Valley and Northern Sierra regions of California. District 3 includes urban, suburban, and rural areas, including areas near Lake Tahoe where weather is a serious travel time reliability concern and there is heavy recreational traffic.
Two major Interstates pass through the district: I-80, which is oriented generally east–west; and I-5, which is oriented generally north–south along the west side of the Sacramento and San Joaquin Valleys. Other major freeway facilities include US-50, which connects Sacramento and South Lake Tahoe, and SR-99, which runs north–south along the east side of the Sacramento and San Joaquin Valleys.

Detection

Caltrans District 3 collects traffic data only along freeway facilities. It operates 2,251 point detectors (either radar detectors or loop detectors) located in more than 1,000 roadway locations in the district. To supplement the point detection network, District 3 has installed 32 nonrevenue-generating ETC readers (25 on I-80 and 7 on US-50) in rural portions of the Sierra Nevada Mountains near Lake Tahoe. Details

Figure 4.4. Study area for Sacramento–Lake Tahoe case study. Map data © 2012 Google.

about the locations of these ETC readers can be found in the Sacramento–Lake Tahoe case study resource document.

Management Systems

All Caltrans districts use PeMS for data and performance measure archiving and reporting as described above. Caltrans uses other management systems in conjunction with PeMS to operate its transportation network. The California Highway Patrol's computer-aided dispatch system provides an automated incident data feed that is fed into PeMS in real time. Caltrans also keeps a nonautomated database of incidents through its Traffic Accident Surveillance and Analysis System (TASAS). TASAS data are incorporated into PeMS with a 2-year lag.

Investigations

System Integration

Automated Vehicle Identification Sensor Deployment

The two sources of data used in support of this case study, based on the movement of vehicles equipped with ETC and Bluetooth devices, are extremely new and are not currently integrated into Caltrans District 3's existing PeMS data feed. Consequently, it was necessary to incorporate these data sets into project-specific instances of PeMS

for analysis as part of this project. The prerequisite activities, from data collection through monitoring system integration, for the ETC and Bluetooth data are described in the Sacramento–Lake Tahoe case study resource document.

This case study explored four aspects of the ETC and BTR networks used in the Sacramento–Lake Tahoe area: (1) detailed locations and mounting structures, (2) lanes and facilities monitored, (3) percentage of traffic sampled, and (4) percentage and number of vehicles reidentified between readers. As a whole, the study showed that vehicle reidentification technologies are suitable for monitoring reliability in rural environments, provided traffic volumes are high enough to generate a sufficient number of samples.

For rural areas with heavy recreational or event traffic, vehicle reidentification technologies such as ETC and Bluetooth can provide sufficient samples to calculate accurate average travel times at a fine granularity during high-traffic time periods. During these high-volume periods, vehicle reidentification technologies can be used to monitor travel times and reliability over long distances, such as between the rural region and nearby urban areas.

For agencies deploying vehicle reidentification monitoring networks, it is necessary to understand that the quality of the collected data is highly dependent on the decisions made regarding ETC and Bluetooth technologies during the design and installation process. For agencies leveraging existing networks, it is important to fully understand the configuration of the network before using its data.

Travel Time Calculation

Because of the significant amounts of Bluetooth-based travel time data available for analysis, the research team elected to focus its methodological efforts on this data set rather than on data generated by the ETC-based system.
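Aspects (3) and (4) of the network exploration above reduce to a simple computation over reader logs: the share of total traffic that carried a detectable device past the upstream reader, and the share of those devices seen again downstream. A minimal sketch, with illustrative inputs (the device IDs and volume are invented, and real logs would also need time-window constraints):

```python
def reidentification_stats(upstream_ids, downstream_ids, upstream_volume):
    """Percentage of traffic sampled at the upstream reader, and the percentage
    and count of sampled devices reidentified at the downstream reader."""
    sampled = set(upstream_ids)
    matched = sampled & set(downstream_ids)
    pct_sampled = 100.0 * len(sampled) / upstream_volume
    pct_reidentified = 100.0 * len(matched) / len(sampled)
    return pct_sampled, pct_reidentified, len(matched)
```

Tracking these two percentages by time of day shows whether a reader pair yields enough matches to support reliable travel time statistics during the periods of interest.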
The primary goal of BTR-based data analysis is to characterize segment travel times between BTRs based on the reidentification of observations derived from unique mobile devices. Generally, the data processing procedures associated with the calculation of BTR-to-BTR travel times can be broadly broken down into three processes, which are discussed in detail in the Sacramento–Lake Tahoe case study resource document: (1) identification of passage times, (2) generation of passage time pairs, and (3) generation of segment travel time histograms.

The various methodological approaches and processes for estimating ground-truth segment travel times based on Bluetooth data that were evaluated for this case study are described in the Sacramento–Lake Tahoe case study resource document. A number of factors were identified that influence travel time reliability and guided the development of methods for processing reidentification observations and calculating segment travel times. The results show that smart filtering and processing of Bluetooth data to better identify likely segment trips increase the quality of calculated segment travel time data. This approach helps preserve the integrity of the data set by retaining as many points as possible and basing decisions to discard points on the physical characteristics of the system rather than on their statistical qualities.
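A minimal sketch of processes (1)–(2), with a physically based filter in the spirit described above: device passage times at two readers are paired, and a pair is kept only if the implied speed over the known segment length is plausible. The data layout and speed bounds are simplifying assumptions for illustration, not the project's actual filtering rules.

```python
def segment_travel_times(passages_a, passages_b, length_mi,
                         min_speed_mph=5.0, max_speed_mph=90.0):
    """Pair passage times (seconds) keyed by device ID at readers A and B, and
    keep only travel times that are physically plausible for the segment."""
    t_max = 3600.0 * length_mi / min_speed_mph  # slowest plausible traversal
    t_min = 3600.0 * length_mi / max_speed_mph  # fastest plausible traversal
    times = []
    for device, t_a in passages_a.items():
        t_b = passages_b.get(device)
        if t_b is not None and t_min <= t_b - t_a <= t_max:
            times.append(t_b - t_a)
    return times
```

Discarding a match because it implies an impossible speed (a stopped vehicle, or a device re-detected on a later trip) rests on the physical characteristics of the segment rather than on statistical outlier tests, which is the retention principle noted above. Process (3) is then just a histogram of the returned list.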

Privacy Considerations

For either of the data collection technologies described in this guide to be successful over the long term, safeguards must be put in place to ensure that the privacy of individual drivers being sampled is protected. It is recommended that any probe data collection program implemented by public agencies or private sector companies on their behalf adhere to a predetermined set of privacy principles (e.g., ITS America's Fair Information and Privacy Principles) aimed at maintaining the anonymity of specific users. In addition, any third-party data provider working for a public agency to implement a travel time data collection solution based on either of the technologies described in this case study should be required to submit an affidavit indicating that it will not use data collected on the agency's behalf in an inappropriate manner. Applying a one-way cryptographic hash to personally identifiable information is one approach to these issues.

Integration of Sources of Nonrecurrent Congestion

Effects of Weather- and Demand-Related Conditions

The purpose of this use case was to quantify the impact of adverse weather- and demand-related conditions on travel time reliability using data derived from the case study's Bluetooth and ETC-based systems deployed in rural areas. To examine travel time reliability within the context of this use case, methods were developed to generate PDFs from large quantities of travel time data representing different operating conditions. To facilitate this analysis, travel time and flow data from ETC readers deployed on I-80 westbound and BTRs deployed on US-50 eastbound and westbound were obtained from PeMS and compared with weather data from local surface observation stations.
PDFs were subsequently constructed to reflect reliability conditions along these routes during adverse weather conditions, as well as according to time of day and day of week. The PDFs of travel times under different operating conditions consistently demonstrated the unreliability associated with low visibility, rain, and travel under high-demand conditions.

ATLANTA, GEORGIA

The team selected the Atlanta metropolitan region to provide an example of a mixed urban and suburban site that primarily relies on video detection cameras for real-time travel information. The main objectives of the Atlanta case study were as follows:

• Demonstrate methods to resolve integration issues by using real-time data from Atlanta's traffic management system for travel time reliability monitoring.
• Compare probe data from a third-party provider with data reported by agency-owned infrastructure.
• Fuse the regime estimation and nonrecurrent congestion analysis methodologies to characterize the reliability impacts of nonrecurrent congestion.

Figure 4.5 shows the study corridors investigated in the Atlanta case study.

Monitoring System

Detection

In the Atlanta region, the Georgia Department of Transportation (GDOT) collects data from over 2,100 roadway sensors, which include a mix of video detection sensors and radar detectors. Both types of sensors consist of single devices that monitor traffic across multiple lanes. The majority of active sensors monitor freeway lanes, with some limited coverage of conventional highways. Sensors in the active network are manufactured by four vendors. In general, the different types of sensors are divided up by freeway. The Atlanta case study resource document provides more details about the sensor vendors and the location of active mainline sensors in the GDOT network categorized by manufacturer.

To deepen the case study analysis and explore alternative data sources, the research team acquired a parallel probe traffic data set provided by NavTeq. The data set covers the entirety of the I-285 ring road and is reported by traffic message channel ID. One use case of this case study focuses on comparing probe data from a third-party provider with data reported by agency-owned infrastructure.

Figure 4.5. Study area for Atlanta case study. Map data © 2012 Google.

Management Systems

GDOT monitors traffic in the Atlanta metropolitan area in real time through Navigator, its advanced traffic management system (ATMS). The Transportation Management Center (TMC), located in Atlanta, is the headquarters and information clearinghouse for Navigator. GDOT's traffic management system integrates with traffic sensors, CCTVs, changeable message signs, ramp meters, weather stations, and highway advisory radio. Navigator was initially deployed in metropolitan Atlanta in preparation for the 1996 Summer Olympic Games.

Navigator collects lane-specific volume, speed, and occupancy data in real time and stores the data in a database table for 30 minutes. Every 15 minutes, the raw Navigator traffic data samples are aggregated up to lane-specific 15-minute volumes, average speeds, and average occupancies and are archived for each detector station. The data are not filtered or quality controlled before being archived.

Aside from the traffic data, Navigator also maintains a historical log of incidents. When the TMC receives a call about an incident, TMC staff log it as a potential incident in Navigator until it can be confirmed through a camera or multiple calls. Once the incident has been confirmed, its information is updated in Navigator to include the county, type of incident, and estimated duration. This incident information is archived and stored.

For the purposes of this case study, data from GDOT's Navigator system were integrated into PeMS, a developed archived data user service and TTRMS. Two aspects of the Navigator framework presented major challenges for incorporating the traffic data into PeMS. First, the frequency of data reporting differed for different device types; and second, many video detection system device data samples were missing. One experiment of this case study focused on resolving these integration issues to ensure data quality.
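The 15-minute rollup described above, summing volumes and averaging speeds and occupancies per lane, can be sketched as follows. The tuple layout and 900-second binning are illustrative rather than Navigator's actual schema.

```python
from collections import defaultdict

def aggregate_15min(samples):
    """samples: (station, lane, timestamp_s, volume, speed, occupancy) tuples.
    Returns {(station, lane, bin_start_s): (total_volume, avg_speed, avg_occ)},
    mirroring a lane-specific 15-minute rollup of raw detector samples."""
    bins = defaultdict(list)
    for station, lane, t, volume, speed, occupancy in samples:
        bins[(station, lane, t - t % 900)].append((volume, speed, occupancy))
    out = {}
    for key, rows in bins.items():
        n = len(rows)
        out[key] = (sum(r[0] for r in rows),
                    sum(r[1] for r in rows) / n,
                    sum(r[2] for r in rows) / n)
    return out
```

One design point this makes visible: because the raw samples are not filtered before aggregation, any erroneous sample is averaged directly into the archived value, which is why the integration experiments below emphasize quality control.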
Investigations

System Integration

Data Integration

The first system integration experiment details how the integration issues of using ATMS data for travel time reliability monitoring were resolved. The experiment showed that unstructured configuration information obtained from an ATMS requires careful analysis when mapping to the data model of a reliability monitoring system. It also highlights the importance of understanding the reporting frequency and form of detector data for ensuring accurate aggregation and travel time calculation.

Probe Data

The second system integration experiment compared the speed data reported by agency-owned infrastructure with probe data obtained from a third-party provider on the I-285 ring road. Results showed that the speeds from the two data sources were similar during peak hours, but that the third-party provider artificially capped speeds to remain below a certain threshold. The experiment also investigated the speed error introduced

by the differences in locations between the agency-owned infrastructure and the midpoint of its associated third-party link (defined by traffic message channel ID). Some difference in reported speeds was attributed to the distance of the agency-owned detection devices from the midpoint of the third-party provider links.

Integration of Sources of Nonrecurrent Congestion

Travel Time Reliability Using Multiple Regimes

The use case analysis applied the methodological advancement techniques established and demonstrated in previous case studies to travel time data on a downtown Atlanta corridor to interpret the impact of the seven sources of nonrecurrent congestion on travel time reliability.

Two of the main themes of the case study demonstrations are estimating the quantity and characteristics of the operating travel time regimes experienced by different facilities and calculating the impacts of the seven sources of nonrecurrent congestion on travel time reliability. The methodological goal of the Atlanta case study was to fuse the previously developed regime estimation and nonrecurrent congestion analysis methodologies by using multistate models to characterize the reliability impacts of nonrecurrent congestion. This developed method consists of three steps:

1. Regime characterization, to estimate the number and characteristics of each travel time regime measured along the facility;
2. Data fusion, to link travel times with the causal factor (such as weather or incident) active during their measurement; and
3. Seven sources analysis, to calculate the contributions of each source to each travel time regime.

Analysis showed that the study corridor operates with two regimes during the peak period, with the more-congested and variable regime composed of many travel times influenced by traffic incidents.
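The tail end of this method can be sketched as a simple tabulation, assuming step 1 has already labeled each travel time with a regime and step 2's data fusion has attached the causal factor active during its measurement. The regime and source labels below are illustrative.

```python
from collections import Counter, defaultdict

def seven_sources_by_regime(observations):
    """observations: (travel_time_s, regime, source) tuples, where source is a
    nonrecurrent congestion cause ('incident', 'weather', ...) or 'none'.
    Returns, per regime, the percentage contribution of each source."""
    counts = defaultdict(Counter)
    for _, regime, source in observations:
        counts[regime][source] += 1
    return {regime: {src: 100.0 * n / sum(c.values()) for src, n in c.items()}
            for regime, c in counts.items()}
```

A result in which, say, the congested regime is dominated by incident-labeled travel times is exactly the kind of finding reported above for the downtown Atlanta corridor.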
This case study showed that, with proper quality control and integration measures, ATMS data can be used for travel time reliability monitoring, including the linking of travel time variability with the sources of nonrecurrent congestion.

NEW YORK/NEW JERSEY

The New York City site was chosen to provide insight into travel time monitoring in a high-density urban location. The 2010 U.S. Census revealed New York City's population to be in excess of 8 million residents, at a density near 28,000 people per square mile. Although New York City has a low rate of auto ownership compared with other U.S. cities, more than half of all commute trips are still made in single-occupancy vehicles. In 2010, these factors contributed to New York City having the longest average commute time of any U.S. city, at 31.3 minutes.

The main objectives of the New York/New Jersey case study were as follows:

• Obtain time-of-day travel time distributions for a study route based on probe data.
• Identify the cause of bimodal travel time distributions on certain links.
• Explore the causal factors for travel times that vary significantly from the mean conditions.

The route analyzed in this case study begins in the Boerum Hill neighborhood of Brooklyn and ends at JFK International Airport, traversing three major freeways: the Brooklyn–Queens Expressway (I-278), the Queens–Midtown Expressway (I-495), and the Van Wyck Expressway (I-678). Figure 4.6 shows the study route from origin to destination.

Monitoring System

Detection

In addition to the reasons cited above, the New York/New Jersey site was selected because it is covered by a probe data set, provided to the research team by ALK Technologies, a third-party data provider. These data are composed of global positioning system (GPS) traces collected from mobile devices inside individual vehicles. This detection technology provides high-density information along the vehicle's entire path, as

Figure 4.6. Study area for New York/New Jersey case study. Map data © 2012 Google.

opposed to infrastructure-based sensors, which measure traffic only at discrete points. This probe data set was analyzed at two levels: at the individual GPS trace level and through aggregation into single per-link speed values. The raw GPS trace data set is the only case study data set that traces the entire path of vehicle trips. The aggregated speeds are similar in format to the traffic message channel path–based data analyzed in the Atlanta case study.

The data obtained for this case study cover a rectangular region around the study route. A static collection of historical probe data provided the basis for analysis in this case study. No real-time data were acquired or analyzed. Unlike the other case studies, this case study did not specifically deploy an archived data user service.

Investigations

System Integration

Probe Data

The first investigation describes how to obtain route travel time distributions from the probe data set. This experiment discusses the data density along the route, presents methods for visualizing individual probe trips within the context of historical conditions, and details three techniques for constructing route-level travel time distributions. The central outcome of this experiment is the comparison of time-of-day travel time distributions along the route constructed using each of the three techniques. Methods were developed to compare a particular probe vehicle's path with the 25th percentile, 75th percentile, and median speed profile along the route by time of day. Probe traces are also visualized within historical speed bounds based on location and time of day. This methodology makes it possible to simulate the upper and lower bound of expected trip trajectories from a particular point along the route on the basis of the historical travel times.
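The percentile speed profiles described above can be sketched as a grouping of historical probe records by link and hour of day; an individual trip is then plotted against the 25th/median/75th percentile envelope. The nearest-rank percentile rule and record layout are simplifying assumptions for illustration.

```python
from collections import defaultdict

def percentile(sorted_vals, p):
    """Nearest-rank percentile on a pre-sorted list."""
    k = max(0, min(len(sorted_vals) - 1,
                   round(p / 100.0 * (len(sorted_vals) - 1))))
    return sorted_vals[k]

def speed_profiles(records):
    """records: (link_id, hour_of_day, speed_mph) tuples.
    Returns {(link_id, hour): (p25, median, p75)} speed profiles."""
    groups = defaultdict(list)
    for link, hour, speed in records:
        groups[(link, hour)].append(speed)
    return {key: (percentile(sorted(v), 25),
                  percentile(sorted(v), 50),
                  percentile(sorted(v), 75))
            for key, v in groups.items()}
```

Plotting a probe trace against these per-link envelopes is what makes it possible to see, at a glance, whether a given trip ran near typical, best-case, or worst-case historical conditions.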
The raw ALK probe data are in the form of standard National Marine Electronics Association GPS sentences taken directly from the probe vehicles. These data are further processed by ALK into link-based speed measurements. Although each data point contains rich information, the data set is sparse in that few probe vehicles traverse the entire route from beginning to end. As a result, the route travel time distribution must be constructed piecemeal from individual link data. Obtaining composite travel time distributions from vehicles that only traveled on a portion of the route is a complex process, most notably because this project has shown that travel times on consecutive links have a strong linear dependence. This linear dependence must be accounted for when combining individual link travel times into an overall route travel time distribution. This is the core methodological challenge of this case study. Three methods for computing route PDFs from the available probe data are compared:

1. Constructing the PDFs carefully from direct measurements. This method begins by determining the distribution of speed measurements on the first link of the route. This distribution is combined with the travel time distributions of longer trips that also traversed the initial link. Incrementally, longer trips are added to

the distribution until a speed distribution for the entire route is obtained. Trips are grouped by time of day, at an hourly granularity when the data density allows.

2. Constructing the PDFs with a Monte Carlo simulation. This method considers consecutive pairs of links along the route (e.g., Link 1 and Link 2, Link 2 and Link 3). It constructs the full-route PDF out of a large number of simulated trips. Each simulation begins with the sampling of a travel time on the first link. Next, the correlation between travel times on Link 1 and Link 2 is examined, and a travel time sample on Link 2 is taken based on this correlation and the original Link 1 sample. This procedure is repeated for Link 3, based on the previous Link 2 sample and the correlation between Links 2 and 3, and continues until a single trip along the entire route has been simulated. A large number of these simulated trips form the full travel time distribution for the route.

3. Constructing the PDFs assuming link speed independence. This method ignores the linear dependence between consecutive links and directly computes the route travel time distribution as if all link travel times were independent. It works by simply convolving the distributions of travel times on consecutive links: the distribution of travel times on the first link is convolved with the distribution of travel times on the second link, and so on until a full travel time distribution for the route is obtained.

This case study showed that it is possible to obtain trip reliability measures based on probe data, even when the probe data are sparse. The travel time distribution for the route is constructed from vehicles that only travel on a portion of the route and takes into account the linear dependence of speeds on consecutive links.
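Method 2 can be sketched as follows, under an assumed bivariate-normal relationship between consecutive link travel times: each subsequent link is sampled from a normal distribution conditioned on the previous link's sampled value and the pairwise correlation. The link means, standard deviations, and correlations below are illustrative inputs, not values from the ALK data.

```python
import random

def simulate_route_times(links, n_trips=10000, seed=1):
    """links: list of (mean_s, std_s, rho_with_previous_link); the rho of the
    first link is ignored. Returns n_trips simulated route travel times whose
    empirical distribution approximates the full-route PDF."""
    random.seed(seed)
    trips = []
    for _ in range(n_trips):
        mu_prev, sigma_prev, _ = links[0]
        t_prev = random.gauss(mu_prev, sigma_prev)   # sample the first link
        total = t_prev
        for mu, sigma, rho in links[1:]:
            # conditional normal given the previous link's sampled travel time
            cond_mu = mu + rho * sigma / sigma_prev * (t_prev - mu_prev)
            cond_sigma = sigma * (1.0 - rho * rho) ** 0.5
            t_prev = random.gauss(cond_mu, cond_sigma)
            total += t_prev
            mu_prev, sigma_prev = mu, sigma
        trips.append(total)
    return trips
```

With a positive correlation, the simulated route variance exceeds the sum of the link variances, which is precisely the effect that the independence assumption of method 3 misses.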
This case study also contributes techniques for creating time–space contour plots based on probe speeds. These contour plots can be made to represent any measured speed percentile, so that contours for the worst observed conditions can be compared with typical conditions.

Travel Time Distributions

The second system integration experiment details an investigation into the cause of bimodal travel time distributions on certain links. Time of day, day of the week, and nonrecurrent congestion sources are explored as a source of the bimodality.

Integration of Sources of Nonrecurrent Congestion

Seven Sources

The use case analysis explores the associated factors for travel times that vary significantly from the mean conditions. This use case represents this case study's investigative analysis of the seven sources of nonrecurrent congestion on travel time reliability.

BERKELEY HIGHWAY LABORATORY

One objective of the case studies was to test and refine the methods developed for defining and identifying segment and route regimes for freeway and arterial networks. The team's research to date has focused on identifying operational regimes based on

individual vehicle travel times and determining how to relate these regimes to system-level information on average travel times. Because individual vehicle travel times on freeways are not available in the San Diego metropolitan region, data from the Berkeley Highway Laboratory (BHL) were used in this analysis. Details about the BHL applications can be found in the San Diego case study resource document. Figure 4.7 shows the BHL location.

Monitoring System

Detection

BHL is a 2.7-mile section of I-80 in west Berkeley and Emeryville, California. The BHL includes 14 surveillance cameras and 16 directional dual-inductive loop detector stations dedicated to monitoring traffic for research purposes. The sensors are a unique resource because they provide individual vehicle measurements. The corridor was also temporarily instrumented with two BTR stations along eastbound I-80 to record the time stamps and media access control addresses of Bluetooth devices in passing vehicles.

Figure 4.7. Study area for Berkeley Highway Laboratory data investigations. Map data © 2012 Google.

Investigations

System Integration

Data from the BHL section of I-80 were used in this case study. This section is valuable because it has colocated dual-loop detectors and Bluetooth sensors. This data set provided an opportunity for the team to begin to assemble regimes and TT-PDFs from individual vehicle travel times. These TT-PDFs are needed to support motorist and traveler information use cases. Because the majority of the case study sites did not provide data on individual traveler variability, it was important for the research team to study the connection between individual travel time variability and aggregated travel times and whether the former can be estimated from the latter.

Analysis was performed on a day's worth of BHL data from the BTRs and loop detector stations to see if operative regimes for individual vehicle travel times can be identified from Bluetooth data. The research team concluded that this can, indeed, be done. Based on more than 5,000 observations of individual travel times, three regimes can be identified: (1) off peak or uncongested, (2) peak or congested, and (3) transition between congested and uncongested. All three regimes can be characterized by three-parameter Gamma density functions, as demonstrated in the San Diego case study.

USE CASES

A functioning reliability monitoring system must meet the needs of many types of users because different users perceive and value deviations from the expected travel time in different ways. Each user class has different motivations for monitoring travel time reliability, and these needs have to be accounted for in the types of analysis that the system can support through the user interface. Use cases are a formal systems engineering construct that transforms user needs into short, descriptive narratives that describe a system's behavior.
Use cases capture a system's behavioral requirements by detailing scenario-driven threads through the functional requirements. The collective use cases define the monitoring system by capturing its functionalities and applications for various users.

Appendix D provides a series of use case illustrations to help readers of the Guide determine what information a TTRMS needs to produce and what applications it needs to satisfy their specific situation. Once the appropriate users and their needs for reliability information are defined, readers can determine the performance measures, spatial coverage, data interface needs (i.e., weather, crashes, construction activity, special events), and archival requirements for their monitoring system.

The use cases are organized around the various stakeholders who use or manage aspects of the surface transportation system. The use cases for each aspect of the transportation system are also broken down into providers and consumers (i.e., supply and demand). The system user types are described below and shown in Table 4.1:

• Policy and planning support. Agency administrators and planners who have responsibility for, and make capital investment decisions about, the highway network;

• Overall highway system. Operators of the roadway system (supply), which include its freeways, arterials, collectors, and local streets; and drivers of private autos, trucks, and transit vehicles (demand);
• Transit subsystem. Operators of transit systems that operate on the highway network, primarily buses and light rail (supply), and riders (demand); and
• Freight subsystem. Freight service suppliers (supply) and shippers and receivers that make use of those services (demand).

Appendix D describes several use cases for each user type listed above. Each use case is described by specific parameters: a user, a statement of the question being posed, a description of the inputs needed to answer the question, the steps involved in answering the question, and the result to be obtained. Table 4.2 shows a template for the parameters provided for each use case.

The full list of use cases considered in this project is given in Table 4.3. The use cases are categorized into those that pertain to agency administrators and planners, system operators and users, transit passengers, schedulers or operators, and freight customers or operators. A subset of these use cases is provided in this chapter to illustrate potential applications. The descriptions in this chapter have been chosen as illustrations and are no more or less important than the remaining use cases in Appendix D.

TABLE 4.1. USER TYPES AND THEIR CLASSIFICATION

System User Type | Service Provider (Supply) | User (Demand)
Policy and planning support | Administrators and planners | N/A
Overall highway system | Highway system operators (public or private) | Privately owned vehicle drivers, taxi drivers, limousine drivers
Transit subsystem | Transit operators, transit vehicle operators | Transit passengers
Freight subsystem | Carriers, freight movers, truck drivers | Freight customers (including both shippers and receivers)

Note: N/A = not applicable.

TABLE 4.2. USE CASE TEMPLATE

User | The type of TTRMS user posing the question.
Question | A description of the question being asked and why it would be posed.
Steps | A list of the actions that have to be performed to answer the question.
Inputs | The data and information needed to answer the question. This description helps users understand the inputs required and helps programmers understand the data inputs that must be assembled.
Result | The system output at the completion of the use case.

TABLE 4.3. USE CASES FOR A TRAVEL TIME RELIABILITY MONITORING SYSTEM

System administrators and planners — Administrators:
AE1: See what factors affect reliability
AE2: Assess the contributions of the factors
AE3: View the travel time reliability of a subarea
AE4: Assist planning and programming decisions
AE5: Document agency accomplishments
AE6: Assess progress toward long-term reliability goals
AE7: Assess the reliability impact of a specific investment

System administrators and planners — Planners:
AP1: Find the facilities with highest variability
AP2: Assess the reliability trends over time for a route
AP3: Assess changes in the hours of unreliability for a route
AP4: Assess the sources of unreliability for a route
AP5: Determine when a route is unreliable
AP6: Assist rural freight operations decisions

Roadway network managers and users — Managers:
MM1: View historical reliability impacts of adverse conditions
MM2: Be alerted when the system is struggling with reliability
MM3: Compare a recent adverse condition with prior ones
MM4: Gauge the impacts of new arterial management strategies
MM5: Gauge the impacts of new freeway management strategies
MM6: Determine pricing levels using reliability data

Roadway network managers and users — Drivers, constrained trips:
MC1: Understand departure times and routes for a trip
MC2: Determine a departure time and route just before a trip
MC3: Understand the extra time needed for a trip
MC4: Decide how to compensate for an adverse condition
MC5: Decide en route whether to change routes

Roadway network managers and users — Drivers, unconstrained trips:
MU1: Determine the best time of day to make a trip
MU2: Determine how much extra time is needed

Transit system — Transit planners:
TP1: Determine routes with the least travel time variability
TP2: Compare exclusive bus lanes with mixed-traffic operations

Transit system — Transit schedulers:
TS1: Acquire reliability data for building schedules
TS2: Choose departure times to minimize arrival uncertainty

Transit system — Transit operators:
TO1: Identify routes with the poorest reliability
TO2: Review reliability for a route
TO3: Examine the potential impacts of bus priority on a route
TO4: Assess a mitigating action for an adverse condition

Transit system — Transit passengers:
TC1: Determine the on-time performance of a trip
TC2: Determine an arrival time just before a trip
TC3: Determine a friend's arrival time
TC4: Understand a trip with a transfer

Freight system — Freight service providers:
FP1: Identify the most reliable delivery time
FP2: Estimate a delivery window
FP3: Identify how to maximize the probability of an on-time delivery
FP4: Assess the on-time probability for a scheduled shipment
FP5: Assess the impacts of adverse highway conditions
FP6: Determine the start time for a delivery route
FP7: Find the departure time and routing for a set of deliveries
FP8: Solve the multiple vehicle routing problem under uncertainty
FP9: Alter delivery schedules in real time

Freight system — Freight customers:
FC1: Minimize shipping costs due to unreliability
FC2: Determine storage space for just-in-time deliveries
FC3: Find the lowest-cost reliable origin
FC4: Find the warehouse site with the best distribution reliability

SEE WHAT FACTORS AFFECT RELIABILITY (AE1)

In this use case, the agency administrator wants to see what factors affect the reliability of the segments and routes in the system. That is, he or she wants to know to what extent system reliability is affected by incidents, weather, work zones, special events, traffic control devices, fluctuation in demand, and demand exceeding capacity.

For example, if the analysis shows that the system experiences unreliability largely due to incidents, the administrator might choose to increase spending on incident management systems or roadway safety improvements. The analysis might also help administrators set benchmarks against which they can test future improvements. Table 4.4 summarizes this use case.

TABLE 4.4. SEE WHAT FACTORS AFFECT RELIABILITY (AE1)

User | Agency administrator
Question | What factors affect reliability?
Steps |
1. Select the system of interest (e.g., a region or set of facilities).
2. Select the time frame for the analysis: the date range, days of the week, and times of day.
3. Assemble travel time (travel rate) observations for the system for the time frame of interest.
4. Label each observation in terms of the regime that was operative at the time the observation was made (i.e., each combination of nominal congestion and nonrecurring event, including none).
5. Prepare TR-PDFs for each regime identified.
6. Analyze the contributions of the various factors so that the differences in impacts can be assessed.
Inputs | Travel times and rates for the system and date range of interest, plus information about the nominal system loading that would have been expected and any nonrecurring events.
Result | A set of TR-PDFs that portray the impacts of various factors on travel time reliability.
Note: TR-PDF = travel rate PDF.

Step 1 is to select the system of interest; often, this is a region or set of facilities.
In this instance the system selected comprises three freeway routes from the I-5/I-805 junction on the north side of San Diego to the I-5/SR-15 junction in downtown San Diego. The three routes are labeled in Figure 4.8 as Route 1 (I-5), Route 2 (I-805/SR-15/I-5), and Route 3 (I-805/SR-163/I-5). In subsequent text, these three routes are identified more succinctly as I-5, SR-15, and SR-163.

Step 2 is to select the time frame of interest. In this instance it is 2011, all weekdays, and all 24 hours during those days.

Step 3 is to assemble travel rate data. The data for San Diego are average travel rates for the three routes based on system detector data obtained by walking the time–space matrix for hypothetical trips that start every 5 minutes on all three routes. The travel rates are displayed in Figure 4.9 plotted against time of day and in Figure 4.10 plotted against vehicle miles traveled (VMT) per hour. Since the data for the entire year are shown, there are 72,000 values for each route; hence, there are 216,000 data points in the combined graphs.
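The "walking the time–space matrix" step can be sketched as follows. A hypothetical trip departs at the start of a 5-minute interval and crosses each segment at the average speed measured for the interval in which the vehicle actually reaches that segment. The segment lengths, speeds, and 288-interval day below are illustrative assumptions, not the Guide's San Diego data:

```python
import numpy as np

INTERVAL_S = 300  # 5-minute intervals

def walk_trip(start_interval, seg_miles, speeds_mph):
    """Travel time (s) for a hypothetical trip departing at the start of
    start_interval, using the speed of the interval in which the vehicle
    arrives at each segment (the time-space "walk")."""
    clock = start_interval * INTERVAL_S
    for s, miles in enumerate(seg_miles):
        t = min(int(clock // INTERVAL_S), speeds_mph.shape[0] - 1)
        clock += miles / speeds_mph[t, s] * 3600.0  # seconds spent on segment s
    return clock - start_interval * INTERVAL_S

seg_miles = np.array([1.2, 0.8, 1.5])    # a made-up 3.5-mile route
speeds = np.full((288, 3), 60.0)         # 288 intervals per day, free flow 60 mph
speeds[96:108, :] = 30.0                 # congestion from 8:00 to 9:00 a.m.

tt = walk_trip(96, seg_miles, speeds)    # trip departing at 8:00 a.m.
rate = tt / seg_miles.sum()              # travel rate in sec/mi
print(f"travel time {tt:.0f} s, travel rate {rate:.0f} sec/mi")
```

Repeating the walk for every 5-minute departure on every weekday produces the 72,000 values per route described above.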

Step 4 is to label each observation—all 216,000 in this case—in terms of the regime that was operative for each observation. The technique for adding these labels involves two substeps. The first substep is to add a nonrecurring event designation, if any. This is done automatically if these events have been tracked in real time and the database contains fields that describe them. If not, they have to be identified by looking for outliers. The plots of travel rates against time of day and VMT per hour in Figures 4.9 and 4.10, respectively, reveal several outliers in the San Diego data. In this experiment, the search for outliers was done by hand. Data from San Diego were found for the following categories of nonrecurring events:

• Incident: an accident or incident could be identified.
• Weather: an inclement weather event could be identified.
• Special event: some unusual event, often sports related, could be identified.
• Demand: the VMT (implicitly, the traffic flows) was higher than normal for the time of day at which the high travel rate arose.

Figure 4.8. Subarea examined for Use Case AE1. Map data © 2012 Google.

Figure 4.9. Five-minute average weekday travel rates for three routes in San Diego. (Three panels: travel rate, sec/mi, by time of day for the I-5, SR 15, and SR 163 routes.)

Figure 4.10. Five-minute average weekday travel rates plotted against VMT per hour for three routes in San Diego. (Three panels: travel rate, sec/mi, versus VMT/hour for the I-5, SR 15, and SR 163 routes.)

Appendix D contains detailed descriptions of how the data were categorized into each nonrecurring event.

The second substep in Step 4 involves labeling each observation based on the nominal loading of the system expected for each observation. This is done by analyzing the observations that remain once the nonrecurring events have been removed. Many metrics could be used to assess this impact, but this use case illustration uses the semivariance (SV) measure. Chapter 3 described the SV measure and the reasons for using it. Figure 4.11 shows the SV value computed for every 5-minute interval of the San Diego data for each of the three routes. The value of r employed is the minimum travel rate observed for the entire year. The SV is normalized by the number of observations because that number varies from one 5-minute period to another.

As shown in Figure 4.11, reliability becomes worse as traffic levels increase. This finding is not unexpected, but it confirms what many researchers expect: reliability is best when traffic volumes are low, such as late at night or early in the morning; poorer when traffic volumes are higher, such as during the midday; and poorest when traffic volumes are highest, as during the p.m. peak. The maximum SV values, which are not shown in the figure, reach about 1,000.

Figure 4.11. Semivariances by 5-minute time period for the normal condition for three routes in San Diego. (Semivariance of the travel rate per observation, [sec/mi]^2/n, by time of day for the I-5, SR 15, and SR 163 routes.)
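A minimal sketch of the per-period SV computation described above, assuming (per the Chapter 3 description) squared deviations of travel rates above the reference rate r, divided by the period's observation count; the rates and reference value below are invented for illustration:

```python
import numpy as np

def semivariance(rates, r):
    """Per-observation semivariance of travel rates about a reference rate r:
    squared deviations above r, normalized by the number of observations."""
    rates = np.asarray(rates, dtype=float)
    dev = np.maximum(rates - r, 0.0)   # only rates worse than the reference count
    return float((dev ** 2).sum() / rates.size)

r = 48.0                                  # illustrative: minimum rate for the year (sec/mi)
period_rates = [50.0, 52.0, 60.0, 49.0]   # rates observed in one 5-minute period
sv = semivariance(period_rates, r)
print(f"SV = {sv:.2f} (sec/mi)^2 per observation")
```

Dividing by the observation count is what makes periods with different sample sizes comparable, as the text notes.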

Although no right answer exists for the number of congestion categories to use, four were selected here: uncongested, low, moderate, and high. Uncongested meant the SV was below 20; low meant 20 to 40; moderate, 40 to 120; and high, above 120.

Thus, the I-5 route was classified as follows:
• Uncongested all day, except
• High from 2:15 to 6:50 p.m.

The SR-15 route was classified as follows:
• Uncongested from midnight to 2:10 a.m.;
• Low from 2:15 to 6:45 a.m.;
• Uncongested from 6:50 to 8:15 a.m.;
• Low from 8:20 to 9:05 a.m.;
• Moderate from 9:10 a.m. to 2:10 p.m.;
• High from 2:15 to 7:20 p.m.; and
• Uncongested from 7:25 p.m. to midnight.

The SR-163 route was classified as follows:
• Uncongested from midnight to 6:45 a.m.;
• Moderate from 6:50 a.m. to 2:15 p.m.;
• High from 2:20 to 7:20 p.m.; and
• Uncongested from 7:25 p.m. to midnight.

Step 5 is to develop TR-CDFs for each regime, that is, each combination of nominal loading (from the analysis above) and nonrecurring event (from the first categorical analysis), including none. The TR-CDFs are created by appropriately binning the 5-minute travel time observations. Figure 4.12 presents the results.

Step 6 is to interpret the results in terms of the effects on reliability of the various factors. This step overlaps with the following use case, so the results are presented there.

ASSESS THE CONTRIBUTIONS OF THE FACTORS (AE2)

The objective in this use case is to determine how various factors affect system reliability. Such information helps inform decisions about how to improve performance: geometric treatments, capacity enhancements, operational changes, better signage, improved roadway striping, resurfacing, or better lighting. It can also help managers determine which facilities need better real-time traveler information (e.g., changeable message signs displaying alternate routes and travel times). Table 4.5 summarizes this use case.
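The TR-CDF construction used throughout these use cases (Step 5 of AE1 above) amounts to binning the 5-minute observations and accumulating their probabilities. A sketch, with bin edges spanning the 40-to-140-sec/mi range of Figure 4.12 and invented sample rates:

```python
import numpy as np

def empirical_cdf(rates, bin_edges):
    """Cumulative probability of travel rates at the upper edge of each bin."""
    counts, _ = np.histogram(np.asarray(rates, dtype=float), bins=bin_edges)
    return np.cumsum(counts) / len(rates)

edges = np.arange(40, 141, 10)                 # 40-140 sec/mi, as in Figure 4.12
rates = [55, 58, 61, 65, 72, 88, 95, 103]      # one regime's 5-minute observations
cdf = empirical_cdf(rates, edges)
print(dict(zip(edges[1:].tolist(), cdf.tolist())))
```

Running this once per regime (each combination of nominal loading and nonrecurring event) yields the family of curves plotted in Figure 4.12.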

Figure 4.12. CDFs by regime for the three routes in San Diego. (Three panels: cumulative probability versus travel rate, sec/mi, for the I-5, SR 15, and SR 163 routes; each panel plots TR-CDFs for the normal, demand, weather, special event, and incident conditions at the congestion levels present on that route.)

Steps 1 through 4 are the same as for Use Case AE1. Step 5 aims to determine the extent to which the facilities are affected by various factors. Figures 4.11 and 4.12 can be studied to develop these insights.

Figure 4.11 shows that the three routes have somewhat different daily patterns of reliability. The I-5 route has high reliability (a low SV value) throughout the day except during the p.m. peak. In contrast, the SR-15 route has an additional increase in its SV (a drop in reliability) across the midday. The SR-163 route has an even more dramatic increase in its SV across the midday, but a lower SV during the early morning hours. In addition, SR-163 has a discernible a.m. peak, but the other two routes do not.

From an interpretation standpoint, the I-5 route is probably the most reliable. It is still challenged during the peak, but it consistently has the lowest SV values except for a few 5-minute periods from around 7 to 9 p.m. Even though Figure 4.9 suggests the SR-15 route may have the lowest average travel rates most of the day, I-5 is the most reliable route. Reinforcing this observation, Figure 4.11 also suggests that SR-163 is the least reliable route. It has the highest SV values during a significant portion of the day (except in the early morning, when the SR-15 route has higher values), and the differences are significant, especially during the morning and midday time periods.

Figure 4.12 provides additional insights. Although the plots are rather dense, they tell a story about the performance of these three routes. Looking at I-5 first, the TR-CDF for the uncongested or normal condition is at the far left and is almost vertical. This means it has very reliable travel rates during this condition. However, the I-5 performance during the congested conditions is quite different.
TABLE 4.5. ASSESS THE CONTRIBUTIONS OF THE FACTORS (AE2)

User | Agency administrator
Question | How do various factors affect system reliability?
Steps |
1. Select the system of interest (e.g., facilities, routes).
2. Select the time frame for which the analysis is to be conducted.
3. Assemble travel rate data for each facility.
4. Create TR-PDFs (rates) for each facility and regime (i.e., combinations of system loading and nonrecurring event).
5. Study the TR-PDFs and determine the extent to which the facilities are affected by the various factors.
6. Rank order the facilities based on the relative impacts so that those most affected can receive mitigating treatments.
Inputs | A database of TR-PDFs with each observation labeled based on the regime to which it belongs (i.e., system loading and nonrecurring event).
Result | A rank-ordered list of the facilities based on the TR-PDFs by regime.

Even when there are no identifiable nonrecurring events, larger travel rates are involved, as shown by the smooth CDF for the high-congestion, normal condition, with travel rates from about 50 to 100 s/mi. The TR-CDFs for three nonrecurring events (incidents, special events,
and weather) during high congestion largely overlap, and no one CDF dominates the others. However, the TR-CDF for the demand condition (under high congestion) is strikingly different, with much larger travel rates even at low percentiles, and a maximum value that is substantially smaller than that for the other three nonrecurring categories. The implication is that demand needs to be a cause for concern, and reducing the rates for low-percentile values may be possible through geometric improvements, although it may be more important to focus on the tail for the three other conditions.

The story for the SR-15 route is similar. Almost all of the regimes involving no or low congestion have similar TR-CDFs. The one notable exception is the TR-CDF for uncongested conditions with incidents. As with the I-5 TR-CDFs, incidents produce a major shift in the travel rates at the higher-percentile values (in this case, above about 90%). The four TR-CDFs that are strikingly different are those for incidents, special events, weather, and demand during periods that would normally involve high congestion. This is not surprising, but it does reinforce the importance of taking actions that help manage the severity of these events when they occur during congested operation.

As shown in Figure 4.12, the SR-163 route's TR-CDFs are widely scattered, and nonrecurring events have an impact under all levels of congestion. The graph shows important details:

• The most significant impacts (the CDFs farthest to the right), which are all during high congestion, come from (right to left) weather, special events, and incidents.
• The next two impacts are for weather under moderate congestion and demand during high congestion.
• The next three impacts (right to left) are incidents, special events, and demand under low-congestion conditions, not moderate.
With these differences noted, the reliability performance of SR-163 is otherwise similar to that of the other two routes. More specifically, it has a travel rate performance very similar to the other two routes under uncongested normal conditions, but it struggles to maintain that performance when congestion levels get higher or when nonrecurring events occur. The more significant shifts in the TR-CDFs for various conditions on the SR-163 route compared with the other routes lead to the conclusion that there are problems with SR-163 between I-805 and I-5. Although it is not the purpose of the methodologies in this guide to determine what geometric and other treatments would help alleviate reliability problems, geometric improvements and expedient response to incidents and other events would likely have a significant impact on reliability on this section of SR-163. Appendix D provides details about further insights that can be obtained from Figure 4.12.

Step 6 involves rank ordering the facilities based on the relative impacts so that those facilities most affected can receive mitigating treatments. Table 4.6 provides a way to develop the rankings. Columns 3 to 12 in the table report the average SV values for each regime and the frequency (n) with which that regime occurs. The table also shows the SV totals for each congestion condition (e.g., 573,000 for I-5 during uncongested conditions and 4,705,000 during congested conditions) based on the

sum–product of the SV and n values. The far-right column (Facility Total) in Table 4.6 reports the total SV in the travel rate for the year. The facility totals suggest that the most unreliable (least reliable) facility is SR-163, which is consistent with the scatterplot shown in Figure 4.10 and the line graph shown in Figure 4.11. The SR-15 route is the next most unreliable, but its SV distribution is slightly different. As Table 4.7 shows, a higher percentage can be attributed to incidents and special events during nominally high-congestion conditions.

TABLE 4.6. SEMIVARIANCES FOR EACH REGIME FOR THREE ROUTES IN SAN DIEGO

Route | Condition | Normal SV (n) | Demand SV (n) | Weather SV (n) | Special Events SV (n) | Incidents SV (n) | ∑(SV×n) (×1,000) | Facility Total (×1,000)
I-5 | Uncongested | 7 (55,533) | 60 (1,250) | 46 (797) | 111 (135) | 172 (285) | 573 | 5,278
I-5 | High | 205 (12,783) | 1,415 (472) | 2,563 (175) | 1,399 (104) | 1,769 (466) | 4,705 |
SR-15 | Uncongested | 15 (24,491) | 47 (147) | 68 (229) | 29 (77) | 139 (55) | 400 | 9,465
SR-15 | Low | 27 (15,931) | 118 (102) | 106 (193) | 0 (0) | 97 (25) | 457 |
SR-15 | Moderate | 46 (14,863) | 127 (13) | 151 (271) | 0 (0) | 93 (103) | 740 |
SR-15 | High | 241 (13,918) | 2,415 (665) | 3,751 (162) | 3,113 (168) | 3,032 (587) | 7,868 |
SR-163 | Uncongested | 11 (32,823) | 13 (1,019) | 61 (277) | 21 (29) | 54 (102) | 386 | 9,561
SR-163 | Moderate | 56 (20,950) | 169 (519) | 399 (333) | 601 (344) | 684 (354) | 1,841 |
SR-163 | High | 261 (12,764) | 1,789 (1,028) | 1,924 (254) | 1,424 (243) | 1,385 (961) | 7,333 |

Note: n = number of observations.

TABLE 4.7. PERCENTAGES FOR SEMIVARIANCES FOR EACH REGIME FOR THREE ROUTES IN SAN DIEGO

Route | Condition | Normal (%) | Demand (%) | Weather (%) | Special Events (%) | Incidents (%) | Total (%) | Facility Total (%)
I-5 | Uncongested | 8 | 1 | 1 | 0 | 1 | 11 | 100
I-5 | High | 50 | 13 | 8 | 3 | 16 | 89 |
SR-15 | Uncongested | 4 | 0 | 0 | 0 | 0 | 4 | 100
SR-15 | Low | 4 | 0 | 0 | 0 | 0 | 5 |
SR-15 | Moderate | 7 | 0 | 0 | 0 | 0 | 8 |
SR-15 | High | 35 | 17 | 6 | 6 | 19 | 83 |
SR-163 | Uncongested | 4 | 0 | 0 | 0 | 0 | 4 | 100
SR-163 | Moderate | 12 | 1 | 1 | 2 | 3 | 19 |
SR-163 | High | 35 | 19 | 5 | 4 | 14 | 77 |

A summary of this analysis might be as follows: all three routes exhibit variations in reliability depending on the recurring congestion condition and nonrecurring event. Evidence of these differences is most significant for the SR-163 route, and it
seems apparent that its problems are due to the geometric conditions on the section of SR-163 from I-805 to I-5. All three routes are significantly affected by high congestion, even under normal conditions; the TR-CDF for that condition is dramatically different from the CDFs for normal operation under less-congested conditions. Certain nonrecurring events—incidents, weather, special events, and fluctuation in demand that is higher than normal—all have a significant effect on reliability during highly congested conditions. Finally, it is clear that these TR-CDFs provide guidance about actions that might be useful to help alleviate the reliability problems.

VIEW THE TRAVEL TIME RELIABILITY PERFORMANCE OF A SUBAREA (AE3)

In this use case, the agency administrator wants to review the travel time reliability performance of a subarea of the network. Subarea aggregations support transportation network planning and operations decisions for large-scale metropolitan networks. Table 4.8 summarizes this use case.

As shown in Figure 4.13, two spatial aggregation approaches can provide users with subarea travel time reliability statistics:

1. A windowing approach, which isolates the subarea and focuses only on routes entirely within the subarea's boundary. This approach allows for the evaluation of the reliability impacts of policies enacted within a specific subarea and the analysis of subarea boundary-to-boundary travel time reliability measures.
2. A focusing approach, which is aimed at reliability measures for all of the routes that pass through the subarea. This approach allows linkages and relationships to be maintained between a subarea and its surrounding districts. It can generate statistics on the reduced subarea networks without losing reliability information at the origin–destination level for long-distance trips.

TABLE 4.8. VIEW THE TRAVEL TIME RELIABILITY PERFORMANCE OF A SUBAREA (AE3)

User | Agency administrator
Question | What is the reliability performance of a subarea?
Steps |
1. Define the boundary of the subarea of interest.
2. Choose the spatial aggregation method: windowing or focusing.
3. Select the date range, days of the week, and times of day over which to aggregate data.
4. Assemble TR-PDFs (rate) for the subarea differentiated by facility type, operating condition, time of day, and so forth.
5. Develop a picture of the reliability of the region that reflects the importance of each of the facility types, operating conditions, and times of day.
Inputs | TR-PDFs (rates) for the subarea differentiated by facility type, operating condition, time of day, and so forth.
Result | TR-PDFs for the region and selected routes (as shown in Figure 4.13) that reflect the importance of each of the facility types, operating conditions, and times of day.
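The windowing and focusing approaches amount to two different route filters over the network. A sketch with hypothetical link IDs (the Guide does not define a network data structure, so the sets and route lists below are invented):

```python
# Routes as lists of hypothetical link IDs; the subarea is a set of links.
subarea = {"L1", "L2", "L3"}
routes = {
    "R1": ["L1", "L2"],        # entirely inside the boundary
    "R2": ["L0", "L2", "L9"],  # passes through the subarea
    "R3": ["L8", "L9"],        # entirely outside
}

# Windowing: keep only routes whose links all lie inside the boundary.
windowed = [r for r, links in routes.items() if set(links) <= subarea]

# Focusing: keep every route that touches the subarea at all.
focused = [r for r, links in routes.items() if set(links) & subarea]

print("windowing:", windowed, " focusing:", focused)
```

Windowing supports boundary-to-boundary statistics within the subarea; focusing preserves origin-destination reliability for long-distance trips that merely traverse it.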

Step 1 is to select the subarea of interest. In this case, it is the same portion of the San Diego metropolitan area shown in Figure 4.8.

Step 2 is to choose the aggregation method. In this particular instance, a windowing approach is employed to study the reliability performance of the same three routes from A to B considered in the previous use case: I-5, SR-15, and SR-163.

Step 3 is to select the date range, days of the week, and times of day over which to conduct the analysis. As with the prior use case, 2011 has been chosen, all weekdays, and all 24 hours.

Step 4 is to assemble the TR-PDFs differentiated by facility type, operating condition, time of day, and so forth. In this case the same regimes used in the previous use case are used: the nominal loading (uncongested or low, moderate, or high congestion) and nonrecurring event (incidents, special events, weather), including normal (no unusual nonrecurring condition). Hence, the TR-CDFs presented in Figure 4.12 still pertain.

Step 5 is to develop a picture of the reliability of the region that reflects the importance of each of the facility types, operating conditions, and times of day. For this purpose, the data presented in Table 4.7 can be used. For the San Diego routes, it was assumed that the three routes represent all the significant facilities in the region. Table 4.9 presents the results of the three routes' combined contributions to unreliability.

Figure 4.13. Subarea aggregation approaches.

TABLE 4.9. CONTRIBUTION BY REGIME TO TOTAL SEMIVARIANCE FOR THREE ROUTES IN SAN DIEGO

Condition | Normal (%) | Demand (%) | Weather (%) | Special Events (%) | Incidents (%) | Total (%)
Uncongested | 5 | 0 | 0 | 0 | 0 | 6
Low | 2 | 0 | 0 | 0 | 0 | 2
Moderate | 8 | 0 | 1 | 1 | 1 | 11
High | 38 | 17 | 6 | 4 | 16 | 82
Total | 52 | 18 | 7 | 5 | 17 | 100
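The aggregation behind Tables 4.6, 4.7, and 4.9 is a sum-product of each regime's average SV and its observation count n, expressed as a share of the total. A sketch using the I-5 rows of Table 4.6 (the computed total, about 5,271 thousand, differs slightly from the table's 5,278 because the published average SV values are rounded):

```python
# I-5 rows of Table 4.6: (condition, event, average SV, observation count n).
rows = [
    ("Uncongested", "Normal",           7, 55533),
    ("Uncongested", "Demand",          60,  1250),
    ("Uncongested", "Weather",         46,   797),
    ("Uncongested", "Special Events", 111,   135),
    ("Uncongested", "Incidents",      172,   285),
    ("High",        "Normal",         205, 12783),
    ("High",        "Demand",        1415,   472),
    ("High",        "Weather",       2563,   175),
    ("High",        "Special Events", 1399,  104),
    ("High",        "Incidents",     1769,   466),
]

# Facility total: sum of SV*n over all regimes.
facility_total = sum(sv * n for _, _, sv, n in rows)

# Each regime's percentage contribution, as in Tables 4.7 and 4.9.
shares = {(c, e): 100.0 * sv * n / facility_total for c, e, sv, n in rows}

print(f"I-5 facility total: {facility_total:,}")
print(f"High/Normal share: {shares[('High', 'Normal')]:.0f}%")
```

The High/Normal share computed here reproduces the 50% figure reported for I-5 in Table 4.7.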

As shown in Table 4.9, normal conditions (the nonrecurring event category) under high congestion are the principal source of unreliability. That combination is followed by demand under high congestion, and then incidents under high congestion. The contributions from the other combinations are individually under 10%: weather under high congestion is at 6%, and normal conditions under uncongested conditions contribute 5%.

The average SV values in Table 4.7 provide a more detailed sense of when these contributions arise. For example, the SR-15 route has among the highest average SV values (e.g., 3,751 for weather under high congestion; 3,032 for incidents under high congestion), and these categories contribute substantially to the total SV (17% and 19%, respectively). Furthermore, Figure 4.12 shows that the CDFs for those two situations are still increasing through the 70th and 80th percentiles at the maximum travel rates plotted (130 to 140 s/mi).

ASSIST PLANNING AND PROGRAMMING DECISIONS (AE4)

In this use case, the user wants to make planning and programming decisions based on inputs from a TTRMS. Table 4.10 summarizes this use case.

A description of the use case can build off Use Case AE1. Imagine a hypothetical situation in which the conditions portrayed for SR-163 are the status of that route at the end of 2009, the SR-15 conditions are the status of that same route (SR-163) at the end of 2010, and the I-5 conditions are its status at the end of 2011. Admittedly, such change would be remarkable progress, but that is actually useful here because it makes the differences clear.

Step 1 is to select the routes and subareas of interest. In this case it is the same portion of the San Diego metropolitan area shown in Figure 4.8, and the route of interest is the one involving SR-163.

TABLE 4.10. ASSIST PLANNING AND PROGRAMMING DECISIONS (AE4)

User | Agency administrator
Question | What agency actions are having significant impacts on travel time reliability?
Steps |
1. Select routes and subareas for analysis.
2. Assemble TR-PDFs for routes and areas for before-and-after traffic conditions under equivalent operating conditions.
3. Analyze the before-and-after changes in reliability along those routes and in those subareas, and relate those changes to the actions taken as part of the transportation improvement projects.
4. Find the trends in the efficacy of different types of transportation improvement projects.
5. Use the results as input into decision making associated with future agency planning and programming decisions.
Inputs | TR-PDFs for each route and area under the before-and-after conditions for similar network operating conditions, together with information about the transportation improvement project actions that were taken.
Result | Results of the before-and-after cause-and-effect analysis of the improvement actions taken and a process for conducting the assessment.

Step 2 is to assemble TR-PDFs for routes and areas for before-and-after traffic conditions under equivalent operating conditions. In the context of the hypothetical situation described above, this has already been done, and the results are presented in Figure 4.12. (However, those results must be interpreted as reflecting the SR-163 route performance in reverse chronological order, with the most recent performance presented first.)

Step 3 is to analyze the before-and-after changes in reliability along the routes in the subarea and to relate those changes to the actions taken as part of transportation improvement projects. In the context of the hypothetical situation described in Step 2, there are remarkable changes to assess.

Step 4 is to find trends in the efficacy of different types of transportation improvement projects. Table 4.11 presents the findings consistent with the hypothetical construct presented earlier. It shows that the agency actions are improving the reliability of the facility under normal conditions: dropping the SV under uncongested conditions from 11 to 7, eliminating the moderate-congestion condition, and reducing the SV during high congestion from 261 to 205. The SV trend under the other conditions is less clear. In some cases there is improvement, but in others there is not. Generating clear trends in these other categories is difficult because the severity of the nonrecurring events can differ from one year to another. Even though the agency may be doing a better job of managing the consequences of the events, the distribution of the severity of the events may make it difficult to see the impacts in a simple measure like the SV. Note in Table 4.11 that the SV values for the special event and incident events increase from Year X to Year Y and then decrease from Year Y to Year Z.
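The Step 3/4 comparison amounts to differencing regime-level SVs between years. A minimal sketch, using the "Normal" column of Table 4.11 for the hypothetical route (the helper name `sv_changes` is invented for illustration):

```python
# Before-and-after comparison sketch: for each congestion regime, compute
# the change in average semivariance (SV) between two years. A negative
# change means reliability improved. Data: "Normal" column of Table 4.11.

normal_sv = {
    "X": {"uncongested": 11, "moderate": 56, "high": 261},
    "Y": {"uncongested": 15, "moderate": 73, "high": 241},
    "Z": {"uncongested": 7,  "moderate": 0,  "high": 205},
}

def sv_changes(by_year, earlier, later):
    """Return {regime: later_sv - earlier_sv}."""
    return {regime: by_year[later][regime] - by_year[earlier][regime]
            for regime in by_year[earlier]}

x_to_z = sv_changes(normal_sv, "X", "Z")
# Every regime improves from Year X to Year Z under normal conditions:
# uncongested 11 -> 7, high congestion 261 -> 205, and the
# moderate-congestion regime disappears altogether.
```

The same differencing applied to the demand, weather, special event, and incident columns would surface the mixed trends discussed above.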
Whether progress has been made is unclear. Figure 4.14 shows TR-CDFs by regime for the I-5 route. In this figure, the progress with special events becomes clearer. Notice that the TR-CDF for Year Z has upper-percentile values that are better than in either Year X or Year Y; so although the lower percentiles are not particularly improved, the higher percentiles are. The same is true for incidents.

TABLE 4.11. CHANGES IN RELIABILITY OVER TIME FOR A HYPOTHETICAL ROUTE

Condition    Year  Normal         Demand        Weather       Special Events  Incidents
                   SV     n       SV     n      SV     n      SV     n        SV     n
Uncongested  X     11     32,823  13     1,019  61     277    21     29       54     102
             Y     15     24,491  47     147    68     229    29     77       139    55
             Z     7      55,533  60     1,250  46     797    111    135      172    285
Moderate     X     56     20,950  169    519    399    333    601    344      684    354
             Y     73     30,794  244    115    257    464    0      0        190    128
             Z     0      0       0      0      0      0      0      0        0      0
High         X     261    12,764  1,789  1,028  1,924  254    1,424  243      1,385  961
             Y     241    13,918  2,415  665    3,751  162    3,113  168      3,032  587
             Z     205    12,783  1,415  472    2,563  175    1,399  104      1,769  466

Note: SV = semivariance; n = number of observations.

Figure 4.14. Using TR-CDFs to analyze performance changes. (Two TR-CDF panels for the I-5 route under high congestion, one for incidents and one for special events, each plotting cumulative probability against travel rate from 40 to 140 s/mi for Years X, Y, and Z.)

Although the performance for lower percentiles in Year Z is not better than in Year Y (though it is better than or the same as in Year X), it is better for the higher percentiles, at about the 88th percentile and above. Hence, an improvement in reliability has been accomplished. Although improvements have been made in the higher percentiles, further improvement can be made in the lower ones (for the high-congestion condition). Putting more resources into special event management and incident clearance would help improve performance further.

Step 5 focuses on using the results as input into decision making associated with future agency planning and programming decisions. The aggregate SVs do not portray the picture as completely as the CDFs, but they are more succinct and convey a general sense of the situation. For example, the normal condition occurs frequently, as would be expected, but the SVs are all quite small (the largest values occur during high congestion and range from 200 to 300). By comparison, the SVs for the less-frequent conditions range up to almost 4,000. Figure 4.15 plots the SVs against the occurrence frequencies for events that occur 1,500 times per year or less. The normal data points are not present because their frequencies of occurrence are much larger, but their SVs are relatively low.

Figure 4.15. Relative importance of different conditions. (Scatter plot of the average travel rate semivariance of the 5-minute periods, in (s/mi)^2, against frequency of occurrence in 5-minute periods per year, with points for demand, weather, special event, and incident conditions; the labeled outliers are high-congestion points on I-5, SR-15, and SR-163.)

Combinations that lie on the outer boundary of this plot are important conditions on which to focus. Figure 4.15 shows a few conditions that merit significant attention to mitigate low-probability, high-consequence events: demand events on SR-15 and SR-163 under nominally high-congestion conditions, incident events on I-5 and SR-15 under nominally high-congestion conditions, and weather events on SR-15 under nominally high-congestion conditions. Mitigating strategies would have a significant payoff for these situations. Interestingly, and perhaps unexpectedly, it is the SR-15 route rather than the SR-163 route that may deserve the most attention in terms of managing significant consequences of nonrecurring events. That is, although the SR-163 route certainly has reliability problems, it does not surface in Figure 4.15 as the route that produces the most significant reliability problems.

DETERMINE WHEN A ROUTE IS UNRELIABLE (AP5)

In this use case, the agency planner wants to see when a route's travel time reliability is unacceptable. The planner can use this analysis for a variety of purposes:

• To determine whether travel time variability is an all-day problem or is confined to specific time periods;
• To decide where and when to implement corrective measures (e.g., ramp metering) to help mitigate congestion-induced variability;
• To determine where and when HOV or high-occupancy toll lanes could be an alternative that provides more consistent travel times for carpools or paying drivers; or
• To see what can be done in rural areas to mitigate the impacts of beach traffic in the summer or recreational skiing traffic in the winter.

Table 4.12 summarizes this use case.

TABLE 4.12. DETERMINE WHEN A ROUTE IS UNRELIABLE (AP5)

User      Agency planner
Question  When does a route have unreliable travel times?
Steps     1. Select the route for which the reliability assessment is desired.
          2. Select a metric to assess reliability and a value to be used to distinguish between reliable and unreliable operation.
          3. Determine the time frame for the analysis (e.g., a year, a quarter, a season; only weekdays; peak hours or all times).
          4. Assemble TR-PDFs for the route for the time period and system operating conditions of interest.
          5. Determine the times when the route has unreliable travel times.
          6. Search for reasons why the route might have had unreliable travel times under those conditions (e.g., weather, incidents, work zones).
          7. Create a list of those reasons and exhibits that show the percentage of time during which those conditions exist.
Inputs    TR-PDFs for the route and across time for the date ranges of interest and other specifications desired.
Result    A list of those reasons and exhibits that show the percentage of time during which those conditions exist.
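Steps 2 and 5 can be sketched as follows. The report's SV metric is developed in Chapter 3; this sketch assumes a common semivariance form (the mean squared upward deviation of 5-minute travel rates from their mean) and flags a regime as unreliable when its SV exceeds the threshold of 100 used later in this use case.

```python
# AP5 Steps 2 and 5 sketch: compute a semivariance (SV) for each
# time-of-day regime and flag regimes whose SV exceeds a threshold.
# Assumption: SV = mean squared upward deviation of travel rates (s/mi)
# from their mean; the report's own SV definition is in Chapter 3.

def semivariance(rates):
    mean = sum(rates) / len(rates)
    upward = [(r - mean) ** 2 for r in rates if r > mean]
    return sum(upward) / len(rates) if upward else 0.0

def unreliable_regimes(rates_by_regime, threshold=100.0):
    """rates_by_regime: {regime: [5-minute travel rates in s/mi]}.
    Returns only the regimes whose SV exceeds the threshold."""
    return {regime: sv
            for regime, rates in rates_by_regime.items()
            if (sv := semivariance(rates)) > threshold}

demo = {"am_peak": [60, 61, 59, 62, 60],      # steady rates: tiny SV
        "pm_peak": [60, 95, 130, 70, 115]}    # volatile rates: large SV
flagged = unreliable_regimes(demo)            # only "pm_peak" is flagged
```

Applied per hour of day and per operating regime, this is the mechanical core of deciding "when" the route is unreliable; Steps 6 and 7 then attach explanations to the flagged periods.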

This use case can be addressed using any one of the routes examined in Appendix D (I-5, SR-15, SR-163, and I-8). The metric can again be the SV, and the time frames can be those for which data were available: all of 2011 in the case of I-5, SR-15, and SR-163; and November 3, 2008, until February 27, 2009, in the case of I-8. The SV data, analyzed in various use cases in Appendix D, suggest the following:

• I-5 is unreliable only during the p.m. peak.
• SR-15 is somewhat unreliable during midday and significantly unreliable during the p.m. peak.
• SR-163 is more unreliable during the midday and equally unreliable during the p.m. peak.
• I-8 is unreliable during the a.m. peak and, to a lesser extent, during the p.m. peak and into the early evening.
• The reasons for unreliability are predominantly the following:
— High congestion during the p.m. peak (in the case of I-5, SR-15, and SR-163) and the a.m. peak (in the case of I-8);
— Weather, which is a significant source of unreliability for all four routes, especially during regimes involving high congestion, but for other regimes as well;
— Incidents, although they predominantly have an impact during regimes involving high congestion; and
— Special events, especially during the early evening on I-8.

Table 4.13 and Table 4.14, respectively, provide hour and percentage breakdowns of the times (regimes) when each facility's average SV exceeds 100. Notice that the trends are quite different for I-8 versus the other three facilities. I-8 has all of its hours of unreliable operation during nonrecurring events (weather, special events, and incidents) and none during normal operation, whereas the other three routes have most (more than 76%) of their unreliable operation during conditions of high-congestion, normal operation. Moreover, I-8 has no percentages attributable to demand conditions, but the other three do.
BE ALERTED WHEN THE SYSTEM IS STRUGGLING WITH RELIABILITY (MM2)

The user wants to know when the travel times on a facility have become unreliable or are about to do so. Consistent with the discussions in MM1, this alert tells the user the system is entering a condition in which travelers cannot achieve the travel times they want because the congestion is too high (travel is constrained), or the system's ability to provide consistent travel times has become low, or both. A roadway system manager might use this information as the basis for sending out alerts to variable message signs or route guidance devices. Table 4.15 summarizes this use case.

TABLE 4.13. HOURS OF UNRELIABLE OPERATION BY REGIME AND FACILITY

Route     Condition    Normal       Demand       Weather      Special Events  Incidents    Total
                       SV    Hours  SV    Hours  SV    Hours  SV     Hours    SV    Hours  Hours
I-5       Uncongested  7     0      60    0      46    0      111    11       172   24     35
          High         205   1,065  1,415 39     2,563 15     1,399  9        1,769 39     1,167
SR-15     Uncongested  15    0      47    0      68    0      29     0        139   5      5
          Low          27    0      118   9      106   16     0      0        97    0      25
          Moderate     46    0      127   1      151   23     0      0        93    0      24
          High         241   1,160  2,415 55     3,751 14     3,113  14       3,032 49     1,292
SR-163    Uncongested  11    0      13    0      61    0      21     0        54    0      0
          Moderate     56    0      169   43     399   28     601    29       684   30     129
          High         261   1,064  1,789 86     1,924 21     1,424  20       1,385 80     1,271
I-8 WB    Uncongested  5     0      16    0      27    5      0      0        17    0      0
          Low          9     0      21    0      101   51     20     0        24    0      51
          Moderate     11    0      35    0      80    0      473    3        337   10     13
          High         45    0      50    0      1,180 15     0      0        805   20     35

TABLE 4.14. PERCENTAGE OF UNRELIABLE OPERATION BY REGIME AND FACILITY

Route     Condition    Normal     Demand     Weather    Special Events  Incidents  Total
                       SV    %    SV    %    SV    %    SV     %        SV    %    (%)
I-5       Uncongested  7     0    60    0    46    0    111    1        172   2    3
          High         205   89   1,415 3    2,563 1    1,399  1        1,769 3    97
SR-15     Uncongested  15    0    47    0    68    0    29     0        139   0    0
          Low          27    0    118   1    106   1    0      0        97    0    2
          Moderate     46    0    127   0    151   2    0      0        93    0    2
          High         241   86   2,415 4    3,751 1    3,113  1        3,032 4    96
SR-163    Uncongested  11    0    13    0    61    0    21     0        54    0    0
          Moderate     56    0    169   3    399   2    601    2        684   2    9
          High         261   76   1,789 6    1,924 2    1,424  1        1,385 6    91
I-8 WB    Uncongested  5     0    16    0    27    0    0      0        17    0    0
          Low          9     0    21    0    101   52   20     0        24    0    52
          Moderate     11    0    35    0    80    0    473    3        337   10   13
          High         45    0    50    0    1,180 15   0      0        805   21   36
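The percentages in Table 4.14 follow directly from the hours in Table 4.13: each cell's hours of unreliable operation are divided by the facility's total hours and rounded to the nearest whole percent. A minimal sketch using the two I-5 rows:

```python
# Derive Table 4.14 from Table 4.13 for one facility: divide each cell's
# hours of unreliable operation by the facility's total hours.
# Data: the I-5 rows of Table 4.13 (hours only).

hours = {
    ("Uncongested", "Normal"): 0,  ("Uncongested", "Demand"): 0,
    ("Uncongested", "Weather"): 0, ("Uncongested", "Special Events"): 11,
    ("Uncongested", "Incidents"): 24,
    ("High", "Normal"): 1_065,     ("High", "Demand"): 39,
    ("High", "Weather"): 15,       ("High", "Special Events"): 9,
    ("High", "Incidents"): 39,
}

total = sum(hours.values())   # 1,202 hours of unreliable operation on I-5
pct = {cell: round(100 * h / total) for cell, h in hours.items()}
# pct[("High", "Normal")] reproduces the 89% shown for I-5 in Table 4.14.
```

Running the same division over the other facilities' rows reproduces the remainder of Table 4.14 (to rounding).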

TABLE 4.15. BE ALERTED WHEN THE SYSTEM IS STRUGGLING WITH RELIABILITY (MM2)

User      Roadway system manager
Question  Has a route or system become unreliable?
Steps     1. Select the segments or routes being monitored.
          2. Select the conditions for which notification is desired.
          3. Design the test that will be used to identify the condition selected.
          4. Monitor the TT-PDFs (or TR-PDFs) to see if an unreliable condition has arisen.
Inputs    Real-time information about the status of the segment or route, plus historical TT-PDFs for the segments and routes under surveillance. In addition, real-time information about the network conditions as explanatory variables.
Result    An alert message that displays the facility and location where the travel time reliability is adverse and TT-CDFs that compare the current segment or route travel time against the ones that would be expected.

Note: TT-CDF = travel time cumulative density function.

This use case is similar to incident detection or identification of times when the system's behavior has become unstable. As in use case MM1 and elsewhere, when the system is under stress due to heavy congestion or adverse nonrecurring conditions, individual vehicle travel times become less variant (i.e., more consistent) because the congestion keeps people from traveling at the speeds they want. Moreover, the system struggles to provide the same travel times each time these conditions occur: it cannot control the manner in which vehicles interact or the effects of the nonrecurring event on system capacity.

The steps in the use case are as follows. Step 1 is to select the segments or routes to be monitored; I-5 southbound in Sacramento will be employed. Step 2 is to select the conditions for which notification is desired; the choice will be any condition in which the facility has difficulty letting users travel at their desired speeds. Step 3 is to design the test that will be used. Step 4 is to monitor the TT-PDFs (or TR-PDFs) to see if reliability is suffering. The data collected will be used ex post facto to see when reliability was affected.

Based on data for individual vehicle travel times, it appears that the lower-percentile travel times (the higher speeds) are affected earliest and the most (i.e., the higher-percentile speeds decrease the most). Without question, the other percentile travel times also change, but not as dramatically, at least not initially. That is, the lower-percentile travel times provide the earliest sign that the system is entering an unreliable condition. As the system becomes more heavily loaded from either congestion or a nonrecurring event, it loses the ability to permit drivers to achieve the lowest travel times. It cannot let those vehicles thread their way through the traffic stream. Either the traffic density is too high, or the nonrecurring event has interfered with that ability (e.g., as a result of queuing).

The clearest evidence of this can be seen in time traces. Figure 4.16 shows the percentiles of individual vehicle travel times for I-5 southbound in Sacramento on March 1, 2011. Additional plots of the same data can be found in Figures 4.5, 4.6, and 4.7. One can see that as incidents occur or as the system becomes more heavily loaded, the 5th percentile travel time increases.

The first event occurs at about 11:00, when the travel times abruptly rise and then return to normal. This was an incident. The second event starts at about 15:00, when the travel times begin to increase in advance of the p.m. peak. In the 11:00 instance, notice that all the percentile travel times increase without any advance warning; however, in the 15:00 instance, the 5th percentile travel time begins to increase (and the standard deviation begins to decrease) well in advance of the changes in the other percentile travel times.

The message seems clear. In the case of recurring congestion, the low-percentile travel times (e.g., the 5th percentile) and the standard deviation both provide leading indications that a period of congested operation is approaching. The travel times (from day to day) are about to become unreliable in the sense that people will not be able to achieve their desired travel times, and the travel time they experience from one instance to the next will not be the same. However, when an unexpected nonrecurring event such as an incident affects the system's operation, it does so abruptly, without leading indications (from the 5th percentile travel time or the standard deviation) that conditions are about to change. In addition, in the latter case, when the event (especially an incident) creates a bottleneck that affects all drivers, the travel times not only all increase, but they become consistent. When the event affects only some lanes (and thus only some drivers), travel times increase but with a significant variation in the

Figure 4.16. Trends in travel time percentiles and their standard deviations. (Time traces for I-5 southbound in Sacramento on March 1, 2011, from 10:00 to 18:00, plotting the tAvg, t(5), t(25), t(50), t(75), and t(95) travel time percentiles in minutes, 10×StdDev, and the R-Flag alert indicator.)
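Traces like those in Figure 4.16 can be built by binning individual vehicle travel times into 5-minute periods and computing the plotted percentile levels and the standard deviation per bin. A minimal sketch; the sample data and the nearest-rank percentile convention are illustrative assumptions, not the report's exact procedure.

```python
import statistics

# Sketch of building the Figure 4.16 traces: for each 5-minute bin of
# individual vehicle travel times (minutes), compute the percentile
# levels plotted in the figure plus the standard deviation.
# Sample data are invented for illustration.

def percentile(sorted_vals, p):
    """Nearest-rank percentile (one common convention among several)."""
    k = max(0, min(len(sorted_vals) - 1,
                   round(p / 100 * len(sorted_vals)) - 1))
    return sorted_vals[k]

def summarize_bin(travel_times_min):
    vals = sorted(travel_times_min)
    return {"t5": percentile(vals, 5),  "t25": percentile(vals, 25),
            "t50": percentile(vals, 50), "t75": percentile(vals, 75),
            "t95": percentile(vals, 95),
            "std": statistics.pstdev(vals)}

one_bin = [4.1, 4.3, 4.2, 4.6, 5.0, 4.4, 4.8, 6.2, 4.5, 4.9]
stats_11am = summarize_bin(one_bin)
# stats_11am["t5"] tracks the fastest vehicles; a sustained rise in it
# (with a falling std) is the leading indicator discussed above.
```

Repeating this for every 5-minute period of the day yields the t(5) through t(95) and standard deviation traces against which an alert test can be run.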

travel times achieved. People in the less-affected lanes are able to achieve significantly shorter travel times than those in the more-affected lanes. Finally, one thing remains true in all conditions: the lower-percentile (e.g., 5th percentile) travel times are always affected, either in advance of the full-fledged condition (e.g., when congested operation occurs) or immediately on its onset (e.g., with a nonrecurring event during otherwise uncongested operation).

The implication of these observations is that a test that identifies these periods of unreliable operation can be predicated on the lower-percentile travel times. One test will probably not fit all conditions (most likely such tests need to be tuned to the facilities being observed), but a test based on the lower-percentile travel times is likely to always work. Tests based on reductions in the standard deviation will also work for recurring event conditions, and such tests appear to provide an even earlier warning that conditions are in the process of changing.

The first transient shown in Figure 4.16 is an unexpected incident that blocked all of the lanes. The increased travel times toward the right side of the graph represent the p.m. peak period. There, the 5th percentile travel times start to rise, and the standard deviation starts to drop. By contrast, the 75th percentile travel times do not change much. The graph shows that the standard deviation goes down, not up, when unreliable conditions arise.
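The three-condition test used to produce the R-Flag trace in Figure 4.16 checks whether the 5th percentile travel time averaged over four successive observations exceeds 4.7 minutes, whether the current 5th percentile exceeds 5 minutes, or whether the current 75th percentile exceeds 6 minutes. A minimal sketch of that test (the thresholds are the ones tuned for this I-5 example and would need retuning for other facilities):

```python
from collections import deque

# Sketch of the three-condition reliability alert: flag a 5-minute period
# when (a) the mean of the last four 5th percentile observations exceeds
# 4.7 min, (b) the current 5th percentile exceeds 5 min, or (c) the
# current 75th percentile exceeds 6 min.

class ReliabilityAlert:
    def __init__(self, t5_mean_thresh=4.7, t5_thresh=5.0, t75_thresh=6.0):
        self.t5_history = deque(maxlen=4)   # last four t5 observations
        self.t5_mean_thresh = t5_mean_thresh
        self.t5_thresh = t5_thresh
        self.t75_thresh = t75_thresh

    def update(self, t5, t75):
        """Feed one observation (minutes); return True if the flag is raised."""
        self.t5_history.append(t5)
        rolling_mean = sum(self.t5_history) / len(self.t5_history)
        return ((len(self.t5_history) == 4
                 and rolling_mean > self.t5_mean_thresh)
                or t5 > self.t5_thresh
                or t75 > self.t75_thresh)

alert = ReliabilityAlert()
quiet = [(4.2, 5.1), (4.3, 5.2), (4.1, 5.0)]
flags = [alert.update(t5, t75) for t5, t75 in quiet]   # no flags yet
onset = alert.update(4.9, 6.3)   # 75th percentile jump raises the flag
```

The 75th percentile condition is the one that catches incidents affecting some but not all lanes, since the fastest vehicles (the 5th percentile) may still get through largely unimpeded.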
Three different phenomena can be represented by this type of graph, of which two can be seen in Figure 4.16:

• Recurring congestion over all lanes (apparent on the right side of Figure 4.16), or recurring congestion over some lanes but not all in a compound pattern;
• An incident that affects all vehicles, increasing all travel times (shown on the left side of Figure 4.16); and
• An incident that affects one lane but not all vehicles. This is particularly visible when curb lanes are affected, since the 5th percentile does not change much, but the 75th and 95th percentiles are significantly affected.

The results of applying one test are shown at the bottom of Figure 4.16. The test involves checking three conditions: the average of the 5th percentile travel time in four successive observations rises above 4.7 minutes, the 5th percentile rises above 5 minutes in the current observation, or the 75th percentile travel time rises above 6 minutes. The last condition seems to catch instances in which some but not all lanes are affected by an incident.

SUMMARY

The case studies and use case analyses presented in this chapter illustrate the techniques and potential applications for a TTRMS. Further details can be found in Appendices C and D.
