
Analytical Procedures for Determining the Impacts of Reliability Mitigation Strategies (2012)

Chapter 9 - Conclusions and Recommendations

Suggested Citation:"Chapter 9 - Conclusions and Recommendations." National Academies of Sciences, Engineering, and Medicine. 2012. Analytical Procedures for Determining the Impacts of Reliability Mitigation Strategies. Washington, DC: The National Academies Press. doi: 10.17226/22806.
Findings and Products of the Research

Data Set Compilation and Usage

A large and comprehensive data set was compiled with which to conduct the research. The data set will be of use for future research and for the SHRP 2 data archive being constructed with the L03 data set as its core. The data set includes many levels of aggregation and summarization. The traffic data from urban freeways, which are the largest portion of the data set, include the original measurements from roadway detectors (5-minute intervals by lane) and number in the hundreds of millions of records. The traffic data also are summarized at several spatial and temporal aggregation levels. The most summarized portion of the data set is the one used for the cross-sectional statistical analysis: every record is an annual summary of traffic and reliability characteristics, with annual event characteristics and roadway features merged into it. The data processing included new procedures that the research team created specifically for the project.

The data came primarily from state departments of transportation and included continuous traffic measurements, incidents, work zones, intelligent transportation system equipment, operating policies, and geometric characteristics. In addition, the team purchased a limited amount of private-vendor vehicle probe data for rural freeways and signalized arterials; the rural freeway data were adequate to establish reliability, but the signalized arterial data did not appear to have enough samples, and local signal timing data were not available for the time period of the probe data. Incident data from a second private vendor also were available without a fee; these provided the needed lane blockage data in several locations where public agencies did not collect this type of information.

Fusion and integration of the various data proved to be a daunting and time-consuming task.
The data sets had different georeferencing, which complicated the matching of traffic data, incidents, improvements, and geometric characteristics. Much of the matching had to be done manually. A large amount of testing, quality control, and development of new processing procedures had to be conducted.

The utility of the data set as a research resource was proven several times during the project. Often, the team needed to investigate new areas or compute factors, and these tasks were easily accomplished because the data were analysis ready. It is expected that future researchers will appreciate this feature.

In addition to supporting research, the data set represents an excellent model for practitioners to use in developing performance monitoring systems for congestion and reliability. Specifically, the different levels of temporal and spatial aggregation can be used to support many local requirements. The fusion of traffic, event, and geometric data provides the basis for tracking reliability trends, and it also includes the data required to explain those trends (e.g., demand and events). Data processing for performance monitoring is not trivial, and many different methods and assumptions can be used. The L03 research provides a basis for standardizing those procedures.

Exploratory Analyses

A large variety of exploratory analyses were undertaken before the main analyses to test assumptions, develop data processing methods, and aid in understanding reliability in general. The highlights of these exploratory analyses follow.

Recommended Reliability Metrics

Empirical testing revealed that the performance metrics defined in the early stages of the research were sensitive to the effects of improvements. However, the team noticed that the 95th percentile Travel Time Index (TTI) may be too extreme a value to be influenced significantly by operations strategies and that

the 80th percentile was more sensitive to these improvements. As a result, the 80th percentile TTI was added to the list of reliability performance metrics for the remainder of the research. The final set of reliability metrics, which also are appropriate for general practice, appears in Table 9.1.

Travel Time Distributions

Development of travel time distributions is the starting point for defining reliability metrics and a convenient way to visualize general congestion and reliability patterns for a highway section or trip. Examination of the distributions from the study sections used in this research reveals several characteristics:

• The shape of the travel time distribution for congested peak times (nonholiday weekdays) is much broader than the sharp spike evident in uncongested conditions. The breadth of this broad shoulder of travel times decreases as congestion level decreases;
• Similarly, the tails of the distributions (to the right) appear more exaggerated for the uncongested time slices. However, note that the highest travel times occur during the peaks; and
• Despite the fact that peaks have been defined, some trips occur at close to free flow. More trips are at free-flow speeds in the peak period than in the peak hour, probably because the peak times shift slightly from day to day, as traffic demand can be shifted by events. Also, there are probably some days when overall demand is lower than on others.

Data Requirements for Establishing Reliability

Because reliability is defined by the variability of travel conditions (travel time), it must be measured over a substantial period of time so that all of the influences of random events can be exerted. The obvious question is: how much data are enough? Tests showed that an absolute minimum of 6 months of data is required to establish reliability within a small error rate in areas where winter weather is not a major factor. A full year of data is preferred.
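The metrics in Table 9.1 can be computed directly from an empirical sample of travel times. The sketch below is illustrative only: the function name, dictionary keys, and percentile interpolation are assumptions for this example, not part of the L03 procedures.

```python
import numpy as np

def reliability_metrics(travel_times, free_flow_tt):
    """Compute the Table 9.1 reliability metrics from a sample of travel
    times for one section (e.g., one observation per day over a year).
    Illustrative sketch; an agency implementation may differ in details
    such as percentile interpolation."""
    tt = np.asarray(travel_times, dtype=float)
    tti = tt / free_flow_tt                     # Travel Time Index per observation
    mean_tt, median_tt = tt.mean(), np.median(tt)
    p10, p50, p80, p90, p95 = np.percentile(tti, [10, 50, 80, 90, 95])
    n_worst = max(1, int(round(0.05 * len(tt))))
    worst = np.sort(tt)[-n_worst:]              # highest 5% of travel times
    return {
        # Buffer time (95th percentile minus mean) normalized by the mean.
        "buffer_index": (np.percentile(tt, 95) - mean_tt) / mean_tt,
        # Share of "on-time" trips, here using the 1.1 * median threshold.
        "on_time_1.1mtt": float(np.mean(tt < 1.1 * median_tt)),
        "planning_time_index": p95,             # 95th percentile TTI
        "tti_80th": p80,
        "skew": (p90 - p50) / (p50 - p10),
        "misery_index": worst.mean() / free_flow_tt,
    }

# Example: travel times spread uniformly between 10 and 20 minutes on a
# section with a 10-minute free-flow travel time.
metrics = reliability_metrics(np.linspace(10.0, 20.0, 101), free_flow_tt=10.0)
```

Note that several of these metrics depend only on percentiles of the same distribution, so a single pass over the year of data yields the whole suite.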
Trends in Reliability

A study was undertaken using the Atlanta study sections to track performance for 2006, 2007, and 2008. Between 2006 and 2007, average congestion increased and reliability decreased, using the Planning Time Index and the Buffer Index to measure reliability. Between 2007 and 2008, however, average congestion levels fell on all study sections as demand fell in response to the reduction in overall economic activity; this decrease corresponded to many anecdotal accounts and other analyses of congestion in 2008. Yet on most study sections, the Buffer Index showed an increase or a very marginal decrease, which would indicate that reliability worsened in most cases. In contrast, the Planning Time Index decreased on all sections.

This discrepancy between the indices raised doubts about the use of the Buffer Index as the primary metric for tracking trends in reliability. The problem comes from the way the Buffer Index is calculated: it is the buffer time (the difference between the 95th percentile and the mean) normalized by the mean. In this experiment the 95th percentile decreased less than the mean did, resulting in a higher Buffer Index. In other words, the decreased demand affected all points on the travel time distribution, not just the upper tail. The team believes the mechanism for these changes was a reduction in demand that led to across-the-board decreases in congestion, including days with and without roadway events (disruptions). However, conditions on the worst days, which are primarily a result of severe disruptions, improved to a lesser degree than typical or average conditions. The team expects that operations strategies would have a more pronounced effect on the times influenced by severe events.

The result of this experiment was that the Buffer Index is considered too erratic or unstable for use as the primary reliability metric for tracking performance trends or for studying the effects of improvements.
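The arithmetic behind this instability can be shown with hypothetical numbers (chosen here purely for illustration; they are not the Atlanta values):

```python
def buffer_index(p95, mean):
    # Buffer time (95th percentile minus mean) normalized by the mean.
    return (p95 - mean) / mean

# Hypothetical travel times in minutes, mimicking the pattern described
# above: demand falls, so both the mean and the 95th percentile improve,
# but the mean falls by proportionally more than the 95th percentile.
bi_before = buffer_index(p95=30.0, mean=20.0)   # (30 - 20) / 20 = 0.50
bi_after = buffer_index(p95=27.0, mean=17.0)    # (27 - 17) / 17, about 0.59

# Both points on the distribution improved, yet the Buffer Index rose,
# nominally signaling worse reliability. A percentile-based metric such as
# the Planning Time Index (95th percentile over free flow) falls instead.
free_flow = 12.0
pti_before, pti_after = 30.0 / free_flow, 27.0 / free_flow
```

The example shows why a ratio of two quantities that move together can move opposite to both of them.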
Table 9.1. Recommended Reliability Metrics

Buffer Index (units: %): Difference between the 95th percentile TTI and the average travel time, normalized by the average travel time; alternatively, the difference between the 95th percentile TTI and the median travel time (MTT), normalized by the MTT.
Failure and on-time measures (units: %): Percentage of trips with travel times <1.1 MTT and <1.25 MTT; percentage of trips with space mean speed less than 50, 45, and 30 mph.
Planning Time Index (unitless): 95th percentile TTI.
80th percentile TTI (unitless): Self-explanatory.
Skew statistic (unitless): (90th percentile TTI − median TTI) / (median TTI − 10th percentile TTI).
Misery Index, modified (unitless): Average of the highest 5% of travel times divided by the free-flow travel time.

However, as a secondary metric, it provides useful information and should be included in a suite

of reliability performance metrics. In Atlanta from 2007 to 2008, it might be said that, from the perspective of the user, the new conditions of 2008 were indeed less reliable if one takes the 2008 average congestion as the base level: the worst days (as measured by the 95th percentile) were still out there. If, however, one considers the base level of congestion to be that of 2007, then it is clear that overall, the user's experience improved.

Defining Peak Hour and Peak Period

Most previous studies of reliability and congestion define fixed time periods for the peak hour and peak period. However, for this research, the team decided that the most appropriate method would be to define each term specifically for each study section. Several methods were tested; the most effective used a definition based on the most typical start and end times of continuous congestion. The resulting time slices were reviewed against local anecdotal knowledge and required very little adjustment.

Estimating Demand in Oversaturated Conditions on Freeways

Because the study took an empirical approach to studying reliability, the team had to deal with the thorny issue of how to measure demand, given that volumes measured under congested flow on freeways are actually less than capacity. A method for assigning the demand stored in queues during periods of flow breakdown was developed and used throughout the remainder of the research, particularly in defining the demand-to-capacity ratio for the statistical modeling.

Reliability Breakpoints on Freeways

It was shown that travel time reliability on a freeway is not a function of counted traffic volumes until a breakpoint volume is reached. At that breakpoint, travel time reliability decreases abruptly.
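One simple way to locate such a breakpoint empirically is to bin observations by counted volume and look for the first bin in which travel time variance jumps. This is an illustrative sketch under assumed thresholds, not the L03 team's actual procedure.

```python
import numpy as np

def find_breakpoint(volumes, travel_times, n_bins=20, jump=5.0):
    """Locate a candidate reliability breakpoint: bin observations by counted
    volume and report the left edge of the first bin whose travel time
    variance exceeds `jump` times the variance pooled over all lower bins.
    Illustrative sketch only; bin count and jump factor are assumptions."""
    volumes = np.asarray(volumes, dtype=float)
    travel_times = np.asarray(travel_times, dtype=float)
    edges = np.linspace(volumes.min(), volumes.max(), n_bins + 1)
    for i in range(1, n_bins):
        below = travel_times[volumes < edges[i]]
        inbin = travel_times[(volumes >= edges[i]) & (volumes < edges[i + 1])]
        if len(below) > 1 and len(inbin) > 1 and inbin.var() > jump * below.var():
            return float(edges[i])   # estimated breakpoint volume
    return None

# Synthetic data: stable travel times below 1,800 vph, high variance above.
rng = np.random.default_rng(3)
vol = rng.uniform(500, 2200, size=2000)
tt = np.where(vol < 1800,
              10 + rng.normal(0.0, 0.3, 2000),
              12 + rng.lognormal(0.0, 0.8, 2000))
bp = find_breakpoint(vol, tt)
```

On real detector data, the breakpoint found this way would be expected to vary by location and direction, as described above.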
Once the breakpoint volume is exceeded, the decrease in travel time reliability (increase in the variance) is extreme and so abrupt as to suggest a vertical function, with a nonsingular relationship to further volume increases. The breakpoint volume varies significantly between facilities and even within the same freeway facility (by location and direction of travel), and it does not appear to be a fixed ratio of the theoretical capacity of the subject section of the facility.

The breakpoint in reliability generally occurs at a counted volume significantly lower than the theoretical capacity of the facility computed according to the methodology of the Highway Capacity Manual (HCM). This is partly because the breakpoint volume computed in this analysis was the average hourly volume counted over a peak period, not the peak 15-minute demand used in the HCM capacity calculation. But this peaking effect does not entirely explain the difference between breakpoint and theoretical capacity. Part of the reason the breakpoint volume is significantly lower than the theoretical capacity is that most sections of freeway are upstream of a bottleneck and thus are affected by downstream congestion backing up into the subject section long before the subject section's HCM capacity is reached. Further, traffic-influencing events, especially incidents, effectively lower capacity when they occur, and over time these events cause reliability to degrade. This effect manifests itself in breakpoint volumes lower than those related strictly to physical features. Finally, even for bottlenecks, the data suggest that the reliability breakpoint occurs long before the theoretical HCM capacity of the bottleneck is reached.

Sustainable Service Rates on Freeways

Just as travel times vary over time, capacity is not a fixed value but also varies over time. The same factors that influence reliability also affect capacity variability.
Incidents and work zones reduce overall roadway capacity by blocking lanes and shoulders and by affecting driver behavior (e.g., lower speeds and variable following distances due to rubbernecking). Weather conditions affect driver behavior in similar ways. Capacity probably is not affected by the amount of demand (volume) as reliability is, but it is affected by the nature of that demand. That is, at a microlevel, when volumes are very close to theoretical capacity, variability in driver behavior, small bursts of demand at merge areas (e.g., on-ramps), and the distribution of trucks at specific places and times all probably cause flow to break down at different demand levels. The research did not specifically tease out these factors, but all of them are embedded in the final capacity distributions.

The team developed a large set of capacity distributions that look roughly like travel time distributions, but reversed: the tail of the distribution is skewed to the left (lower capacity values) rather than to the right. Because these distributions were developed from year-long data measurements, they include the effects of many influencing factors, resulting in capacity values that could be used in a stochastic framework to model congestion and reliability. The set of capacity distributions also is a useful construct for accounting for reliability within future versions of the HCM.

Travel Time Distributions on Urban Freeways, Signalized Arterials, and Rural Freeways

An analysis of travel time distributions for different time slices and congestion levels revealed the following characteristics:

• All distributions feature a tail that is skewed to the right (i.e., higher travel times). Most of these abnormally high

travel times can be attributed to one or more of the sources of congestion; that is, they occur in the presence of an event (or events) and/or high demand;
• Uncongested periods are characterized by a sharp peak of travel time frequencies near the free-flow speed;
• When congestion dominates the time slice (e.g., peak hour, peak period), the travel time distribution becomes broader and less peaked;
• Travel time distributions on signalized arterials are uniformly broad in shape, even at relatively low levels of congestion, presumably because of signal delay at even low volumes and interference from side traffic; and
• As trips become longer, travel time distributions assume the typical uncongested shape.

Vulnerability to Flow Breakdown

Examination of the 5-minute data at individual stations (groups of detectors in a direction on a highway segment) reveals an upsurge in the 95th percentile travel times 20 to 45 minutes before the start of what is considered the normal peak period. This upsurge begins before the uptick in average travel times and indicates that this window of time is vulnerable to flow breakdown. These windows are extremely important for operators to focus on, as breakdowns during this time will strongly influence the duration and severity of the peak.

Reliability of Urban Trips Based on the Reliability of Links

For extended travel (trips of 10 to 12 miles) on urban freeways, the reliability of the entire trip can be predicted as a function of the reliability of the links that make up the trip. Although not specifically tested, it should be possible to construct trip reliability for trips that include other types of highways in addition to freeways, subject to the issue of time dependency for long trips.

Before-and-After Studies on Selected Study Sections

The primary goal of the research was to develop relationships for predicting the change in reliability due to improvements.
The best way to accomplish this was with controlled before-and-after studies. However, such analyses are substantially more challenging than what is typically done because of the data requirements: to establish reliability empirically, 6 to 12 months of data are required, with 12 months being the preferred data collection period. This means a long period of continuously collected data is required both before and after the improvement. So, instead of designing traditional before-and-after experiments, the team concentrated on collecting continuous traffic data from areas known from previous experience to have quality data, interesting congestion, and good records of event data. At a minimum, this method of data collection would provide the best data for developing cross-sectional statistical relationships. As it turned out, the team was able to identify 17 cases of improvements that coincided with identified data, although the types of improvements were somewhat limited.

The analysis produced reliability adjustment factors that can be applied to the various improvements. The adjustment factors for a specific type of improvement vary slightly, presumably because background (baseline) conditions are somewhat different. Users are directed to the detailed descriptions of the studies in Appendix B to select the conditions most appropriate for their situation.

A global finding from the before-and-after analyses was that all forms of improvements, including capacity expansion, affect both average congestion and reliability in a positive way (i.e., average congestion is reduced and reliability is improved). Conceptually, this makes sense: one of the seven sources of congestion and reliability identified earlier was the amount of base capacity. All things being equal, more capacity (in relation to demand) means that the roadway is able to absorb the effects of some events that would otherwise cause disruption.
The size of this effect was greater than the team had originally anticipated (see Chapter 8 for a complete discussion). For transportation professionals, the significance of capacity means that, to the extent that reliability is valued more highly than average travel time, a large part of the benefits of capacity-expansion projects has been missed in historical analyses.

Cross-Sectional Statistical Modeling

Going into the project, the team realized that only a limited number of before-and-after studies would be possible. Therefore, much of the effort for the study went into the creation of a cross-sectional data set from which statistical models could be developed. The final analysis data set for the statistical modeling is highly aggregated: each record represents reliability, traffic, and event data summarized for a section for a year. This structure is necessary because reliability is measured as the variability in travel times over the course of a year. As such, the cross-sectional model is a macroscale model. It does not seek to predict the travel time for a particular set of circumstances; that is, it is not appropriate for real-time travel time prediction. Rather, it seeks to predict the overall travel time characteristics of a highway section in terms of both mean and reliability performance. It is, therefore, appropriate for adaptation to many existing models and applications that seek to do the

same, and it can serve as the basis for conducting cost–benefit analyses.

Two model forms were developed: simple and complex. The simple model form relates all of the reliability metrics to the mean TTI for all three highway types studied (urban freeways, rural freeways, and signalized arterials). These relationships are convenient for many applications that produce mean travel time–based measures as output (e.g., traditional travel demand forecasting models, the HCM). Because the mean TTI developed from the research data includes the effects of all possible influences on congestion, it is greater than typical model results (which usually represent typical, nonextreme conditions); an adjustment factor was therefore developed to convert model output to the overall mean TTI so that the relationships can be applied.

A more detailed model form was developed that relates reliability measures to the factors that influence reliability. It has long been theorized that reliability is determined by demand, capacity, incidents, weather, and work zones; in fact, that is what the team found from analyzing the research data set. A tiered predictive model was developed that related the reliability metrics over highway sections (multiple links, usually 4 to 5 miles long) for different time slices to

• The critical demand-to-capacity ratio (the maximum from the individual links);
• Lane hours lost due to incidents and work zones combined (annual); and
• The number of hours during which rainfall was ≥0.05 inch (annual).

The rainfall variable must be computed using weather records. Guidance was developed for how to develop the demand-to-capacity ratio. Lane hours lost was decomposed into a series of subrelationships that can be estimated using easily obtained data.

Congestion by Source

The research team had conducted congestion-by-source analyses in earlier projects, but the data available for those studies were incomplete.
The L03 research offered an opportunity to assemble the data more carefully and to incorporate other data sources. The goal was to capture the contributions of the factors influencing congestion and reliability, as shown in Figure 9.1. The analysis was conducted at a microlevel: data at the 5-minute level were analyzed for possible effects by the sources. An assignment of congestion causality was made for the measured delay in the Seattle data.

Figure 9.1. A model of congestion and its sources.

Taken at face value, the analysis supports the commonly heard statement that "incidents and crashes cause between 40% and 60% of all delay." In reality, a considerable portion of the delay associated with

incidents and crashes is caused by large traffic volumes. Therefore, the amount of delay actually caused by incidents is less than what would be assigned by simply observing the occurrence of events. There were numerous examples in the analysis data set of significant crashes and other incidents that caused little or no congestion because of when they occurred. These showed that without sufficient volume, an incident causes no measurable change in delay.

In the Seattle area, many incidents take place during peak periods, causing already existing congestion to grow worse; this is the result of the interwoven effects of incidents, bad weather, and traffic volumes on travel times. In addition, all types of disruptions to normal roadway performance (rain, crashes, noncrash incidents) cause congestion to start earlier and last longer during the peak period, while increasing travel times during the normally congested times. Incidents and other disruptions also can cause congestion to form during times of the day that are normally free from congestion. However, congestion forms only when the disruption lowers functional capacity below traffic demand. Thus volume, relative to roadway capacity, is a key component of congestion formation, and in urban areas it is likely to be the primary source of congestion. Disruptions then significantly increase the delay that the basic volume condition creates.

The fact that traffic volume is the basis of congestion also affects how various traffic disruptions alter travel patterns. Not only does traffic volume affect whether an incident causes congestion, but it also affects how long that congestion lasts once the primary incident has been removed. The Seattle data showed that in the morning peaks, disruptions have a more noticeable effect on the timing of the end of the peak period, while in the evening the opposite is true.
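A causality assignment of the kind described above can be sketched as a rule-based pass over 5-minute records. The record layout, bucket names, and the 0.7 volume-to-capacity threshold below are illustrative assumptions, not the L03 team's actual procedure; the key idea it demonstrates is that delay co-occurring with an event is credited to that event only when volume is high enough for the event to matter.

```python
def attribute_delay(records, free_flow_tt):
    """Rule-based attribution of 5-minute delay to sources (illustrative
    sketch). Each record is a dict with 'tt' (travel time, minutes),
    'incident' and 'rain' flags, and 'vc' (volume-to-capacity ratio)."""
    totals = {"incident": 0.0, "weather": 0.0, "volume": 0.0}
    for r in records:
        delay = max(0.0, r["tt"] - free_flow_tt)
        if delay == 0.0:
            continue                              # no congestion to attribute
        if r["incident"] and r["vc"] > 0.7:       # threshold is an assumption
            totals["incident"] += delay
        elif r["rain"] and r["vc"] > 0.7:
            totals["weather"] += delay
        else:
            totals["volume"] += delay             # no effective event: demand alone
    return totals

recs = [
    {"tt": 12.0, "incident": True,  "rain": False, "vc": 0.90},  # incident in peak
    {"tt": 8.0,  "incident": True,  "rain": False, "vc": 0.30},  # incident, no delay
    {"tt": 11.0, "incident": False, "rain": False, "vc": 0.95},  # volume alone
    {"tt": 8.0,  "incident": False, "rain": True,  "vc": 0.20},  # rain at free flow
]
shares = attribute_delay(recs, free_flow_tt=8.0)
```

The second and fourth records illustrate the finding in the text: an incident or rain event under low volume produces no measurable delay to attribute.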
In summary, analysis of 42 roadway segments in the Seattle metropolitan area showed that a majority of travel delay in the region is the direct result of traffic volume demand exceeding available roadway capacity. Whenever they occur, incidents, crashes, and bad weather add significantly to the delays that can otherwise be expected. The largest of these disruptions play a significant role in the worst travel times that travelers experience on these roadways. However, the relative importance of any one type of disruption tends to vary considerably from corridor to corridor.

In peak periods, incidents add only marginally to total delay, but they shift when and where those delays occur, as well as who suffers from those delays. That is, many incidents shift where a normally occurring bottleneck forms, freeing up some roadway sections while causing others to suffer major increases in congestion. But taken as a whole, if a section is already normally congested, the added delay from incidents is modest (at least in Seattle) compared with the daily delay from simply too many vehicles for the available physical capacity.

In congested urban areas, traffic incidents often cause unreliable traffic patterns more than they increase total delay. Although the total delay value does go up, the big change is often the shift in who gets delayed. For a specific severe incident, many travelers may value the extra (unplanned) delay very highly, and they are very likely to remember these extreme cases. Some of that (total) delay is offset by other travelers who reach their destinations early because their trips are downstream of the incident-caused bottleneck, and volume has probably been metered by that bottleneck.

Significance of Demand for Reliability Estimation

A major result of the research was the finding that demand (volume) is an extremely important determinant of reliability, especially in terms of its relation to capacity.
As shown in Figure 9.1, demand's interaction with physical capacity is the starting point for determining congestion. The research team initially postulated that the effect of most events is determined by the level of demand under which those events occur. For example, if an incident or work zone blocks a traffic lane, the impact will be felt only if volumes are high enough to be affected by the lost capacity. However, the team did not expect demand to have as strong an effect as the analyses indicated. Throughout the different analyses conducted for the L03 research, demand kept emerging as a significant factor. The case for the strong effect of demand (volume) is summarized as follows:

• The Atlanta trend analysis revealed that roughly a 3% drop in demand significantly improved both average congestion level and reliability between 2007 and 2008.
• The before-and-after studies of capacity improvements showed a strong improvement in reliability, not just average congestion. The team believes the mechanism for this improvement is a simultaneous change in capacity in relation to demand (the demand-to-capacity or volume-to-capacity ratio), so a change in either will produce the same effect. This simultaneous effect was subsequently verified in the cross-sectional statistical models.
• The Seattle congestion-by-source analysis revealed that a substantial portion of delay could not be attributed to an event, even with careful consideration of off-section conditions and special events. This leaves demand as the sole cause. The Seattle analysis also shows that incidents during low-demand periods have only a small effect on congestion.
• The midday cross-sectional models did not show lane hours lost due to incidents and work zones as a statistically significant independent variable, indicating that under low-volume conditions (i.e., conditions in which volumes are low relative to the available physical capacity), the annual

effect of disruptions is small. Extreme disruptions (e.g., multiple lane closures) clearly will have an effect on an individual day, but over the course of a year these events are rare and do not appear to move the annualized reliability metrics very much at all.
• The peak hour and peak period cross-sectional models showed that the demand-to-capacity ratio was a stronger contributor to the model than lane hours lost.

The influence of demand is probably related not only to the sheer volume of traffic but also to its characteristics. As volumes approach theoretical capacity, traffic flow becomes unstable and increasingly susceptible to breakdown due to small changes. These small changes can occur at a point substantially below theoretical capacity, and when they occur near potential bottleneck areas such as on-ramps, weaving areas, and lane drops, the team postulates that their effect is enhanced.

In addition to variations in demand as a source of unreliable travel times, evidence exists that physical capacity is also variable. This variation in physical capacity, which results from disruptions and other factors that can occur on a highway segment, was observed by the research team throughout the course of a year. However, the work of Brilon and preliminary research conducted by other SHRP 2 contractors suggest that capacity varies even in the absence of disruptions (1).

Why would physical capacity vary? The team believes that fluctuations in traffic conditions at a microscale are the most likely causal factors. These small changes could be related to:

• Driver behavior: one or a few vehicles can behave aberrantly (e.g., sudden unexplained stops);
• Truck presence: a small increase in trucks in the traffic stream at a given point in time and space could have a detrimental effect; and
• Microbursts of merging traffic: a small but intense influx of vehicles from an on-ramp could be enough to cause flow breakdown.
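The idea of treating capacity as a left-skewed distribution rather than a fixed value, as described in the discussion of sustainable service rates, can be sketched with a small Monte Carlo experiment. The distribution form and its parameters below are illustrative assumptions, not L03 estimates.

```python
import numpy as np

def breakdown_probability(demand_vph, base_capacity_vph, n=100_000, rng=None):
    """Monte Carlo sketch of a stochastic capacity framework: capacity is a
    base value minus a lognormal shortfall, giving the left-skewed capacity
    distribution described in the text. Shortfall parameters (median about
    150 vph, heavy upper tail) are assumptions chosen for illustration."""
    rng = rng or np.random.default_rng(7)
    shortfall = rng.lognormal(mean=5.0, sigma=1.0, size=n)  # vph lost to disruptions
    capacity = base_capacity_vph - shortfall
    # Flow breaks down whenever demand exceeds the realized capacity.
    return float(np.mean(demand_vph > capacity))

# Breakdown risk rises steeply as demand approaches the nominal capacity,
# well before demand actually reaches it.
p_low = breakdown_probability(1500, base_capacity_vph=2200)
p_mid = breakdown_probability(1900, base_capacity_vph=2200)
p_high = breakdown_probability(2100, base_capacity_vph=2200)
```

This mirrors the breakpoint finding above: because the capacity distribution has a long lower tail, substantial breakdown probability appears at demands well below the nominal capacity.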
The finding that demand and capacity strongly influence travel time reliability has several implications:

• The mechanism for the influence of demand and capacity on travel time reliability can be seen in the before-and-after studies. Consider the distribution of travel times that occurs on a routinely congested highway segment over the course of a year. Capacity additions and demand reductions will reduce nearly all the travel times in the congested portion of the distribution and will improve congestion on nearly all days; capacity and demand are always present in the roadway environment. In contrast, strategies geared to disruptions (e.g., incident management) will only affect congestion when those disruptions occur, and disruptions will not appear during every congested period of every day. In other words, only selected travel times in the congested portion of the distribution will be reduced by strategies such as incident management;
• It is clear that traditional capacity projects improve reliability, and failure to account for this effect in economic analyses has excluded benefits to users; and
• Demand management strategies, such as pricing, also will lead to improvements in reliability.

Accounting for volumes in relation to available capacity can provide a tool for efficiently allocating operations strategies, particularly incident management. That is, times and locations that are most vulnerable to flow breakdowns can be targeted.

Reliability as a Feature of Congestion

The intertwined relationship between demand, capacity, and disruptions documented in the L03 research leads to another major conclusion: reliability is a feature or attribute of congestion, not a distinct phenomenon. Because any influence on congestion will lead to unreliable travel, reliability cannot be considered in isolation.
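As suggested above, volume-to-capacity ratios can be used to flag the times most vulnerable to flow breakdown, and hence the times most worth targeting with operations strategies. The following is a minimal sketch of such a screen; the hourly counts, the capacity value, and the 0.85 threshold are all invented for illustration and are not values from this research:

```python
# Hypothetical hourly volumes (veh/h) for one freeway section over a day
hourly_volumes = [900, 700, 600, 650, 1100, 2400, 4100, 4600, 4400, 3600,
                  3200, 3300, 3400, 3500, 3800, 4300, 4700, 4500, 3700, 2600,
                  1900, 1500, 1200, 1000]
capacity_vph = 4800   # assumed physical capacity of the section
VULNERABLE_VC = 0.85  # assumed v/c level at which flow becomes breakdown-prone

# Flag the hours whose volume-to-capacity ratio meets or exceeds the threshold
vulnerable_hours = [
    (hour, round(volume / capacity_vph, 2))
    for hour, volume in enumerate(hourly_volumes)
    if volume / capacity_vph >= VULNERABLE_VC
]
print(vulnerable_hours)  # the periods most worth targeting, e.g., with incident management
```

In practice the same screen could be run per section and per time-of-day slice, so that incident management resources concentrate on the section-hours where a disruption is most likely to trigger flow breakdown.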
Going into the research, the project team’s thinking, like that of the profession in general, was that reliability related primarily to disruptions and the operational treatments aimed at those disruptions. The analysis showed that even in the absence of disruptions, a substantial amount of variability (i.e., unreliability) in travel times exists for recurring-only (bottleneck-related) conditions. Therefore, the most inclusive view of travel time reliability is that it is part of overall congestion. Just as congestion can be defined by extent and severity, it can also be defined by how it varies over time. Operational treatments are clearly effective in dealing with unreliable travel, but so are other congestion-relief measures.

Recommendations for Future Research

Based on the results of this study, the team offers the following suggestions for future research.

Detailed Examination of Reliability Causes and Prediction on Signalized Arterials

Because of data limitations (few signalized arterials with continuous travel time data, limited data on those that had it, no continuous volume data to match against the available travel time data, and no information on incident and work zone characteristics), only simple analyses using travel time data from signalized arterials could be undertaken for this study. However, since the completion of data collection for this research, it has become clear that data availability is about to increase dramatically. Private vendors of vehicle probe data have improved their data processing methods and increased their sources of travel time data in the past 18 months. As a result, many states already have purchased statewide private-vendor probe data, primarily for traveler information applications. Like freeway detector data, these data have value in developing performance measures and supplying research studies after the fact. This trend is expected to continue as new sources, perhaps even consumer sources, continue to be added to vendors’ products. In addition, new and relatively inexpensive technologies for collecting travel times on signalized highways, such as Bluetooth readers and vehicle signature detectors, offer great potential for new forms of traffic management applications by public agencies.

Effective Collection of Systemwide Demand Data

The study was possible because traditional urban freeway detectors collect both speeds and volumes. However, if the newer sources of speed and travel time data discussed above become widespread, there will be no companion volume measurements until the number of vehicles detected approaches 100%. The L03 research has shown that demand is a vital determinant of reliability. Further, from an operations viewpoint, emerging methods such as active traffic management are likely to require more, not less, data (travel times and volumes) to feed their control processes.

Consistency in Data Collection for Incidents and Work Zones

The research team labored mightily to find and process incident and work zone data to match against the traffic measurements. The duration of blockages (recognizing that the nature of blockages can change over the course of a single event) was the critical piece of data required.
Also, consistency in geocoding of events, traffic detectors, and roadway features would greatly enhance future research. An extra complication is the fact that private vendors (at least the two used in this research) use the Traffic Message Channel standard for geolocation, a standard that is almost never used by public agencies. To avoid the large amount of manual intervention endured by the team (which would be even more onerous for public agencies trying to deal with the issues systemwide rather than on selected study sections), consideration should be given to how all of these data should be collected, organized, and related to each other. The development of new standards or the extension of existing ones may be required to accomplish this goal.

Development of Alternative Reliability Concepts for Extreme Events

As developed in this research, the concept of reliability is part of the urban congestion problem. That is, it has been studied on highways that experience routine congestion from both recurring and nonrecurring sources. The working definition used was that reliability is a description of how travel times vary over time. It was noted that extreme events (disruptions) such as major snow or ice storms, hurricane evacuations, and full highway closures do not have statistical significance in predicting reliability, which, by definition, is measured over the course of a year. Because they are so rare, they shift the annual travel time distribution by only a small amount. However, these extreme events are extremely important to both transportation agencies and travelers, even if their occurrence is rare. If the urban congestion–based reliability concepts cannot describe these events, then an alternative should be explored.

Standard Data Processing Methods for Developing Congestion and Reliability Performance Measures

To conduct the research, data processing procedures had to be developed to generate reliability performance metrics.
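As an illustration of what such processing involves, widely used reliability metrics such as the Planning Time Index and Buffer Index can be derived from a year of observed travel times in just a few lines. The sketch below uses invented sample data and an assumed free-flow time; the metric definitions follow common practice rather than a prescription from this report:

```python
import statistics

def reliability_metrics(travel_times_min, free_flow_min):
    """Compute two common reliability metrics from a list of travel times.

    Planning Time Index: 95th percentile travel time / free-flow travel time.
    Buffer Index: extra margin (95th percentile minus mean) as a fraction of the mean.
    """
    mean_tt = statistics.mean(travel_times_min)
    # statistics.quantiles with n=20 returns the 5th, 10th, ..., 95th percentiles;
    # the last cut point is the 95th percentile
    p95 = statistics.quantiles(travel_times_min, n=20)[-1]
    return {
        "planning_time_index": p95 / free_flow_min,
        "buffer_index": (p95 - mean_tt) / mean_tt,
    }

# Hypothetical peak-period travel times (minutes) on a 10-minute free-flow section
tt = [12, 13, 12, 15, 22, 13, 14, 30, 12, 13,
      16, 13, 12, 14, 25, 13, 12, 13, 14, 13]
print(reliability_metrics(tt, free_flow_min=10.0))
```

Note that even this short sketch embeds choices (percentile interpolation method, time-of-day window, treatment of missing data) that different agencies could make differently, which is precisely why consistent, standardized processing methods matter.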
These metrics are likely to be used on their own in many other transportation applications. However, a large amount of leeway exists in how the metrics can be developed from field data. As congestion performance monitoring becomes more widespread, and perhaps even federally mandated, the need to produce consistent metrics will become critical.

Improved Methods for Microlevel Weather Data Collection

The locations of the weather observations used in the study relative to the study sections were admittedly crude. The assumption was that data from the closest National Weather Service station observations would apply to the study sections, when they could be several miles apart. This assumption probably led to misallocation of rainfall occurrence in at least some cases, but major weather fronts are most likely accounted for in the data. However, the team believes that better methods can be explored. In lieu of deploying weather stations at regular intervals, which would be prohibitively expensive, one promising method is the automated processing of time-lapse radar information to obtain precipitation data.

Reliability of Trips

At the beginning of the study the team selected the extended highway section as the basic unit of analysis. Relatively homogeneous highway sections in terms of geometrics, typically covering 4 to 5 miles for urban sections (with much longer lengths for the few rural freeway sections), were chosen. These study sections were chosen because this is the level at which the data were available and because they can be used by many existing applications. However, for several reasons, calculating the reliability of an entire trip is likely to be quite different. First, with few exceptions, the study sections were selected because they had relatively high volumes and were moderately congested during peak times; that is, they represent the worst conditions that can be encountered by a user making an entire trip. This means that a trip-based travel time distribution is likely to gravitate toward one that shows less congestion and better overall reliability. An additional complication is the scheduling component: if a trip can start within a window of time as opposed to at a specific time, users can in theory improve the travel time and reliability of their trip. Research is needed on these subjects, specifically on how they affect investment decisions. That is, the facility focus suggested by the L03 perspective leads to a certain set of investments (improvements). If the focus is changed to the entire trip (i.e., trips, as well as facilities, are managed), how do the investment decisions change?

Before-and-After Studies for Demand Management, Active Traffic Management, and Institutional Aspects of Incident Management

Reliability evaluations in the style of this study (with long before-and-after periods) should be undertaken as these types of projects are deployed. In addition to observing changes in congestion and reliability, these future studies should report the changes in the independent variables for the L03 cross-sectional statistical models (demand, capacity, and the characteristics of incidents and work zones).
The present study noted that various degrees of institutional arrangements and policies related to incident management should have a positive effect on incident duration, which can then be related to reliability via the statistical models. The idea is that, beyond the deployment of equipment, the success of incident management will be determined by how agency agreements and policies translate to reductions in incident duration in the field.

Real-Time Predictive Models

A potentially useful corollary to the macrolevel reliability relationships developed in the L03 effort is the development of models that would relate the congestion level on a specific day to the contributing factors. Such models would provide travel time prediction for a given set of circumstances rather than reliability prediction, but they would provide a useful tool for traffic managers. The L03 data set could be used as a starting point for this research, although based on the team’s experiences with the congestion by source analysis, more microlevel data on traffic flow and events might be necessary (e.g., 30-second to 1-minute volumes and speeds). A microlevel examination of traffic flow breakdown would provide great insight into the causes of congestion.

Expand on the Concept of Whole-Year Capacity

The L03 research demonstrated that capacity varies substantially. The concept of whole-year capacity, touched on in the L03 exploratory analyses, is worth pursuing further. Because many predictive models (including travel demand forecasting and macroscopic and mesoscopic simulation models) use the concept of capacity as a starting point for determining congestion, whole-year capacity may be an entry point for incorporating reliability into these models. That is, instead of using a fixed capacity, model runs could use whole-year capacity distributions stochastically.
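One way such stochastic use of a whole-year capacity distribution might look is sketched below. The observed capacities, the demand level, and the volume-delay function (a BPR-style curve, a common planning-model form) are all assumptions for illustration, not outputs of this research:

```python
import random

# Hypothetical whole-year capacity observations (veh/h) for one segment,
# standing in for an empirically derived capacity distribution
observed_capacities = [4800, 4650, 4700, 4300, 4750, 4550, 4400, 4600, 4200, 4700]
demand_vph = 4000
free_flow_time_min = 10.0

def bpr_travel_time(demand, capacity, t0, alpha=0.15, beta=4):
    """BPR-style volume-delay function: t = t0 * (1 + alpha * (d/c)^beta)."""
    return t0 * (1 + alpha * (demand / capacity) ** beta)

random.seed(42)
# Instead of one fixed capacity, draw a capacity from the empirical
# distribution on each model run, yielding a travel time distribution
draws = [random.choice(observed_capacities) for _ in range(1000)]
times = sorted(bpr_travel_time(demand_vph, c, free_flow_time_min) for c in draws)
print(f"mean={sum(times)/len(times):.2f} min, p95={times[int(0.95*len(times))]:.2f} min")
```

The output is a distribution of travel times rather than a single value, from which reliability metrics can be read directly, which is the sense in which whole-year capacity could be an entry point for reliability in existing models.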
Because the whole-year capacity distributions developed from empirical data include all of the possible influencing factors, they represent a more realistic picture of how capacity actually behaves.

Reference

1. Brilon, W., J. Geistefeldt, and H. Zurlinden. Implementing the Concept of Reliability for Highway Capacity Analysis. In Transportation Research Record: Journal of the Transportation Research Board, No. 2027, Transportation Research Board of the National Academies, Washington, D.C., 2007, pp. 1–8. http://trb.metapress.com/content/u700713ur834410r/fulltext.pdf

