National Academies Press: OpenBook
Suggested Citation: "Part 3 - Applications." National Academies of Sciences, Engineering, and Medicine. 2014. Incorporating Reliability Performance Measures into Operations and Planning Modeling Tools. Washington, DC: The National Academies Press. doi: 10.17226/22388.


Part 3: Applications

This part of the report describes two case studies incorporating travel time reliability into microscopic and mesoscopic models and summarizes the findings and conclusions of this research project.

Chapter 8. Analysis Process: Mesoscopic Models

The purpose of this chapter is to demonstrate application of the overall methodology for performing reliability analyses using the framework and tools developed under this project in connection with a mesoscopic traffic simulation model, in this case DYNASMART-P (Mahmassani and Sbayti 2009). The following sections describe the entire procedure for performing the analysis in sequential order: defining, generating, and simulating scenarios; analyzing simulation outputs and extracting reliability statistics; and comparing simulation-based analysis results with observed data.

Defining Scenarios

Defining Spatial and Temporal Boundaries for Evaluating Travel Time Reliability

The spatial domain of interest selected for this application is an area in the New York City region. Figure 8.1 shows the simulation network prepared for the analysis, which covers most of New York City and part of New Jersey. The time domain of interest is the morning period from 6 a.m. until 11 a.m. between May 2, 2010, and May 17, 2010.

Figure 8.1. Study networks: DYNASMART-P New York City network (gray) and Aimsun Manhattan network (black).

Formulating Study Objectives and Defining Scenario Cases

The objective of the case study is to examine the effect of weather on travel time reliability for weekday and weekend traffic. Specifically, we obtain reliability performance measures for the following four scenario cases: Weekdays under Rain (WD-RA), Weekends under Rain (WE-RA), Weekdays under No Rain (WD-NR), and Weekends under No Rain (WE-NR).

Generating Scenarios Using the Scenario Manager

Specific scenarios under each of the four cases may be obtained either by generating random scenarios using the Scenario Manager's Monte Carlo sampling capability or by drawing deterministic scenarios from existing historical sources. This case study uses the former approach: a set of random scenarios is constructed using Monte Carlo sampling for each category. The factors considered as scenario components are weather, incidents, and day-to-day demand variation, as shown in Table 8.1. A detailed description of each scenario component is presented in the following subsections.

Table 8.1. Scenario Components and Input Parameters

Weekdays, No Rain (WD-NR):
  Incident frequency: Poisson, λ(CL) = 0.00136
  Demand: DMF ~ Normal(µ = 1.0, σ = 0.17)
Weekdays, Rain (WD-RA), weather per Figure 8.2:
  Incident frequency: Poisson, λ(LR) = 0.00158; λ(MR) = 0.00204; λ(HR) = 0.00251
  Demand: DMF ~ Normal(µ = 1.0, σ = 0.17)
Weekends, No Rain (WE-NR):
  Incident frequency: Poisson, λ(CL) = 0.00055
  Demand: DMF ~ Normal(µ = 1.0, σ = 0.14)
Weekends, Rain (WE-RA), weather per Figure 8.2:
  Incident frequency: Poisson, λ(LR) = 0.00064; λ(MR) = 0.00083; λ(HR) = 0.00101
  Demand: DMF ~ Normal(µ = 1.0, σ = 0.14)
All cases:
  Incident duration: Gamma(α = 1.210, β = 31.553)
  Incident intensity: empirical PMF, P(0.15) = 0.4, P(0.30) = 0.5, P(0.60) = 0.1

Note: λ(w) = incident rate under weather state w (incidents/hour/lane-mile); CL, LR, MR, and HR denote clear, light rain, moderate rain, and heavy rain; P(x) = probability that the fraction of link capacity lost due to a given incident is x (i.e., remaining capacity is 1 − x); PMF = probability mass function; DMF = demand multiplication factor.

Scenario Specification

Weather

While treating incidents and demand variation as random factors, we control the weather factor in constructing scenarios for this case study. In other words, we create a specific rain scenario and use it for all weather cases (i.e., WD-RA and WE-RA). The rain scenario is based on historical observations, as discussed in the Chapter 6 section Implementation of Scenario Manager, subsection Weather Scenario. The Scenario Manager allows users to supply specific weather time-series data to generate a fixed weather scenario. We used the weather data collected on May 3, 2010, at the ASOS weather station located at LaGuardia Airport. Figure 8.2 shows the 5-hour weather scenario prepared for this case study.

Incidents

Incident properties are characterized using parametric models, as discussed in the Chapter 6 section Implementation of Scenario Manager, subsection Incident Scenario. For frequency, we use a Poisson distribution to model the number of incidents in a given time period. To capture the dependency between weather and incident frequency, we use weather-conditional incident rates; Table 8.1 presents the estimated rate parameters. For incident duration, we specified a gamma distribution based on model-fitting results and estimated two input parameters: shape = 1.210 and scale = 31.553. Incident intensity is expressed as the percentage capacity loss (the fraction of link capacity lost due to the incident). We constructed the empirical probability mass function (PMF) from historical incident data, in which three levels of capacity loss (15%, 30%, and 60%) are considered in conjunction with their probabilities (0.4, 0.5, and 0.1, respectively).

Day-to-Day Demand Random Variation

To understand the day-to-day demand fluctuation pattern, we examined GPS probe data obtained from TomTom; the data cover 16 consecutive days, from May 2, 2010, to May 17, 2010, in New York. We aggregated the observed vehicle trajectories for each day and estimated the variation in daily traffic volume using the demand multiplication factor (DMF) introduced in the Chapter 6 section Implementation of Scenario Manager, subsection Demand Scenario: Day-to-Day Random Variation. Although the available trajectory data represent only a portion of the entire travel demand in the study region, the analysis results provide insight into the characteristics of the respective variations in weekday and weekend traffic levels. Based on the estimation results, we specify the demand multiplication factor for weekdays as a normally distributed random variable with mean = 1.0 and standard deviation = 0.17, and the demand multiplication factor for weekends as a normal random variable with mean = 1.0 and standard deviation = 0.14, as shown in Table 8.1.

Scenario Sampling and Calculation of Scenario Probabilities

Based on the specified parameters for the weather, incident, and demand components, we sampled 10 random scenarios for each scenario category using the Scenario Manager, yielding a total of 40 scenarios to be simulated. The Scenario Manager also calculates the probability of each scenario case, as presented in Table 8.2.

Simulating Scenarios Using DYNASMART-P

Once the input scenarios are prepared, the next step is to simulate them using DYNASMART-P to obtain scenario-specific outputs (i.e., simulated vehicle trajectory data).
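Before scenarios reach the simulator, the Scenario Manager draws them from the parametric models in Table 8.1. A minimal sketch of that sampling step, using NumPy, is shown below; the `lane_mile_hours` exposure value and all function names are illustrative assumptions, not values or interfaces from the report.

```python
import numpy as np

rng = np.random.default_rng(42)

# Parameters transcribed from Table 8.1 (weekday case shown).
INCIDENT_RATE = {"clear": 0.00136, "light_rain": 0.00158,
                 "moderate_rain": 0.00204, "heavy_rain": 0.00251}
GAMMA_SHAPE, GAMMA_SCALE = 1.210, 31.553       # incident duration model
CAPACITY_LOSS_PMF = {0.15: 0.4, 0.30: 0.5, 0.60: 0.1}
DMF_MEAN, DMF_STD = 1.0, 0.17                  # weekday demand factor

def sample_scenario(weather_state, lane_mile_hours):
    """Draw one random scenario: a list of incidents plus a demand factor."""
    # Incident count ~ Poisson with a weather-conditional rate
    # (incidents/hour/lane-mile) scaled by network exposure.
    lam = INCIDENT_RATE[weather_state] * lane_mile_hours
    n_incidents = rng.poisson(lam)
    # Duration ~ Gamma(shape, scale); intensity from the empirical PMF.
    durations = rng.gamma(GAMMA_SHAPE, GAMMA_SCALE, size=n_incidents)
    losses = rng.choice(list(CAPACITY_LOSS_PMF),
                        size=n_incidents,
                        p=list(CAPACITY_LOSS_PMF.values()))
    # Day-to-day demand multiplication factor ~ Normal(mu, sigma).
    dmf = rng.normal(DMF_MEAN, DMF_STD)
    return {"incidents": list(zip(durations, losses)), "dmf": dmf}

# 10 scenarios for one scenario case, mirroring the study design.
# lane_mile_hours=5000 is a placeholder exposure, not a report figure.
scenarios = [sample_scenario("clear", lane_mile_hours=5000)
             for _ in range(10)]
```

Each draw then becomes one DYNASMART-P run; repeating this per scenario case yields the 40 scenarios described above.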
The simulation time horizon for each scenario is 5 hours, from 6 a.m. to 11 a.m.

Table 8.2. Joint and Marginal Probabilities for Scenario Categories

Day of Week   No Rain         Rain            Sum
Weekday       0.400 (WD-NR)   0.265 (WD-RA)   0.665
Weekend       0.265 (WE-NR)   0.070 (WE-RA)   0.335
Sum           0.665           0.335           1.000

Figure 8.2. Weather scenario (rain): constructed based on historical data from May 3, 2010.

Obtaining Reliability Statistics Using the Trajectory Processor

The Trajectory Processor allows users to load vehicle trajectory data obtained from the traffic simulation model and examine travel time distributions at various time and space
resolutions. As discussed in Chapter 7 (Table 7.3), different reliability metrics can be used to assess reliability performance at different levels of the system: network level, O–D level, and path level.

Network-Level Analysis

To evaluate reliability performance for the entire network, we use distance-normalized travel times (i.e., travel time per mile, or TTPM) in deriving various network-level metrics. Tables 8.3, 8.4, and 8.5 present network-level performance measures obtained from scenario-specific outputs for three departure time intervals: 7–8 a.m., 8–9 a.m., and 9–10 a.m., respectively. The selected measures include the average TTPM, the standard deviation of TTPM, and the 95th, 90th, and 80th percentile TTPM, four of which are depicted in Figures 8.3 through 8.6. Each chart displays a total of 120 data points (10 scenarios × 4 scenario cases × 3 departure time intervals) for a given measure. The x-axis of each chart represents the Scenario ID shown in the second column of the tables. Findings from the charts are summarized as follows:

• Both the average travel time and the travel time variability decrease in the order of Weekdays under Rain (WD-RA), Weekends under Rain (WE-RA), Weekdays under No Rain (WD-NR), and Weekends under No Rain (WE-NR).
• The effect of weather (rain) on travel time unreliability is more pronounced than the day-of-week effect, as both WD-RA and WE-RA (scenarios with rain) show higher levels of network congestion and travel time variability than WD-NR and WE-NR (scenarios without rain).
• The time-of-day effect is more pronounced than the effect of weather, as the differences between performance measures for different departure time intervals are more obvious than those for different scenario cases. Overall, the value range of a given measure increases significantly as the departure time interval shifts from 7–8 a.m. to 9–10 a.m.
• The variability of the estimates across different scenario instances (i.e., interscenario variability within each scenario case) tends to decrease in the order of WD-RA, WE-RA, WD-NR, and WE-NR. For example, the data points from WD-RA for the 80th percentile TTPM for 9–10 a.m. are much more scattered than those from WE-NR.

O–D-Level Analysis

Users can choose a specific origin–destination (O–D) pair to examine O–D-level travel time distributions and the associated performance measures. For this analysis, we selected the O–D pair between origin zone 685 and destination zone 605, as shown in Figure 8.7. Multiple routes are available between the given O–D pair, two of which are depicted in Figure 8.7. As in the network-level analysis, we present detailed performance measures for each scenario for the departure time intervals 7–8 a.m. and 8–9 a.m. in Tables 8.6 and 8.7, respectively. The average number of vehicles per scenario traveling between the given O–D pair is 105 for 7–8 a.m. and 112 for 8–9 a.m. In addition to the TTPM-based measures used in the network-level analysis, we can also examine metrics based on nonnormalized travel times, provided that travel times for the same O–D pair are comparable regardless of the route used. The analysis uses five measures: the mean, standard deviation, and 80th percentile of the travel time distribution, the Buffer Index, and the Skew Index (see Table 7.3 for the definitions of the metrics). Figure 8.8 shows the estimation results for the mean travel time, Figure 8.9 the standard deviation of travel times, and Figure 8.10 the 80th percentile travel time. The magnitude and interscenario variability of the mean travel time and the 80th percentile travel time decrease in the order of WD-RA, WE-RA, WD-NR, and WE-NR, as in the network-level analysis. This pattern is, however, less evident for the standard deviation (Figure 8.9) and the Buffer Index (Figure 8.11).

Path-Level Analysis

Analysts can also examine travel time distributions for a specific path. For the path-level analysis, we selected a segment along the Franklin D. Roosevelt East River Drive on the east side of New York City, as shown in Figure 8.12. The length of the selected path (from Point A to Point B) is 3.98 miles. The Trajectory Processor identifies all the vehicles that traverse the given path and extracts the travel times spent on that path to construct the path-level travel time distribution. Table 8.8 presents detailed statistics for the selected performance measures: the mean, standard deviation, and 80th percentile of the travel time, the Planning Time Index, and the Buffer Index (see Table 7.3 for the definitions of the metrics). The estimated results are visualized in Figures 8.13 through 8.16.

Comparison with Observed Data

As discussed in Chapter 7, the Trajectory Processor can process not only simulated outputs but also observed vehicle trajectories. Users can perform the same types of analyses presented in the previous sections (e.g., network-, O–D-, and path-level analyses) using observed trajectory data. One important goal of this capability is to validate a constructed (simulated) travel time distribution by comparing it with its observed counterpart. We use the TomTom GPS probe data already mentioned, which cover 16 (text continues on page 111)

Table 8.3. Network-Level Performance Measures, Departure Time Interval 7 a.m. to 8 a.m.
Columns: Scenario ID; average TTPM; standard deviation of TTPM; 95th, 90th, and 80th percentile TTPM (all in min/mile).

Weekdays/Rain (WD-RA)
   1  1.80  1.08  2.78  2.29  2.03
   2  2.12  1.55  4.43  3.12  2.35
   3  1.97  1.36  3.68  2.66  2.19
   4  1.79  1.05  2.74  2.28  2.03
   5  2.01  1.52  3.88  2.76  2.24
   6  1.99  1.41  3.76  2.71  2.21
   7  1.87  1.26  3.14  2.41  2.10
   8  2.10  1.53  4.32  3.03  2.33
   9  1.82  1.14  2.85  2.31  2.05
  10  2.02  1.46  3.93  2.79  2.25
Weekends/Rain (WE-RA)
  11  1.85  1.11  3.09  2.40  2.09
  12  2.25  1.84  5.04  3.48  2.49
  13  1.93  1.28  3.49  2.57  2.16
  14  1.91  1.23  3.34  2.49  2.13
  15  1.76  0.99  2.62  2.23  1.99
  16  2.12  1.59  4.43  3.09  2.34
  17  1.83  1.17  2.91  2.33  2.06
  18  1.78  1.05  2.69  2.26  2.01
  19  1.77  1.02  2.67  2.26  2.01
  20  1.84  1.22  3.01  2.36  2.07
Weekdays/No Rain (WD-NR)
  21  1.72  1.07  3.11  2.30  1.94
  22  1.67  1.02  2.81  2.14  1.88
  23  1.64  1.03  2.61  2.07  1.85
  24  1.66  0.91  4.00  2.14  1.88
  25  1.66  1.07  2.73  2.11  1.86
  26  1.75  1.10  3.29  2.41  1.98
  27  1.65  1.04  2.71  2.10  1.86
  28  1.67  1.00  2.83  2.16  1.89
  29  1.55  0.83  2.20  1.95  1.76
  30  1.79  1.15  3.44  2.51  2.01
Weekends/No Rain (WE-NR)
  31  1.63  0.89  2.64  2.08  1.85
  32  1.64  1.04  2.63  2.07  1.84
  33  1.60  0.94  2.43  2.02  1.81
  34  2.00  1.55  4.41  3.09  2.20
  35  1.66  0.95  2.83  2.15  1.88
  36  1.63  1.01  2.60  2.06  1.84
  37  1.64  0.97  2.65  2.08  1.85
  38  1.61  0.90  2.53  2.05  1.83
  39  1.59  0.98  2.38  2.01  1.80
  40  1.53  0.78  2.15  1.94  1.74

Table 8.4. Network-Level Performance Measures, Departure Time Interval 8 a.m. to 9 a.m.
Columns: Scenario ID; average TTPM; standard deviation of TTPM; 95th, 90th, and 80th percentile TTPM (all in min/mile).

Weekdays/Rain (WD-RA)
   1  2.92  3.50   7.62  4.93  3.15
   2  4.36  5.61  12.95  9.10  5.75
   3  3.67  4.29  10.23  7.12  4.64
   4  2.88  3.26   7.45  4.81  3.09
   5  3.87  4.81  11.07  7.75  4.99
   6  3.77  4.56  10.61  7.40  4.82
   7  3.23  3.74   8.66  5.90  3.78
   8  4.30  5.57  12.73  8.89  5.63
   9  3.01  3.50   7.91  5.22  3.34
  10  4.01  4.98  11.77  8.17  5.21
Weekends/Rain (WE-RA)
  11  3.19  3.72  17.01   5.76  3.76
  12  4.91  6.86  14.99  10.49  6.56
  13  3.55  4.19   9.78   6.77  4.39
  14  3.41  3.86   9.26   6.39  4.16
  15  2.61  2.86   6.55   4.08  2.71
  16  4.39  5.76  13.04   9.14  5.76
  17  3.00  3.42   7.76   5.22  3.39
  18  2.76  3.06   6.98   4.56  2.98
  19  2.74  3.14   6.92   4.43  2.89
  20  3.05  3.53   7.88   5.29  3.47
Weekdays/No Rain (WD-NR)
  21  3.08  3.98  8.50  6.00  3.89
  22  2.83  3.45  7.48  5.17  3.39
  23  2.71  3.42  7.02  4.84  3.14
  24  2.74  3.39  7.09  5.01  3.29
  25  2.80  3.52  7.39  5.06  3.30
  26  3.24  4.32  9.13  6.43  4.13
  27  2.81  3.59  7.42  5.13  3.30
  28  2.80  3.34  7.35  5.21  3.38
  29  2.26  4.68  8.88  5.25  2.88
  30  3.40  4.54  9.88  6.88  4.38
Weekends/No Rain (WE-NR)
  31  2.65  3.10   6.83  4.79  3.15
  32  2.72  3.40   7.16  4.83  3.11
  33  2.48  3.13   6.23  4.14  2.70
  34  4.02  5.60  12.43  8.67  5.50
  35  2.75  3.44   7.07  5.02  3.28
  36  2.68  3.24   6.95  4.71  3.07
  37  2.72  3.26   7.18  4.91  3.17
  38  2.54  3.09   6.39  4.37  2.87
  39  2.47  3.24   6.06  4.06  2.65
  40  2.22  2.67   5.51  3.24  2.18

Table 8.5. Network-Level Performance Measures, Departure Time Interval 9 a.m. to 10 a.m.
Columns: Scenario ID; average TTPM; standard deviation of TTPM; 95th, 90th, and 80th percentile TTPM (all in min/mile).

Weekdays/Rain (WD-RA)
   1  4.39   7.12  14.59   9.27  5.44
   2  6.30  11.20  22.07  14.72  8.87
   3  5.55   9.76  19.07  12.22  7.17
   4  4.43   7.33  14.92   9.55  5.46
   5  6.02  10.34  21.14  13.74  8.17
   6  5.91  10.11  20.53  13.53  8.04
   7  5.03   8.11  17.07  10.94  6.49
   8  6.40  10.90  22.48  15.04  9.04
   9  4.51   7.28  15.05   9.73  5.66
  10  6.26  10.32  21.97  14.59  8.67
Weekends/Rain (WE-RA)
  11  4.99   8.09  19.22  10.83  6.42
  12  7.01  12.16  25.51  16.81  9.75
  13  5.46   9.42  18.96  12.02  7.04
  14  5.23   8.36  17.29  11.32  6.99
  15  3.81   6.10  12.50   7.91  4.40
  16  6.45  11.03  22.82  15.00  9.06
  17  4.63   7.42  15.26   9.86  5.88
  18  4.16   6.98  13.81   8.71  5.07
  19  4.04   6.44  13.39   8.56  4.86
  20  4.70   7.86  15.84  10.06  5.91
Weekdays/No Rain (WD-NR)
  21  4.87   8.96  17.62  11.03  6.12
  22  4.76   8.29  16.62  10.33  5.97
  23  4.65   8.26  16.37  10.08  5.77
  24  4.36   8.35  22.82   9.12  5.10
  25  4.76   9.02  16.62  10.11  5.83
  26  4.83   9.92  16.70  10.60  6.07
  27  4.67   8.20  16.28  10.15  5.89
  28  4.44   8.40  15.67   9.34  5.26
  29  3.60   6.39  13.03   8.05  4.32
  30  5.06  10.06  18.36  11.30  6.27
Weekends/No Rain (WE-NR)
  31  4.49   7.68  15.83   9.70  5.59
  32  4.55   7.95  16.04   9.78  5.55
  33  4.04   7.24  13.56   8.48  4.82
  34  5.38  10.55  18.80  12.65  7.13
  35  4.36   8.90  14.87   8.87  5.06
  36  4.48   8.07  15.74   9.60  5.45
  37  4.51   8.02  15.64   9.66  5.57
  38  4.17   7.16  14.25   8.82  5.10
  39  4.11   7.76  14.06   8.45  4.66
  40  3.41   5.95  11.27   6.91  3.69
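The measures tabulated above, along with the Buffer Index, Skew Index, and Planning Time Index used in the O–D- and path-level tables that follow, can all be computed from a sample of trip travel times. The sketch below uses common definitions (Buffer Index as the margin of the 95th percentile over the mean; Skew Index as (T90 − T50)/(T50 − T10); Planning Time Index as the 95th percentile over free-flow time); the report's Table 7.3, not shown in this excerpt, is authoritative, so treat these formulas as assumptions.

```python
import numpy as np

def reliability_metrics(travel_times_min, trip_miles=None, free_flow_min=None):
    """Reliability measures for one scenario's trips.

    If trip_miles is given, travel times are distance-normalized
    to TTPM (min/mile) as in the network-level analysis.
    """
    t = np.asarray(travel_times_min, dtype=float)
    if trip_miles is not None:
        t = t / np.asarray(trip_miles, dtype=float)
    p10, p50, p80, p90, p95 = np.percentile(t, [10, 50, 80, 90, 95])
    out = {
        "mean": t.mean(),
        "std": t.std(ddof=1),
        "p80": p80, "p90": p90, "p95": p95,
        # Assumed definitions; see the hedging note above.
        "buffer_index": (p95 - t.mean()) / t.mean(),
        "skew_index": (p90 - p50) / (p50 - p10),
    }
    if free_flow_min is not None:
        out["planning_time_index"] = p95 / free_flow_min
    return out
```

For example, `reliability_metrics(list(range(10, 31)), free_flow_min=10)` gives a mean of 20 min, a 95th percentile of 29 min, and hence a Buffer Index of 0.45.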

Figure 8.3. Mean travel time per mile (network-level).
Figure 8.4. Standard deviation of travel time per mile (network-level).

Figure 8.5. 80th percentile travel time per mile (network-level).
Figure 8.6. 95th percentile travel time per mile (network-level).

Figure 8.7. Selected origin–destination (O–D) pair for O–D-level analysis (origin zone 685; destination zone 605).
Figure 8.8. Mean travel time (O–D-level).

Figure 8.9. Standard deviation of travel times (O–D-level).
Figure 8.10. 80th percentile travel time (O–D-level).

Figure 8.11. Buffer Index (O–D-level).
Figure 8.12. Selected path for path-level analysis (from Point A to Point B).

Figure 8.13. Mean travel time (path-level).
Figure 8.14. 80th percentile travel time (path-level).

Figure 8.15. Planning Time Index (path-level).
Figure 8.16. Buffer Index (path-level).

Table 8.6. O–D-Level Performance Measures, Departure Time Interval 7 a.m. to 8 a.m. (Zone 685 → Zone 605)
Columns: Scenario ID; average travel time (min); standard deviation of travel times (min); 80th percentile travel time (min); Buffer Index; Skew Index. Average number of observations per scenario ≈ 105.

Weekdays/Rain (WD-RA)
   1  15.56   7.45  16.65  0.30  1.90
   2  29.05  27.33  33.82  1.62  6.11
   3  19.34   5.79  23.61  0.64  2.56
   4  14.95   3.54  16.58  0.37  2.32
   5  19.99   6.04  22.86  0.75  2.38
   6  19.91   6.59  24.92  0.58  2.58
   7  17.69   8.55  19.44  0.47  2.99
   8  28.26  27.30  29.88  1.08  4.35
   9  16.56  12.50  17.12  0.44  1.91
  10  20.00   6.04  23.87  0.71  2.68
Weekends/Rain (WE-RA)
  11  16.43   3.81  17.91  0.43  2.11
  12  28.91  21.41  33.23  1.53  3.02
  13  19.18   5.86  22.86  0.79  2.63
  14  21.51  23.49  21.11  0.32  2.73
  15  14.65   4.80  16.19  0.28  1.12
  16  26.63  18.77  31.57  1.33  4.61
  17  16.44   8.32  17.22  0.40  2.41
  18  14.52   2.60  16.42  0.30  1.86
  19  14.40   2.39  16.47  0.36  1.63
  20  16.84  10.41  18.42  0.44  2.07
Weekdays/No Rain (WD-NR)
  21  16.25   5.16  20.20  0.46  3.79
  22  17.34  12.76  18.96  0.57  3.26
  23  15.25   9.94  16.41  0.40  2.99
  24  17.31  12.10  19.73  0.63  3.42
  25  16.30  13.25  18.21  0.38  3.24
  26  17.83   8.74  22.09  0.82  3.07
  27  16.29  12.85  17.49  0.36  3.81
  28  16.58  10.72  19.31  0.82  2.80
  29  12.22   1.57  13.52  0.23  1.31
  30  18.45   8.38  22.93  0.81  3.68
Weekends/No Rain (WE-NR)
  31  16.94  13.63  18.62  0.75  3.50
  32  15.70  12.71  16.71  0.40  2.42
  33  13.57   3.35  15.59  0.39  2.82
  34  22.27  12.98  23.94  0.83  4.96
  35  16.90  14.76  18.72  0.49  4.14
  36  15.48  11.99  16.19  0.35  2.96
  37  15.43   8.53  16.51  0.32  2.16
  38  14.52   3.78  16.73  0.44  2.74
  39  13.15   3.19  14.18  0.43  1.99
  40  12.07   1.56  13.28  0.21  1.22

Table 8.7. O–D-Level Performance Measures, Departure Time Interval 8 a.m. to 9 a.m.
O–D-level analysis (Zone 685 → Zone 605); average number of observations per scenario: 112; travel times in minutes.

Scenario Case             ID   Avg TT   Std Dev   80th Pctl   Buffer Index   Skew Index
Weekdays/Rain (WD-RA)      1    45.55     34.41       72.05           1.72         7.14
                           2    50.42     29.99       83.98           1.13         7.99
                           3    49.87     34.76       83.76           1.65         5.13
                           4    41.58     29.29       63.58           1.34         4.47
                           5    42.20     28.98       74.38           1.32         7.87
                           6    41.11     26.74       73.78           1.36         6.33
                           7    40.13     28.62       55.48           1.29         3.18
                           8    53.79     33.07       96.88           1.00         5.20
                           9    47.24     37.41       68.54           2.04         5.37
                          10    47.33     33.75       87.88           1.28         5.58
Weekends/Rain (WE-RA)     11    44.93     35.56       63.32           2.06         3.85
                          12    65.26     38.65      103.26           0.76         3.54
                          13    45.09     31.62       74.20           1.32         4.30
                          14    52.74     38.34       78.55           1.75         4.95
                          15    42.57     38.33       64.03           2.08        10.28
                          16    53.24     34.28       92.48           1.23         8.23
                          17    42.46     33.24       60.78           1.60         4.33
                          18    40.98     32.09       59.50           1.73         5.55
                          19    41.27     33.77       57.99           1.92         8.96
                          20    45.58     36.03       71.70           1.39         4.70
Weekdays/No Rain (WD-NR)  21    44.12     31.75       82.54           1.21        10.15
                          22    35.87     24.79       63.59           1.32         7.28
                          23    34.52     24.51       57.21           1.44         6.14
                          24    38.69     26.92       66.86           1.25         6.87
                          25    35.17     23.02       58.96           1.38         5.17
                          26    33.06     22.01       42.88           1.49         9.76
                          27    35.92     24.40       62.88           1.33         6.37
                          28    36.15     24.27       66.60           1.19         7.11
                          29    26.56     23.64       30.74           1.84         6.56
                          30    40.27     26.31       77.68           1.22         6.26
Weekends/No Rain (WE-NR)  31    32.63     21.30       58.00           1.29         6.00
                          32    37.30     29.42       56.63           2.04         5.66
                          33    35.63     26.43       53.41           1.63         6.96
                          34    52.94     24.35       83.09           0.75         1.94
                          35    38.45     25.60       67.84           1.22         7.09
                          36    36.52     24.64       56.58           1.33         4.42
                          37    38.83     29.23       63.93           1.41         5.83
                          38    40.14     36.74       55.04           2.03         8.15
                          39    33.39     26.04       42.79           1.71         5.14
                          40    31.29     34.84       35.67           2.75        11.43

Table 8.8. Path-Level Performance Measures, Departure Time Interval 7 a.m. to 8 a.m.
Path-level analysis (Point A → Point B in Figure 8.12); average number of observations per scenario: 1,199; travel times in minutes.

Scenario Case             ID   Avg TT   Std Dev   80th Pctl   Planning Time Index   Buffer Index
Weekdays/Rain (WD-RA)      1    31.50     20.10       37.73                  2.33           0.85
                           2    43.56     28.20       61.74                  4.29           1.33
                           3    41.45     28.25       55.28                  4.05           1.38
                           4    31.88     21.71       37.34                  2.38           0.86
                           5    41.27     28.45       55.03                  3.75           1.20
                           6    39.31     25.92       51.91                  3.80           1.33
                           7    37.19     27.31       45.52                  3.61           1.40
                           8    41.74     26.55       58.24                  3.95           1.26
                           9    33.69     23.39       40.96                  2.74           1.02
                          10    42.94     30.27       58.91                  4.48           1.52
Weekends/Rain (WE-RA)     11    34.91     22.74       43.22                  3.01           1.13
                          12    43.65     27.81       61.53                  4.14           1.21
                          13    39.46     27.11       51.65                  3.70           1.29
                          14    38.01     26.43       47.77                  3.72           1.41
                          15    28.91     15.94       34.32                  1.90           0.65
                          16    43.47     27.69       63.25                  4.19           1.29
                          17    33.48     22.55       41.00                  2.77           1.06
                          18    29.73     17.74       35.14                  2.03           0.71
                          19    30.31     18.35       35.46                  2.11           0.74
                          20    34.41     25.72       40.72                  3.14           1.27
Weekdays/No Rain (WD-NR)  21    34.59     22.15       44.23                  3.13           1.21
                          22    30.67     18.23       37.97                  2.61           1.10
                          23    29.72     19.95       35.16                  2.44           1.04
                          24    31.34     18.86       39.40                  2.72           1.14
                          25    31.56     22.88       37.63                  2.70           1.12
                          26    34.50     21.67       46.95                  2.94           1.04
                          27    31.33     24.28       36.46                  2.74           1.16
                          28    32.22     20.18       40.13                  2.80           1.14
                          29    24.79     12.13       29.35                  1.52           0.54
                          30    35.34     19.64       49.40                  2.91           0.98
Weekends/No Rain (WE-NR)  31    28.82     16.69       34.70                  2.32           0.99
                          32    29.95     19.65       36.11                  2.52           1.10
                          33    28.04     18.14       33.07                  2.04           0.82
                          34    36.38     20.88       50.47                  3.46           1.18
                          35    30.74     17.67       38.17                  2.54           1.04
                          36    28.78     17.29       34.55                  2.27           0.96
                          37    30.01     20.60       35.63                  2.43           1.00
                          38    27.74     16.29       33.02                  2.14           0.92
                          39    23.66     10.11       28.24                  1.42           0.50
                          40    27.27     17.36       32.00                  1.98           0.81

consecutive days from May 2, 2010, to May 17, 2010, in New York, to perform this comparison. We selected the same path used in the path-level analysis (see Figure 8.12) to obtain the measures for the GPS data and compare them with the simulation results presented in the previous section. For the same departure time interval (7–8 a.m.), we identified a total of 29 GPS traces traversing the selected path. Given this relatively small sample size, it was not advisable to further divide the sample into different scenario categories and perform a detailed comparison for each category separately. Instead, we used all 29 traces to construct the travel time distribution, which can be viewed as a small sample of observed path travel times for the 7–8 a.m. departure interval between May 2, 2010, and May 17, 2010. The goal of the analysis is thus to examine how similar (or different) this observed travel time distribution is to (or from) the simulated travel time distributions in an overall sense.

The estimation results are provided in Table 8.9. Figures 8.17 through 8.20 display the measures estimated from the GPS data (dotted lines) in conjunction with the measures from the simulation outputs (scatter plots). The scatter plots are the same as those in Figures 8.13 through 8.16. In all figures, the observed statistics lie within the range of the simulated statistics, suggesting that (1) the traffic simulation model could reproduce the real-world traffic patterns for the given path, and (2) the constructed travel time distributions under various scenarios could be used effectively to predict potential variations in travel times.

Table 8.9. GPS Data Performance Measures, Departure Time Interval 7 a.m. to 8 a.m.

             Number of      Mean TT   80th Pctl   Planning Time   Buffer
             Observations   (min)     TT (min)    Index           Index
GPS traces        29         27.94      38.89          3.43        1.00

Figure 8.17. Simulation versus observation: Mean travel time (path-level).
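The comparison described above reduces to a simple range check: does each observed statistic fall between the minimum and maximum of the same statistic computed across the simulated scenarios? A minimal sketch follows, using the observed values from Table 8.9 and an illustrative subset of the scenario values from Table 8.8 (the function and variable names are ours, not the report's):

```python
def within_simulated_range(observed, simulated):
    """Return True if an observed statistic lies between the min and max
    of the same statistic computed across simulated scenarios."""
    return min(simulated) <= observed <= max(simulated)

# Observed path-level statistics from the 29 GPS traces (Table 8.9).
observed_stats = {"mean": 27.94, "p80": 38.89, "pti": 3.43, "buffer_index": 1.00}

# Simulated statistics per scenario (subset of Table 8.8: Scenarios 1, 2, 29, 39, 40).
simulated_stats = {
    "mean": [31.50, 43.56, 24.79, 23.66, 27.27],
    "p80": [37.73, 61.74, 29.35, 28.24, 32.00],
    "pti": [2.33, 4.29, 1.52, 1.42, 1.98],
    "buffer_index": [0.85, 1.33, 0.54, 0.50, 0.81],
}

checks = {k: within_simulated_range(observed_stats[k], simulated_stats[k])
          for k in observed_stats}
```

Each check evaluates to True for this subset, consistent with the report's observation that the GPS-based statistics fall within the simulated ranges.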

Figure 8.18. Simulation versus observation: 80th percentile travel time (path-level).
Figure 8.19. Simulation versus observation: Planning Time Index (path-level).

Figure 8.20. Simulation versus observation: Buffer Index (path-level).

Chapter 9

Analysis Process: Microscopic Models

The purpose of this chapter is to demonstrate how microsimulation tools can be used to perform reliability analyses using the framework and tools developed under this project. The Aimsun simulation software was used to perform the microsimulation task.

Study Area Description

For the micro-model scenario, the study area was a section of the wider meso-model study area, located in East Manhattan and bounded by 74th Street to the north, 48th Street to the south, 5th Avenue to the west, and York Avenue to the east. Figure 9.1 shows the extent of the study area considered for microsimulation purposes.

The micro-model covers an area that includes 178 lane kilometers and 217 signalized intersections. A total of 147 centroids were connected to the network to generate origin–destination trips, including 44 gate and 103 internal centroids. Two base models were constructed, representing weekday a.m. peak and weekend conditions. The weekday a.m. peak period model consisted of a total demand of around 155,000 vehicles over a 5-hour period from 6 a.m. to 11 a.m. The weekend peak period model consisted of a total demand of around 80,000 vehicles over a 3-hour period from 2 p.m. to 5 p.m.

Microsimulation Approach and Objective

The general objective of the microsimulation tests was to determine a range of reliability measures that is characteristic of the study area for weekday and weekend traffic. The weekday and weekend scenarios were subjected to incident and demand variation events that are typical of the study area. Because of limitations of the modeling platform, variable weather conditions could not be implemented as part of the microsimulation study. It was assumed that constant fair weather conditions prevailed across all the scenarios tested for weekday and weekend.

Scenario Description

The same methodology that was used to generate scenarios for the meso-model using the Scenario Manager was applied to the micro-model. The approach taken was to generate all the scenarios in one operational step using the Scenario Manager for the wider study area. Additional details of that procedure can be found in the Chapter 8 section, Generating Scenarios Using the Scenario Manager. The scenarios relevant to the microsimulation study area were then selected based on incidents located within its boundaries. Fifteen of the generated weekday scenarios and four of the weekend scenarios contained incidents within the microsimulation study area. Figure 9.2 and Figure 9.3 show the incident locations used for the study.

Microsimulation Travel Time Reliability Results

The input scenarios were prepared and imported into the Aimsun weekday and weekend models. The trajectory outputs for each vehicle completing a trip were obtained for each scenario run and processed through the Trajectory Processor to obtain the reliability metrics.

Network-Level Results

Reliability performance across the entire network was measured using distance-normalized travel times (i.e., average travel time per mile, or TTPM) across 3 hours for the weekday and weekend peak periods. The weekday peak was the a.m. period, with time intervals spanning 7–8 a.m., 8–9 a.m., and 9–10 a.m. (Tables 9.1, 9.2, and 9.3). For the weekend, peak hourly intervals were reported between 2 p.m.

and 5 p.m. (Tables 9.4, 9.5, and 9.6). The metrics reported include average TTPM, standard deviation of TTPM, and the 95th, 90th, and 80th percentile TTPMs. The results are displayed in the accompanying charts for the 15 weekday scenarios and the four weekend scenarios that were modeled, and in Figures 9.4–9.6.

The observed trends in the data show that, for network-wide performance:

• Travel time variability is significantly lower during typical weekend peak periods than during weekday peaks.
• Variability by time of day is more pronounced across the hourly intervals for the weekday peaks. The travel times for the later hours in the period are characterized by more variability.
• Overall, there is a wider range of variability in travel times for the microsimulation experiment than for the mesosimulation experiment. For example, for the third weekday hour (9–10 a.m.), the average TTPM for Scenario 6 is 7.77 min/mile, while for Scenario 11 the value is 36.23 min/mile, a spread of 28.46 min/mile. This is much higher than in the meso-experiment, in which the largest spread for average TTPM is around 2 min/mile. Possible reasons for this are discussed in a subsequent section, Summary of Microsimulation Experiment Findings.

O–D-Level Analysis

For travel between origin and destination (O–D) points within the network, two gate centroids were selected, as shown in Figure 9.7. This pair of centroids had a significant number of trips between them for all the hour intervals studied. The results for all trips between the O–D pair are presented in Table 9.7 and Table 9.8 for the hourly intervals between 7 a.m. and 9 a.m. on weekdays, and in Table 9.9 and Table 9.10 for the hourly intervals between 2 p.m. and 4 p.m. on weekends.

Figure 9.1. Microsimulation study area (© Google Maps).
Figure 9.2. Microsimulation network showing incident locations (weekday and weekend scenario panels).
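The distance-normalized measure described above can be sketched in a few lines: each completed trip contributes its travel time divided by its length, and the network-level statistics are taken over all trips in the departure interval. This is an illustrative sketch only (the trip data and the nearest-rank percentile choice are our assumptions, not the report's):

```python
import statistics

def ttpm_stats(trips):
    """trips: list of (travel_time_min, distance_miles) tuples for one
    departure-time interval. Returns the average, standard deviation, and
    80th percentile (nearest rank) of travel time per mile (TTPM)."""
    ttpm = sorted(t / d for t, d in trips)
    p80 = ttpm[max(0, int(0.8 * len(ttpm)) - 1)]  # nearest-rank percentile
    return statistics.mean(ttpm), statistics.pstdev(ttpm), p80

# Illustrative trips: (travel time in minutes, trip length in miles).
trips = [(12.0, 1.5), (9.0, 1.2), (20.0, 1.6), (7.5, 1.5), (15.0, 1.0)]
avg, sd, p80 = ttpm_stats(trips)
```

The same routine, applied per scenario and per hourly interval, yields rows like those in Tables 9.1 through 9.6.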

Table 9.1. Network-Level, Departure Time Interval 7 a.m. to 8 a.m., Weekday
Network-level analysis; all values in min/mile.

Scenario   ID   Avg     Std Dev   95th    90th    80th
Name            TTPM    TTPM      Pctl    Pctl    Pctl
4-21        1   10.55    5.46     20.63   16.63   13.47
21-29       2    9.52    4.91     18.59   15.09   12.18
25-3        3   10.26    5.12     19.55   16.25   13.20
41-7        4    9.71    5.02     19.37   15.56   12.61
44-12       5    8.45    4.31     16.12   13.28   10.85
46-39       6    7.17    4.19     14.16   11.46    9.09
48-29       7    7.71    4.18     15.00   12.27    9.81
58-10       8    8.48    4.27     16.11   13.27   10.80
61-34       9   11.55    6.41     23.71   18.78   14.78
65-22      10   10.80    5.74     21.51   17.35   13.94
72-8       11   12.14    6.78     24.65   19.85   15.69
80-26      12    7.35    4.02     14.16   11.64    9.35
85-23      13   11.64    6.86     23.78   18.64   14.87
89-4       14    8.87    4.42     17.06   13.96   11.38
90-49      15   10.32    5.19     20.33   16.60   13.30

Figure 9.3. Scatter plot: Average travel time per mile.

Table 9.2. Network-Level, Departure Time Interval 8 a.m. to 9 a.m., Weekday
Network-level analysis; all values in min/mile.

Scenario   ID   Avg     Std Dev   95th    90th    80th
Name            TTPM    TTPM      Pctl    Pctl    Pctl
4-21        1   13.69    8.18     26.27   21.73   17.46
21-29       2   12.26    5.82     23.45   19.15   15.64
25-3        3   13.86    9.75     26.93   22.15   17.58
41-7        4   12.90    7.77     24.59   20.19   16.25
44-12       5   11.13    5.64     21.78   17.91   14.43
46-39       6    7.77    3.96     14.87   12.09    9.81
48-29       7    8.87    4.57     17.07   13.88   11.33
58-10       8   10.24    4.87     19.62   16.00   13.11
61-34       9   16.27   11.08     31.27   25.86   20.76
65-22      10   14.81   10.03     28.14   23.36   18.63
72-8       11   19.14   17.41     40.10   31.26   23.75
80-26      12    8.08    4.06     15.26   12.58   10.21
85-23      13   18.87   13.31     39.60   31.11   24.42
89-4       14   12.47    6.83     24.33   19.89   15.92
90-49      15   13.86    7.38     26.78   22.07   17.60

Table 9.3. Network-Level, Departure Time Interval 9 a.m. to 10 a.m., Weekday
Network-level analysis; all values in min/mile.

Scenario   ID   Avg     Std Dev   95th    90th    80th
Name            TTPM    TTPM      Pctl    Pctl    Pctl
4-21        1   24.27   19.55     57.60   43.62   32.97
21-29       2   15.13    8.96     28.48   23.85   19.10
25-3        3   23.03   20.54     51.08   39.40   29.81
41-7        4   15.90   11.24     30.06   24.72   19.96
44-12       5   13.87    9.25     26.51   21.89   17.52
46-39       6    8.72    3.97     15.94   13.22   10.97
48-29       7   11.02    5.21     20.91   17.41   14.15
58-10       8   12.34    5.76     22.73   19.14   15.66
61-34       9   27.32   20.56     60.12   46.01   34.94
65-22      10   26.29   29.44     61.60   46.86   33.34
72-8       11   36.23   27.44     74.87   60.43   49.66
80-26      12   10.14    4.55     18.78   15.57   12.93
85-23      13   27.10   21.02     57.57   44.75   34.62
89-4       14   16.03   11.22     31.09   25.61   20.28
90-49      15   20.68   15.67     41.46   32.95   26.61

Table 9.4. Network-Level, Departure Time Interval 2 p.m. to 3 p.m., Weekend
Network-level analysis; all values in min/mile.

Scenario   ID   Avg     Std Dev   95th    90th    80th
Name            TTPM    TTPM      Pctl    Pctl    Pctl
39-4        1    7.86    4.10     15.11   12.46   10.21
56-7        2    7.86    4.10     15.11   12.46   10.21
75-5        3    7.86    4.09     15.05   12.50   10.24
94-4        4    7.64    3.87     14.35   12.00    9.91

Table 9.5. Network-Level, Departure Time Interval 3 p.m. to 4 p.m., Weekend
Network-level analysis; all values in min/mile.

Scenario   ID   Avg     Std Dev   95th    90th    80th
Name            TTPM    TTPM      Pctl    Pctl    Pctl
39-4        1    9.22    5.50     19.46   15.30   11.99
56-7        2    9.23    5.51     19.46   15.35   12.01
75-5        3    9.10    5.27     18.64   14.80   11.69
94-4        4    8.88    5.21     18.44   14.45   11.41

Table 9.6. Network-Level, Departure Time Interval 4 p.m. to 5 p.m., Weekend
Network-level analysis; all values in min/mile.

Scenario   ID   Avg     Std Dev   95th    90th    80th
Name            TTPM    TTPM      Pctl    Pctl    Pctl
39-4        1   10.00    6.06     21.60   17.17   13.28
56-7        2    9.76    5.96     21.20   16.86   13.01
75-5        3    9.44    5.68     20.55   15.90   12.30
94-4        4   10.04    5.96     21.76   17.28   13.37

Figure 9.4. Scatter plot: Standard deviation of travel time per mile.
Figure 9.5. Scatter plot: 80th percentile travel time per mile.

Figure 9.6. Scatter plot: 95th percentile travel time per mile.
Figure 9.7. Location of origin (3457817) and destination (3475128) in the network.

Table 9.7. Origin (3457817)–Destination (3475128), Departure Time Interval 7 a.m. to 8 a.m., Weekday
O–D-level analysis; travel times in minutes.

Scenario   ID   Avg    Std    95th    90th    80th    Buffer   Skew    No. of
Name            TT     Dev    Pctl    Pctl    Pctl    Index    Index   Vehicles
4-21        1   11.66  4.10   18.66   16.93   15.12    0.60    0.98     592
21-29       2   10.44  4.37   19.70   17.26   13.53    0.89    2.10     579
25-3        3   12.27  3.84   19.04   16.89   15.22    0.55    0.90     632
41-7        4   11.26  4.55   19.95   17.54   15.11    0.77    1.66     585
44-12       5    9.92  4.20   17.33   15.73   13.68    0.75    1.16     613
46-39       6    4.72  1.40    6.86    6.52    6.01    0.45    0.84     613
48-29       7    7.73  3.23   14.21   12.46   10.35    0.84    2.07     668
58-10       8    8.85  2.90   14.20   12.20   10.93    0.60    0.79     685
61-34       9   11.81  4.51   19.61   17.67   15.46    0.66    1.42     560
65-22      10   11.58  3.82   18.08   16.68   15.12    0.56    1.04     578
72-8       11   12.31  5.22   22.99   20.30   16.75    0.87    1.74     530
80-26      12    5.85  2.16   10.48    8.81    7.26    0.79    1.93     685
85-23      13   11.57  4.74   19.26   17.14   14.31    0.66    1.26     653
89-4       14    8.76  3.52   14.90   13.12   11.70    0.70    1.50     632
90-49      15   10.81  3.86   18.31   15.74   14.12    0.69    1.35     573

Table 9.8. Origin (3457817)–Destination (3475128), Departure Time Interval 8 a.m. to 9 a.m., Weekday
O–D-level analysis; travel times in minutes.

Scenario   ID   Avg    Std    95th    90th    80th    Buffer   Skew    No. of
Name            TT     Dev    Pctl    Pctl    Pctl    Index    Index   Vehicles
4-21        1   13.36  4.89   22.15   19.69   16.86    0.66    1.07     412
21-29       2   14.37  5.05   22.40   20.37   18.41    0.56    0.72     439
25-3        3   13.93  4.71   23.16   20.12   17.31    0.66    1.39     462
41-7        4   13.61  4.74   21.87   19.02   17.03    0.61    0.97     456
44-12       5   14.60  5.31   23.53   21.09   18.59    0.61    0.81     496
46-39       6    6.32  1.21    8.34    7.85    7.22    0.32    1.34     688
48-29       7   10.36  3.03   15.98   14.50   12.60    0.54    1.27     625
58-10       8   12.71  4.11   19.86   17.88   15.80    0.56    1.02     496
61-34       9   17.11  5.75   27.41   24.79   21.25    0.60    1.45     439
65-22      10   14.91  4.74   22.95   21.51   18.39    0.54    1.29     547
72-8       11   18.46 10.82   34.00   25.77   22.28    0.84    1.84     454
80-26      12    8.69  2.64   13.70   12.67   10.71    0.58    1.75     665
85-23      13   17.53  6.60   29.94   26.61   22.10    0.71    2.50     463
89-4       14   13.21  4.13   20.66   18.21   16.18    0.56    0.97     536
90-49      15   12.98  3.65   20.33   18.10   15.40    0.57    1.59     450

Table 9.9. Origin (3457817)–Destination (3475128), Departure Time Interval 2 p.m. to 3 p.m., Weekend
O–D-level analysis; travel times in minutes.

Scenario   ID   Avg    Std    95th    90th    80th    Buffer   Skew    No. of
Name            TT     Dev    Pctl    Pctl    Pctl    Index    Index   Vehicles
39-4        1    6.47  2.28   10.22    8.26    7.53    0.58    1.39     547
56-7        2    6.47  2.28   10.22    8.26    7.53    0.58    1.39     547
75-5        3    6.52  2.31   10.54    8.80    7.61    0.62    1.71     547
94-4        4    6.14  1.36    8.29    7.85    7.25    0.35    1.18     563

Table 9.10. Origin (3457817)–Destination (3475128), Departure Time Interval 3 p.m. to 4 p.m., Weekend
O–D-level analysis; travel times in minutes.

Scenario   ID   Avg    Std    95th    90th    80th    Buffer   Skew    No. of
Name            TT     Dev    Pctl    Pctl    Pctl    Index    Index   Vehicles
39-4        1   10.60  4.80   19.51   17.22   14.05    0.84    1.70     576
56-7        2   10.66  4.80   19.39   17.28   14.26    0.82    1.65     576
75-5        3    8.92  3.42   15.67   12.63   10.94    0.76    1.51     575
94-4        4    9.50  4.46   18.08   15.71   12.48    0.90    2.10     586

The results are reported based on average nonnormalized travel times for all trips across all routes between the O–D pair. Five metrics were reported: average travel time, standard deviation of travel time, the 95th/90th/80th percentile travel times, Buffer Index, and Skew Index. Figures 9.8 to 9.11 display the results, which show that interscenario variability is more significant for weekdays than for weekends. Compared with the meso-model results, the results for the micro-experiment show a much wider range of variation.

Path-Level Analysis

Travel time reliability can also be analyzed at the path level for trips following a route between two points in the network. The path chosen for this experiment is around 1.2 miles long and is shown in Figure 9.12. The weekday peak was the 7–8 a.m. time interval (Table 9.11), and the weekend peak was the 2–3 p.m. interval (Table 9.12).
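The index metrics tabulated in this chapter can be computed from a sample of travel times once their definitions are fixed. The report tabulates values without restating formulas, so the definitions below are the common ones from the reliability literature; the Buffer Index form is consistent with Table 9.7 (e.g., Scenario 1: (18.66 − 11.66)/11.66 ≈ 0.60), while the Skew Index and Planning Time Index forms are assumptions:

```python
def percentile(sorted_vals, p):
    """Nearest-rank percentile on a pre-sorted list (0 < p <= 100)."""
    k = max(0, int(round(p / 100 * len(sorted_vals))) - 1)
    return sorted_vals[k]

def reliability_indices(times, free_flow_time):
    """Common index definitions (assumed here, not quoted from the report):
      Buffer Index        = (95th percentile - mean) / mean
      Planning Time Index = 95th percentile / free-flow travel time
      Skew Index          = (90th - 50th) / (50th - 10th)
    """
    s = sorted(times)
    mean = sum(s) / len(s)
    p95, p90, p50, p10 = (percentile(s, p) for p in (95, 90, 50, 10))
    return {
        "buffer_index": (p95 - mean) / mean,
        "planning_time_index": p95 / free_flow_time,
        "skew_index": (p90 - p50) / (p50 - p10),
    }

# Illustrative sample of 20 travel times (minutes) for one departure interval.
times = [8, 9, 10, 10, 10, 11, 11, 12, 12, 13,
         13, 14, 15, 16, 17, 18, 20, 22, 25, 30]
indices = reliability_indices(times, free_flow_time=8.0)
```

For this sample the mean is 14.8 min and the 95th percentile is 25 min, giving a Buffer Index of about 0.69.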
The performance measures reported for the path analysis are average travel time, standard deviation, the 95th/90th/80th percentiles, Planning Time Index, and Buffer Index. The results are displayed in Figures 9.13 to 9.16 and indicate that the travel time distribution at the path level is significantly more variable across scenarios for the weekday peak than for the weekend peak.

Summary of Microsimulation Experiment Findings

In summary, the findings of the microsimulation experiments across all levels of detail are characterized by the following:

• Weekday peak period travel times are more variable than weekend peak period travel times.
• Variability in travel time increases as demand increases during the simulation period.
• Compared with the meso-model, the microsimulation travel times are much more variable for the same period of analysis. This can be attributed to:
  – Study area size. The much smaller study area of the micro-model does not allow much contribution to the mean travel time by trips that are unaffected by incidents. The impact of incidents is more significant in this small microsimulation context because the majority of the trips in the model are affected. Across a wider

Figure 9.8. Average travel time (3457817–3475128).
Figure 9.9. Standard deviation of travel times (3457817–3475128).

Figure 9.10. 80th percentile travel time (3457817–3475128).
Figure 9.11. Buffer Index (3457817–3475128).

area, such as in the meso-experiment, overall average times would not be as sensitive to local incidents, since many of the model trips would be far removed from the incident and would operate under normal travel conditions.
  – Fundamental differences between the microsimulation and mesosimulation tools. The way Aimsun does micro-modeling versus the way DYNASMART does meso-modeling could be another reason for the greater variability in the micro results. In micro-models, individual vehicles typically function separately and are tracked continuously throughout the simulation, with each reported as a separate trajectory. In DYNASMART, individual vehicles are grouped into "platoons," and each vehicle's output metric is influenced by the way the platoon moves through the network.

Figure 9.12. Path location.

Table 9.11. Departure Time Interval 7 a.m. to 8 a.m., Weekday
Path-level analysis; travel times in minutes.

Scenario   ID   Avg    Std    95th    90th    80th    Buffer   Planning
Name            TT     Dev    Pctl    Pctl    Pctl    Index    Time Index
4-21        1   11.56  3.69   17.94   16.45   14.84    0.55     12.14
21-29       2   10.43  3.60   17.40   15.34   13.15    0.67     11.84
25-3        3   11.15  2.90   15.15   14.69   13.48    0.36     10.30
41-7        4   10.46  3.46   16.60   15.15   13.61    0.59     11.31
44-12       5    9.00  3.27   14.87   13.38   11.62    0.65     10.07
46-39       6    5.62  1.36    7.74    7.32    6.75    0.38      5.28
48-29       7    7.39  2.10   11.57   10.59    8.99    0.57      7.91
58-10       8    9.57  2.80   14.45   13.17   11.96    0.51      9.87
61-34       9   11.12  3.31   17.39   15.86   13.49    0.56     11.75
65-22      10   11.33  3.59   16.71   15.84   14.44    0.48     11.34
72-8       11   13.03  4.33   22.50   18.02   16.07    0.73     15.25
80-26      12    6.24  1.41    8.72    8.24    7.32    0.40      5.95
85-23      13   10.83  3.05   15.09   13.82   13.08    0.39     10.18
89-4       14    8.60  2.42   12.44   12.10   10.84    0.45      8.42
90-49      15   10.55  2.98   15.42   15.03   13.47    0.46     10.43

Table 9.12. Departure Time Interval 2 p.m. to 3 p.m., Weekend
Path-level analysis; travel times in minutes.

Scenario   ID   Avg    Std    95th    90th    80th    Buffer   Planning
Name            TT     Dev    Pctl    Pctl    Pctl    Index    Time Index
39-4        1    7.52  1.54   10.14    9.36    8.82    0.35      6.95
56-7        2    7.52  1.54   10.14    9.36    8.82    0.35      6.95
75-5        3    7.58  1.68   10.38    9.82    8.95    0.37      7.11
94-4        4    7.35  1.47    9.84    9.42    8.63    0.34      6.74

Figure 9.13. Average travel time.
Figure 9.14. 80th percentile travel time.

Figure 9.15. Planning Time Index.
Figure 9.16. Buffer Index.

Chapter 10

Study Findings and Conclusions

The SHRP 2 L04 research project has addressed the need for a comprehensive framework and a conceptually coherent set of methodologies to (1) better characterize travel time reliability and the manner in which the various sources of variability operate, individually and in interaction with each other, in determining the overall reliability performance of a network; (2) assess reliability's impacts on users and the system; and (3) determine the effectiveness and value of proposed countermeasures. In doing so, this project has closed an important gap in the underlying conceptual foundations of travel modeling and traffic simulation and has provided practical means of generating realistic reliability measures using network simulation models in a variety of application contexts.

The general methodology for including reliability in planning and operational models is based on the notion that transportation reliability is intrinsically related to the variation in experienced (or repeated) travel times for a given facility or travel experience. Thus, integrating reliability in traffic models is about capturing and representing the effect of the various sources of variation on the performance of the transportation system. The proposed approach is grounded in a fundamental distinction between (1) systematic variation in travel times resulting from predictable seasonal, day-specific, or hour-specific factors that affect either travel demand or network service rates, and (2) random variation that stems from various sources of largely unpredictable (to the user) fluctuation. The former are addressed exogenously through model segmentation and demand/supply scenarios, creating the backdrop against which the random sources of variation are modeled. These sources are modeled both in terms of their direct impact on network performance and in terms of travelers' responses, which result in changes in travel demand.
In this study, several sources of variability have been distinguished in a taxonomy that recognizes demand- versus supply-side, exogenous versus endogenous, and systematic versus random variability. The variability in system performance has both systematic causes, which can be modeled and predicted, and causes that can only be modeled as random variables occurring according to some probabilistic mechanism. The general approach to modeling phenomena and sources of variability incorporates as much as possible the causal or systematic determinants of variability; the remaining inherent variation is then added to the representation through suitably calibrated probabilistic mechanisms. This approach can be implemented at both the micro- and mesosimulation levels, as demonstrated in this project. Notwithstanding the desire for explanation, the portion of variability that must be viewed as inherent, or random, is likely to remain substantial.

The incorporation of reliability factors into the models can be done in either of two principal ways: (1) analytically, in which case travel time is implicitly treated as a random variable and its distribution, or some parameters of this distribution such as mean and variance, are described analytically and used in the modeling process; or (2) empirically, through multiple scenarios, in which case the travel time distribution is not parameterized analytically but is simulated directly or explicitly through multiple model runs with different input variables. The conclusion emerging from this research is that both methods are useful and could be hybridized to account for different sources of travel time variation in the most effective and computationally efficient way.

Travel time variability can be measured and analyzed in a variety of ways and at different levels of disaggregation.
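The contrast between the two approaches can be illustrated on synthetic data: the empirical route reads a percentile directly from pooled scenario runs, while the analytical route fits a parametric distribution (here a lognormal, a common but assumed choice) and derives the same percentile from the fitted parameters:

```python
import math
import random
import statistics

# Empirical approach: pool simulated travel times from multiple scenario
# runs and read the 80th percentile directly (nearest rank).
rng = random.Random(7)
runs = [[rng.lognormvariate(2.7, 0.4) for _ in range(200)] for _ in range(10)]
pooled = sorted(t for run in runs for t in run)
empirical_p80 = pooled[int(0.8 * len(pooled)) - 1]

# Analytical approach: treat travel time as lognormal and derive the same
# percentile from fitted parameters (mean and std dev of log travel time).
logs = [math.log(t) for t in pooled]
mu, sigma = statistics.mean(logs), statistics.pstdev(logs)
z80 = 0.8416  # standard normal 80th percentile
analytic_p80 = math.exp(mu + z80 * sigma)
```

Because the synthetic data really are lognormal, the two estimates agree closely; with real travel times the gap between them is one diagnostic of whether an analytical form is adequate.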
To constructively measure variability of travel times, a specific time unit must be chosen in terms of interval during the day (e.g., the hour between 7:00 a.m. and 8:00 a.m.), day of week (e.g., Monday), and season (e.g., fall). This is necessary to control for systematic (e.g., seasonal) differences in travel time that occur between hours of the day, between days of the week, and between seasons. The remaining variability of travel times across different days for the same unit (hour, day of week, and season) can then be used as the basic measure of travel reliability.
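The segmentation step described above amounts to grouping observations by (season, day of week, hour) before computing any variability statistic. A minimal sketch, with an assumed meteorological-season mapping and illustrative observations:

```python
from collections import defaultdict
from datetime import datetime

def season_of(dt):
    # Simple meteorological seasons (an assumption for illustration).
    return {12: "winter", 1: "winter", 2: "winter",
            3: "spring", 4: "spring", 5: "spring",
            6: "summer", 7: "summer", 8: "summer",
            9: "fall", 10: "fall", 11: "fall"}[dt.month]

def segment(observations):
    """Group (timestamp, travel_time) pairs by (season, weekday, hour),
    so variability is computed within, not across, systematic strata."""
    groups = defaultdict(list)
    for dt, tt in observations:
        groups[(season_of(dt), dt.strftime("%A"), dt.hour)].append(tt)
    return groups

# Illustrative observations (travel times in minutes).
obs = [(datetime(2010, 5, 3, 7, 15), 27.9),   # Monday, 7-8 a.m.
       (datetime(2010, 5, 10, 7, 40), 31.2),  # Monday, 7-8 a.m.
       (datetime(2010, 5, 4, 8, 5), 44.1)]    # Tuesday, 8-9 a.m.
groups = segment(obs)
```

The spread of travel times within each group (e.g., across Mondays, 7–8 a.m., in spring) is then the basic reliability measure.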

By necessity, quantifying the travel time variability that characterizes the reliability of travel in a network entails representing the variability of travel times through the network's links and nodes along the travel paths followed by travelers, taking into account the correlation between link travel times. Capturing these correlation patterns is generally very difficult when only link-level measurements are available. More important, given that a vehicle typically traverses a large number of links along its journey, deriving path-level and O–D-level travel time distributions from the underlying link travel time distributions is an extremely unwieldy and analytically forbidding task.

A way around these challenges is to obtain or measure the path- and/or O–D-level travel times as a complete entity rather than constructing them from link-level distributions. In a simulation model, this means obtaining the travel times over entire or partial vehicle trajectories. Regardless of the specific reliability measures of interest, the availability of vehicle trajectories in the output of a simulation model enables construction of the path- and O–D-level travel time distributions of interest, as well as the extraction of link-level distributions. As such, the key building block for producing measures of reliability in a network simulation model is vehicle trajectories and the associated experienced traversal times through all or part of the travel path. The vehicle trajectory contains the traffic information and itinerary associated with each vehicle in the transportation network. An important conclusion and contribution of the study is that travel time variability is best measured by variation across individual trajectories for the given facility and time unit.
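Extracting path travel times as complete entities from trajectories can be sketched as follows; the record format (ordered link entries with entry and exit times) is an assumption for illustration, not the format of any particular simulator:

```python
def path_travel_times(trajectories, path):
    """trajectories: dict vehicle_id -> list of (link_id, entry_min, exit_min)
    records ordered by time; path: ordered list of link ids. Returns one
    end-to-end travel time per vehicle whose trajectory contains the path
    contiguously, so correlation between link times is preserved
    automatically rather than reconstructed from link distributions."""
    times = []
    for records in trajectories.values():
        ids = [r[0] for r in records]
        for i in range(len(ids) - len(path) + 1):
            if ids[i:i + len(path)] == path:
                # Exit time of the last path link minus entry to the first.
                times.append(records[i + len(path) - 1][2] - records[i][1])
                break
    return times

# Illustrative trajectories (times in minutes).
trajs = {
    101: [("a", 0.0, 1.25), ("b", 1.25, 3.5), ("c", 3.5, 6.0)],
    102: [("x", 0.0, 0.75), ("b", 0.75, 2.0), ("c", 2.0, 4.25)],
    103: [("b", 0.0, 1.5), ("d", 1.5, 2.5)],   # leaves the path after b
}
tt = path_travel_times(trajs, path=["b", "c"])
```

The returned sample of path travel times then feeds the distribution construction and index calculations directly, with no link-level convolution step.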
Thus, for reliability analysis purposes, the proposed framework unifies all particle-based simulation approaches so long as they produce vehicle trajectories; this methodological approach is further supported by the detailed discussion in Chapter 4 and the development of functional requirements for such simulation models.

In addition, many existing simulation tools view and model various sources of travel time variability (e.g., traffic incidents, work zones, weather, special events, other fluctuations in demand) as exogenous events using user-specified scenarios. Distinct from these exogenous factors, there are also endogenous sources of variation that are inherently reproduced, to varying degrees, by given traffic simulation models. Many studies have proposed ways to capture random variation in various traffic phenomena within particular micro- or mesosimulation models. Examples include flow breakdown, incidents due to drivers' risk-taking behaviors, and heterogeneity in driving behaviors. All of these have important implications for how the models are used to produce reliability estimates and how these measures are interpreted and, in turn, used operationally.

The proposed methodological approach for modeling and estimating travel time reliability using simulation models features three components:

1. The Scenario Manager, which captures exogenous unreliability sources such as special events, adverse weather, work zones, and travel demand variation;
2. Reliability-integrated simulation tools that model sources of unreliability endogenously, including user heterogeneity, flow breakdown, and collisions; and
3. The Vehicle Trajectory Processor, which extracts reliability information from the simulation output, namely, vehicle trajectories.

The primary role of the Scenario Manager is to prepare input scenarios for the traffic simulation models; this is a core part of the framework, as it directly affects the final travel time distributions.
The Scenario Manager is essentially a preprocessor of simulation input files for capturing exogenous sources of travel time variation, such as external events, traffic control and management strategies, and travel demand-side factors. Recognizing the importance of scenario definition and the complexity of identifying relevant exogenous sources, the Scenario Manager provides the ability to construct scenarios that entail any mutually consistent combination of external events. It captures parameters that define external sources of unreliability (such as special events, adverse weather, and work zones) and enables users either to specify scenarios of particular historical significance or policy interest, or to generate them randomly given the underlying stochastic processes of the associated events.

Using these generated scenarios in conjunction with the historical average demand as inputs, the traffic simulation models produce the vehicle trajectory outputs. During the simulation, the traffic simulation models capture the endogenous sources of travel time variability, such as endogenous flow breakdown and heterogeneous driving behaviors. In general, traffic operation models need to model variations from different sources on both the demand and supply sides; they also need to capture the traffic physics that characterize inherent probabilistic phenomena, including the collective effects that arise from the inherent randomness in driving behavior, namely, flow breakdown and its impact on travel time. Traffic operation models should be capable of recognizing and representing both demand- and supply-side causes of variability due to different sources. Importantly, rather than affecting travel time reliability separately, these factors often interact, which requires the ability to model all or any combination of causes of variability in one operational model.
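The random-generation mode described above can be sketched as a Monte Carlo draw over exogenous events. The event set, the probabilities, and the rain-raises-incident-risk coupling below are illustrative assumptions, not the Scenario Manager's actual parameters or interface:

```python
import random

def generate_scenarios(n, seed=0):
    """Randomly draw exogenous-event combinations for simulation input.
    All probabilities below are illustrative assumptions."""
    rng = random.Random(seed)  # fixed seed makes scenario sets reproducible
    scenarios = []
    for sid in range(1, n + 1):
        day_type = rng.choices(["weekday", "weekend"], weights=[5, 2])[0]
        rain = rng.random() < 0.3                           # assumed 30% chance
        incident = rng.random() < (0.25 if rain else 0.15)  # rain raises risk
        work_zone = rng.random() < 0.10                     # assumed 10% chance
        scenarios.append({"id": sid, "day_type": day_type, "rain": rain,
                          "incident": incident, "work_zone": work_zone})
    return scenarios

scenarios = generate_scenarios(40, seed=42)
```

Each generated dictionary would then be translated into the corresponding simulator input files (demand adjustments, capacity reductions, incident definitions) for one model run, mirroring how the mesosimulation experiment used 40 scenarios per time interval.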
Most critically, such operational models should be particle-based (whether microscopic or mesoscopic simulation models) and capable of producing reliability-related output in the form of vehicle travel trajectories.
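The Scenario Manager’s ability to generate scenarios randomly from the underlying stochastic processes of external events, while screening for mutual consistency, can be illustrated with a toy Monte Carlo sketch. The event probabilities and the consistency rule below are invented for the example and carry no empirical meaning.

```python
import random

# Assumed marginal occurrence probabilities for exogenous events (made up).
EVENT_PROBS = {"adverse_weather": 0.15, "work_zone": 0.10, "special_event": 0.05}

def consistent(events):
    # Example consistency rule: suppose work zones are suspended
    # during special events, so the two may not co-occur.
    return not ("work_zone" in events and "special_event" in events)

def draw_scenario(rng):
    """Sample one mutually consistent combination of exogenous events."""
    while True:
        events = {e for e, p in EVENT_PROBS.items() if rng.random() < p}
        if consistent(events):
            return sorted(events)

rng = random.Random(42)
scenarios = [draw_scenario(rng) for _ in range(1000)]
```

A real scenario generator would also sample event attributes (duration, severity, location) and their temporal overlap, but the accept/reject structure is the same.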

The Vehicle Trajectory Processor is then introduced to extract reliability-related measures from the vehicle trajectory output of the simulation models. It produces and helps visualize reliability performance measures (travel time distributions, indicators) from observed or simulated trajectories. Observed trajectories may be obtained directly through measurement (e.g., GPS-equipped probe vehicles), thus enabling validation of travel time reliability metrics generated on the basis of output from simulation tools.

While chaining the three modules of the reliability analysis framework (Scenario Manager, Simulation Model, and Trajectory Processor) completes the necessary procedures for performing a scenario-based reliability analysis, two feedback loops are worth mentioning to further incorporate behavioral aspects of travelers into the reliability modeling framework. One of these feedback loops could potentially use scenario-specific travel times to make scenario-conditional demand adjustments (e.g., departure time changes under severe weather conditions). The other loop suggests that the overall system uncertainty might affect the average demand by shifting the equilibrium point (i.e., reliability-sensitive network equilibrium), and such feedback could be used in travel demand forecasting models that predict the impact of reliability measures on travel patterns. These are key considerations for future research and development, as identified further in the subsequent section, Recommendations for Further Research.

The reliability analysis framework and associated prototype tools developed in this project enable a full range of analyses addressing network-level, O–D-level, path-level, and segment/link-level travel time reliability using regional planning and operations models. In doing so, users need to consider not only the different properties of the reliability measures but also their applicability at an intended analysis level.
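The kind of reduction the Trajectory Processor performs, from per-vehicle travel times extracted from trajectories down to distributional indicators, can be sketched as follows. The buffer index formula, (95th percentile - mean) / mean, follows common usage in reliability analysis; the function names and the synthetic sample are illustrative, not the tool’s actual interface.

```python
def percentile(xs, q):
    """Nearest-rank percentile of a sample (0 < q <= 100)."""
    s = sorted(xs)
    k = max(0, min(len(s) - 1, int(round(q / 100.0 * len(s))) - 1))
    return s[k]

def reliability_measures(travel_times):
    """Reduce a sample of travel times to a few standard indicators."""
    mean = sum(travel_times) / len(travel_times)
    p95 = percentile(travel_times, 95)
    return {"mean": mean, "p95": p95, "buffer_index": (p95 - mean) / mean}

# Synthetic sample: 90 uncongested trips at 20 min, 10 delayed trips.
sample = [20.0] * 90 + [30.0 + 2.0 * i for i in range(10)]
m = reliability_measures(sample)
```

The same reduction applies unchanged whether the travel times come from simulated trajectories or from probe-vehicle observations, which is what makes validation against measured data straightforward.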
A number of reliability performance measures have been identified and categorized on the basis of their applicability to different levels of travel time distributions and associated reliability analysis, namely, network level, O–D level, and path/segment/link level. It is essential in reliability performance analysis to consider the user’s point of view, as travelers will adjust their departure time, and possibly other travel decisions, in response to unacceptable travel times and delays in their daily commutes. User-centric reliability measures describe user-experienced or perceived travel time reliability, such as probability of on-time arrival, schedule delay, volatility, and sensitivity to departure time. The majority of these measures can be readily generated through the prototype Trajectory Processor that was developed as part of this project, while others could be incorporated into future development and enhancement of the Trajectory Processor.

The potential linking of travel demand forecasting models to traffic microsimulation provides the opportunity for a more accurate representation of traffic conditions to be fed back to choices about travel time, travel route, travel mode, or the decision to travel at all. This project highlighted the importance of a feedback mechanism that could incorporate travel time reliability into traditional trip-based travel demand models, emerging activity-based models, and route choice models. In the context of this project, incorporation of reliability was primarily considered in the overall framework of demand-network equilibrium, with the demand side represented by an advanced activity-based model (ABM) and the network simulation side represented by advanced dynamic traffic assignment (DTA). Several important aspects of ABM-DTA integration and associated feedback mechanisms are essential and need to be addressed even before incorporation of travel time reliability measures.
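Two of the user-centric measures listed above, probability of on-time arrival and schedule delay, can be computed directly from a sample of door-to-door travel times, as this sketch shows. The sample values, the time budget, and the preferred travel time are illustrative assumptions, not project data.

```python
def on_time_probability(travel_times, budget):
    """Share of trips completed within the traveler's time budget."""
    return sum(1 for t in travel_times if t <= budget) / len(travel_times)

def mean_schedule_delay(travel_times, preferred):
    """Mean early and mean late delay relative to a preferred travel time."""
    n = len(travel_times)
    early = sum(max(0.0, preferred - t) for t in travel_times) / n
    late = sum(max(0.0, t - preferred) for t in travel_times) / n
    return early, late

times = [18.0, 20.0, 22.0, 25.0, 35.0]  # minutes, assumed sample
p_on_time = on_time_probability(times, budget=25.0)
sde, sdl = mean_schedule_delay(times, preferred=20.0)
```

Because both measures depend on the whole travel time distribution rather than its mean, they are natural outputs of a trajectory-level processor and natural inputs to departure time choice models.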
The incorporation of reliability into a network simulation model requires innovative approaches to generate the reliability measures that are fed into the demand model, to make route choice sensitive to reliability measures, and to ensure that a realistic correlation pattern is taken into account when route-level measures of reliability are constructed from link-level measures.

Incorporating travel time reliability into stochastic traffic simulation models enables the off-line evaluation of traffic network performance, including assessment of management interventions, policies, and geometric configuration, as well as both short-term and long-run impacts of policies aimed at improving travel time and service reliability. The reliability analysis tools developed in this project (namely, the Scenario Manager and Trajectory Processor), even in their current prototype state of development, can be readily used to perform essential elements of such evaluations. A prerequisite for the use of the analysis tools is the availability of a particle-based traffic simulation model capable of producing vehicle trajectory output. For demonstration purposes, the Scenario Manager and Trajectory Processor prototypes incorporate interfaces to the Aimsun and DYNASMART-P simulation platforms, as examples of microscopic and mesoscopic models, respectively. It is noted that both the Scenario Manager and the Trajectory Processor have been developed at a prototype level of detail and functionality for project team use only, and are shared with the developer and user community on an “as is” basis. For this reason, they may not meet all requirements of an implementing agency without further development.

Implementation Steps

This project has developed and demonstrated a unified approach with broad applicability to various planning and operations analysis problems, which allows agencies to incorporate reliability as an essential evaluation criterion.
The approach is independent of specific analysis software tools, to enable and promote wide adoption by agencies and developers. The project has also developed specific software tools to prototype the key concepts, namely, a Scenario Manager and a Trajectory Processor, and demonstrated them with two commonly used network modeling software platforms.

Agency Adoption

Throughout this study, it has become clear that reliability as an evaluation and decision factor is here to stay. It is therefore essential for agencies, and the consultants that support them, to provide the inputs required to consider reliability in designing and evaluating future programs, projects, and policies. Agency hesitation to adopt new approaches is rooted in two factors: (1) the institutional cost of doing something different, and (2) lack of trust and experience in the new generation of tools available to address this need. The present project provides the approaches and tools to address the second factor. Furthermore, it addresses the first factor by developing an approach that is essentially software neutral and can be readily adapted to agencies’ existing modeling tools.

Nonetheless, unless developers of commercial software provide the necessary utilities and linkages to fully enable reliability-based analysis approaches, agencies will not fully come on board. The SHRP 2 program has taken important steps to create further awareness of the importance of reliability as a decision factor and of the availability of these new approaches and tools.

To further promote agency adoption, it is important to identify and support early adopters, that is, agencies that will show the way and that others can point to as successful examples to be emulated. Program funding for demonstration projects with full agency engagement and commitment is therefore an essential ingredient in achieving greater agency adoption.
Developers

Developers of commercial software tools for both planning and operations applications play a critical role in the dissemination of new knowledge and methodological advances developed under projects such as this one. The project team members are themselves actively engaged in the application and further development of the tools; however, the transportation field is a vast one that requires a large number of players to work toward similar technical goals.

The approaches and tools developed in this project are readily applicable with most software tools for microscopic and mesoscopic network simulation, albeit to varying degrees of completeness. The steps required of developers are relatively minor, given the templates and code developed for this project. Naturally, commercial developers would all like to add unique value to their offerings, for competitive market reasons; however, they will only do so if they believe there is market demand for the capability. This is where having a few early agency adopters will start the cycle of agency demand and developer supply. The present project has removed the technical risk for developers, who need only invest in programming time to customize the tools to their software’s unique features.
Success Factors

Key success factors for the results of this project include the following:

• Creating greater awareness of the importance of reliability analysis for major planning and operations projects, as well as of the attainability of such analysis capabilities;
• Adopting scenario-based approaches to project evaluation as the primary, default approach for conducting such evaluations;
• Promoting greater appreciation and recognition of the entire distribution of travel time, rather than simply mean values; and
• Making utilities available for use in connection with most network simulation software, both to manage the creation and generation of scenarios and to analyze the output of such scenario runs to obtain travel time distributions and reliability descriptors.

Recommendations for Further Research

Longer-term impact evaluation entails integrating reliability considerations in equilibrium planning models. An ideal integration would bring together reliability-sensitive network simulation models with micro-level activity-based demand models. To this end, several important research directions have become clear in the course of this project. Many of them relate to more advanced methods of incorporating travel time reliability, specifically schedule delay cost and temporal activity profiles. However, improving travel demand models and network simulation tools in this direction is closely intertwined with general improvement of individual mesosimulation and microsimulation models. The team makes the following specific recommendations for future research:

• Continue research on advanced methods for incorporating travel time reliability into demand models and network simulation tools, including the schedule delay cost approach and the temporal utility profile approach. For demand models, reliability should be included in mode choice and time-of-day choice and, through these choices or in a different way, also be incorporated into the other travel choices, such as destination choice and trip frequency choice.
• For network simulation models in particular, reliability measures should be incorporated in such a way that they can be effectively generated within the network simulation procedure, as well as affect the route choice embedded in it. Most attempts to date have resulted in path-based route choice models with complicated path utilities that cannot be directly incorporated into real-world network simulations.
• Travel demand and network simulation models that incorporate reliability measures must be operational in large networks. This is especially challenging for the network supply side, since most of the proposed formulations inherently require path-based assignment. Accordingly, and as part of the recommendations above, continue research and development of path-based assignment algorithms that incorporate travel time reliability and can generate a trip travel time distribution in addition to mean travel time.
• Continue research on schemes for the integration of advanced ABM and DTA that can ensure full consistency of daily activity patterns and schedules at the individual level and behavioral realism of traveler responses. In this regard, addressing enhancement of time-of-day choice, trip departure time choice, and activity scheduling components is essential. This point relates to the conceptual structure of these models and their implementation with respect to temporal resolution.
• The travel demand models and network simulation models that incorporate reliability measures should be combined in a certain equilibrium framework. It is probably unrealistic to expect that a closed-form equilibrium formulation with reliability measures would ever be found.
It is more realistic to construct a so-called loosely coupled demand-supply model with at least some level of consistency between the reliability measures generated by the network simulation and those used in the route choice and demand models. The existence and uniqueness of the equilibrium (stationary) solution in this case becomes largely an empirical issue.
• Encourage additional data collection on the supply side of activities and on scheduling constraints, including the distribution of jobs and workers by schedule flexibility and the classification of maintenance and discretionary activities by schedule flexibility, and develop approaches to forecast related trends.
• Continue research on and application of multiple-run model approaches and associated scenario formations, for both the demand and network supply sides. This project’s synthesis and research have shown that a conventional single-run framework is inherently too limited to incorporate some important reliability-related phenomena, such as nonrecurrent congestion due to a traffic incident, special event, or extreme weather condition.
• Incorporate travel time reliability in project evaluation and user benefit calculations. Restructure the output of travel models to support project evaluation and user benefit calculations with consideration of the impact of improved travel time reliability.
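The loosely coupled demand-supply model recommended above can be illustrated with a toy fixed-point loop: a supply function maps demand to a mean travel time and its variability, a demand function responds to a reliability-inclusive generalized cost, and the method of successive averages (MSA) damps the iteration. All functional forms and coefficients below are invented for the sketch and do not represent a calibrated model.

```python
def supply(demand):
    """Toy network model: congestion raises both mean time and variability."""
    mean_time = 20.0 + 0.002 * demand   # minutes
    std_time = 2.0 + 0.001 * demand     # minutes
    return mean_time, std_time

def demand_response(mean_time, std_time, reliability_weight=1.5):
    """Toy demand model: trips decline with reliability-inclusive cost."""
    cost = mean_time + reliability_weight * std_time
    return max(0.0, 12000.0 - 200.0 * cost)

demand = 8000.0
for k in range(1, 200):
    target = demand_response(*supply(demand))
    demand += (target - demand) / (k + 1)   # MSA averaging step

mean_time, std_time = supply(demand)
```

Even in this toy setting the loop converges to a stationary point where the reliability measures used by the demand side match those produced by the supply side, which is exactly the consistency condition described above.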



TRB’s second Strategic Highway Research Program (SHRP 2) Report S2-L04-RR-1: Incorporating Reliability Performance Measures into Operations and Planning Modeling Tools explores the underlying conceptual foundations of travel modeling and traffic simulation and provides practical means of generating realistic reliability performance measures using network simulation models.

SHRP 2 Reliability Project L04 also produced a report titled Incorporating Reliability Performance Measures into Operations and Planning Modeling Tools: Application Guidelines, which provides an overview of the methodology and tools that can be applied to existing microsimulation and mesoscopic modeling software to assess travel time reliability.

A companion publication, Incorporating Reliability Performance Measures into Operations and Planning Modeling Tools: Reference Material, discusses the activities required to develop operational models that address the needs of the L04 research project.

The L04 project also produced two pieces of software and accompanying user’s guides: the Trajectory Processor and the Scenario Manager.

Software Disclaimer: These materials are offered as is, without warranty or promise of support of any kind, either expressed or implied. Under no circumstance will the National Academy of Sciences or the Transportation Research Board (collectively “TRB”) be liable for any loss or damage caused by the installation or operation of these materials. TRB makes no representation or warranty of any kind, expressed or implied, in fact or in law, including without limitation, the warranty of merchantability or the warranty of fitness for a particular purpose, and shall not in any case be liable for any consequential or special damages.

