
Dynamic, Integrated Model System: Jacksonville-Area Application (2013)

Chapter 2: Model Calibration and Validation


Model System Calibration and Validation Process

Because the Jacksonville DaySim implementation was “transferred” from Sacramento—and the model coefficients and alternative-specific constants were initially estimated and calibrated for the Sacramento region—the project team had to recalibrate the core model components to reflect Jacksonville region-specific travel patterns. Calibration and validation of the entire model system is a highly iterative process that involves making changes to individual model components to better match observed data sources, as well as evaluating the impacts of these changes on other model components and on overall model system performance. One of the advantages of the disaggregate nature of activity-based microsimulation models such as DaySim is that they support more flexible and realistic calibration adjustments than are possible with aggregate trip-based models. Note that a calibration effort was not performed for the Burlington implementation.

Observed Data Sources

Before calibrating the core behavioral components, the project team had to prepare observed data sets against which to compare the model outputs. The primary observed data source for the calibration of the core DaySim component models was the 2009 National Household Travel Survey (NHTS), with data collected in 2008–2009. For some model components, such as the household vehicle availability model and the work-tour-destination model, the NHTS was supplemented with observed information from the 2005–2009 American Community Survey (ACS) data. Because the focus of the C10A effort is on a region in which choices of nonhighway modes are limited, and thus the dynamic, integrated model represents behavioral changes primarily in response to roadway conditions, detailed transit information (e.g., an onboard survey) was not used in this effort. The observed transit mode share for the Jacksonville region is less than 1%.

To support the calibration of the DaySim models, the project team first had to process the NHTS household, person, and trip records to create a new tour record file and to append additional information to the existing NHTS household, person, and trip files. A summary of the NHTS data available for calibration is shown in Table 2.1.

Although additional NHTS “add-on” survey data were collected in the Jacksonville region, the overall number of households, persons, tours, and trips was relatively small. Because DaySim models travel behavior for a typical weekday, weekend days had to be removed from the data set, further reducing the sample size. Although the NHTS contains all the data items required for activity-based model (ABM) system development, such a small regional sample is insufficient to completely estimate the coefficients in the DaySim component models. However, in the absence of any other data sets containing the information required for ABM development, the NHTS was deemed acceptable for deriving calibration targets.
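The chapter does not show how the new tour record file was built from the NHTS trip records. The following is a minimal sketch of one common approach, grouping each person's ordered weekday trips into home-anchored tours; the field names and the simple open/close rule are assumptions for illustration, not the actual NHTS processing scripts.

```python
def build_tours(trips):
    """Group one person's weekday trip records into home-anchored tours.

    Each trip is a dict with hypothetical keys (not the actual NHTS field
    names): 'orig_purpose', 'dest_purpose', 'depart', 'arrive'.
    A tour opens when the traveler leaves home and closes when the
    traveler next returns home.
    """
    tours = []
    current = None
    for trip in sorted(trips, key=lambda t: t["depart"]):
        if trip["orig_purpose"] == "home":
            current = {"start": trip["depart"], "trips": []}
        if current is None:
            continue  # trip recorded before any home departure; skipped in this sketch
        current["trips"].append(trip)
        if trip["dest_purpose"] == "home":
            current["end"] = trip["arrive"]
            tours.append(current)
            current = None
    return tours


# Tiny example: two trips form a single home-work-home tour.
example = [
    {"orig_purpose": "home", "dest_purpose": "work", "depart": 7.5, "arrive": 8.0},
    {"orig_purpose": "work", "dest_purpose": "home", "depart": 17.0, "arrive": 17.6},
]
print(build_tours(example))
```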
In addition to the relatively small sample size, a number of other issues arose when using the NHTS data. These issues included the following:

• The absence of any person, tour, or trip information for children under 5 years of age: Although these people are reflected in the household size information, no travel behavior is recorded. As a result, all summaries of DaySim-estimated travel behaviors exclude travel by young children to facilitate comparisons.
• Missing travel information for some members of the household: Typically, all travel made by all members of each household is collected during the household survey data collection process. Having this complete set of travel demand is even more critical in the context of advanced activity-based model systems, which consider all travel by all household members across all times of day and which may explicitly consider travel made jointly by members of the household. To address these missing persons, the person weights had to be adjusted to match regional controls of persons by person type developed to support creation of the synthetic population.
• Inconsistent expansion factors: When the household and person expansion factors provided with the NHTS were applied, significant discrepancies with other regional person and household totals were observed, necessitating further adjustments to the expansion factors.

Finally, a number of the NHTS-derived summaries seemed inconsistent with other household travel surveys used for DaySim development. For example, the NHTS seemed to show relatively high shares of workers choosing to work at home. Notwithstanding these issues, the NHTS was used as the primary data source in the absence of any viable alternatives.

Calibration Results

The following sections present the results of the initial calibration of the Jacksonville DaySim implementation. Although all model calibration adjustments have a simultaneous impact on the model predictions, the calibration effort typically follows a sequential process from the top to the bottom of the DaySim model hierarchy, because adjustments to upper-level models tend to affect lower-level model predictions more than the reverse. The calibration results described in these sections follow this hierarchy.

Note that these calibration results should not be considered final. The C10A project has involved the use of multiple sets of regional skims at different temporal resolutions, using different network simulation methods at different points in the project. For example, an initial calibration was performed using skims for four broad time periods. Subsequently, the calibration was revisited when Microsimulator-based skims for 22 time periods became available. The calibration was further revised when Router-based skims from the fully integrated model system were developed. Also note that the summaries shown are not exhaustive; additional summaries have been prepared and used in the calibration process.

Usual Work and School Locations

The usual work and school location models are the first models in the DaySim system; they predict the usual destination parcels for work and school tours. Information on workplace locations can then be used in subsequent model components, such as auto ownership. The work and school location models, as well as all the tour-destination-choice models, assume a single anchor point—the tour origin—from which impedance is measured, without direct consideration of the impedance for stops on the way to and from the tour destination. For the usual work and school location models, the anchor is the person’s home. In these models, the home location is treated as a special location, because it occurs with greater frequency than any given nonhome location, and size and impedance are not meaningful attributes. In addition, the model incorporates availability constraints; for example, only parcels with grade enrollment are available as school-tour destinations for children.

Table 2.2 shows that the model system achieves a relatively good match between overall average work-tour lengths. DaySim predicts shorter work tours for part-time workers than observed in the NHTS, and longer work tours for students. Figure 2.1 illustrates that the overall distribution of estimated work-tour lengths matches the distribution of observed work tours.
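Summaries such as Table 2.2 and Figure 2.1 are essentially expansion-weighted tabulations of tour length from the survey and from the model output. A minimal sketch of that kind of tabulation is shown below; the record layout and weight field are illustrative assumptions rather than the project's actual calibration scripts.

```python
import math
from collections import defaultdict


def weighted_mean_length(tours, group_key="worker_type", weight_key="weight"):
    """Expansion-weighted average one-way tour length (miles) by group."""
    dist_sum = defaultdict(float)
    wgt_sum = defaultdict(float)
    for t in tours:
        dist_sum[t[group_key]] += t["distance"] * t[weight_key]
        wgt_sum[t[group_key]] += t[weight_key]
    return {g: dist_sum[g] / wgt_sum[g] for g in dist_sum}


def length_distribution(tours, bin_miles=1.0, weight_key="weight"):
    """Weighted share of tours by distance bin, for plots like Figure 2.1."""
    hist = defaultdict(float)
    total = 0.0
    for t in tours:
        hist[int(math.floor(t["distance"] / bin_miles))] += t[weight_key]
        total += t[weight_key]
    return {b: w / total for b, w in sorted(hist.items())}


# Compare survey targets with model output (model records carry weights of 1.0
# because the synthetic population is a 100% sample).
nhts = [{"worker_type": "full-time", "distance": 12.4, "weight": 150.0},
        {"worker_type": "part-time", "distance": 8.1, "weight": 90.0}]
daysim = [{"worker_type": "full-time", "distance": 13.0, "weight": 1.0},
          {"worker_type": "part-time", "distance": 7.5, "weight": 1.0}]
print(weighted_mean_length(nhts), weighted_mean_length(daysim))
print(length_distribution(nhts))
```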
Figure 2.1 also illustrates that, even with relatively large travel markets, such as work-purpose tours, the observed data derived from the NHTS show a fair amount of variation.

Table 2.1. NHTS Summary Statistics for Jacksonville Region
County | Households | Persons | Tours (Total) | Tours (Weekday) | Trips (Total) | Trips (Weekday)
Clay | 658 | 1,365 | 1,717 | 1,304 | 4,630 | 4,108
Duval | 205 | 448 | 599 | 438 | 1,628 | 1,428
Nassau | 198 | 415 | 476 | 373 | 1,287 | 1,153
St Johns | 79 | 171 | 203 | 172 | 580 | 548
Total | 1,140 | 2,399 | 2,995 | 2,287 | 8,125 | 7,237

Table 2.2. Average Work-Tour Length, by Worker Type (miles)
Worker Type | NHTS | DaySim
Full-time workers | 13.67 | 13.51
Part-time workers | 9.75 | 8.10
Students | 5.71 | 6.84
Total | 12.99 | 12.53
Source: DaySim and NHTS.

Figure 2.1. Distribution of work-tour lengths. Source: DaySim and NHTS.

Finally, Table 2.3 summarizes commute flows from the 2005–2009 ACS data. This table demonstrates that the model system is doing a reasonable job of capturing these flows, although Duval County (which contains Jacksonville) is slightly overpredicted as a commute destination, while St. Johns County (which is along the coast) is underpredicted. Note that a “cleaned” business employment database was not available for St. Johns County, and the St. Johns existing employment had to be adjusted to match estimates of county employment derived from external sources.

Table 2.3. Worker Flows by County (%)
County O–D | Clay | Duval | Nassau | St Johns | Total
Clay | -1.2 | 1.2 | 0.0 | 0.0 | 0.0
Duval | 0.5 | -1.0 | 0.4 | 0.2 | 0.0
Nassau | 0.0 | 0.5 | -0.5 | 0.0 | 0.0
St Johns | 0.3 | 0.8 | 0.0 | -1.2 | 0.0
Total | 0.4 | 1.5 | 0.1 | 1.0 | 0.0
Note: O–D is origin–destination. Source: DaySim and 2005–2009 ACS.

Table 2.4 and Figure 2.2 summarize the usual school location model results. Overall, the current calibration of DaySim produces longer school tours than observed. DaySim predicts slightly longer school tours for grade school students and university students and slightly shorter school tours for high school students. As seen with the work tours, the observed data derived from the NHTS show a fair amount of variation in school-tour lengths.

Table 2.4. Usual School Location Average Distance (miles)
School Type | NHTS | DaySim
Grade school | 5.25 | 5.41
High school | 6.56 | 5.87
University | 11.67 | 12.67
Total | 6.64 | 7.14
Source: DaySim and NHTS.

Vehicle Availability

The vehicle availability model predicts the number of motorized vehicles owned, leased, or otherwise belonging to the fleet of vehicles possessed by a household. The vehicle availability model takes as given the household characteristics, as well as the regular work location information of all workers in the household. To calibrate and validate the model, the estimated share of households in each vehicle availability category was compared with the observed shares of households along three primary dimensions: household potential drivers, household income, and household residence county. Two primary sources of observed data were identified and summarized: the 2005–2009 ACS data and the 2009 NHTS. Table 2.7 shows the difference between the original estimated (Table 2.6) and the observed (Table 2.5) shares of households by auto availability and county of residence. The tables illustrate that the model is underpredicting 0-vehicle and 2-vehicle households and overpredicting 1-vehicle and 3+-vehicle households, suggesting that further calibration of this model is warranted.
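Further calibration of a choice model such as vehicle availability typically proceeds by shifting each alternative-specific constant by the log of the ratio of observed to modeled shares. The sketch below shows that generic adjustment; it is not the project's calibration code, and the example shares only approximate the regional totals reported in Tables 2.5 and 2.6 below.

```python
import math


def adjust_constants(constants, observed_shares, modeled_shares, damping=1.0):
    """One iteration of alternative-specific constant adjustment.

    For each alternative a:  ASC_a += damping * ln(observed_a / modeled_a)
    Shares are proportions summing to 1; a damping factor below 1 slows the
    adjustment when several interacting models are calibrated at once.
    """
    return {
        alt: asc + damping * math.log(observed_shares[alt] / modeled_shares[alt])
        for alt, asc in constants.items()
    }


# Households by vehicle availability (0, 1, 2, 3+), shares as proportions.
constants = {"0 veh": 0.0, "1 veh": 0.0, "2 veh": 0.0, "3+ veh": 0.0}
observed = {"0 veh": 0.064, "1 veh": 0.351, "2 veh": 0.414, "3+ veh": 0.171}
modeled = {"0 veh": 0.039, "1 veh": 0.374, "2 veh": 0.400, "3+ veh": 0.187}
print(adjust_constants(constants, observed, modeled, damping=0.5))
```

In practice the constant for a reference alternative is usually held fixed, and the adjustment is repeated over several full model runs, because upstream changes shift the shares again.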

Figure 2.2. Distribution of school-tour lengths. Source: DaySim and NHTS.

Table 2.5. Observed Households, by Vehicle Availability (%)
County | 0 | 1 | 2 | 3 | 4+ | Total
Clay | 0.3 | 3.3 | 5.7 | 2.2 | 0.8 | 12.3
Duval | 5.4 | 26.3 | 27.0 | 7.7 | 2.7 | 69.1
Nassau | 0.2 | 1.4 | 2.1 | 1.0 | 0.4 | 5.0
St Johns | 0.5 | 4.2 | 6.6 | 1.6 | 0.6 | 13.6
Total | 6.4 | 35.1 | 41.4 | 12.6 | 4.4 | 100.0
Source: 2005–2009 ACS.

Table 2.6. Estimated Households, by Vehicle Availability (%)
County | 0 | 1 | 2 | 3 | 4+ | Total
Clay | 0.1 | 3.8 | 5.5 | 2.0 | 0.9 | 12.3
Duval | 3.7 | 27.0 | 26.3 | 8.6 | 3.5 | 69.1
Nassau | 0.0 | 1.7 | 2.2 | 0.7 | 0.3 | 5.0
St Johns | 0.1 | 4.9 | 5.9 | 1.9 | 0.8 | 13.6
Total | 3.9 | 37.4 | 40.0 | 13.3 | 5.5 | 100.0
Source: DaySim.

Table 2.7. Difference in Households, by Vehicle Availability (%)
County | 0 | 1 | 2 | 3 | 4+ | Total
Clay | -0.2 | 0.5 | -0.1 | -0.3 | 0.1 | 0.0
Duval | -1.7 | 0.7 | -0.7 | 0.9 | 0.8 | 0.0
Nassau | -0.2 | 0.3 | 0.1 | -0.2 | 0.0 | 0.0
St Johns | -0.4 | 0.7 | -0.7 | 0.3 | 0.2 | 0.0
Total | 2.5 | 2.3 | 1.5 | 0.6 | 1.1 | 0.0
Source: DaySim and 2005–2009 ACS.

Day Pattern

The day-pattern model predicts the number and purpose of tours and intermediate stops made by each individual. These predictions arise from a series of sequential submodels that address different aspects of each individual’s daily activity pattern. The main activity pattern model predicts whether a person participates in any tours and intermediate stops for each of the seven different activity purposes and then the exact number of tours made for that purpose during the full day. Another submodel predicts the number and purpose of work-based subtours, while a final submodel predicts the number and purpose of intermediate stops.

Calibration targets for the day-pattern model calibration were derived from the NHTS. The full set of targets addressed tours by person type; tour and stop combinations by person type; exact numbers of tours and stops by purpose and person type; exact number and purpose of work-based subtours by person type; numbers of stops by tour purpose; and the exact numbers of tours and stops by person type. The estimated results produced by the activity generator were then compared with these targets. The calibration and validation process primarily involved making adjustments to alternative-specific constants and reviewing and revising estimated parameters to ensure reasonability and consistency.

Table 2.8 compares the total number of tours for each of the destination purposes predicted by DaySim with the NHTS observed tours by destination purpose. This table illustrates that, overall, DaySim is matching regional tours relatively well, with 4% too many tours across all purposes. Tours by individual destination purpose match reasonably well, with the exception of work tours, which are overpredicted by 12%. Further adjustment to the calibration to address this overprediction is recommended.

Table 2.9 summarizes the estimated and observed trips by destination purpose. This table demonstrates that DaySim is matching overall trips extremely well, although some of the individual purposes could use further refinement. Specifically, work trips are overpredicted, which is consistent with the overprediction of work tours, while school trips are underpredicted.

Table 2.8. Tours, by Destination Purpose
Purpose | NHTS | DaySim | Diff | % Diff
work | 432,006 | 485,234 | 53,228 | 12
school | 176,802 | 184,635 | 7,833 | 4
escort | 135,493 | 128,014 | -7,479 | -6
pers.bus | 111,630 | 108,583 | -3,047 | -3
shop | 216,455 | 225,625 | 9,170 | 4
meal | 59,408 | 59,031 | -377 | -1
soc/rec | 244,219 | 242,368 | -1,851 | -1
Total | 1,376,013 | 1,433,490 | 57,477 | 4
Source: DaySim and NHTS.

Table 2.9. Trips, by Destination Purpose
Purpose | NHTS | DaySim | Diff | % Diff
work | 730,988 | 797,150 | 66,162 | 9
school | 209,466 | 189,977 | -19,489 | -9
escort | 265,299 | 254,821 | -10,478 | -4
pers.bus | 251,000 | 250,055 | -945 | 0
shop | 646,348 | 651,131 | 4,783 | 1
meal | 199,045 | 203,632 | 4,587 | 2
soc/rec | 395,372 | 375,919 | -19,453 | -5
home | 1,467,457 | 1,434,811 | -32,646 | -2
Total | 4,164,975 | 4,157,496 | -7,479 | 0
Source: DaySim and NHTS.

Tour and Stop Destinations

Destinations for each of the tours predicted by the daily activity pattern models are predicted by a purpose-segmented destination-choice model, using information about network impedances, purpose-specific size terms, and household and person attributes. The tour-destination-choice models predict a specific parcel as a destination and assume a single home anchor point (the tour origin, from which impedance is measured), without direct consideration of the impedance for stops on the way to and from the tour destination. Unlike the usual work location model, which has a nested structure to reflect the treatment of home as a special location, the destination-choice models consider all parcels in a multinomial structure, subject to availability constraints.

Figure 2.3 through Figure 2.7 compare the observed NHTS tour-length frequencies with the estimated DaySim tour-length frequencies. These figures demonstrate that DaySim matches observed data reasonably well, but they also illustrate the variations in some of the NHTS data, especially for purposes not well represented in the NHTS. One example of a purpose not well represented would be meal tours.

Figure 2.3. Distribution of shop tour lengths. Source: DaySim and NHTS.

Figure 2.4. Distribution of social/recreation tour lengths. Source: DaySim and NHTS.

Figure 2.5. Distribution of personal business tour lengths. Source: DaySim and NHTS.

Figure 2.6. Distribution of escort tour lengths. Source: DaySim and NHTS.

Figure 2.7. Distribution of meal-tour lengths. Source: DaySim and NHTS.

Trip Mode

The DaySim model system incorporates two sets of mode-choice models. The tour-mode-choice model predicts the primary mode used for a tour, while the trip-mode-choice model predicts the mode used for each individual trip on the tour, constrained by the tour mode. The tour- and trip-mode-choice models incorporate a variety of network impedance, household, and purpose attributes, and even land-use attributes. The core mode-choice models incorporate the following modes:

• Drive to transit.
• Walk to transit.
• School bus.
• Shared ride 2.
• Drive alone.
• Bike.
• Walk.

Table 2.10 through Table 2.12 summarize the observed and estimated mode shares by trip destination purpose. The mode-choice model calibration process involves making adjustments to both the tour models and the trip models. However, only the trip-mode-choice model outputs are used directly in the network assignment process, so only the trip-mode results are reported in the following tables. These tables demonstrate that DaySim does a reasonably good job of matching aggregate mode shares, although the calibration by purpose could be improved. Overall, drive alone shares are overpredicted by 3.6%, while walk shares are underpredicted by 2.4%. Note that the tables include additional modes not in the preceding list.

Table 2.10. Observed Trip-Mode Shares, by Destination Purpose (%)
Mode | Work | School | Escort | PersBus | Shop | Meal | SocRec | All Purp
Drive Alone | 80.7 | 17.9 | 29.9 | 48.8 | 56.0 | 35.1 | 24.3 | 52.8
SR2 - Driver | 8.0 | 7.3 | 33.4 | 17.8 | 14.2 | 24.6 | 8.7 | 12.8
SR2 - Passenger | 4.6 | 20.8 | 4.9 | 19.4 | 13.3 | 8.2 | 7.5 | 9.1
SR3+ - Driver | 3.3 | 3.0 | 18.8 | 3.3 | 5.3 | 9.6 | 9.0 | 6.2
SR3+ - Passenger | 1.3 | 39.0 | 8.9 | 8.7 | 5.5 | 13.0 | 15.3 | 9.9
Drive-Transit-Walk | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
Walk-Transit-Drive | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
Walk-Transit-Walk | 0.0 | 0.0 | 0.0 | 0.6 | 0.0 | 0.0 | 2.4 | 0.4
School Bus | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
Bike | 0.0 | 3.0 | 1.4 | 0.1 | 0.3 | 2.5 | 6.0 | 1.3
Walk | 2.1 | 8.9 | 2.6 | 1.3 | 5.5 | 7.0 | 26.7 | 7.5
Total | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0
Source: NHTS.

Table 2.11. Estimated Trip-Mode Shares, by Destination Purpose (%)
Mode | Work | School | Escort | PersBus | Shop | Meal | SocRec | All Purp
Drive Alone | 83.5 | 14.4 | 22.4 | 57.4 | 52.6 | 28.4 | 43.7 | 56.4
SR2 - Driver | 8.2 | 6.9 | 27.5 | 16.1 | 18.4 | 32.6 | 18.1 | 14.3
SR2 - Passenger | 1.3 | 22.3 | 10.3 | 10.7 | 11.7 | 14.8 | 15.4 | 9.0
SR3+ - Driver | 3.5 | 6.8 | 17.1 | 5.3 | 6.5 | 9.9 | 6.2 | 6.1
SR3+ - Passenger | 0.8 | 30.1 | 11.0 | 7.0 | 7.7 | 10.6 | 10.7 | 7.9
Drive-Transit-Walk | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
Walk-Transit-Drive | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
Walk-Transit-Walk | 0.2 | 0.5 | 0.0 | 0.2 | 0.0 | 0.2 | 0.1 | 0.2
School Bus | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
Bike | 0.8 | 4.0 | 0.2 | 0.6 | 0.5 | 0.2 | 0.7 | 0.9
Walk | 1.7 | 15.0 | 11.5 | 2.8 | 2.5 | 3.3 | 5.2 | 5.1
Total | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0
Source: DaySim.

Table 2.12. Difference in Trip-Mode Shares, by Destination Purpose (%)
Mode | Work | School | Escort | PersBus | Shop | Meal | SocRec | All Purp
Drive Alone | 2.8 | -3.5 | -7.5 | 8.6 | -3.4 | -6.7 | 19.3 | 3.6
SR2 - Driver | 0.2 | -0.4 | -5.9 | -1.7 | 4.2 | 8.0 | 9.4 | 1.6
SR2 - Passenger | -3.3 | 1.5 | 5.4 | -8.7 | -1.5 | 6.6 | 7.9 | -0.1
SR3+ - Driver | 0.2 | 3.8 | -1.7 | 2.0 | 1.3 | 0.3 | -2.8 | 0.0
SR3+ - Passenger | -0.5 | -8.9 | 2.1 | -1.8 | 2.2 | -2.4 | -4.6 | -2.0
Drive-Transit-Walk | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
Walk-Transit-Drive | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
Walk-Transit-Walk | 0.2 | 0.5 | 0.0 | -0.4 | 0.0 | 0.2 | -2.4 | -0.3
School Bus | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
Bike | 0.8 | 1.0 | -1.2 | 0.4 | 0.2 | -2.3 | -5.2 | -0.4
Walk | -0.4 | 6.1 | 8.9 | 1.6 | -3.0 | -3.7 | -21.6 | -2.4
Total | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
Source: DaySim and NHTS.

In the context of existing ABM implementations that have used static network assignment procedures, shared ride trips are simply aggregated to the zonal level and divided by an assumed occupancy rate to calculate vehicle trips. This approach does not work in a disaggregate assignment simulation such as TRANSIMS, because the goal is to preserve the details about each individual trip. Because it is not possible to divide discrete shared ride trips by an occupancy rate to estimate vehicle trips, driver and passenger status have to be assigned to travelers whose mode is identified as shared ride. DaySim does not predict whether a person is an auto driver or a passenger for shared ride tours and trips, so a detailed analysis was used to derive a method for assigning an auto driver or passenger designation to each auto tour and trip, based on the modes used on a given tour.
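The detailed driver/passenger assignment rules are not reproduced in this chapter. The sketch below shows one plausible rule of the general kind described, keyed to the traveler's age and to whether any trip on the tour already uses a drive mode; the mode labels, function name, and threshold are hypothetical, not the project's actual logic.

```python
DRIVE_MODES = {"drive_alone", "sr2_driver", "sr3_driver"}


def label_shared_ride_trips(tour_trip_modes, person_age, min_driving_age=16):
    """Label each trip on a tour as 'driver', 'passenger', or 'n/a'.

    Illustrative rule: travelers below driving age are always passengers on
    shared-ride trips; otherwise a shared-ride trip is labeled 'driver' when
    some other trip on the same tour already uses a drive mode (the car is
    evidently with this traveler) and 'passenger' otherwise.
    """
    has_drive_leg = any(m in DRIVE_MODES for m in tour_trip_modes)
    labels = []
    for mode in tour_trip_modes:
        if mode.startswith("shared_ride"):
            if person_age < min_driving_age or not has_drive_leg:
                labels.append("passenger")
            else:
                labels.append("driver")
        elif mode in DRIVE_MODES:
            labels.append("driver")
        else:
            labels.append("n/a")  # walk, bike, transit, school bus
    return labels


# An adult tour mixing drive-alone and shared-ride legs.
print(label_shared_ride_trips(["drive_alone", "shared_ride_2", "drive_alone"],
                              person_age=35))
```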

Tour and Trip Time of Day

One of the most compelling features of ABM approaches is that they have the capability to treat time explicitly and consistently across all travel choice dimensions. Rather than using fixed factors or broad time periods, activity-based models can consider detailed time periods, as well as desired arrival times, departure times, and activity durations. This capability is essential to fulfilling the goal of the C10A project, which is to make operational a dynamic, integrated model that is sensitive to the dynamic interplay between travel behavior and network conditions.

To provide this sensitivity, DaySim includes two types of time-of-day models. Models of tour arrival and departure time at the primary destination predict the time that the person arrives at the tour primary destination and the time that the person leaves that destination. Models of intermediate stop arrival or departure time predict the time that the person arrives at the stop location (on the first half tour) or the time that the person departs from the stop location (on the second half tour). The time-of-day models operate at a 30-min time resolution, using the 48 half-hour periods of the day. In addition, the models employ time windows when scheduling, so that when a tour or stop is scheduled, the portions of the window not filled are left as two separate and smaller time windows.

Figure 2.8 through Figure 2.11 compare the estimated and observed arrival times at tour primary destinations by purpose. Figure 2.8 shows the strong a.m. peak for arrival times at work, while Figure 2.9 shows an even stronger a.m. peak for arrival times at school. For both these tour purposes, the estimated and observed distributions of tour arrivals by half hour are very similar. Figure 2.10 illustrates that the tour arrival times for other purposes (shop, meal, escort, social/recreational, and personal business) are more evenly distributed across the day and that the estimated and observed distributions are similar. The estimated and observed work-based subtours, shown in Figure 2.11, do not match as closely. This subset of tours uses the work locations rather than the home locations as the anchors. The observed NHTS data show a strong peak at midday, corresponding with lunchtime, while DaySim predicts more of these tours at other times of day.

In addition to considering tour arrival and departure times, DaySim incorporates parameters related to the durations of activities. Figure 2.12 through Figure 2.15 show the estimated and observed tour durations. Overall, the estimated and observed results are similar, although DaySim predicts a stronger peak at a 9-hour work-tour activity, while the NHTS data show a stronger peak in school-tour durations at 6 hours.

TRANSIMS Validation Process

This section documents the calibration and validation results of assigning the Northeast Florida Regional Planning Model (NERPM) trip tables to the PLANNING-level regional network. Validation tests were performed using all three network resolutions described earlier in this report, with a particular focus on the PLANNING and FINEGRAINED resolution networks. However, because of the significantly longer runtimes associated with the FINEGRAINED and ALLSTREETS networks, the PLANNING network has been the primary network resolution used in the integrated model system.
Figure 2.8. Work-tour arrival times. Source: DaySim and NHTS.

103 0.0% 5.0% 10.0% 15.0% 20.0% 25.0% 30.0% Be fo re 3: 30 4: 30 5: 30 6: 30 7: 30 8: 30 9: 30 10 :3 0 11 :3 0 12 :3 0 13 :3 0 14 :3 0 15 :3 0 16 :3 0 17 :3 0 18 :3 0 19 :3 0 20 :3 0 21 :3 0 22 :3 0 23 :3 0 0: 30 1: 30 2: 30 Af te r… NHTS DaySim Source: DaySim and NHTS. Figure 2.9. School-tour arrival times. 0.0% 1.0% 2.0% 3.0% 4.0% 5.0% 6.0% 7.0% 8.0% Be fo re 3: 30 4: 30 5: 30 6: 30 7: 30 8: 30 9: 30 10 :3 0 11 :3 0 12 :3 0 13 :3 0 14 :3 0 15 :3 0 16 :3 0 17 :3 0 18 :3 0 19 :3 0 20 :3 0 21 :3 0 22 :3 0 23 :3 0 0: 30 1: 30 2: 30 Af te r… NHTS DaySim Source: DaySim and NHTS. Figure 2.10. Other arrival times.

104 0.0% 2.0% 4.0% 6.0% 8.0% 10.0% 12.0% 14.0% 16.0% 18.0% 20.0% Be fo re 3: 30 4: 30 5: 30 6: 30 7: 30 8: 30 9: 30 10 :3 0 11 :3 0 12 :3 0 13 :3 0 14 :3 0 15 :3 0 16 :3 0 17 :3 0 18 :3 0 19 :3 0 20 :3 0 21 :3 0 22 :3 0 23 :3 0 0: 30 1: 30 2: 30 Af te r… NHTS DaySim Source: DaySim and NHTS. Figure 2.11. Work-based arrival times. 0.0% 2.0% 4.0% 6.0% 8.0% 10.0% 12.0% 14.0% 16.0% 18.0% 0: 00 1: 00 2: 00 3: 00 4: 00 5: 00 6: 00 7: 00 8: 00 9: 00 10 :0 0 11 :0 0 12 :0 0 13 :0 0 14 :0 0 15 :0 0 16 :0 0 17 :0 0 18 :0 0 19 :0 0 20 :0 0 21 :0 0 22 :0 0 23 :0 0 > 24 … NHTS DaySim Source: DaySim and NHTS. Figure 2.12. Work-tour durations, in hours.

105 0.0% 5.0% 10.0% 15.0% 20.0% 25.0% 30.0% 35.0% 0: 00 1: 00 2: 00 3: 00 4: 00 5: 00 6: 00 7: 00 8: 00 9: 00 10 :0 0 11 :0 0 12 :0 0 13 :0 0 14 :0 0 15 :0 0 16 :0 0 17 :0 0 18 :0 0 19 :0 0 20 :0 0 21 :0 0 22 :0 0 23 :0 0 > 24 … NHTS DaySim Source: DaySim and NHTS. Figure 2.13. School-tour duration, in hours. 0.0% 5.0% 10.0% 15.0% 20.0% 25.0% 30.0% 35.0% 40.0% 45.0% 50.0% 0: 00 1: 00 2: 00 3: 00 4: 00 5: 00 6: 00 7: 00 8: 00 9: 00 10 :0 0 11 :0 0 12 :0 0 13 :0 0 14 :0 0 15 :0 0 16 :0 0 17 :0 0 18 :0 0 19 :0 0 20 :0 0 21 :0 0 22 :0 0 23 :0 0 > 24 … NHTS DaySim Source: DaySim and NHTS. Figure 2.14. Other tour durations, in hours.

Figure 2.15. Work-based-tour durations, in hours. Source: DaySim and NHTS.

Observed Data Sources

The following three sources of 15-min count and speed data were compiled from the Florida Department of Transportation (FDOT):

1. Intelligent transportation systems (ITS) detectors on I-295 and I-95;
2. Portable traffic monitoring stations (PTMS) on arterials and freeways; and
3. Telemetered traffic monitoring sites (TTMS) on arterials and freeways.

The PTMS and TTMS data were obtained from FDOT’s Transportation Statistics Office (TRANSTAT) and included only 15-min vehicle counts. The ITS data included 15-min count and speed data. All of these count data were collected in 2008–2009 and were processed as described later for comparison with the 2005 model year assignments.

With the help of the project team members at Florida International University, the observed traffic counts were scrubbed for spatial and temporal consistency. The data were then tagged to the NERPM master (merged) network to identify corresponding links in the 2005 NERPM network. The tagging process involved identifying a pair of nodes—Anode and Bnode—from the merged network for each of the count or speed locations. Tagging to the merged network ensured that the data can be transferred to the different network resolutions and modeling years without duplication of work. This process resulted in what is called a directional data set, in which each record corresponds to a link direction, whether or not it is represented as a two-way link in TRANSIMS.

During the network conversion process, a link-node equivalence file is created by TransimsNet; it lists the sequence of nodes that were merged to create each TRANSIMS link. The LinkData program uses this file to transfer the directional count and speed data sets from the NERPM network links to the link numbers created for the PLANNING network. During this data processing, issues with the original tagging process were identified and addressed, and further data checks were performed. Some data points needed to be merged because they were located on the same merged link. Similarly, a few data points needed to be dropped because of incomplete or erroneous data for all times of day or because of problems with proper identification of links. The number of data points from each data source included in each step of the process is shown in Table 2.13.

Table 2.13. Number of Locations for PTMS, TTMS, and ITS Data
Data Source | Native Format | Tagged to the Merged Network | Valid Tags | Transferred to the TRANSIMS Network
PTMS | 923 | 923 | 897 | 562
TTMS | 20 | 7 | 7 | 7
ITS | 190 | 123 | 122 | 84
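The LinkData transfer described above depends on the node-sequence equivalence written by TransimsNet during network conversion. A minimal sketch of that kind of lookup is shown below: each directional count record, tagged with its NERPM Anode and Bnode, is matched to the TRANSIMS link whose merged node sequence contains that node pair in order. The data structures and field names are illustrative assumptions, not the actual TRANSIMS file formats.

```python
def build_node_pair_index(equivalence):
    """Index TRANSIMS links by the ordered NERPM node pairs they absorbed.

    `equivalence` maps a TRANSIMS link id to the ordered list of original
    NERPM nodes that were merged into that link.
    """
    index = {}
    for link_id, nodes in equivalence.items():
        for a, b in zip(nodes[:-1], nodes[1:]):
            index[(a, b)] = link_id
    return index


def transfer_counts(count_records, equivalence):
    """Attach each directional count (anode, bnode, volume) to a TRANSIMS link."""
    index = build_node_pair_index(equivalence)
    transferred, dropped = {}, []
    for rec in count_records:
        link_id = index.get((rec["anode"], rec["bnode"]))
        if link_id is None:
            dropped.append(rec)  # no valid tag on the merged network
        else:
            # Stations that land on the same merged link are collected so they
            # can be combined, as described in the text above.
            transferred.setdefault(link_id, []).append(rec["volume"])
    return transferred, dropped


# Example: two count stations fall on the same merged TRANSIMS link.
equivalence = {101: [5, 6, 7, 8]}   # link 101 was built from NERPM nodes 5-6-7-8
counts = [{"anode": 5, "bnode": 6, "volume": 1200},
          {"anode": 7, "bnode": 8, "volume": 1150},
          {"anode": 9, "bnode": 10, "volume": 400}]
print(transfer_counts(counts, equivalence))
```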

Figure 2.16. Locations of PTMS and ITS data shown on the PLANNING network.

The TTMS data points were very limited in comparison with the other two sources. However, because they are permanent traffic monitoring stations, traffic counts throughout the year at the 15-min resolution were available. This information can be used for studying the seasonal and special event traffic patterns at these select locations. Because the PTMS and ITS data provided the bulk of the validated data set, these two sources were relied on heavily for comparisons and are presented in this chapter. The locations of the PTMS and ITS data points for the Jacksonville region are shown in Figure 2.16.

Validation Results

Table 2.14 and Table 2.15 summarize the initial daily validation by facility type and area type. These tables demonstrate that overall daily volumes match relatively well, with estimated volumes approximately 3.4% higher than observed volumes and a regional %RMSE of 38.6. Higher-level facilities are generally overpredicted, and lower-level facilities underpredicted. This is primarily attributable to adjustments made to parameters affecting the circuity of routes during the TRANSIMS microsimulation calibration to better match highway volumes and speeds. Table 2.15 indicates that the volumes in the denser regional core (area types 1 and 2) are generally overpredicted, with more suburban and rural areas slightly underpredicted.

Estimated roadway volumes were also compared with observed volumes for four broad time periods. Note that unlike a traditional static assignment model, the TRANSIMS assignment process does not assign demand to the network by broad time period. Rather, the entire day’s demand of individual trips is loaded onto the network using minute-level departure time information provided by DaySim. However, time period summaries are still helpful in assessing the performance of the assignment model and informing adjustments to be made to both the DaySim demand and TRANSIMS supply models.
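The %RMSE statistic in Tables 2.14 through 2.19 can be computed directly from link-level estimated and observed volumes. The sketch below uses one common convention (root-mean-square error divided by the mean observed volume, times 100); conventions differ slightly across agencies (for example, N versus N-1 in the denominator), and the grouping field names are illustrative.

```python
import math
from collections import defaultdict


def percent_rmse(pairs):
    """%RMSE = 100 * sqrt(mean squared error) / mean observed volume."""
    n = len(pairs)
    mse = sum((est - obs) ** 2 for est, obs in pairs) / n
    mean_obs = sum(obs for _, obs in pairs) / n
    return 100.0 * math.sqrt(mse) / mean_obs


def validation_summary(links, group_key="facility_type"):
    """Totals, percent difference, and %RMSE by group, mirroring Table 2.14."""
    groups = defaultdict(list)
    for link in links:
        groups[link[group_key]].append((link["est"], link["obs"]))
    out = {}
    for name, pairs in groups.items():
        est_total = sum(e for e, _ in pairs)
        obs_total = sum(o for _, o in pairs)
        out[name] = {
            "n_obs": len(pairs),
            "pct_diff": 100.0 * (est_total - obs_total) / obs_total,
            "pct_rmse": percent_rmse(pairs),
        }
    return out


# Two hypothetical freeway count locations.
links = [{"facility_type": "Freeway", "est": 52000, "obs": 50000},
         {"facility_type": "Freeway", "est": 41000, "obs": 45000}]
print(validation_summary(links))
```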

Table 2.14. Daily Validation, by Facility Type
Facility Type | # Obs. | Est. Vehicles | Obs. Vehicles | Diff. | % Diff. | %RMSE
Freeway | 128 | 5,475,280 | 5,136,426 | 338,854 | 6.6 | 32.2
Expressway | 30 | 929,697 | 767,990 | 161,707 | 21.1 | 35.1
Principal Arterial | 52 | 580,435 | 576,392 | 4,043 | 0.7 | 26
Major Arterial | 293 | 5,150,743 | 5,040,281 | 110,462 | 2.2 | 28.3
Minor Arterial | 99 | 900,580 | 900,885 | -305 | 0 | 48.8
Collector | 18 | 140,057 | 123,639 | 16,418 | 13.3 | 46.6
Local Street | 8 | 52,533 | 104,271 | -51,738 | -49.6 | 88.3
Ramp | 80 | 492,855 | 615,063 | -122,208 | -19.9 | 72.7
External | 2 | 3,819 | 5,287 | -1,468 | -27.8 | 28.3
Total | 710 | 13,725,999 | 13,270,234 | 455,765 | 3.4 | 38.6
Note: # Obs. = number of observations; Est. = estimated; Obs. = observed; Diff. = difference.

Table 2.15. Daily Validation, by Area Type
Area Type | # Obs. | Est. Vehicles | Obs. Vehicles | Diff. | % Diff. | %RMSE
Area Type 1 | 45 | 665,453 | 594,015 | 71,438 | 12 | 32.2
Area Type 2 | 118 | 2,651,881 | 2,419,343 | 232,538 | 9.6 | 35.1
Area Type 3 | 451 | 9,004,366 | 8,775,450 | 228,916 | 2.6 | 26
Area Type 4 | 47 | 1,074,084 | 1,141,492 | -67,408 | -5.9 | 28.3
Area Type 5 | 49 | 330,215 | 339,934 | -9,719 | -2.9 | 48.8
Total | 710 | 13,725,999 | 13,270,234 | 455,765 | 3.4 | 38.6
Note: # Obs. = number of observations; Est. = estimated; Obs. = observed; Diff. = difference.

The time period summaries illustrate that the integrated model results for the a.m., midday, and p.m. peak periods look reasonably good, both in terms of matching aggregate volumes by facility type and in terms of %RMSE (Tables 2.16 through 2.18). The evening time period (Table 2.19) looks more problematic and will require additional investigation. To some extent this may be reflective of the cascade effect. That is, because the DaySim-TRANSIMS model preserves the integrity and linked nature of the individual trips on a tour across both the demand and assignment simulations, if more time is needed to reach a given activity location than was expected when the demand was scheduled, then the start time and end time for that activity will be delayed; that causes a cascade effect through the traveler’s entire daily activity pattern, with trips being pushed later and later in the day. This effect typically manifests in both the p.m. and evening periods; in the current Jacksonville model, the p.m. period is actually underpredicted.

Table 2.16. A.M. Validation, by Facility Type
Facility Type | # Obs. | Est. Vehicles | Obs. Vehicles | Diff. | % Diff. | %RMSE
Freeway | 128 | 930,670 | 986,919 | -56,249 | -5.7 | 34.7
Expressway | 30 | 153,556 | 143,126 | 10,430 | 7.3 | 30
Principal Arterial | 52 | 125,920 | 107,029 | 18,891 | 17.7 | 52.1
Major Arterial | 293 | 999,174 | 898,274 | 100,900 | 11.2 | 35.1
Minor Arterial | 99 | 187,351 | 177,146 | 10,205 | 5.8 | 59.4
Collector | 18 | 23,813 | 22,902 | 911 | 4 | 36.5
Local Street | 8 | 11,170 | 21,925 | -10,755 | -49.1 | 83.5
Ramp | 80 | 105,651 | 122,742 | -17,091 | -13.9 | 84.8
External | 2 | 831 | 750 | 81 | 10.8 | 33.1
Total | 710 | 2,538,136 | 2,480,813 | 57,323 | 2.3 | 43.9
Note: # Obs. = number of observations; Est. = estimated; Obs. = observed; Diff. = difference.

Table 2.17. Midday Validation, by Facility Type
Facility Type | # Obs. | Est. Vehicles | Obs. Vehicles | Diff. | % Diff. | %RMSE
Freeway | 127 | 1,483,157 | 1,405,075 | 78,082 | 5.6 | 2978
Expressway | 30 | 288,789 | 204,735 | 84,054 | 41.1 | 3090
Principal Arterial | 52 | 147,854 | 167,971 | -20,117 | -12 | 830
Major Arterial | 293 | 1,455,803 | 1,548,916 | -93,113 | -6 | 1318
Minor Arterial | 99 | 270,473 | 270,067 | 406 | 0.2 | 1016
Collector | 18 | 39,580 | 36,260 | 3,320 | 9.2 | 641
Local Street | 8 | 14,749 | 28,670 | -13,921 | -48.6 | 1862
Ramp | 80 | 134,675 | 173,331 | -38,656 | -22.3 | 1132
External | 2 | 1,073 | 1,890 | -817 | -43.2 | 409
Total | 709 | 3,836,153 | 3,836,915 | -762 | 0 | 42.4
Note: # Obs. = number of observations; Est. = estimated; Obs. = observed; Diff. = difference.

Table 2.18. P.M. Peak Validation, by Facility Type
Facility Type | # Obs. | Est. Vehicles | Obs. Vehicles | Diff. | % Diff. | %RMSE
Freeway | 127 | 1,107,510 | 1,128,918 | -21,408 | -1.9 | 2029
Expressway | 30 | 186,698 | 172,605 | 14,093 | 8.2 | 1325
Principal Arterial | 52 | 114,778 | 127,018 | -12,240 | -9.6 | 525
Major Arterial | 293 | 1,046,420 | 1,126,449 | -80,029 | -7.1 | 826
Minor Arterial | 99 | 199,800 | 205,659 | -5,859 | -2.8 | 681
Collector | 18 | 26,379 | 28,242 | -1,863 | -6.6 | 454
Local Street | 8 | 10,398 | 23,008 | -12,610 | -54.8 | 1702
Ramp | 80 | 97,834 | 134,869 | -37,035 | -27.5 | 741
External | 2 | 847 | 1,207 | -360 | -29.8 | 180
Total | 709 | 2,790,664 | 2,947,975 | -157,311 | -5.3 | 35.3
Note: # Obs. = number of observations; Est. = estimated; Obs. = observed; Diff. = difference.

Table 2.19. Evening Validation, by Facility Type
Facility Type | # Obs. | Est. Vehicles | Obs. Vehicles | Diff. | % Diff. | %RMSE
Freeway | 256 | 1,937,651 | 1,615,514 | 322,137 | 19.9 | 2542
Expressway | 60 | 300,654 | 247,524 | 53,130 | 21.5 | 1556
Principal Arterial | 104 | 191,883 | 174,374 | 17,509 | 10 | 613
Major Arterial | 586 | 1,649,346 | 1,466,642 | 182,704 | 12.5 | 852
Minor Arterial | 198 | 242,956 | 248,013 | -5,057 | -2 | 607
Collector | 36 | 50,285 | 36,235 | 14,050 | 38.8 | 616
Local Street | 16 | 16,216 | 30,668 | -14,452 | -47.1 | 1180
Ramp | 160 | 154,695 | 184,121 | -29,426 | -16 | 617
External | 4 | 1,068 | 1,440 | -372 | -25.8 | 166
Total | 1420 | 4,544,754 | 4,004,531 | 540,223 | 13.5 | 65.5
Note: # Obs. = number of observations; Est. = estimated; Obs. = observed; Diff. = difference.

Because demand is continuously assigned to the network across the entire day, estimated volumes can be compared with observed volumes using any temporal resolution. Figure 2.17 illustrates the estimated and observed volumes by hour. This chart clearly demonstrates that the integrated model is assigning more demand from 8:00 p.m. (hour 20) through 10:00 p.m. (hour 22) than observed.

Figure 2.17. Estimated and observed total volumes, by hour.
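Because the assignment is continuous over the day, hourly comparisons like Figure 2.17 are simple re-aggregations of the underlying data. The sketch below bins 15-minute observed counts and simulated link-crossing times into hours; the input layouts are illustrative assumptions rather than the project's actual post-processing.

```python
from collections import defaultdict


def hourly_from_15min(counts_by_start_minute):
    """Sum observed 15-min counts (keyed by start minute of day) into hours."""
    hourly = defaultdict(int)
    for start_minute, volume in counts_by_start_minute.items():
        hourly[start_minute // 60] += volume
    return dict(hourly)


def hourly_from_crossings(crossing_minutes):
    """Bin simulated vehicle crossing times (minutes after midnight) into hours."""
    hourly = defaultdict(int)
    for t in crossing_minutes:
        hourly[int(t) // 60 % 24] += 1
    return dict(hourly)


# One observed hour (20:00-21:00) versus three simulated crossings in that hour.
obs = {1200: 300, 1215: 310, 1230: 295, 1245: 305}
sim = [1203.5, 1210.2, 1247.9]
print(hourly_from_15min(obs), hourly_from_crossings(sim))
```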

