Advanced Practices in Travel Forecasting (2010)

Chapter Four - Implementation and Institutional Issues

Suggested Citation:"Chapter Four - Implementation and Institutional Issues." National Academies of Sciences, Engineering, and Medicine. 2010. Advanced Practices in Travel Forecasting. Washington, DC: The National Academies Press. doi: 10.17226/22950.

Adopters of advanced models have had to overcome a number of obstacles and develop new techniques to reach their goals. Facing such challenges is hardly unique to advanced models, because most dramatic changes from the status quo require taking risks and thinking about old problems in new ways. It is interesting to note that many of the pioneers of advanced models saw them as gradual improvements following many years of research and development. Those not so involved tended to view the changes as far more radical and sudden. The magnitude and variety of issues that have been identified and addressed is encouraging, for it attests that much has already been accomplished. Some of the key implementation and institutional issues facing developers and users of advanced models are described in this chapter. Although most of the issues discussed were recurrent themes, several less common issues were unique and insightful enough to warrant mention.

METHODOLOGICAL ISSUES

A number of barriers were overcome to implement the roughly 40 advanced models listed in Table 1. Many of them related to the challenges of implementing a fundamentally new modeling paradigm. Some were anticipated based on the weaknesses of the previous generation of models, whereas others were unique. Most of the advanced models in use today are based on the microsimulation of the interaction of individuals, households, and firms, whereas the four-step paradigm was based on aggregate representations of them. As households and firms were progressively divided into finer categories, this approach became known as disaggregate modeling. However, it did not reach the level of spatial and temporal resolution and fidelity that characterize advanced models. The issues profiled here were faced by those who made the transition to advanced modeling.

Complexity and Perceived Complexity

The added sensitivity associated with advanced models does come at the cost of added complexity.
With more moving parts, more knobs are available to turn in calibration and sensitivity testing. This means that there are more individual steps to calibrate, but it also provides the opportunity for the models to capture the right behavior for the right reasons, resulting in a sounder model. An example of the added complexity is a peak-spreading model versus fixed time-of-day factors. The fixed factors are easy to calibrate, because they can be derived directly from a survey, but they offer no behavioral response to increased pricing or congestion in the peak periods, and so are limiting. There is no doubt that a peak-spreading model is more complex than a fixed factor; however, if that is an important policy consideration, then it is necessary.

There does, however, appear to be some disconnect between the actual complexity of advanced models and the perceived complexity, with those on the outside tending to perceive the complexity as greater than it is. The perceived complexity of advanced models is an issue that has not been examined deeply, or at least not in the literature. The topic comes up at conferences on advanced modeling and appears prominently in the ensuing discussions. Many of those "on the fence" about moving to advanced models note it as a drawback of such models. However, no objective measure of the additional complexity of such models is apparent, despite widespread acknowledgement of it even in the early literature on the topic (Kitamura 1988; Bowman and Ben-Akiva 2001). It has also been pointed out that the increased complexity is not of the model itself, but rather of the behavior being represented. Many proponents of advanced models cite the ability to capture such behavioral complexity as a key advantage (Transportation Research Board 2007; Ye et al. 2007; Outwater and Charlton 2008). The structural and algorithmic complexity of such models reflects those same qualities in the population under study.
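The contrast between fixed factors and a peak-spreading model can be made concrete with a toy sketch. Every coefficient and input below is illustrative, not drawn from any calibrated model; the point is only that the logit form responds to a peak toll while the fixed factors cannot.

```python
import math

def fixed_tod_split(daily_trips, peak_factor=0.35):
    """Fixed time-of-day factors: the peak share never responds to policy."""
    return daily_trips * peak_factor, daily_trips * (1.0 - peak_factor)

def peak_spreading_split(daily_trips, peak_toll=0.0, peak_delay_min=0.0,
                         asc_peak=0.6, b_cost=-0.4, b_time=-0.05):
    """Binary logit choice between peak and off-peak travel.
    All coefficients here are hypothetical placeholders."""
    u_peak = asc_peak + b_cost * peak_toll + b_time * peak_delay_min
    p_peak = math.exp(u_peak) / (math.exp(u_peak) + math.exp(0.0))
    return daily_trips * p_peak, daily_trips * (1.0 - p_peak)

# A $3.00 peak toll shifts trips out of the peak period; the fixed
# factors would report the same 35% peak share under any policy.
peak_base, _ = peak_spreading_split(1000)
peak_tolled, _ = peak_spreading_split(1000, peak_toll=3.0)
```

The extra complexity is visible even here: the fixed split needs one survey-derived factor, while the behavioral split needs a constant and two coefficients that must be estimated and calibrated.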
A paradox was evident from talking to practitioners, who believed that activity-based models were easier to explain conceptually to decision makers and the public, and more readily acceptable because of their closer correspondence with how people make travel choices. However, the conceptual clarity comes at the price of increased data requirements, model complexity, and computational burden. It appears clear from the responses that until the perceived benefits outweigh the costs, opportunities to move forward in advanced modeling will fall short of their potential, irrespective of how impressive the research achievements and experiences from those on the cutting edge are. Quite simply, the proponents and early adopters of advanced models and the mainstream of the profession are too far apart in their capabilities and resources to achieve the cohesion required to move advanced modeling into the mainstream.

Some of this divide will be bridged when the experience base in advanced models deepens. As these models are proven in practice, their benefits, or lack thereof, will become more widely understood and published. For the time being their added complexity can only be acknowledged and traded off against the increase in utility gained from their adoption.

Modeling Versus Forecasting

An interesting philosophical issue was raised in several interviews about the relationship between modeling and forecasting. Although the two cannot be fully separated, there is a subtle but important difference. Modeling is about building and applying tools that are sensitive to the policies of interest and respond logically to change. The success of modeling is a function of its ability to provide useful and timely information during the decision-making process, even if there are certain caveats or limitations on that information. For example, when ranking candidate projects for inclusion in a regional transportation plan, it is important that the model produce consistent results for all projects so that they can be ranked fairly, even if the correspondence to what is on the ground today is not perfect.

Forecasting is an attempt to envision or visualize future conditions. In the current context it usually involves predicting future travel demand and the resulting multimodal flows or changes in land use patterns over time. Forecasting usually, but not always, involves applying formal models, but can also incorporate other analyses and assumptions. Given the uncertainty about the future, several approaches might be used in forecasting. For example, a sketch planning or pivot point analysis might be compared with a regional travel or land use model. Direct and indirect comparisons can be made of the two forecasts. The differences in outcomes must be interpreted in light of the experience of the forecaster, the reasonability of the results, confidence in the model and underlying data, and the assumptions about the stability of the behavior and trends implicit in the model.
The success of a forecast can only be objectively measured through before-and-after studies.

It is important to understand the distinction between these two activities, as they heavily influence the mindset of modelers in general and how they perceive the benefits of advanced modeling in particular. If the goal is forecasting, it is best to identify the factors most likely to affect the forecast and focus on getting those right. If the policy under consideration is to extend the existing transit system, for example, this often involves starting with an extensive onboard transit survey to understand the full use of the system as it stands today. It may further focus on ensuring that reasonable parking cost and land use inputs are going into the model, and involve evaluating the forecasts extensively to identify anomalies.

The more the policies under consideration diverge from what is on the ground today, the more a model-centric approach may be needed. For example, when introducing a mode not currently in existence, the best that can be done is often to develop a model based on stated preference data, information from other regions, or rational theory. The same is true when evaluating any sort of behavioral incentive not currently in existence, or when considering gas price, land use, or congestion conditions radically different from those of today. This is where the modeling process can truly shine, helping planners and decision makers grapple with issues not fully understood. This is especially true of scenarios that have not been encountered before, such as traveler responses to much higher fuel prices than have been experienced over the past several decades.

It is important to note that neither viewpoint is "correct" in any sense; each simply reflects the priorities and needs of the modeler and their clients.
Given the wide diversity in how models and their outputs are used across the country, it is hardly surprising that similarly wide differences of opinion exist about how (or whether) to best further the practice of travel modeling. As a consequence, some agencies will find the case for advanced modeling far more compelling than a neighboring agency with different priorities and mission.

Model Estimation, Calibration, and Validation

Three related steps are important to the model development process:

1. Estimation: Using statistical methods to determine the model coefficients that best fit observed data.
2. Calibration: Adjusting model coefficients to better match aggregate targets.
3. Validation: Comparing model results with observed data independent of what was used for estimation or calibration.

Estimation can play a more important role in advanced models than in traditional models simply because there is less industry experience with the new models. Experience exists with only a handful of daily activity pattern models, whereas practitioners have been developing mode choice models for three decades and have a good sense, for example, that the in-vehicle time coefficient for work trips should be between −0.02 and −0.03. At the same time, however, as models are asked to respond to questions, such as the response to very high fuel prices, that pose a world very different from today, the value of models estimated solely from existing conditions and designed to replicate existing conditions becomes limited. In such cases, a strong theoretical foundation may be as important as the estimated values.

Calibration involves applying the model and adjusting the coefficients to better match existing conditions. Calibration is specific to a locale and involves first calibrating each individual model component, and then evaluating the system as a whole.
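Component-level calibration is often done by nudging an alternative-specific constant until the model reproduces an aggregate target. A minimal sketch, using the common log-ratio update and a hypothetical binary logit component (the target share and utilities are invented for illustration):

```python
import math

def calibrate_asc(target_share, predict_share, asc=0.0,
                  max_iter=50, tol=1e-4):
    """Adjust an alternative-specific constant until the model component
    reproduces an aggregate target share, using the common update
    asc += ln(target / modeled)."""
    for _ in range(max_iter):
        modeled = predict_share(asc)
        if abs(modeled - target_share) < tol:
            break
        asc += math.log(target_share / modeled)
    return asc

# Toy binary logit component whose transit share depends on the constant.
def transit_share(asc, u_auto=0.0, u_transit=-1.0):
    e_transit = math.exp(u_transit + asc)
    return e_transit / (e_transit + math.exp(u_auto))

# Pull the modeled transit share to a 15% regional target.
calibrated_asc = calibrate_asc(0.15, transit_share)
```

Each such loop is one "knob"; an advanced model has many more of them, one reason its calibration takes longer even though each individual adjustment is simple.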
Advanced models, to the extent that they have more degrees of freedom, have more steps to calibrate and more knobs to turn during calibration. This does make calibration more challenging, but it also provides an opportunity to get the right result for the right reason, with the potential for less reliance on simple factoring that has no relationship to behavior.

Model validation is applying a model and comparing the results with a data source independent of what was used to estimate and calibrate the model. Most often, in a travel model, this is a comparison with traffic counts and transit boardings by route. In practice, calibration and validation are usually iterative, with model validation revealing issues that require further calibration to overcome.

Questions of how well such models validate against observed conditions, as well as against existing traditional models, remain a topic of high interest among practitioners. The appropriate criteria and tests for determining model validity have been questioned, with some proposing that the long-standing focus on comparing observed with estimated link flows is inadequate for more advanced travel demand forecasting models. It is not clear why this bias remains, given the equally long-standing published advice on the topic (Barton-Aschman Associates and Cambridge Systematics 1997). Although standard validation tests such as this are a good first measure of a model's performance, there is much in the validity of a model that cannot be fully understood without sensitivity testing or using the model in application. For example, a model with constant time-of-day factors may validate very well against traffic counts by time of day, but would not be useful for testing policies designed to move travelers out of the peak periods.

The TRB Innovations in Travel Modeling 2006 conference (Austin, Texas) ended with many attendees asking for better evidence that such models perform acceptably in practice before they would consider moving toward them.
A series of eight presentations on operational activity-based models from around the world was highlighted at the Innovations in Travel Modeling 2008 conference in Portland, with the express goal of answering that challenge. It was clear from the presentations that the models performed very well in this regard, although a single definitive evaluation of such models has yet to be compiled.

The best mechanism to measure a model's validity is through comparisons to before-and-after studies. Use the model to forecast the volume on a facility, build the facility, and evaluate how well the model did at predicting the volume on that facility. Naturally, building infrastructure is an expensive proposition and not something typically done for the purpose of evaluating a model. Therefore this measure of performance works well when forecasting the effects of policies or facilities similar to what has been done in the past; however, when the policies differ from what has been done previously (such as rail or pricing where it does not currently exist), one must rely on the predictive ability of the models. This attribute highlights a key advantage noted for advanced models: a model that is able to test the policy of interest is clearly superior to one that is not, even if the validity of that model cannot yet be proven with a before-and-after study.

Appropriate model acceptance criteria for statewide travel models have likewise been discussed often, without resolution (Horowitz and Farmer n.d.). It has been acknowledged that such models are unlikely to validate at the same level as urban models. NCHRP has recently initiated a study to examine this topic, with results expected in 2012.

There are few examples of operational freight or commercial vehicle advanced models in existence. Hunt and Stefan (2007) describe the development of a tour-based microsimulation model of commercial movements for the Calgary region.
The calibration of the model is discussed in detail, but its validation is not. Donnelly (2007) describes the development of a microsimulation model of freight flows in Oregon, which incorporates a variety of validation measures, including modal shares by commodity, trip length frequency distributions, incidence of trans-shipments, and trip chaining behavior. Both models are also discussed as case studies in Kuzmyak (2008).

The subject of validation of dynamic network models has been discussed in the literature, but generally on prototypical networks. Most authors compare link flows by time period with target data, often treating each instance (link flow from small time increments, typically 15 min) on a link as a separate observation while pooling all observations. There are two general categories of dynamic traffic assignment (DTA) models in use. Analytical solutions use node-abstract representations with classical volume-delay functions, much like static macroscopic models. Simulation-based models include explicit representation of traffic signals (phases, cycle length, offsets, etc.) to calculate delays at intersections. The former are used almost exclusively in academic research, whereas the latter are used in practice. Because most of the literature focuses on the former, the results shown are difficult to generalize.

Most of the work in validation of large-scale DTA implementations has focused on comparing zone-to-zone or point-to-point travel times with travel time surveys collected by MPOs. Only one application, an implementation of Dynameq in Calgary (Mahut et al. 2004), had traffic counts at a level of resolution adequate to permit validation. To date, two other large applications, Atlanta and San Francisco, are works in progress that have not been published. In both instances only daily traffic counts were available for the majority of the network, precluding a detailed comparison of link flows.
A primer on DTA is under preparation by TRB's Network Modeling Committee (ADB30) and will include a chapter on validation. Because it is oriented toward practitioners and end users, it is expected to contain validation procedures familiar and relevant to practitioners. Until the current crop of models is validated using widely accepted criteria such as these, definitive conclusions about the performance of these models relative to static models cannot be drawn.
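The link-flow comparisons discussed above are usually summarized with percent root-mean-square error. A minimal sketch, with hypothetical counts and volumes (conventions vary; some agencies divide by n − 1 and report the statistic by volume class):

```python
import math

def percent_rmse(observed, modeled):
    """Percent RMSE between counted and modeled link volumes, a common
    summary statistic in model validation reports."""
    n = len(observed)
    rmse = math.sqrt(sum((m - o) ** 2
                         for o, m in zip(observed, modeled)) / n)
    mean_count = sum(observed) / n
    return 100.0 * rmse / mean_count

# Hypothetical daily counts and modeled volumes on four links.
counts = [12000, 8500, 4300, 21000]
volumes = [11400, 9100, 3900, 22500]
overall = percent_rmse(counts, volumes)  # roughly 7.7 for these numbers
```

As the text notes, a low value on a check like this is only a first hurdle: a model can score well here and still be insensitive to the policies of interest.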

Transferability and Portability

Transferability is concerned with whether a model developed in one location can be used in another location with substantially the same structure and, often, with little or no change to the parameters and coefficients of the model. Much has been published about the transferability of trip-based modeling approaches, particularly mode choice. Insufficient research has been carried out, and insufficient experience gained, with the advanced models described in this report to reach conclusions about their transferability. Moreover, there are several aspects of transferability that can be considered.

Most of the agencies that have made progress with advanced models seemed relatively unconcerned about how portable or transferable their modeling work was, and placed low value on being able to import work done elsewhere. This may be because most of the early adopters have engaged developers to build custom models, which in turn is the result of the lack of existing proven platforms that could have been adopted easily to meet their requirements. Thus, transferability is a topic about which much must still be learned.

The view among those considering moving to advanced models, however, is much different. Many are highly interested in this topic, and have deferred plans to move toward advanced models until more is known about them. It is likely that many agencies are unable to afford the original development of such models, forcing them to rely on adaptation of successful work elsewhere. In the closing sessions of the TRB Innovations in Travel Modeling conferences in 2006 and 2008 (in Austin and Portland, respectively) this topic was widely discussed. In addition to the pragmatic desire to build on the work of others, attendees expressed interest in being able to choose from among several competing models and a strong desire for proof of concept. To date, only one activity-based model has been transferred from one location to another.
The Columbus Mid-Ohio Regional Planning Commission (MORPC) model was implemented in Lake Tahoe by transferring the model and its coefficients. The majority of the effort was therefore devoted to calibration and validation of the model against local calibration targets (Willison et al. 2007). The results were very encouraging, with the model matching targets better than the locally developed model it replaced, and it proved easier to represent the unique population characteristics of the region. The Lake Tahoe project is described more fully as a case study in chapter six.

A second test of transferability is currently underway. An activity-based model was specified and estimated for Atlanta (ARC). That model is now being implemented in both Atlanta and the San Francisco Bay Area (MTC). This approach allows both agencies to share software development costs and to progress through model calibration and validation in parallel. When complete, it will serve as an interesting test of the transferability of an activity-based model across large regions with very different characteristics.

The situation in land use–transportation modeling is quite different. Currently, the three most common packages in North America (UrbanSim, PECAS, and TRANUS) have been applied in several locations with varying degrees of success. These models fall into a somewhat different category than the activity-based model applications in that they are purposely designed to be widely deployed in a variety of places with different requirements and data availability. These models are unquestionably the furthest along in terms of portability of all of the advanced models.

The dynamic network tools have been developed in a similar manner. Although early tests were conducted in a number of cities, the tools were designed to be broadly usable.
However, most were envisaged as small-area analysis tools and only recently have been investigated as replacements for static traffic assignment in regional travel models. Upcoming research and development associated with the SHRP 2 C10 project will investigate the formal linkage of activity-based travel and dynamic network models. Sacramento and Jacksonville have recently been chosen as the locations for the work under this project. There are no known cases where an advanced freight model has been transferred to another location.

Based on the initial examples, the transferability and portability of models can be considered at three levels:

1. Software, if well written and sufficiently generic, can be transferred to a new region with limited changes and no downside. Commercial software packages have proven for years that this is true. More recently, open-source software packages have been developed to facilitate advanced modeling, including the Common Modeling Framework and the Open-Source Platform for Urban Simulation.

2. The second level of transferability is of estimated model parameters, which was done in the Lake Tahoe model and is currently being done in the MTC models. The case for the legitimacy of transferring estimated parameters is less clear, although both models appear to be behaving rationally so far.

3. The third level of transferability is of the model calibration, which is not transferable at all. One cannot expect to receive a model delivered in a shrink-wrapped box, unwrap it, and have it behave properly. Instead, the model must always be calibrated to local conditions, as was done with Lake Tahoe and MTC, as well as the land use models that have been transferred to new locations.

Thus, it is best to view model transferability and portability as a mixed bag. It is not a panacea that will instantly grant new users a free lunch, but it does offer the potential to significantly reduce development costs, particularly for the software.
There is potential for the modeling community to follow the lead of some highly successful open-source software projects, where new developments are shared with the community in a compatible format, allowing all to take advantage. Such sharing of developments may take off as a critical mass of advanced modelers develops, if there is a champion and a framework for such sharing.

Of course, transferability assumes some similarity between the regions being modeled. One could reasonably expect similarity among most American and Canadian cities, but transferability to or from Europe or Asia is likely more limited in its potential.

Software and Platforms

Traditional travel demand models and the current crop of dynamic network models are supported by commercial vendors. These vendors have made considerable investments in their core modeling capabilities, GUIs, and documentation. Most also have dynamic linkages to relational databases and GIS, as well as utilities to import data from their competitors' formats. The vendors in essence provide the software implementation of the models, which substantially reduces, and in some cases eliminates, the need for users to write their own software. Moreover, they provide training and support for users, and absorb the costs of updating the software and fixing bugs. Most users of traditional models appear to be satisfied with such an arrangement.

By contrast, almost all of the activity-based models developed to date, as well as the integrated land use–transport models, have been developed from scratch for each specific implementation. A few are entirely original works. DRCOG's activity-based travel model is coded in the C# programming language, whereas LUSDR and Jem-n-R (Oregon DOT) are written in the R statistical language. The CEMDAP model, implemented at NCTCOG, is the only proprietary model among the advanced travel demand models. The rest have been developed using open source software components to reduce development time and cost. Waddell et al.
(2005) have developed a model development platform called Opus, which uses a combination of the Python programming language and certain functions coded in C to maximize computational efficiency. The current generation of the UrbanSim model is based on Opus, which is flexible enough to accommodate the design of other types of spatial interaction (including travel demand) models. PB has developed Balsa, a similar library of model building blocks written in the Java programming language. Almost all of the activity-based models they have developed, starting with the MORPC model (Davidson et al. 2007), are based on Balsa. In both cases the finished model is unique to a given client; however, its "lower level" functionality is provided by Opus or Balsa. Both Python and Java are themselves open source projects, are portable across operating systems, and are supported by broad and active user communities. Other open source software commonly used with advanced models includes relational database managers (e.g., PostgreSQL and SQLite) and visualization tools.

The resulting models combine code that is specific to each client with parts from one or more open source packages. The client unquestionably holds all rights to the former, in effect allowing them to keep the overall model as open or closed to others as they desire. The trend to date, irrespective of the components used, has been to make the entire model open source, or at least available to others developing similar models. This is true both for fully functional generalized models (e.g., PECAS, TRANUS, and UrbanSim) and for models tailored to each client.

Open source software offers some advantages over proprietary software. Developers and users can inspect the code to learn details of its operation and help debug problems. They can modify it for specific applications and experiment with ways to make the code more efficient.
The ability to build on the work of others can reduce development costs, allow others to verify and check the code, permit use on different operating systems, and foster collaboration. The cost of entry is low, and a user community can provide help and ideas. However, it is no panacea. Although the code is free, the expertise required to use it creatively and competently is not. Novice users can unwittingly introduce errors that are difficult to trace and that violate the assumptions or integrity of embedded models. Most open source projects are overtaken by inertia and quietly fade away (Fogel 2005; Daffara 2006), often because the underlying software addresses a narrow niche or is complex enough to present a steep learning curve. Both might fairly characterize Opus and Balsa; although they have enjoyed many of the advantages cited for open source software (most notably wide collaboration), part of their success is undoubtedly because there is no commercial alternative available. Moreover, it might be argued that their respective developers have been the only beneficiaries to date.

Whether the next generation of advanced models continues to be "home grown" and open source remains to be seen. In part it will depend on what the commercial vendors bring to market. Equally influential will be whether a small number of models dominate, such that they have a chance of building a critical mass of users and developers. Finally, it will depend on having more than just a handful of model developers with the skills necessary to build and implement sophisticated software.

Hardware and Model Run Times

Most advanced models have much more demanding computational requirements than (dis)aggregate travel models. This is particularly true for land use–transportation and dynamic network models. Such models are often characterized by run times measured in days rather than hours, which places them at a significant disadvantage to the models they seek to replace.
The activity-based model implementations to date have employed sample enumeration or microsimulation approaches. The constituent models of the latter include rule systems, sampling from statistical distributions, deterministic mathematical models, and discrete choice models. The latter, popularized

in the traditional mode and destination choice models, are applied at the level of the individual traveler instead of for groups of travelers (as with traditional models). Coupled with the higher spatial resolution at which newer models operate, this vastly increases the number of alternatives considered and utility expressions calculated, resulting in much longer run times. Fortunately, improvements in computer hardware are closing the gap between model run times and user expectations. As more developers find ways to parallelize or distribute their code to take full advantage of multi-core processors, run times will continue to improve. The question remains, however, how much must they improve to meet the needs of users? There was near unanimous agreement that 16 h is the gold standard for model runs, as that would allow overnight runs. Some were insistent that 16 h represented an absolute maximum, and that multi-day run times seriously reduce a model's utility irrespective of how robust or informative the outcomes are. They reported that the policymakers to whom they are accountable typically require quick responses to their inquiries and that late replies seldom influence outcomes. As such, they require tools that balance fast run times with the desired behavioral, spatial, and temporal resolution.

Steady advances in computer hardware and operating systems have provided the ingredients of a solution. Continually faster microprocessors and memory provide linear reductions in run time, although most agencies reported being unable to update their hardware more frequently than once every three years. Moreover, even when they do, they often have a difficult time controlling the purchases that are made on their behalf, putting them at a further disadvantage. Developers are helping them, sometimes unintentionally, by crafting models that only run on 64-bit computers and operating systems, forcing the agency to upgrade the hardware available.
However, the transition from 32-bit to 64-bit architectures, like the transition from 16-bit to 32-bit computers in the early 1990s, is a rare event, such that its effect will be significant now but diminished in following years.

Most advanced models now in use employ a cluster of computers (typically six to eight) over which execution of their programs is distributed. Each machine typically has several microprocessor cores (e.g., Intel's Core 2 Duo or the newer Core i7) and 4 to 16 GB of memory. The precise configuration depends on the model, but these computers currently range in price from $6,000 to $12,000. Thus, most agencies will spend between $36,000 and $60,000 on a cluster to run advanced models, depending on their needs and required configuration. Machines used for large-scale DTA modeling typically use larger amounts of memory (32 to 64 GB), considerably raising their cost. However, it must be emphasized that hardware requirements and costs are decreasing as software becomes more efficient and the performance of computer workstations continues to grow.

Some developers are working hard to distribute or parallelize their code. The two terms are often used synonymously, but they are different approaches. Distributed computing spreads the problem across multiple machines, some of which might be remotely located. Message passing is used to communicate between a controller and several workers, with each worker typically holding all of the data it needs. Parallel computing, on the other hand, generally involves multiple processes running on a single computer and sharing the same memory space. Parallelization is particularly attractive, as it allows programs to take full advantage of multi-core processors. It is easily implemented for tasks that do not have dependencies on other tasks, such as population synthesis or destination choice.
However, there are many parts of both traditional and advanced models that do not lend themselves well to either parallel or distributed solutions, limiting the amount of improvement possible. Much of the parallelization of the code is accomplished by simply dividing the cases (travelers, households, etc.) among the available cores. However, vendors such as Caliper Corporation and Citilabs have taken this one step further, multi-threading their assignment code to make maximum use of available processors and memory. Further advances in this area appear to hold the greatest promise for reducing run times without sacrificing model form or structure, and have the potential to further reduce the cost of the hardware required to run advanced models.

DATA ISSUES

The issue of data was not identified as a major concern by users of tour- and activity-based person-travel models. However, questions about data requirements appear to be a major concern to agencies contemplating the adoption of advanced models. The literature is not clear about data requirements, in part because the data requirements for a model depend heavily on the scale at which it is applied, the scope of issues and behaviors it must address, and the fidelity and resolution required. Moreover, most agencies have invested heavily in GIS technology in the past few decades. That, coupled with ever-increasing sources of open data available on the Internet, has fostered the perception that the necessary data are readily available. The reality will of course depend greatly on the specific model, its intended applications, the availability of such data from other sources, and the resources available to collect and analyze the data. Typical data requirements are therefore difficult to describe, and the costs of collecting the data are even harder to estimate. An attempt has been made to distill the data requirements that appear common to most of the advanced models deployed to date.
The typical and optional data used in the various types of models discussed in this report are shown in Table 5. Most developers of advanced person-travel models reported having adequate data available for model estimation and calibration, or facing challenges no worse than those encountered in developing traditional models. Indeed, substantially the same surveys are required to collect travel diary data for traditional

trip-based or more recent advanced travel models (Stopher 1992; Sabina et al. 2008). However, whether traditional sampling rates are adequate is an open question. The number of households included in travel diary surveys has declined over time, such that most agencies only obtain data from 2,000 to 3,000 households. This appears to be the minimal number of observations required to capture statistically significant differences between the market segments typically used in disaggregate four-step models, although definitive guidance on optimal sample sizes remains controversial (Stopher and Jones 2003). The compromise usually adopted is to collapse dimensions within the data when smaller surveys fail to obtain enough samples to differentiate all desired aspects of travel behavior. Such an approach cannot be used with activity-based models, as the desired levels of resolution and fidelity translate into more detailed representations of households, travelers, and their choices. The resulting variables and coefficients cannot generally be aggregated without reducing the explanatory power of the model, which runs counter to the goal of a richer and more accurate behavioral representation. Thus, the compromises imposed by smaller sample sizes become apparent far more quickly when crafting activity-based models, although the reduced power and sensitivity are incurred irrespective of whether a traditional four-step or a cutting-edge activity-based model is estimated from the data.

To date, some of the activity-based model development has been accompanied by large household travel diary surveys. Some have included 12,000 households, at a cost much greater than for the aforementioned minimal sample sizes. The size of these surveys was dictated by the higher level of behavioral resolution and more detailed modeling of choice behavior than found in typical models.
(To be fair, it is important to note that surveys approaching this size would also be required to develop more detailed and sophisticated trip-based models with a larger number of market segments.) The desire to confidently differentiate the wide variety of tour patterns and estimate statistically significant parameters for them also dictated larger sample sizes. In some cases it was also decided to err on the side of more observations than might be required, given the lack of experience with such models at the time. The developers of such models have posited that perhaps half as many observations (5,000 to 6,000 households) might be sufficient; however, more definitive guidance must await evaluation of several models currently in development.

TABLE 5  TYPICAL DATA REQUIREMENTS FOR VARIOUS MODEL TYPES

The level of detail desired in the model will dictate the size of the survey required, which in turn should be informed by analytical requirements. A credible and useful activity-based model could be constructed from a small survey, but to obtain the increased sensitivity and detail desired by current adopters of such models, larger sample sizes might be required.

The literature also suggests that a greater reliance on stated preference experiments is likely (Petersen and Vovsha 2008), which might be especially relevant for modeling behavioral responses to high fuel prices, new technologies, virtual commerce and meetings, and pricing and tolling schemes. Such conditions have not been encountered before, precluding the use of existing data, and models built on them, for assessing their impact. Unfortunately, stated preference surveys are more difficult to construct, execute, and interpret than revealed preference surveys, and fewer modelers and travel survey firms have experience with them. The cost and expertise they require make them ideal candidates for collaborative data collection. To date, no such joint effort has been planned or undertaken; however, several respondents indicated a willingness to do so.

The situation for the other types of advanced models considered here is dire by comparison. Land use–transportation models use more data than traditional travel demand models, especially if parcels or comparably small units of geography are involved (Moeckel et al. 2002; Miller et al. 2004; Clay and Johnston 2006). Such additional data include:

• Floor-space consumption and tenure by land use type,
• Population and employment by type at the same level of spatial resolution,
• Permitted zoning or land use(s), and
• Residential and nonresidential land prices.

These data are generally available, although at some cost and with considerable analysis required to become well acquainted with them.
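Purely for illustration, the additional inputs listed above can be thought of as a per-parcel record. The field names and units in the sketch below are assumptions made for this example; they are not drawn from PECAS, UrbanSim, or any other platform discussed in this report.

```python
# Hypothetical parcel-level record covering the inputs a land
# use-transportation model typically requires: floor space and
# tenure by use type, population and employment, permitted zoning,
# and land prices. Field names and units are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Parcel:
    parcel_id: str
    zoning: str                                  # permitted land use(s), e.g. "MU-1"
    population: int = 0
    employment_by_sector: dict = field(default_factory=dict)
    floorspace_sqft_by_use: dict = field(default_factory=dict)  # by land use type
    owner_occupied: bool = True                  # crude stand-in for tenure
    residential_land_price: float = 0.0          # assumed units: $/sq ft
    nonresidential_land_price: float = 0.0       # assumed units: $/sq ft

# Example: a small mixed-use parcel.
p = Parcel(
    parcel_id="031-225-14",
    zoning="MU-1",
    population=12,
    employment_by_sector={"retail": 8},
    floorspace_sqft_by_use={"residential": 9600, "retail": 4200},
    residential_land_price=35.0,
    nonresidential_land_price=48.0,
)
print(p.floorspace_sqft_by_use["residential"])  # 9600
```

Assembling such records for an entire region, and reconciling them across assessor, zoning, and employment sources, is where much of the data cleaning effort described here is spent.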
Many are available through local government or third-party sources, but substantial data cleaning and reconciliation are often required before the model can be made operational. Data on the cost of acquiring these data have proven elusive, in part because the cost has largely been borne by in-house staff or the data acquired through other governmental agencies. Moreover, the choice of modeling platform will influence the cost and effort required. For example, deploying a land use model such as LUSDR will result in smaller data requirements than PECAS or UrbanSim.

The situation is not as fortunate for freight and dynamic network modeling. The paucity of even basic data on urban freight behavior and spatial distribution patterns is a long-standing barrier to progress in understanding and modeling urban freight systems (Transportation Research Board 2003a; Wigan and Southworth 2005). The truck and shipper surveys conducted in most urban areas are very small, with several hundred to a few thousand observations. Given the diversity of commodities, vehicle types, and origin–destination patterns, this is probably too limited a sample from which to develop robust freight or commercial vehicle models. Compounding this problem is that most urban areas do not have extensive, reliable truck counts. No differentiation is made between privately owned trucks and vans and commercial vehicles of the same configuration, a category that constitutes the largest share of commercial goods movements in urban areas (Holguin-Veras and Patil 2005). For freight models the problem is further complicated by the inability to differentiate commercial vehicles carrying freight from those traveling for other commercial purposes.

A similar situation faces users of dynamic network models. To date, in most areas where large-scale DTA has been applied there have been few counts available by hourly or 15-min intervals, and certainly too few from which to calibrate or validate the models.
Data on actual versus modeled path choice are likewise unavailable. It is clear that substantial investments in mining real-time ITS and traffic control data will be required to develop the databases needed to rigorously assess these models. In addition, simulation-based DTA models and TRANSIMS include explicit representation of signals. Data on their operation (phasing, cycle length, offsets, etc.) are required to implement the models, but techniques for generating these data on an urban scale are still lacking. Further research and development in this area will be necessary before these data can be synthetically generated without extensive user intervention. Some researchers, such as Gershenson (2005), advance the idea of self-organizing traffic control systems as an alternative to traditional signal optimization approaches. Mahmassani (personal communication, April 2009) also advocates this approach, arguing that setting all phases of fully actuated signals to their minimum length and letting the DTA model find the best solution is preferable to external optimization. Such techniques might hold great potential, but they require additional research and verification before their practical application can be assessed.

COST AND SCHEDULING ISSUES

Significant internal staffing and funding resources have been required for most of the development and implementation efforts undertaken to date. Virtually all of the funding has come from MPO or state DOT sources, with the notable exception of federal funding for TRANSIMS. However, the issue of funding was dwarfed by concerns about scheduling. Virtually all of the efforts took longer to complete than anticipated, in some cases more than doubling the initial estimate. Overcoming these issues, both for efforts currently underway and for future ones, will be essential if advanced models are to achieve widespread adoption.
Cost Issues

During the interviews, a lack of available funds for model development was cited surprisingly few times as an impediment to moving forward. However, it is acknowledged that most of those interviewed were already involved in advanced modeling, and that otherwise similar agencies might be as involved if they had comparable funds available. Many respondents believed that the current economic downturn, coupled with anticipated austerity measures implemented later to reduce the debt incurred by stimulus spending, will reduce the funds available for advanced modeling over the next decade. The degree of reduction expected varied widely, as did opinions about how much it will slow the evolution of advanced models. A few even thought that a reduction in funding would provide the benefit of forcing consolidation of efforts and standardization of models and data collection sooner than might otherwise occur.

Information about the cost of developing and deploying advanced models is eagerly sought by agencies contemplating an investment in them. “How much will it cost me?” is a question often heard when discussing such models. It was surprisingly difficult to answer this question despite a concerted examination of efforts to date. In some cases such information was difficult to obtain or interpret. Some respondents did not know the full cost involved, often because their tenure was shorter than the duration of the model development work. A few declined to provide cost information despite repeated requests. However, most objected to generalizing their experiences. Some believed their case was atypical, either because it was a first for that particular type of model or because they perceived problems with local data or issues unique to their agency. The majority noted that theirs was “a work in progress,” and that the true cost would only be known after the model was successfully implemented.

Software development has been required for most of the advanced models developed to date. This has influenced the overall cost of implementation in two ways.
One was the loss of time and effort when software bugs were uncovered, growing pains that most hope are largely past. The second is the assumption of reusability. Once a specific modeling platform has been developed, its cost to subsequent adopters is likely to drop considerably. This, in turn, invites two potentially misleading interpretations. One is on the part of managers, who mistakenly assume that if the software works elsewhere, their staff will not need to modify it. In this case the manager assumes the software cost is zero, when in reality it can be high. The other misinterpretation is on the part of the developers, who probably cannot accurately state what portion of the development cost was devoted to software and what went to other development activities.

Finally, it was difficult to obtain complete cost data from most respondents. Almost all knew how much they had paid consultants or developers. However, few could provide information about in-kind or internal expenditures, such as the cost of agency staff devoted to advanced modeling, data collection, peer review, training, or program management. Several agencies reported having two or more staff members dedicated to model development and implementation, suggesting that their total outlay for the overall effort is much larger than the size of their consultant contracts. All that said, a few observations can be made:

• The consultant contracts for the first few activity-based person-travel models (in San Francisco, New York City, and Columbus) were $1 to $2 million apiece. In two of these cities no agency staff participated in their development, whereas a single person played a large role in the third.
• The consultant cost of currently ongoing efforts (in Atlanta, the San Francisco Bay Area, San Diego, and Phoenix) ranged from $750,000 to $1.2 million.
All of these agencies have a senior staff member dedicated part-time to directing the effort, and anticipate dedicating more of their time, and possibly additional staff, as the models draw closer to completion.
• It was difficult to isolate the cost of software versus model development, as many tasks involved both activities.
• The Columbus model was adapted to and validated in Lake Tahoe at a cost of approximately $350,000, which included the cost of developing the network, zonal data, and other aspects of the model.
• The cost of urban freight models was highly variable, ranging from $20,000 to more than $1.2 million. The former only involved the implementation of a sequential model using transferable parameters, whereas the latter involved extensive truck and establishment surveys.
• The range of costs for land use models has been similarly wide. LUSDR, developed internally at the Oregon DOT, cost approximately $50,000 in staff time to develop and apply. By contrast, the more sophisticated models have incurred several million dollars in development cost alone, with the cost of application highly variable depending on the availability and quality of data.
• There are virtually no useful and comparable cost data available for the development of dynamic network models at a regional scale.

Several respondents expressed hope that the cost of future models could be significantly reduced using a combination of standard model forms, transferable parameters, and open source software. Part of the appeal of open source software is that several collaborators share the cost of its development, making possible software that none could produce working alone. Many respondents applied the same rationale to model development, hoping that shared funding would reduce the cost of advanced models.
A few examples were cited of such collaborative development:

• The ARC and the (San Francisco Bay Area) MTC are sharing the cost of implementing their activity-based travel models.
• The Oregon, Ohio, and Florida DOTs have each invested in common data and advanced travel models across the

state. Oregon's LUSDR model is not location-specific and can thus be used elsewhere in the state, although to date it has not been.
• The statewide modeling program in Oregon sponsored the early development of both UrbanSim and PECAS.

It was expected that other agencies would be eager to replicate these collaborative successes. Although some agencies expressed an interest in building on work already completed elsewhere, a surprisingly large number appeared uninterested in such partnerships. Many expressed an unwillingness to relinquish control over the final product or the choice of developers, or expressed skepticism that agreement could be reached on important design issues. However, respondents appeared far more eager to collaborate in funding data collection programs, especially large-scale travel surveys and complicated stated preference experiments. Shared funding of specialized model components, such as visitor or special event models, was seen as advantageous by most agencies.

Another topic widely identified as needing funding was education and training. There was consensus on the need for collaborative funding of intensive training programs that are longer and more in-depth than current National Highway Institute or TRB workshops. However, other than an eagerness to see federal leadership in this area, it was apparent that those interviewed did not have a clear vision of how such programs would be structured or by whom.

Phasing and Scheduling Issues

Many of the advanced models deployed to date have taken longer and cost more than originally intended, and in some cases were delivered only through the sheer determination of their champions and developers. The developers of such models offered a variety of explanations:

• Such models are cutting-edge endeavors, with attendant technical uncertainties and little experience on which to base cost and schedule estimates.
• Several dead ends were encountered that required reformulation of the modeling approach.
• The time required to implement the models in computer code was underestimated.
• The resulting models have long run times, often measured in days, such that changes took longer to test and assess than with traditional models.
• Until recently only a handful of people were attempting such models, such that just a few projects absorbed all of the available talent.
• The funds available for the effort were not well aligned with the model design.

Not surprisingly, the model recipients had a somewhat different perspective. Although quick to acknowledge the developers' explanations, they often cited overcommitment of the consultants, as well as overdesign of the model. Given the lack of experience with such models, they viewed it as difficult to know whether the proposed scope of work was achievable. The earlier projects appear to have been more affected by these factors than more recent undertakings, although the total number of such models attempted is still small and many are still in progress.

Another characteristic of the earlier development efforts, not reported in the interviews but apparent from looking at the whole, is that they all embarked on large, multi-year development efforts preceded by detailed model specifications. The specifications were typically completed within a few months; however, the subsequent development and implementation were undertaken over a period of years. Problems tended to become apparent only when the model became operational, which was most often near the end of the project. Accordingly, the options open to the team were few and mostly involved compromises on the original design or deferral of capabilities. Because many of these large projects were “once in a lifetime” opportunities for the agency to adopt advanced models, there was often little ability to expand the budget.
The resulting model was either reduced in functionality or completed through inadvertent cost-sharing by the developers. That none of these projects failed outright is remarkable.

In contrast, most of the implementations studied that succeeded in terms of cost and schedule were developed in stages. All were driven by the same comprehensive design at the outset, but were structured to provide interim capabilities and milestones. These in turn allowed the agency to gain familiarity with the models, assess them in practice, and make changes as requirements or desired capabilities evolved. More will be said about this in chapter five, as it represents a key lesson learned from the experience gained to date.

INSTITUTIONAL ISSUES

As challenging as the methodological, data, and budgetary issues were, perhaps the most daunting challenges facing proponents of advanced modeling were institutional. Many suggested that even technically successful advanced models cannot endure unless these issues are overcome.

Motivations for Advanced Models

A few respondents noted that they were interested in advanced models based solely on a desire to keep up with the latest trends. However, in most cases the cost of doing so is high enough that such motivation alone is insufficient to justify the investment. The majority of respondents were able to quickly describe a range of analyses they have accomplished with advanced models that were difficult or impossible to perform using traditional methods. The analytical needs that these models fulfilled are recounted in chapter three.

Most of the agencies that have instituted advanced models—and all but one of those with land use–transportation models—operate in political climates where issues larger than transportation alone are being tackled. Some, such as the Oregon DOT, are also charged with growth management, forcing them to expand their analyses beyond transportation. In other instances the motivation for modeling a larger realm includes the need to make the case for transportation investments when they compete with other programs, a desire to demonstrate how transportation affects other sectors and land markets, and mandates to contribute to larger analyses such as energy and emissions forecasts. Most believe that these “larger than just transportation” issues will grow in importance in the coming years, and are actively seeking to reorient their modeling capabilities to support such analyses. Some wanted to capitalize on the premise that they were “the only modeling game in town,” and saw these larger issues as an opportunity to leverage their considerable existing investment in data and models into new and larger roles for their agency.

The overarching issues affecting all advanced models include pricing and public–private financing, economic growth and job retention, the effect of the economic downturn, energy scarcity and the effect of large fuel price increases, the impacts of changing vehicular technology, and greenhouse gas emissions and their effect on climate change. These are all in addition to traditional concerns about congestion and transportation system efficiency, which remain important considerations but are now only two among many competing urgent agendas.

There is growing concern that macroscopic traffic assignment models, a fixture of the four-step modeling paradigm, do not accurately portray the location, extent, and duration of congestion, especially in large metropolitan areas.
The average travel times over peak periods are experienced by few travelers, who generally encounter either shorter times in the shoulders or longer ones in the peak hour; relying on the average therefore skews trip distribution and time-of-day models. The desire for a more realistic portrayal of network conditions, and of traveler responses to them, is driving the move toward dynamic network models.

Partnerships with Other Agencies

The move into advanced modeling brought with it the need to develop strategic alliances, and in some cases close working relationships, with other entities. These varied by locale and the type(s) of advanced models used. Many agencies working on tour- or activity-based person-travel models reported that their new partnerships were with outside developers, but that their relationships with other agencies remained unchanged. Their model enhancements represented more of an evolution than an expansion of their current mission and capabilities.

Forays into land use–transportation modeling perhaps entailed the largest number of new relationships. In addition to linkages with land use planners, this field of modeling typically requires close collaboration with GIS and business data specialists, as information not commonly used in urban transportation models is required. This includes detailed information about population, households, and firms, as well as floor-space occupancy and tenure, detailed employment data, and zoning and comprehensive planning overlays. Explicit linkages with economic models require collaboration with urban and regional economic forecasters. In some agencies working together at the technical level was simple; however, internal politics and department boundaries made formal collaboration more difficult.

Region-wide dynamic network models typically start from the same network used in static assignment, but add considerably more operational detail.
Traffic control devices and their settings must be added to the network, along with more careful coding of zonal centroid connectors. More detailed traffic count data than typically found in most MPOs are required for validation, as well as travel time information at fine-grained temporal resolution. Relationships with traffic and ITS engineers are required in such cases, as well as with traffic control center staff.

Some agencies reported that they expect to develop closer relationships with the EPA as the MOVES emission model comes into use. Modeling for air quality conformity analyses is currently carried out using the EMFAC model in California and MOBILE6 elsewhere. In many cases the output from traffic assignment (link flows and travel times by vehicle type) is provided to air quality agencies that run the emissions models without assistance from the modeling agency. The MOVES model will require more complex data at a higher level of spatial and temporal resolution than MOBILE6, and will require close collaboration between transportation and air quality modelers.

Staffing and Education

By far the most frequent issue cited by those interviewed was a lack of suitably qualified and trained staff. This shortage manifested itself in many ways, from agencies being unable to compete with the private sector for the required talent, to the conviction that universities were not equipping graduates with the necessary skills, to inadequate funding for additional staff even if they could be recruited. The wide diversity of factors contributing to the shortage precludes an easy solution; tellingly, none of the agencies surveyed identified a means of overcoming it. Part of the difficulty is that no guidelines on minimum qualifications have been established for the various types of modeling identified in this report.
Even if such guidelines were available, it is unclear whether the modeling commu- nity would embrace them. This is especially true if few or no opportunities exist to acquire the required skills. It is also unclear how such guidelines would be enforced absent a strong federal role in monitoring and certification. Finally, several
respondents noted that strong skills in software development, beyond the ability to use spreadsheets and write simple database queries, are as important as modeling skills. Modelers at the Oregon DOT, for example, have become adept at using the R statistical language for analyzing data and building models. It is unlikely that the DRCOG model development would have succeeded without its staff having comparable skills.

Two related issues were often mentioned as well. In several instances it was found that although the agency already had "the right people," many were drawn away from model development to more pressing applications. This finding appeared to be correlated with agencies that tended to do all of the region's modeling work in-house, although some exceptions were apparent. Most agencies are organized around periodic updates to their long-range transportation plan, a major undertaking that consumes virtually all of their resources. In the agencies interviewed, this cycle ran at three- to five-year intervals, which meant that model development, if it were to occur, had to be accommodated in the "off years" of the cycle.

A second issue, identified by almost all agencies interviewed, was the almost complete lack of training resources and opportunities available in advanced modeling. In many cases it was asserted that otherwise capable and interested staff had no practical means of acquiring the knowledge necessary to develop, implement, apply, or evaluate advanced models within their agency. Most believed that this knowledge was tightly concentrated within a relatively small number of academics and consultants. The publications of the academics were often viewed as physically inaccessible (by virtue of publication in costly academic journals) or mathematically inaccessible, while the work of the consultants was widely believed to be poorly documented and protected by proprietary interests.
In both instances it was noted that the level of detail provided in TRB publications and presentations, while appropriate for making contributions to the literature and sharing general knowledge, was wholly inadequate for replicating the work reported.

There was no consensus about how to overcome this problem. Many believed that this was an area where federal leadership would be most effective. Some pointed to the strides made by the FTA in harmonizing New Starts analysis requirements, and to its annual travel forecasting workshops, as evidence of what could be accomplished in this realm. Others believed that the universities might logically take the lead, but noted that they have shown no interest in doing so. A few agencies reported that including a task for formal training in their consultant contracts was helpful, although the effectiveness of the resulting training has not been established. One agency advocated a more coherent approach to talent management, in which the spreadsheet modelers, the software developers, and the statisticians each find a role suited to their abilities.

The most compelling suggested solution was an intensive "advanced modeling boot camp": a month-long course covering the fundamental concepts required to understand advanced models, together with an in-depth look at one specific model development project in enough detail to understand how to replicate or transfer the model to another location. However, most agencies reported that they would have difficulty affording such training, and most were candid about worries that participants would be lured away by consultants after completing the minimum required time at the agency.
A number of alternatives to a month-long course suggest themselves, but the questions of who would take the lead, and how to pay for delivery even just to the largest MPOs, remain unresolved.

TRB has recently undertaken an important leadership role in the development of technical resources for travel modeling, some of which will likely include knowledge-sharing networks and collaboration. With financial support from FHWA and FTA, TRB will serve as the coordinator of the Travel Demand Forecasting Technical Resource Initiative, providing staff and technical support to a program designed to bring together leaders in the modeling community to help guide the development of a multifaceted web-based portal. The contents and format of the portal are undefined at this writing, but are expected to include a wide variety of media: documentation of best and emerging practices, research reports, links to relevant research in parallel areas, educational and training material, podcasts or other multimedia presentations of current topics and breaking research and development results, and other tools. In so doing, the Initiative will not only introduce a new and widely accessible repository, but also equip modelers and agencies seeking to expand their capabilities. The staff and advisory committee are being invited at this writing, with work expected to begin in earnest early in 2010.

Peer Review

There was a diversity of opinion about the value of external peer review panels for advanced modeling work. Some agencies, such as the Oregon DOT, have long placed a high value on such panels. The Ohio DOT is working on a similar model (an integrated land use–transport model with an activity-based model component), but reported that peer review was a low priority. Similar contrasts were found for other types of model development work.
In the case of dynamic network modeling, no instances of peer review were found except for the early TRANSIMS deployments, which relied heavily on expert panels. Many of the subsequent TRANSIMS activities (see Appendix C) have also benefited from peer reviews.

TRB’s National Cooperative Highway Research Program (NCHRP) Synthesis 406: Advanced Practices in Travel Forecasting explores the use of travel modeling and forecasting tools that could represent a significant advance over the current state of practice. The report examines five types of models: activity-based demand, dynamic network, land use, freight, and statewide.
