CHAPTER 2

Research Guidebook

2.1 The NCHRP Project 08-91 Framework

The NCHRP Project 08-91 framework was developed following an extensive review of theoretical and practical approaches to cross-asset resource allocation. As previously noted, the theoretical best practices confirmed the technical approach provided in the baseline framework; however, the state-of-the-practice review revealed significant implementation challenges that must be addressed in order to apply the theory successfully amidst operational, political, and organizational considerations. Many of these were directly addressed in the final framework and tool prototype; others may be addressed in future research (Chapter 5). The following sections describe the final framework.

The framework reinforces performance-based planning and decision-making principles by directing agency resources toward the most cost-beneficial investments with an eye toward meeting agency goals and objectives. The framework has five steps that are consistent with performance-based planning and management principles:

1. Goals and objectives identification,
2. Performance metric evaluation,
3. Project impact assessment,
4. Decision science application, and
5. Trade-off analysis.

2.1.1 Goals and Objectives

Step 1 of the framework suggests the development of agency goals that are clearly articulated through objectives encompassing internal agency priorities as well as those of the transportation network. As stated in the AASHTO Transportation Asset Management Guide, goals provide "a sense of purpose, direction, and a high-level picture of what an organization wishes to achieve" (AASHTO, 2011). In performance-based planning, goals are used to translate the future vision for the transportation system into something that can be measured and tracked.
The establishment of meaningful goals that achieve widespread organizational, partner, stakeholder, and public recognition and support is critical to ensuring that investments made at all levels of the state are directed at common outcomes and aligned with the desired future vision for the transportation system. To achieve broader acceptance and understanding of statewide goals, the framework underscores the importance of an inclusive goal and objective development process that considers the perspectives of all transportation system users and providers. The final framework aligns investments with goals using a performance-based approach. The framework accommodates any goals for which the benefits of investments can be quantitatively or qualitatively assessed. Of course, agency goals that include the national goals created under MAP-21 (Table 1) will be required and are accommodated in the framework.
2.1.2 Performance Measures

The framework incorporates performance measures (Step 2) as a means to track progress toward goal fulfillment with respect to system operations and asset conditions. In the framework, any performance measure can be selected and compared as long as users provide the corresponding impact assessments. The framework requires comparisons of performance data but does allow for the integration of data and information that are qualitative in nature. For instance, professional judgment can be applied to quantify impacts on a commensurate scale (e.g., low = 1, medium = 2, high = 3).

Performance data can be obtained or calculated from agency management systems to forecast the future condition of the system across multiple performance areas and to understand likely project impacts. In this way, performance measures can be applied to understand which investments are likely to contribute the most toward the achievement of broad performance outcomes.

Selecting the right measures depends on the goals and objectives identified and on whether the measures are applied for performance forecasting or monitoring. In general, good measures are useful for short- or longer-term decision making, meaningful to the public and stakeholders, and supported by quality data that are regularly maintained. As defined in the AASHTO Transportation Asset Management Guide, quality data are accurate, precise, appropriate in context and in level of detail, timely, accessible, and well defined (AASHTO, 2011).

Performance targets can be developed from selected measures to define state-of-good-repair thresholds (for physical assets) and level-of-service (LOS) thresholds (for system operations). Targets have typically been set by edict or based on past performance or expert opinion and are adjusted over time to reflect more reasonable performance expectations in light of budget constraints.
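The commensurate-scale idea described above (e.g., low = 1, medium = 2, high = 3) can be sketched in a few lines. The mapping and the example values below are illustrative assumptions, not values prescribed by the framework.

```python
# Hypothetical sketch: qualitative expert ratings are mapped onto a shared
# numeric scale so they can sit alongside measured performance data.
# The three-level scale here is an illustrative assumption.

QUALITATIVE_SCALE = {"low": 1, "medium": 2, "high": 3}

def commensurate(rating):
    """Translate an expert's qualitative rating to a numeric value."""
    return QUALITATIVE_SCALE[rating.lower()]

# A judged livability impact can now be compared with measured metrics.
livability_impact = commensurate("High")  # 3
```

Once expressed numerically, such judged impacts can enter the same scaling and scoring steps as measured data (Section 2.1.4).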
Data management systems can be used to track system performance against set targets, streamline internal and external communications, and develop predictive tools based on historical inspection and treatment data.

Table 1. MAP-21 national goals.

Safety: To achieve a significant reduction in traffic fatalities and serious injuries on all public roads
Infrastructure condition: To maintain the highway infrastructure asset system in a state of good repair
Congestion reduction: To achieve a significant reduction in congestion on the National Highway System
System reliability: To improve the efficiency of the surface transportation system
Freight movement and economic vitality: To improve the national freight network, strengthen the ability of rural communities to access national and international trade markets, and support regional economic development
Environmental sustainability: To enhance the performance of the transportation system while protecting and enhancing the natural environment
Reduced project delivery delays: To reduce project costs, promote jobs and the economy, and expedite the movement of people and goods by accelerating project completion through eliminating delays in the project development and delivery process, including reducing regulatory burdens and improving agencies' work practices

The selection of projects in the framework provides the basis for the cross-asset resource allocation approach and is directed at achieving network-level performance targets while allowing for project-level constraints. The framework then allows the agency to understand the costs
associated with achieving mandated performance targets but can also allow the agency to set targets based directly on agency goals and preferences.

2.1.3 Project Impact Assessment

The application of performance models to forecast future performance with respect to various metrics is an essential component of the framework. Step 3 requires a project list with before-and-after predictions. Predictive models are generally developed by agencies using historical data to show how a particular project will influence network performance. To the extent practicable, these models need to be calibrated to reflect current data and validated based on a comparison of actual to predicted performance outcomes.

Performance-based modeling is used in the framework to predict project impacts before projects are built. Project impacts are incorporated through changes to the expected future baseline of a particular measure via performance jumps (i.e., instantaneous improvements) or delayed onsets of benefits (i.e., reduced rates of performance deterioration), depending on the measure being predicted and the project being evaluated (Figure 4). In particular, impacts can be inferred across performance areas as the forecasted change in a representative performance metric with and without implementation of a candidate project. Project impacts determined by this process are expressed in the same measurement units as the forecasted performance metrics. When these impacts are normalized to allow for a comparison between dissimilar metrics, the most beneficial projects with respect to agency goals and preferences can be predicted prior to implementation.
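The two impact mechanisms just described, a performance jump versus a delayed onset of benefits, can be sketched against a simple baseline forecast. The linear deterioration form and all rates below are illustrative assumptions, not the framework's actual models.

```python
# Illustrative sketch of Figure 4's two impact types: an immediate
# "performance jump" versus a reduced deterioration rate. The linear
# deterioration model and numbers are assumptions for illustration.

def baseline(year, initial=90.0, rate=3.0):
    """Condition index declining linearly with no treatment."""
    return initial - rate * year

def with_jump(year, treat_year=5, jump=20.0, initial=90.0, rate=3.0):
    """Instantaneous improvement realized at the treatment year."""
    value = baseline(year, initial, rate)
    if year >= treat_year:
        value += jump
    return value

def with_slower_decay(year, treat_year=5, new_rate=1.0, initial=90.0, rate=3.0):
    """Benefit realized over time: deterioration slows after treatment."""
    if year < treat_year:
        return baseline(year, initial, rate)
    return baseline(treat_year, initial, rate) - new_rate * (year - treat_year)

# Project impact = forecast with the project minus forecast without it.
impact_at_year_10 = with_jump(10) - baseline(10)  # 20.0 condition points
```

The with/without difference at a given analysis year is the project impact in the measure's own units, which is then normalized for comparison with dissimilar metrics.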
Figure 4. Project benefits can be realized (i) immediately or (ii) over time.

In this way, the framework is consistent with guidance provided in the AASHTO Transportation Asset Management Guide, which suggests that "through the use of management systems, engineering and economic analysis, and other tools, transportation agencies can evaluate collected data before making decisions on how specific resources should be deployed" (AASHTO, 2011).

Agencies commonly apply performance modeling within individual management systems to predict project impacts with respect to a particular performance area; for example, the effect of repaving on pavement condition. Such siloed analysis is limited in its ability to assess impacts across all performance areas, focusing instead on what can be readily predicted using the individual data management system. This approach can preclude consideration of other significant or adverse consequences that are important to understand when making decisions and considering trade-offs. To avoid this potential pitfall, the final framework combines projects from all management systems into a single pooled set in order to evaluate project impacts across all performance areas in a systematic way (Figure 5). This allows agencies to take credit for
overlapping performance impacts not traditionally assessed within a silo (e.g., pavement resurfacing can lead to a safety benefit).

It is important to note that many state DOTs apply predictive tools in the areas of bridge and pavement asset management but lack similar tools and methods for predicting performance and project impacts across other performance areas. Further, calibrating predictive models by comparing observed with predicted outcomes can be challenging in practice when the relative contribution of a project to overall system performance is unknown; as stated in the Performance-Based Planning and Programming Guidebook, "a time lag exists between the implementation of many transportation improvements and the resulting changes to performance indicators, making the connection between decision making and results unclear" (ICF International, 2013). Some project benefits are not immediate but are realized over time, so their effect on system performance may not be directly apparent; for these cases, it is especially important to maintain historical performance and treatment data and to calibrate predictive models over time to match observed data.

Several tools and methods are available to quantitatively predict future baseline performance and project impacts with respect to infrastructure condition, safety, and mobility metrics. However, for many other performance areas, including livability and sustainability, predictive tools and methods are not well developed. Expert opinion can be used to qualitatively assess project impacts across performance areas for which there are no predictive tools or methods available. As explained in an FHWA case study of the Minnesota DOT, it is "difficult to balance goals with less rigorous measures, such as economic competitiveness or livability, with goals such as system preservation, mobility, and safety" (John A. Volpe National Transportation Systems Center, 2011).
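Calibrating a predictive model to match observed data, as recommended above, can be as simple as re-estimating a deterioration rate from condition history. The least-squares approach and the observations below are illustrative assumptions, not a method specified by the guidebook.

```python
# Illustrative calibration sketch: re-estimate a linear deterioration
# rate from observed (year, condition) history so that predicted and
# observed outcomes stay aligned. The data are hypothetical.

def calibrate_rate(history):
    """Least-squares slope of (year, condition) observations."""
    n = len(history)
    mean_y = sum(y for y, _ in history) / n
    mean_c = sum(c for _, c in history) / n
    num = sum((y - mean_y) * (c - mean_c) for y, c in history)
    den = sum((y - mean_y) ** 2 for y, _ in history)
    return num / den  # negative value = condition loss per year

observed = [(0, 90.0), (1, 86.5), (2, 83.0), (3, 79.5)]
rate = calibrate_rate(observed)  # about -3.5 condition points per year
```

Re-running this fit as each new inspection cycle arrives is one way to keep forecasts of project impacts honest over time.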
In order to develop a comprehensive understanding of project impacts across all performance areas, the framework incorporates both quantitative and qualitative measures. While performance modeling can be used in many cases to quantify expected project impacts, the use of predictive tools and methods is not always appropriate or reasonable given available data.

Figure 5. Combining all projects across management systems into a pooled set. Source: Adapted from Mamlouk and Zaniewski, 1998; Labi, 2001.

Additionally, breaking down silos to select the most beneficial projects overall and allocating resources accordingly will require a fundamental change in the way these decisions have historically been made. In practice, project selection often occurs within asset silos; that is, projects are chosen within individual management systems given a predetermined budget allocation that is typically set based on historical proportions or by legislative edict rather than in accordance with performance goals. Arbitrarily allocating resources across siloed management systems in this way limits the ability of agencies to select and implement projects that are most beneficial with respect to performance goals. The framework overturns this paradigm by basing cross-asset resource allocation on the selection of projects that are expected to achieve the greatest performance outcomes across all investment categories. This is also considered an improvement on the needs-based allocation that
is increasingly being applied at state DOTs. The challenge with the needs-based approach is that the allocation relies on varying definitions of "need" and still takes a siloed focus instead of a system focus.

The framework advances siloed decision making by moving away from viewing resource allocation as the input to the programming process and instead considering the allocation to be the output corresponding to the optimal set of achievable performance outcomes (Figure 6). Agencies often organize departments around silos that receive set allocations, thus requiring the individual departments to do the best with what they have. While a more integrated approach is strongly encouraged, the framework does allow for semi-siloed decision making. Two optimization techniques are used in the framework, depending on agency preferences and data availability. Of these two, a bottom-up approach that fully integrates linkages to management systems is preferred; however, a top-down approach can be conducted that incorporates the use of asset management systems within the silos.

2.1.4 Decision Science Application

Project performance benefits can be predicted across performance areas using a variety of methods by calculating the change in a performance measure with and without implementation of a candidate project. The resulting project impacts are expressed in the same units as the measures being predicted; thus, impacts across performance areas cannot be readily compared. Further manipulation is required to achieve a true apples-to-apples comparison between performance metrics across investment categories. The AASHTO Transportation Asset Management Guide similarly recognizes this "need to combine dissimilar performance measures . . .
in order to develop a scale that can be used for comparing and prioritizing alternative investments."

Figure 6. Typical siloed investment planning versus a performance-based approach.

Decision science techniques are applied in the framework to create a transparent, structured, and repeatable method for normalizing and comparing project benefits across investment categories on a level playing field based on the following process:

• Weight: Determine which project benefits are most important to the decision maker using a value matrix to evaluate priorities,
• Scale: Convert project benefits with respect to various performance metrics into dimensionless units that can be readily compared,
• Score: Express project benefits in terms of their relative importance to the decision maker,
• Prioritize: Divide project benefits by costs to determine feasibility and rank eligible projects, and
• Optimize: Select the most cost-beneficial projects with respect to budget and performance constraints.

The optimal cross-asset resource allocation is then the total selected project cost within each asset class divided by the total budget. While this seems straightforward, the ability of transportation agencies to implement a fully flexible, discretionary approach to resource allocation varies across the country due to unique institutional, organizational, and political situations. The framework accommodates the technical challenges of these nuances, as detailed in Section 2.4. As previously noted, the framework provides two optimization techniques for project selection.

• Bottom-up (project-level): The framework's bottom-up analysis is preferred and is applied in the tool prototype. This technique provides comprehensive impacts (e.g., benefit and disbenefit assessments) and cost estimates to evaluate and justify project-level selections. In this approach, the performance values that would occur with and without implementation of a project are evaluated and compared for all metrics relating to agency goals. Using the previously described decision science techniques, optimized project sets are generated under varying constraints and compared. The objective of the bottom-up optimization is to maximize the program score, subject to constraints, by changing which projects are selected, as shown in Figure 7.
• Top-down (network-level): The objective of the top-down optimization is a slightly different formulation than the bottom-up, as shown in Figure 8.
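The weight-scale-score-prioritize-optimize sequence can be sketched end to end in a few lines. Everything below is an illustrative assumption: the project data and weights are invented, linear scaling stands in for the tool's utility scaling, fixed weights stand in for AHP, and a simple greedy pass stands in for branch-and-bound, chosen here only for readability.

```python
# Minimal sketch of the five-step decision science sequence.
# Data, weights, and methods are simplified stand-ins, not the tool's.

projects = [
    # (name, cost in $M, raw impacts {measure: predicted change})
    ("Resurface I-10", 4.0, {"condition": 12.0, "safety": 2.0}),
    ("Bridge deck rehab", 6.0, {"condition": 18.0, "safety": 1.0}),
    ("Intersection fix", 2.0, {"condition": 0.0, "safety": 5.0}),
]
weights = {"condition": 0.6, "safety": 0.4}   # Weight step (fixed, not AHP)
maxima = {"condition": 18.0, "safety": 5.0}   # bounds for linear scaling

def score(impacts):
    # Scale step: map raw impacts to 0-1; Score step: weighted sum.
    return sum(weights[m] * impacts[m] / maxima[m] for m in weights)

# Prioritize step: rank by score-to-cost ratio, best first.
ranked = sorted(projects, key=lambda p: score(p[2]) / p[1], reverse=True)

# Optimize step (greedy stand-in): select down the list within budget.
budget, selected = 8.0, []
for name, cost, impacts in ranked:
    if cost <= budget:
        selected.append(name)
        budget -= cost
```

With these numbers, the intersection project's high score-to-cost ratio wins it a slot ahead of the costlier bridge work; the resulting per-asset-class cost shares are the implied cross-asset allocation.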
The top-down formulation is more common in practice and is based on the development of network-level performance versus investment-level curves built by running siloed management systems under varying financial constraints. Decision makers then have the ability to examine allocations until a suitable performance outcome is reached.

Figure 7. Example bottom-up project selection.

Figure 8. Example top-down performance versus investment level trade-offs.

If a top-down approach is used, the framework suggests a hybrid application (Figure 9), which applies decision science principles to identify the optimal mixture of performance levels across the asset types; this can also be applied in the tool prototype. Once the optimal point on each of the performance curves is decided based on user preferences, the associated project sets can be generated and compared. The main challenge of this approach is its reliance on management systems, since each system typically considers different performance measures without consideration of cross-asset impacts.

Figure 9. Hybrid approach blending top-down and bottom-up approaches to resource allocation.

2.1.5 Trade-off Analysis

Trade-off analysis is applied in the framework to determine what performance can be bought given various investment scenarios and funding levels. Additionally, it is used to inform decision makers and stakeholders of exactly what is gained and lost by allocating resources in a specific way, thus allowing them to consider the benefits and implications of the policies and strategies associated with making investment decisions. In this way, trade-off analysis can be used to facilitate meaningful discussions around what truly matters most to stakeholders and users of the transportation system given the complexity involved in applying limited resources to achieve comprehensive performance goals. Ultimately, the preferences gathered from this discussion can be used to adjust performance targets to arrive at the most beneficial cross-asset resource allocation in light of fiscal constraints. Example trade-off analyses that can be explored using the framework include comparing project alternatives and programs; identifying the minimum investment level to achieve targets; and assessing the sensitivity of outcomes to varying investment levels, allocation percentages, political initiatives, and agency preferences (e.g., Figure 10).

2.2 Incorporating Risk

MAP-21 legislation calls for state transportation agencies to develop risk-based transportation asset management plans for highway infrastructure on the enhanced National Highway System.
While "risk" is a term with multiple connotations, it is considered in this context to deal with various hazard, financial, operational, and strategic threats and opportunities (Proctor and Varma, 2012).

Figure 10. Example comparison of performance outcomes by strategic direction. The chart compares scenarios (unconstrained; preservation first; congestion reduction and economic development focus) across measures including average IRI (inches/mile), percentage of pavements and bridges in "good" or better condition, total jobs created, total number of crashes, and percentage of congested roads. Note: By assessing the impact of agency decisions with a focus on constraints in particular, the data-driven framework can be used to bolster a case for more flexibility in allocating resources.

• Hazard risks can be classified as uncertain structural performance due to aging infrastructure or vulnerability to extreme events. Uncertain performance due to typical aging and climate processes has been evaluated in research, such as that in NCHRP Report 713: Estimating Life Expectancies of Highway Assets, with an emphasis on identifying contingency funding levels; uncertain performance due to extreme weather events has been an area of emphasis in research such as that of Croope (2010), with a focus on reducing vulnerability through improved infrastructure resilience.
• Financial risks can be classified as having insufficient funding available due to either uncertain revenue or uncertain project costs.
• Operational risks can be classified as ineffectual maintenance programs or inaccurate forecasting models.
• Strategic risks can be classified as weak program management and data collection processes.

In the context of cross-asset resource allocation, a comprehensive consideration of risk is beneficial. Figure 11 details how such risk considerations are integrated into the framework. As highlighted in Figure 11:

• Hazard risks are integrated into the framework via the project development process by identifying candidate projects to reach and maintain a state of good repair (SGR), simulating deterioration probabilities as a performance measure/target in the prioritization/optimization process, and constraining the selection of critical projects to be "must do."
• Financial risks are incorporated through simulating revenue sources and assessing performance trade-offs for various levels of investment.
• Operational risks are incorporated through simulating performance impacts.
• Strategic risks are incorporated by testing the sensitivity of varying performance preferences, targets, and resource allocation strategies (e.g., siloed versus integrated management, fixed versus flexible budget allocation, and worst-first versus proactive preservation).

Figure 11. Incorporating risk into the framework. Ways of incorporating risk at each step: Goals and Objectives Identification, include risk reduction objectives across investment areas; Performance Metric Evaluation, develop risk assessment scores as a function of likelihood and consequence; Project Impact Assessment, identify projects to mitigate deteriorating structural performance (hazard risk) and simulate uncertain costs and benefits (operational risk); Decision Science Application, determine "must do" projects whose risk is too high to leave them incomplete and adjust risk tolerance when scaling impacts; Trade-off Analysis, assess trade-offs of alternative funding scenarios (financial risk) and compare programmatic policies (strategic risk).
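The simulation-based treatment of financial and operational risk noted above can be sketched with a small Monte Carlo experiment: if each project's impact is uncertain, repeated sampling yields a distribution of program outcomes and hence a level of confidence in hitting a performance target. The normal distribution and every number below are illustrative assumptions.

```python
# Hedged sketch: Monte Carlo simulation of uncertain project impacts to
# estimate the probability of meeting a performance target. The normal
# assumption, means, and standard deviations are hypothetical.

import random

def confidence_of_target(impact_means, impact_sds, target,
                         trials=20000, seed=7):
    """Share of simulated program outcomes meeting or exceeding target."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        total = sum(rng.gauss(m, s) for m, s in zip(impact_means, impact_sds))
        if total >= target:
            hits += 1
    return hits / trials

# Three projects with uncertain condition-point gains; target of 18 points.
p = confidence_of_target([10.0, 6.0, 4.0], [2.0, 1.5, 1.0], target=18.0)
```

The same machinery applies to an uncertain budget or revenue stream by sampling those quantities instead of (or alongside) the impacts.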
By simulating distributions describing the likelihood of performance outcomes, practitioners can identify the level of confidence in achieving performance goals. For instance, if the standard deviation for each project impact were known, the probability of achieving various system performance levels could be determined (Figure 12). A similar exercise could be completed for an uncertain budget or other factors. In all cases, it is suggested to identify, assess, manage, and monitor the effectiveness of risk strategies as determined via agency tolerance. Whether qualitatively or quantitatively determined, risk likelihood and consequence can be registered in a log that can be used to evaluate strategies and can be updated over time in order to improve future decision making.

2.3 Tool Prototype

2.3.1 Technical Components

The tool prototype focuses largely on the decision science application (framework Step 4), which includes weighting, scaling, scoring, prioritizing, and optimizing investments in the project pool. The automated functions are described in the following; however, it should be noted that the tool prototype provides user overrides for all results.

• Weight: In the tool prototype, the analytic hierarchy process (AHP) is applied to weight each of the selected performance measures. The relative importance of each criterion is based on qualitative ratings assigned by the user for each pairwise comparison. Matrix algebra is then applied to identify the intended weights, which are technically represented by the eigenvector of the value matrix. To validate, consistency checks are built into the tool prototype to preserve the preference order of ratings (e.g., if measure A is preferred to measure B and measure B is preferred to measure C, then measure A should be preferred to measure C).
The setting of ratings may be conducted in a collaborative group setting or through a Delphi process (anonymous rounds of setting and revising ratings based on aggregated group results), the latter of which can mitigate biases from dominant personalities.

Figure 12. Assessing confidence in performance outcomes.
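The eigenvector weighting and consistency check described under the Weight step can be sketched in pure Python. The pairwise matrix below is an illustrative assumption; power iteration stands in for the tool's matrix algebra, and the 0.58 divisor is the standard random index for a 3x3 AHP matrix.

```python
# Illustrative AHP-style weighting: reduce a pairwise comparison matrix
# to weights via its principal eigenvector (approximated by power
# iteration) and compute a consistency ratio. Not the tool's exact code.

def ahp_weights(matrix, iters=100):
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]
    # Principal eigenvalue estimate drives the consistency index.
    aw = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    return w, ci / 0.58  # weights and consistency ratio (3x3 random index)

# "Condition is 3x as important as safety and 5x as important as mobility."
pairwise = [[1, 3, 5],
            [1 / 3, 1, 2],
            [1 / 5, 1 / 2, 1]]
weights, cr = ahp_weights(pairwise)
```

A consistency ratio below roughly 0.1 is conventionally taken as acceptable; a higher value signals ratings that violate the transitive preference order described above.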
• Scale: Utility-based and linear scaling methods are programmed into the tool prototype. The utility method is preferred because it allows agencies to assign relative importance to varying levels of performance. By removing the assumption of linearity, preferences can be used to ensure that projects with higher performance outcomes are scored on a sliding scale relative to lower performance levels. For example, a pavement project that results in an improvement from poor to fair condition may produce greater utility (e.g., "satisfaction" represented on a dimensionless scale from 0, least, to 1, most) than a project with an improvement from fair to good condition. If all improvements are valued the same, then linear scaling may be applied. These techniques mitigate potential pitfalls associated with monetization efforts. Letting perfection stand in the way of progress, agencies often hesitate to include softer metrics such as livability in benefit-cost analyses; this exclusion can disaffect stakeholders who do not feel that they are heard. Additionally, even dollar-to-dollar comparisons can be subjective when considering the value associated with saving one dollar for the agency versus one dollar for the user. Therefore, utility scaling can be applied to more accurately align preferences for the comparison of dissimilar metrics.
• Score: A representative score is assigned to each candidate in the tool's project list. The score combines agency weighting and scaling preferences using the weighted-sum product method (Score = Weight1 × Scaled Value1 + Weight2 × Scaled Value2 + . . .). Higher scores indicate the relative magnitude of benefits realized by implementing a project.
This scoring process is similar to the overall pavement condition index commonly applied by state DOTs, where the index is a function of varying distresses (e.g., rutting, raveling, and cracking) rated by inspectors on a simplified scale.
• Prioritize: Critical to a financially constrained program, the tool prototype develops a prioritized list of projects for screening based on their score-to-cost ratio (similar to a benefit-to-cost ratio). This essentially allows the best, most cost-effective projects to be programmed.
• Optimize: The optimal allocation of resources is automated in the tool prototype using three general techniques:
– Bottom-up optimization: The selection of projects from a prioritized list is determined by solving the integer programming problem with a branch-and-bound algorithm. This algorithm works by systematically navigating different possible combinations of projects (branches) and moving toward a solution that maximizes performance, while meeting all constraints, by dropping subsets of suboptimal paths (cutting branches). The technique is perhaps best known for solving the travelling salesman problem, in which travel time must be minimized, subject to making all required stops, by changing the discrete pathways selected.
– Top-down optimization: The selection of allocations from preferences for performance outcomes is determined by solving the nonlinear optimization problem using the generalized reduced gradient algorithm. This algorithm analyzes derivatives (rates of change) in the overall score as allocations change so as to quickly arrive at a solution that maximizes the program score.
– Trade-off optimization: In order to reduce computing time without a significant loss in precision when producing trade-off curves (i.e., the Pareto frontier of optimal solutions), a heuristic algorithm was constructed in the tool prototype by blending the greedy algorithm (sort the prioritized list by score-to-cost ratio in descending order and program down the list until funds are exhausted) with a genetic algorithm (a heuristic search technique, inspired by natural selection, that evolves candidate solutions through nonlinear mutation functions to quickly find patterns in the data) to enhance the solution.

2.3.2 User Benefits

Given the complexity of the technical components behind optimizing resource allocation decisions, results need to be communicated in an understandable way. In the tool prototype,
various summary graphics and dashboards are used to quickly view what performance can be achieved given user inputs. Additionally, as part of the MAP-21 legislation, state officials are asked to define various performance measures using system performance LOS and asset SGR. The tool prototype allows decision makers to define either performance categorization on a red-yellow-green color scale for each measure and to report the outcomes of optimization and trade-off analyses in these terms. Performance dials showing these scales by measure are built into the tool to provide a real-time predictive performance report card suitable for executive decision making.

Using the tool prototype provides a number of benefits, including transparency and accountability in decision making as well as opening the door to discussions of agency leadership and practitioner priorities and preferences. These benefits are described in more detail in Chapter 4, which highlights the potential uses of the tool in agency planning, project selection, and program development.

2.4 Technical Challenges and Success Factors

For any decision-support tool to be of practical use to transportation agencies, flexibility is critical so that a variety of planning processes and measures can be accommodated. Recognizing that performance and asset management programs vary in maturity across the country, adoption of the tool prototype will depend on its ease of use, ability to automate complex calculations, clear communication of outputs, and ability to iterate alternative decision strategies. The following specific technical challenges were identified in the research and tool prototype workshops and testing.
Each was considered and overcome by modifications to the framework and/or tool prototype:

• Setting a planning horizon,
• Identifying and selecting must-do projects,
• Providing the ability to analyze user-specified performance measures (including qualitative metrics),
• Identifying performance measures by functional class,
• Handling of alternative funding structures,
• Integrating data from existing management systems,
• Allowing for geographic constraints, and
• Providing clear reporting of performance outcomes in a simple user interface.

A discussion of each of these considerations is provided in the following subsections, as are the modifications made to the framework and tool prototype.

2.4.1 Setting a Planning Horizon

One limitation of a stand-alone decision tool is the inability to communicate with agency-specific asset management systems. As such, any stand-alone tool would not have the benefit of linkages to agency databases, performance prediction models, or life-cycle-cost analytical tools to evaluate project alternatives. This poses a challenge for supporting the longer-term analysis associated with cross-asset resource allocation.

For the bottom-up analysis, the tool prototype developed is currently suitable to support a shorter-term planning cycle of no more than 4 to 5 years (e.g., a typical horizon for STIPs). For periods exceeding 5 years, a linkage to asset management systems would be required to dynamically update project bundle recommendations based on what has or has not been programmed under a financially constrained scenario, as well as to link investment levels to long-range performance.
24 Guide to Cross-Asset Resource Allocation and the Impact on Transportation System Performance

A next step in the research would be to incorporate windows of opportunity into the optimization for a long-range, multiyear analysis period. For instance, if a bridge rehabilitation is not completed in time, then a bridge replacement might become the more prudent project. Pavement projects have similar concerns: if preventive maintenance treatments are not applied on-cycle, then far more costly repair projects may be required.

When using the framework for LRTP development, project alternatives (preferably a limited number, to reduce computational time) can be defined for every year within the planning horizon, as determined by the management system and what is or is not programmed within the specified window. The optimization would then be modified to include a constraint that only one alternative activity profile can be completed for each structure. The objective would then be to maximize performance at the end of the planning horizon while keeping assets at a tolerable level of performance throughout, or to maximize performance in each year of the planning horizon by dynamically updating projects throughout.

For the top-down analysis, the tool prototype was designed to maximize performance at the end of a short-range planning horizon assuming constant annual funding, based on user inputs for performance over time at various annual investment levels. To build on this research, next steps would include considering the impact of optimizing a variable annual funding amount over the planning horizon among the asset classes and programming this optimization into an updated tool.
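As context for the optimization discussed here and in Section 2.3, the blend of a greedy pass (program projects in descending score-to-cost order until funds are exhausted) with a genetic refinement can be illustrated in code. The report does not publish the prototype's implementation, so the following is only a minimal Python sketch of that general approach; the project list, scores, and parameter values are invented for illustration.

```python
import random

def greedy_select(projects, budget):
    """Greedy seed: take projects in descending score-to-cost order
    until the budget is exhausted. Returns a 0/1 selection vector."""
    order = sorted(range(len(projects)),
                   key=lambda i: projects[i]["score"] / projects[i]["cost"],
                   reverse=True)
    picked, spent = [0] * len(projects), 0.0
    for i in order:
        if spent + projects[i]["cost"] <= budget:
            picked[i] = 1
            spent += projects[i]["cost"]
    return picked

def fitness(sel, projects, budget):
    """Total score of a project bundle; infeasible bundles score zero."""
    cost = sum(p["cost"] for s, p in zip(sel, projects) if s)
    if cost > budget:
        return 0.0
    return sum(p["score"] for s, p in zip(sel, projects) if s)

def refine_with_ga(projects, budget, seed, generations=200, pop_size=30,
                   mutation_rate=0.05, rng=random.Random(42)):
    """Genetic refinement: evolve a population seeded with the greedy
    solution via selection, single-point crossover, and bit-flip mutation."""
    n = len(projects)
    pop = [seed] + [[rng.randint(0, 1) for _ in range(n)]
                    for _ in range(pop_size - 1)]
    for _ in range(generations):
        pop.sort(key=lambda s: fitness(s, projects, budget), reverse=True)
        survivors = pop[:pop_size // 2]          # selection: keep top half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)            # single-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if rng.random() < mutation_rate else g
                     for g in child]             # bit-flip mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda s: fitness(s, projects, budget))

# Hypothetical candidate projects; "score" stands in for the composite of
# weighted, scaled performance measures described in the framework.
projects = [
    {"name": "Bridge deck rehab", "cost": 4.0, "score": 70.0},
    {"name": "Pavement overlay",  "cost": 2.5, "score": 55.0},
    {"name": "Signal upgrade",    "cost": 1.0, "score": 30.0},
    {"name": "Culvert repair",    "cost": 1.5, "score": 20.0},
    {"name": "Safety barriers",   "cost": 3.0, "score": 45.0},
]
budget = 6.0
seed = greedy_select(projects, budget)
best = refine_with_ga(projects, budget, seed)
print([p["name"] for s, p in zip(best, projects) if s])
```

Seeding the genetic search with the greedy solution means the refinement can only match or improve on the greedy answer, which is the practical appeal of the blend: fast convergence without sacrificing the greedy baseline.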
2.4.2 Identifying Must-Do Projects

While the flexibility to choose any set of projects based on agency discretion is attractive, there often exists a subset of projects considered "must do" or "earmarked," meaning that they should be programmed above all others. Examples include policies that require an agency to dedicate preservation dollars to critical assets before considering all other transportation assets, policies that require an agency to select projects based on risk tolerance, and policies that dedicate funding to a signature project with the remaining funds available for other investments. The tool prototype accommodates such cases through the optimization process: if the user specifies a must-do project, the available discretionary funding can be adjusted accordingly and allocated using the aforementioned decision science techniques. If the selected must-do projects exceed the available funding, then the standard approach of using agency preferences to select projects still applies.

2.4.3 Ability to Analyze User-Specified Performance Measures

Successful framework implementation will depend on the ability of the tool prototype to accommodate user-specified performance measures. As developed, the tool prototype is not limited to a fixed subset of measures. Instead, users may specify any performance measure and the respective performance value with and without implementation of each candidate project. When combined with the decision science process of weighting and scaling the measures, the tool prototype allows for the comparison of any set of quantitative or qualitative measures. The only resulting limitation of this approach is the type of network-level trade-off curves that can be generated.
To support any user-specified performance measure, the trade-off curves generated by the tool prototype are limited to (a) a network average of the specified performance measure, (b) the percentage of the network beyond a specified performance threshold associated with the user-identified measure, or (c) the network total of the specified performance value. This excludes network statistics with a more complex linkage to the specified performance measures.

2.4.4 Identifying Performance Measures by Functional Class

In recognition of the varying traffic volumes and economic activity among different roadways, it is often prudent to distinguish between functional classes when setting performance targets. Per the MAP-21 legislation, agencies are specifically tasked with keeping the National Highway System
(NHS) in an acceptable state of repair. To reflect varying performance by functional class, the tool prototype was designed to accommodate user-defined investment areas and performance measures. For instance, users may wish to create unique performance measures reflecting NHS and non-NHS structurally deficient bridge deck area and the International Roughness Index (IRI). Likewise, the user can create two investment areas representing the NHS and non-NHS pots of money from which to improve the corresponding metrics. Along with the distinction of performance by functional class, the tool prototype allows the user to define varying states of repair by metric.

2.4.5 Handling Alternative Funding Structures

In recognition of projects being eligible only for certain programs, the tool prototype was designed to allow for multiple funding sources or investment areas. For each pot of money, users can specify budget floors or ceilings while identifying which funding source corresponds to which project. However, given the complexity of funding structures across the country, the tool prototype relies on the agency to account for internal processes (such as matching or partial funding from multiple programs) when setting budget totals by program.

2.4.6 Integrating Data from Existing Management Systems

Manual collation and processing of data can be time consuming and can inhibit agencies from pursuing more analytical resource allocation techniques. While an application and interface to pull and clean data sources could not be designed within the scope of the tool prototype, the framework was designed to accommodate commonly collected information. Data requirements will vary by the optimization approach being applied.
From a bottom-up perspective, agencies are expected to have a list of candidate projects, each with a total cost estimate, performance values with and without implementation, an identified primary funding source, and a unit of measure (e.g., project length) that can be used to develop a weighted network average (e.g., percent of miles in good condition). From a top-down perspective, agencies are expected to input trade-off curve information: network performance values at various investment levels. Conclusions and Next Steps (Chapter 5) suggests a full test deployment to allow for a streamlined enterprise solution with more automated data integration.

2.4.7 Allowing for Geographic Constraints

As with environmental justice analyses, it is the responsibility of the agency to ensure the fair distribution of system performance benefits among its stakeholders. This is particularly acute at agencies with a more decentralized structure. To ensure equity among districts/regions or urban/rural populations, the tool prototype can be used to define unique funding sources for each sub-area and then to set minimum performance targets or budget floors/ceilings for each area.

2.4.8 Clear Reporting of Performance Outcomes in a Simplified User Interface

The interface of the tool prototype helps users navigate the complex analyses by guiding them to the essential inputs at the front end while reserving all complex mathematical calculations for automated back-end processes. Additionally, the tool prototype allows for exploratory what-if analyses through quick and easy iterations among possible performance outcomes and strategic policies.

Given the technical complexity behind the tool prototype, there is also a need to effectively tailor messages for audiences ranging from analysts to executives. The tool prototype uses infographics for communicating performance outcomes, which can be compared in a summary tab.
Such a dashboard (e.g., Figure 13) can be customized per agency definitions of LOS and SGR. Additional summaries can be generated for agencies wishing to see more detail within a specific management system.

Many participants in the workshops and tool testing noted the desire for additional functionality and screens in the tool prototype (Section 3.3) that were outside the scope of the proof-of-concept tool prototype. These modifications are suggested by the research team in Chapter 5.

Figure 13. Tool prototype graphics of expected performance outcomes based on allocation strategy.
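The reporting described in this chapter rests on a few simple calculations: the supported network statistics (a network average and a percent-beyond-threshold figure, as in Section 2.4.3) and a mapping of each statistic onto a red-yellow-green scale. The following Python sketch is purely illustrative; the measure, section data, and color cut points are invented, not the tool prototype's actual configuration.

```python
# Illustrative sketch of red-yellow-green performance reporting.
# Measure definitions, thresholds, and data are hypothetical.

def network_average(values, weights):
    """Weighted network average of a measure (e.g., IRI weighted by miles)."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

def percent_beyond_threshold(values, weights, threshold, lower_is_better):
    """Percent of the network (by weight) meeting the threshold."""
    total = sum(weights)
    ok = sum(w for v, w in zip(values, weights)
             if (v <= threshold if lower_is_better else v >= threshold))
    return 100.0 * ok / total

def rate(value, green_cut, yellow_cut, lower_is_better=False):
    """Map a statistic onto a red-yellow-green scale."""
    if lower_is_better:
        value, green_cut, yellow_cut = -value, -green_cut, -yellow_cut
    if value >= green_cut:
        return "green"
    return "yellow" if value >= yellow_cut else "red"

# Hypothetical pavement sections: IRI (in/mi) and length (mi).
iri = [72, 95, 130, 180, 88]
miles = [10, 4, 6, 2, 8]

avg_iri = network_average(iri, miles)                       # network average
pct_good = percent_beyond_threshold(iri, miles, 95, True)   # % at/below 95 in/mi

print(round(avg_iri, 1), round(pct_good, 1),
      rate(pct_good, green_cut=75, yellow_cut=50))
```

A dashboard like the one in Figure 13 would repeat this kind of categorization for every user-defined measure, letting executives read a compact red-yellow-green report card rather than raw statistics.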