
A Comprehensive Development Plan for a Multimodal Noise and Emissions Model (2010)

Chapter: Appendix B: Model Design Evaluation

Suggested Citation:"Appendix B: Model Design Evaluation ." National Academies of Sciences, Engineering, and Medicine. 2010. A Comprehensive Development Plan for a Multimodal Noise and Emissions Model. Washington, DC: The National Academies Press. doi: 10.17226/22908.


APPENDIX B. MODEL DESIGN EVALUATION

As dictated by the RFP, the recommendation for the multimodal model design should consider "costs associated with development and application of the resulting model versus expected benefits, as well as technical feasibility; capability to support demographic, transportation, and economic analysis of alternative scenarios and mitigation strategies; acceptability to regulatory agencies; and flexibility to meet changing needs." This was accomplished through a structured evaluation process to support the choices on how to proceed from the current models and development projects to the end state of the multimodal model design.

Section B.1 explains the evaluation process. Section B.2 describes the alternative model design concepts that were constructed for the evaluation. Section B.3 identifies the benefits and drawbacks of each alternative. Section B.4 presents the results of the first round of evaluation. Section B.5 summarizes the round 2 evaluation process. Section B.6 presents the results of the second round. Section B.7 describes and justifies the model design that won the second round.

B.1. Evaluation Process

A critical step of the project was to decide on the multimodal noise and emissions model design concept on which to then construct the MDP. The field of Decision Analysis (DA) offers various techniques and processes that could form the basis of a formal, quantitative evaluation process to justify that design against reasonable alternative designs. The project team explored three such processes, sometimes called multiple attribute decision models (MADM):

● Pugh Matrix (B-1)
● Analytic Hierarchy Process (AHP) (B-2)
● Simple Multi-Attribute Rating Technique (SMART) (B-3)

These three processes were chosen because they are among the most commonly used decision tools when dealing with incomplete information. The team chose a version of the Pugh Matrix for the MDP evaluation.
Section B.1.1 summarizes the Pugh Matrix. Section B.1.2 discusses the criteria selected for the MDP evaluation. Section B.1.3 provides the details on the application of the Pugh Matrix for the selection of the multimodal noise and emissions model design.

B.1.1. Pugh Matrix

The Pugh Matrix is a method for concept selection using a scoring matrix in which alternatives are scored relative to weighted criteria. It is widely used in the Six Sigma Method because it provides a straightforward means to choose the best alternative with limited information. It uses simple scoring of the relative merits of the alternatives based upon criteria that attempt to take into consideration the needs of the user. It is named after Stuart Pugh, who described the method. (B-1) The Pugh Matrix is also referred to as the "Pugh Concept Selection", "Pugh Method", "Criteria-Based Matrix", and "Datum Design Concept". The Pugh Matrix compares each concept to a reference concept, usually referred to as the Datum. Researchers from Stanford University suggested an attractive modification to the Pugh Matrix that separates the performance evaluation from the cost evaluation to provide a summary value score. (B-4) The modified Pugh Matrix was selected for this evaluation, and an example is presented in Table B-1.

TABLE B-1 The Modified Pugh Matrix

Performance Evaluation Criteria (R)   Weight   Datum A   A1      A2      A3
R1                                    W1       S1        S11     S12     S13
R2                                    W2       S2        S21     S22     S23
R3                                    W3       S3        S31     S32     S33
R4                                    W4       S4        S41     S42     S43
R5                                    W5       S5        S51     S52     S53
Performance Score (P)                          P         P1      P2      P3

Cost Evaluation Criteria              Weight
Cost                                  1        S         S1      S2      S3
Cost Score (C)                                 C         C1      C2      C3
Value Score (V)                                P/C       P1/C1   P2/C2   P3/C3

The MDP evaluation criteria (R) occupy the first column of the matrix. The weight applied to each evaluation criterion (W) is derived from the relative importance of that factor; Section B.1.3 defines the criteria weights for this project. The sum of the weights must equal 1. The rating scheme assigns a criteria score (S) for each alternative (A). Each score is based on a comparative judgment against the Datum, or reference concept. The relative cost is represented by the evaluation criterion C. The literature provided several examples of rating schemes for the criteria score, S, which are used for rating both performance and costs. Table B-2 presents the rating scheme selected for this evaluation.

TABLE B-2 Rating Scheme

Relative Performance                    Rating (S)
Much worse than reference concept       1
Worse than reference concept            2
Same as reference concept               3
Better than reference concept           4
Much better than reference concept      5

In Table B-1, the performance score (P) for each alternative is simply the summation of the individual ratings multiplied by the criterion weights, as shown in Equation B-1:

    P_j = Σ (i = 1 to 6) W_i × S_ij        (Equation B-1)

The most preferred concept is the one that achieves the highest relative value score (V), which is the ratio of the performance score (P) to the cost score (C). The modified Pugh Matrix scoring system is a good fit for decision making on the choice of model design for the MDP. The next decision was on the evaluation criteria (R).

B.1.2. Evaluation Criteria

The evaluation criteria (R) are based on the project requirements, which have been turned into the following attributes:

● Cost effectiveness;

● Technical feasibility (practicality of the design and access to needed resources and expertise);
● Acceptability to the regulating agencies;
● International credibility (i.e., compliance with international technical standards and recommended practices);
● Scalability (i.e., flexible architecture and modular design to support airport-centric up to regional applications);
● Analytical proficiency (capability to support alternatives and mitigation analyses); and
● Responsiveness (flexibility to meet changing demands).

Since the seven evaluation criteria are qualitative, Table B-3 provides more clarity on each to help guide the evaluators.

TABLE B-3 Meaning of the 7 Evaluation Criteria

Agency Acceptance
• Meets or exceeds agency technical requirements for:
  o Metrics
  o Data sources
  o Fidelity (accuracy)
  o Scope of analysis
  o Reliability
  o Verification & Validation
• Meets agency needs in terms of readiness, accessibility, and availability for use in NEPA and other requisite environmental impact assessments.

Technical Feasibility
• "Constructability" -- What is the availability of the resources (data, scientific, financial, expertise, etc.) and how are they used to build the model?
• Practicality -- How easy is the model to maintain and use?
• Robustness -- How well does it handle errors (human and otherwise) and failures (hardware and software)?

Analytical Proficiency
• How capable is the model of supporting alternatives and mitigation analyses?
• How well does the model perform the required applications?
• What is the extent of the model limitations and how well are they managed?
• What are the degrees of uncertainty in model outputs and how well are they understood?

Scalability
• How flexible are the system architecture and design in supporting the wide range of anticipated applications (airport-centric to system-wide)?

Responsiveness
• How adaptable is the design to changing requirements from regulators, users, and stakeholders (source characterizations, new algorithms, input/output formats, etc.)?

International Credibility
• To what extent does the model meet all appropriate international standards and recommended practices?
• Transparency -- To what extent is information on the model and design available to any interested party?

TABLE B-3 Meaning of the 7 Evaluation Criteria (concluded)

Cost Implications
• Development costs -- To what extent does the design require research and development funding beyond the levels of current projects?
• Operating costs -- How complex is the model to use (input requirements, special expertise, etc.)?
• Life cycle costs -- How much effort is required to maintain the model (training, tech support, updates, etc.)?

These evaluation criteria form the basis of the justification for the preferred design and the associated build sequence from the current state of the art to the desired end state. As part of the modified Pugh Matrix, the project team and Panel performed a rigorous assessment of design alternatives to define an optimal roadmap (build sequence and end state), which becomes the blueprint for the development of the MDP.

B.1.3. Pugh Matrix Scoring System for ACRP Project 02-09

The performance criteria (R) are taken from the seven desired attributes described in the previous section. Using the modified Pugh Matrix, the cost effectiveness attribute is represented by the relative value score (V = P/C), and "Cost Implications" has been added as a criterion. The Datum, or reference, is the preferred design that Wyle described in its proposal; this Datum is described in Section B.2 along with the alternative design concepts. The weighting for each criterion was derived from the market research discussed in Appendix A, which assessed the viability and utility of a multimodal environmental model. Through a widely distributed questionnaire, a literature review, and personal interviews, the market research gathered information about customers and the market.
Some of the key positive responses elicited by the questionnaire were:

● "Combining these two functions speak to airport community's dual concerns of aircraft noise and aircraft-related pollutants as a primary irritant to their quality of life."
● "Such a model would allow for improved understanding of tradeoffs between noise and emissions for a potential policy solution not only for a given transport mode, but between modes of transports."
● "The use of a simulation approach would be an enormous improvement for realistic modeling and analysis of time varying impacts."
● "Incremental build makes sense."

Some of the respondents' key concerns were:

● "Cost."
● "Stovepipe culture."
● "Getting acceptance from governmental agencies to adopt the new model over existing ones, and to accept outputs from the model."
● "Maintaining stakeholder support over a long period of time may be difficult."
● "We will end up with a big, data hungry, complicated, expensive model that no one either can or will use because of its complexity."

The respondents were not asked to rank the attributes (criteria), but it is possible to glean from their responses enough information to make an initial judgment on how to weight each attribute. This initial judgment on weighting also draws upon comments received when this project was presented to the TRB ADC40 Committee on Transportation-Related Noise and Vibration (July 23, 2008, in Key West, FL) and to the Federal Interagency Committee on Aviation Noise (FICAN, on November 6, 2008). The criteria weightings are shown in Table B-4, listed in order of importance, along with a short rationale for each weight. Cost is obviously an important consideration, and "Cost Implications" is handled separately in the modified Pugh Matrix.

TABLE B-4 Proposed Multimodal Noise and Emissions Model Design Criteria Weighting

Performance Evaluation Criteria   Weight   Rationale
Agency Acceptance                 0.30     This issue was the one most frequently mentioned by the respondents and during the presentations to TRB ADC40 and FICAN.
Technical Feasibility             0.20     These two items are weighted equally, just below Agency Acceptance, to reflect the desires of future users to be able to study tradeoffs and the concern about creating a tool that is overly complicated.
Analytical Proficiency            0.20
Scalability                       0.15     The ability to do airport-level and regional analyses was mentioned by some respondents, but less often than the items above.
Responsiveness                    0.10     This item was mentioned in respondents' comments but is felt to be less important than the attributes above.
International Credibility         0.05     This issue never came up in responses to the questionnaire or during presentations, but was identified by the ACRP 02-09 Panel.

The performance scoring scheme is adopted from current practice, as shown in Table B-5. The cost implications scoring scheme reverses the performance rating system, such that '1' is the best cost score and '5' is the worst, as shown in Table B-6. Thus, the perfect value score is '5' (V = P/C = 5/1).

TABLE B-5 Proposed Multimodal Noise and Emissions Model Design Performance Scoring Scheme

Relative Performance                          Rating
Much worse than reference concept (Datum)     1
Worse than reference concept (Datum)          2
Same as reference concept (Datum)             3
Better than reference concept (Datum)         4
Much better than reference concept (Datum)    5

TABLE B-6 Proposed Multimodal Noise and Emissions Model Design Cost Scoring Scheme

Relative Cost Implications                    Rating
Much higher than reference concept (Datum)    5
Higher than reference concept (Datum)         4
Same as reference concept (Datum)             3
Lower than reference concept (Datum)          2
Much lower than reference concept (Datum)     1

Putting together the evaluation criteria with the performance and cost scoring schemes, the modified Pugh Matrix for multimodal model design evaluation is shown in Table B-7.

TABLE B-7 Modified Pugh Matrix for Multimodal Noise and Emissions Model Design Evaluation

Performance Evaluation Criteria (R)   Weight (W)   Datum A   A1   A2   A3
Technical Feasibility                 0.20         3
Agency Acceptance                     0.30         3
International Credibility             0.05         3
Scalability                           0.15         3
Analytical Proficiency                0.20         3
Responsiveness                        0.10         3
Performance Score (P)                              3

Cost Evaluation Criteria              Weight
Cost Implications                     1.00         3
Cost Score (C)                                     3
Value Score (V = P/C)                              1.00

The Performance Score (P) is the weighted sum of the evaluation scores, as shown earlier in Equation B-1. The most preferred design is the one that achieves the highest relative value score (V = P/C) greater than or equal to 1.00. It was important to have the broadest perspective practical for the evaluation of model design alternatives. Therefore, each member of the ACRP 02-09 Panel (14 members) and the project team (13 members) was asked to evaluate the alternatives by filling out Table B-7. The value scores were compiled into a sheet like Table B-8.
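As a concrete illustration of the arithmetic in Table B-7, the sketch below applies Equation B-1 (the weighted sum of criterion ratings) and the value score V = P/C. The weights are those given in Table B-4; the alternative's ratings are hypothetical and are not the study's actual scores.

```python
# Criterion weights from Table B-4 (they must sum to 1.0).
WEIGHTS = {
    "Agency Acceptance": 0.30,
    "Technical Feasibility": 0.20,
    "Analytical Proficiency": 0.20,
    "Scalability": 0.15,
    "Responsiveness": 0.10,
    "International Credibility": 0.05,
}

def performance_score(ratings):
    """Equation B-1: P_j = sum over the six criteria of W_i * S_ij."""
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

def value_score(ratings, cost_rating):
    """V = P / C; cost uses the reversed 1-5 scale of Table B-6 (1 = best)."""
    return performance_score(ratings) / cost_rating

# The Datum rates 3 ("same as reference") on every criterion and on cost,
# so its value score is 1.00, matching the Datum column of Table B-7.
datum = {c: 3 for c in WEIGHTS}
print(round(value_score(datum, 3), 2))  # → 1.0

# Hypothetical alternative: rated 4 on two criteria, same cost as the Datum.
alt = dict(datum, **{"Agency Acceptance": 4, "Scalability": 4})
print(round(value_score(alt, 3), 2))  # → 1.15
```

Because the weights sum to 1, a design rated identically to the Datum always yields P = 3 and V = 1.00, which is why Table B-7 pre-fills the Datum column with 3s.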

TABLE B-8 Compiled Value Score Sheet for Multimodal Noise and Emissions Model Designs

Panel/Team Member           Datum A   A1   A2   A3
Evaluator1                  1.00
Evaluator2                  1.00
Evaluator3                  1.00
…                           1.00
EvaluatorXX                 1.00
Median
Interquartile Range (IQR)

Nonparametric statistics, the median and IQR, were used because there was insufficient information to conclude that the members' scores conform to a known probability distribution, such as the normal distribution. The median serves as a surrogate for the average value score of all evaluators, and the IQR can be used to gauge the statistical difference between median values. The design evaluation consisted of two rounds. Members of the project team and the Panel evaluated five designs in the first round. The project team used the results of the first round as an opportunity to construct a better alternative design, based on the compiled scores and comments, that was not considered in the first round. For example, one could take some of the positive attributes of design A1 and combine them with the positive attributes of design A3 to devise a new design Ax. The construction of a new design alternative and the evaluation process for the second round are discussed in Section B.5.

B.2. Model Design Candidates for Round 1

There were five multimodal model design concepts in the first round. The first concept, referred to as the Datum in the first round evaluation, is the design that was in the Wyle proposal: the initial preferred design concept. The remaining four are distinct alternatives for the multimodal noise and emissions model design. Task 2 of the project was to "formulate potential model designs to be considered in the comprehensive Model Development Plan (MDP)." Alternative design concept papers were prepared based on examination of ongoing model development projects and feedback from potential user communities, both of which came out of Task 1. The complete design concept papers for the first round can be found in Appendix F.
The five design concepts are summarized in the following paragraphs.

B.2.1. Current Preferred Design (Datum)

The end state is a source (airplane, automobile, truck, marine vessel, etc.) simulation model with a benefits evaluator to convert noise exposure and air quality changes into environmental costs. The model will simulate the sound propagation and air pollutant emissions for the moving sources. Rather than initiating a single, large-scale effort to design and develop the end state, the design incorporates a build sequence toward the end state in a series of steps, each step providing an improvement to some facet of the overall model. The build sequence is predicated on giving the users and agencies the tool that they need within an expandable system architecture. The model will draw upon ongoing model development projects sponsored by the federal government, such as the FAA's Aviation Environmental Design Tool (AEDT) suite and DoD's Advanced Acoustic Model (AAM).

B.2.2. Build on AEDT (Alternative #1)

The FAA has developed a design tool named the Aviation Environmental Design Tool (AEDT). This tool is actually a suite of programs working together not only to perform environmental impact estimations but also to allow policy decisions to be made in an informed way. This alternative explores continued development of the AEDT into a true multimodal noise and emissions model for all modes of transportation. This alternative would also include the construction of an environmental study clearinghouse where federal agencies could make available the inputs and outputs of past modal studies for assistance in multimodal environmental assessments.

B.2.3. Build on Existing Simulation Models (Alternative #2)

This alternative design proposal outlines an approach to the development of a multimodal noise and emissions model centered on time-based simulation of source movements, source emissions, and propagation scenarios, resulting in detailed output reports at receptor locations. The end state of the model will be functionally the same as that of the other design alternatives resulting in time-based simulation, such as the Datum. This proposal suggests that the multimodal model development plan should be founded on existing single-transportation-mode simulation model implementations. Research and validation reports of outdoor noise and emissions algorithms are abundant both domestically and internationally. Fostering these efforts -- which include studies of both heuristic and simulation approaches -- will result in a model that is more scalable, accurate, and usable than one tethered to legacy approaches and limitations.

B.2.4. Federal Adoption of Commercial Software (Alternative #3)

This concept promotes a market-based option for the development of the multimodal noise and emissions model.
Commercially designed software has been leveraged by engineers and designers of all disciplines to provide an efficient and documentable path to solutions of problems ranging from the simple to the complex. Commercial software is already available to noise and air quality engineers. This document focuses on two such software packages. One is maintained by the German company Braunstein + Berndt GmbH and is named SoundPLAN. The second is CadnaA, the product of another German company, DataKustik. This document does not determine whether CadnaA or SoundPLAN is the better commercially available software package. Rather, the purpose is to introduce the idea that commercially sold models could be adopted, regulated, validated, and provided with developmental assistance by the federal government.

B.2.5. Build on EC IMAGINE Project (Alternative #4)

Drawing on research completed by the European Commission (EC), the fundamental principle of this model design is the separation of the description of the transportation source, in terms of sound energy and exhaust emissions, from the description of transmission to the receiver, in terms of sound propagation and emissions dispersion. In May 2007, the EC completed its major noise modeling project, IMAGINE (Improved Methods for the Assessment of the Generic Impact of Noise in the Environment), which proved that it is technically feasible to build a noise model that can compute noise levels from a variety of sources. The results of the IMAGINE project fit well with simulation modeling concepts such as the DoD Advanced Acoustic Model (AAM). The end state is the same as that of the current preferred design (Datum); however, this end state is geared toward application on large, regional transportation projects where the environmental outcomes for more than one transportation mode are critical elements of the decision making.

B.3. Benefits and Drawbacks

To assist the evaluators in the first round, preliminary assessments of each of the design alternatives were prepared. These assessments are accumulated in the matrix contained in Table B-9. The cells contain short, qualitative statements on the pros and cons relative to the criteria. Pro statements are in green and begin with a plus sign ("+"); con statements are in red and begin with a minus sign ("-").

B.4. Results of Round 1 Evaluations

Twenty members from the Panel and the project team submitted scorebooks in which they evaluated the five design concepts using the modified Pugh Matrix. The winner of Round 1 was the alternative that received the highest median value score (V): Alternative #1 -- Build on AEDT. Table B-10 provides the median ratings and scores for all alternatives. The full set of evaluation results, statistics, and charts can be found in Appendix G. The evaluators had the option to provide comments along with their ratings, and many did. Appendix H contains a compilation of the comments organized by design comparison (e.g., Datum vs. Alternative #1, etc.) and criteria (e.g., Agency Acceptance, Technical Feasibility, etc.). These comments, along with the median scores, were valuable in the construction of a new design alternative for the second round.
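The round 1 tally (a median value score per alternative, with the IQR as a spread measure, and the winner taken as the highest median) can be sketched as follows. The evaluator scores below are invented for illustration and are not the study's actual data.

```python
import statistics

def median_and_iqr(scores):
    """Median and interquartile range of one alternative's value scores."""
    q1, _, q3 = statistics.quantiles(scores, n=4)  # quartile cut points
    return statistics.median(scores), q3 - q1

# Hypothetical value scores (V = P/C) from seven evaluators.
compiled = {
    "Alternative #1": [1.05, 1.10, 1.20, 0.95, 1.15, 1.30, 1.00],
    "Alternative #2": [0.90, 1.00, 0.95, 1.05, 0.85, 1.10, 0.90],
}

medians = {alt: median_and_iqr(scores)[0] for alt, scores in compiled.items()}
winner = max(medians, key=medians.get)  # highest median value score wins
print(winner, medians[winner])
```

Using the median rather than the mean keeps a single outlying evaluator from dragging an alternative's score, which is exactly the robustness argument the appendix makes for nonparametric statistics.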

TABLE B-9 Preliminary Assessment of the Model Design Alternatives

Agency Acceptance

Datum -- Building to Simulation:
+ FICAN supports simulation
+ Agency acceptance test for each build
+ Multimodal capability available with first build (1 year)

Alternative #1 -- Build on AEDT:
+ Expansion of FAA's AEDT
- DoD moving away from integration to simulation modeling
- No multimodal capability for some years

Alternative #2 -- Build on Existing Simulation Models:
+ FICAN supports simulation
+ Agencies will continue to use existing tools
+ Development from the beginning permits desirable options to be included
+ Open source code (understanding)
- Only DoD is currently funding simulation
- No multimodal capability for some years
- Dramatic change in modeling techniques when completed
- Publicly available source code (uncontrolled changes)

Alternative #3 -- Federal Adoption of Commercial Software:
+ Agencies will continue to use existing tools in the near term
+ Agencies determine which modules to use in the commercial product
+ Multimodal product already exists
- Agencies do not have complete control over source code
- Known commercial models do not include US source databases
- Eventual abandonment of current agency software development projects, like AEDT

Alternative #4 -- Build on EC IMAGINE Project:
+ FICAN supports simulation
+ Agencies agree on multimodal assessment requirements
+ Continue to use existing tools until agencies decide to change
- Only DoD is currently funding simulation
- No true multimodal capability for some years
- Use of foreign methodologies
- Air quality models would need to be added and databases harmonized

Technical Feasibility

Datum -- Building to Simulation:
+ DoD has a simulation noise model
+ Prudent (technically & financially) build sequence
- Current lack of source data

Alternative #1 -- Build on AEDT:
+ Based on proven noise and emissions models
+ EPA-preferred AERMOD basis for a single air dispersion model
+ Air quality, noise, and cost analysis already integrated

Alternative #2 -- Build on Existing Simulation Models:
+ Success by others; DoD and FDOT have simulation noise models
+ Concepts proved by EC IMAGINE/Harmonoise projects
+ Much greater flexibility in applications
+ More detail available during model runs
+ Development from the beginning permits desirable options to be included
+ All sources can be simulated
- Current lack of source data
- Simulation impossible for some scenarios (i.e., intersections)
- Lack of development by most U.S. agencies

Alternative #3 -- Federal Adoption of Commercial Software:
+ Multimodal products already exist and are in use
+ Professionally developed and maintained software
+ Highly modular structures able to accept US-approved code
- Current lack of US source data
- Criticality of detailed benchmarking for software approval

Alternative #4 -- Build on EC IMAGINE Project:
+ Concepts proved by EC IMAGINE/Harmonoise projects
+ DoD AAM provides framework
+ Flexibility in noise estimation
- EC projects were noise only
- Current lack of source data
- Very different sound reference levels for many sources, requiring new database development
- More validation is needed

TABLE B-9 Preliminary Assessment of the Model Design Alternatives (continued)

Analytical Proficiency
  Datum:
    + FICAN: simulation is better
    + Screening tools for secondary sources
    + End state can calculate any metric
    - Computationally complex
  Alternative #1:
    + Air quality, noise, and cost analysis already integrated
    - Accuracy and usability remain static
    - Integration model cannot accurately model many noise metrics, such as TA
    - AEDT has a simplistic approach to motor vehicles
  Alternative #2:
    + FICAN: simulation is better
    + Calculate any metric
    + Sophisticated algorithms for accurate predictions
    + Selective use of computational complexity tied to fidelity
    + More detail available during model runs
    - Computationally complex
    - Substantially higher runtimes
    - Tedious input requirements
  Alternative #3:
    + Air quality and noise analysis already integrated
    + Access to various calculation methods
    + Built-in features for mapping and report making
    + Customizable propagation and transmission calculations
  Alternative #4:
    + FICAN: simulation is better
    + Calculate any metric
    + Draws on EC IMAGINE guidance on accuracy tied to application
    - Computationally complex
    - Existing tools and all their faults retained for smaller single-mode projects
    - Very different sound reference levels for many sources, requiring new database development

Scalability
  Datum:
    + Lessons learned from MAGENTA and SAGE to follow scalable roadmap
    - Too complicated for smaller projects
  Alternative #1:
    + Global modeling from FAA's SAGE and MAGENTA basis for regional modeling
  Alternative #2:
    + Simulation modeling based on first principles can be scaled to multiple sources and scenarios
    + More detail available during model runs
    + Development from beginning permits desirable options to be included
  Alternative #3:
    + Already in use on a variety of projects, from EC noise mapping to taxiway noise analysis
  Alternative #4:
    + End state designed for large, regional multimodal projects
    + Existing tools retained for smaller single-mode projects
    + Flexibility in noise estimation
    - Air quality models would need to be added and databases harmonized

Responsiveness
  Datum:
    + Each build gives users what they need
  Alternative #1:
    + AEDT modular design easily adapted to further enhancements
  Alternative #2:
    + Circumvents developmental constraints of legacy approaches
    + All sources can be simulated
    + Open source code (understanding)
  Alternative #3:
    + Highly modular structures; independent updates
    + Integration with other commercial software
    - Updates determined by commercial entities
    - Risk to projects if software developer goes out of business
  Alternative #4:
    + Separation of source from propagation allows for flexibility and adaptability
    - Very different sound reference levels for many sources, requiring new database development

International Credibility
  Datum:
    + Learn from some foreign projects
    + Knowledgeable on international standards
    - No international coordination
  Alternative #1:
    + Noise computation core accepted worldwide
    - No international coordination
  Alternative #2:
    + Success by others; concepts proved by EC IMAGINE/Harmonoise projects
    + Source code non-proprietary, so scientists/engineers can understand implementation
    - No international coordination
  Alternative #3:
    + Commercial products already widely used worldwide
    + Similar to EC approach on noise mapping
  Alternative #4:
    + Incorporation of EC IMAGINE results
    + Collaboration with IMAGINE team members
    + Proceeding toward a more global model development

TABLE B-9 Preliminary Assessment of the Model Design Alternatives (concluded)

Cost Implications
  Datum:
    + Draws from ongoing model development projects
    + Incremental, increased funding tied to priority needs
    - Large, complex, and data-intensive end state
    - Requires specialized expertise to use
  Alternative #1:
    + Small additional costs on top of AEDT funding
  Alternative #2:
    + Draws from ongoing simulation projects
    + Development efficiencies through use of professional software developers
    - Substantial new funding
    - Development time and cost
    - Lack of development by most U.S. agencies
    - No leverage with FAA's AEDT development
    - New paradigm has user implications (training and operation)
    - Large, complex, and data intensive
  Alternative #3:
    + Substantially reduced federal R&D funding, focused on source data generation and benchmarking
    - Prohibitively high license fees for many users
    - New paradigm has user implications (training and operation)
    - No leverage with FAA's AEDT development
  Alternative #4:
    + Draws from ongoing simulation projects
    + Application only required for complex, critical, multimodal projects
    + Existing tools retained for single-mode projects
    - Substantial new funding on top of current levels
    - Large, complex, and data-intensive end state
    - Requires specialized expertise to use
    - Air quality models would need to be added and databases harmonized

TABLE B-10 Median Values from Round 1 Evaluation

Performance Evaluation Criteria (R)   Weight (w)   Datum   #1     #2     #3     #4
Agency Acceptance                     0.30         3.00    2.50   2.00   2.00   2.00
Technical Feasibility                 0.20         3.00    3.50   2.50   3.00   3.00
Analytical Proficiency                0.20         3.00    2.00   3.00   3.00   3.00
Scalability                           0.15         3.00    3.00   4.00   3.50   3.50
International Credibility             0.05         3.00    3.00   3.00   3.00   4.00
Responsiveness                        0.10         3.00    2.50   3.00   2.00   3.00
Performance Score (P)                              3.00    2.78   2.73   2.55   2.63
Cost Score (C)                                     3.00    2.00   3.50   4.00   4.00
Value Score (V = P/C)                              1.00    1.31   0.77   0.63   0.71

Note: Each cell in the Alternative #1 to #4 columns of Table B-10 contains the median value from the applicable compilation tables on criteria, performance, cost, and value scores (Tables G-1 to G-10). For example, the Alternative #1 value score (V) of 1.31 is not the ratio of the P and C values above it (2.78/2.00), but is taken from the Value Score compilation table (Table G-10).

B.5. Round 2 Process

In Round 1, the winner was Alternative #1 – Build on AEDT. Table B-10 shows, however, that Alternative #1 received the highest median rating in only a single performance category, Technical Feasibility. It came in second to the Datum on the performance score and achieved the highest value score only by virtue of having the best (lowest) cost score, so it was not considered a clear-cut winner. Therefore, in the second round, the project team evaluated the winning design from Round 1 against a new design alternative. Section B.6 discusses the process by which the compiled Round 1 scores and comments were used to construct a better alternative design. Section B.7 presents the design recommendation based on the results of the second round evaluation.
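The scoring arithmetic behind Table B-10 can be sketched in a few lines. The weights are those published in the table; the single "scorebook" shown is purely illustrative (it uses the Datum's median ratings, which are all 3.00). This is a sketch of the arithmetic only, not the study's actual evaluation tooling.

```python
# Sketch of the modified Pugh Matrix arithmetic behind Table B-10.
# Weights are taken from Table B-10; the scorebook below uses the
# Datum's median ratings (all 3.00) purely as an illustration.

WEIGHTS = {
    "Agency Acceptance": 0.30,
    "Technical Feasibility": 0.20,
    "Analytical Proficiency": 0.20,
    "Scalability": 0.15,
    "Responsiveness": 0.10,
    "International Credibility": 0.05,
}

def performance_score(ratings):
    """Weighted sum of the six performance-criterion ratings: P = sum(w * R)."""
    return sum(WEIGHTS[c] * r for c, r in ratings.items())

def value_score(ratings, cost_score):
    """Value score for one scorebook: V = P / C."""
    return performance_score(ratings) / cost_score

datum = {c: 3.0 for c in WEIGHTS}          # the Datum's median ratings
print(round(performance_score(datum), 2))  # 3.0
print(round(value_score(datum, 3.0), 2))   # 1.0
```

As the note under Table B-10 explains, the published medians are medians of the per-evaluator P, C, and V values (Tables G-1 to G-10), not scores recomputed from the median ratings, which is why 1.31 is not equal to 2.78/2.00 for Alternative #1.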
Note that certain cells in Table B-10 are shaded to identify the highest value for each criterion and score, except in the case of the cost score, where the lowest value is identified. The new design and Alternative #1 underwent a second round of evaluation, with the new design concept serving as the Datum for the second round. The steps of Round 2 are described below.

STEP 1. Identify Most and Least Desirable Design Attributes

The evaluators' scores are useful for identifying the most desirable attributes (based on the highest median values) and the least desirable attributes (based on the lowest median values) for each criterion, as shown in Figure B-1. In the figure, the green ellipses identify the alternative(s) that received the highest median rating for each of the performance and cost criteria, the same as shown in Table B-10. For example, the Datum received the highest median rating for Agency Acceptance. The red rectangles identify the alternative(s) that received the lowest median rating, below 3, for each of the performance and cost criteria. For example, Alternatives #3 and #4 received the worst cost implications rating. We then associate the flagged alternatives with their positive and negative attributes, as shown in Tables B-11 and B-12.
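The Step 1 flagging rule (green ellipses for the highest median rating; red rectangles for the lowest median rating when it falls below 3) can be expressed directly against the Table B-10 medians. This is an illustrative sketch of the selection logic, not code from the study:

```python
# Step 1 as a sketch: for each criterion, flag the design(s) with the highest
# median rating and those with the lowest median rating below 3.
# Median ratings are transcribed from Table B-10.

MEDIANS = {
    "Agency Acceptance":         {"Datum": 3.0, "#1": 2.5, "#2": 2.0, "#3": 2.0, "#4": 2.0},
    "Technical Feasibility":     {"Datum": 3.0, "#1": 3.5, "#2": 2.5, "#3": 3.0, "#4": 3.0},
    "Analytical Proficiency":    {"Datum": 3.0, "#1": 2.0, "#2": 3.0, "#3": 3.0, "#4": 3.0},
    "Scalability":               {"Datum": 3.0, "#1": 3.0, "#2": 4.0, "#3": 3.5, "#4": 3.5},
    "Responsiveness":            {"Datum": 3.0, "#1": 2.5, "#2": 3.0, "#3": 2.0, "#4": 3.0},
    "International Credibility": {"Datum": 3.0, "#1": 3.0, "#2": 3.0, "#3": 3.0, "#4": 4.0},
}

def most_desirable(ratings):
    """Alternatives tied for the highest median rating (the green ellipses)."""
    top = max(ratings.values())
    return sorted(a for a, r in ratings.items() if r == top)

def least_desirable(ratings):
    """Alternatives tied for the lowest median rating, if below 3 (the red rectangles)."""
    low = min(ratings.values())
    return sorted(a for a, r in ratings.items() if r == low) if low < 3 else []

print(most_desirable(MEDIANS["Responsiveness"]))   # ['#2', '#4', 'Datum']
print(least_desirable(MEDIANS["Responsiveness"]))  # ['#3']
```

The Responsiveness example reproduces Table B-13's observations: the three simulation-based concepts tie for the highest rating, and Alternative #3 receives the lowest.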

Figure B-1. Round 1 performance and cost rating statistics.

TABLE B-11 Pro Statements from the Preliminary Assessments
(An asterisk (*) marks the alternative(s) that received the highest median rating for that criterion in Round 1; these are the shaded cells in the original table. For Cost Implications, the asterisk marks the best, i.e., lowest, median cost score.)

Agency Acceptance
  Datum (*): FICAN supports simulation; Agency acceptance test for each build; Multimodal capability available with first build (1 year)
  Alternative #1: Expansion on FAA's AEDT
  Alternative #2: FICAN supports simulation; Agencies will continue to use existing tools; Development from beginning permits desirable options to be included; Open source code (understanding)
  Alternative #3: Agencies will continue to use existing tools in near term; Agencies determine modules to use in the commercial product; Multimodal product already exists
  Alternative #4: FICAN supports simulation; Agencies agree on multimodal assessment requirements; Continue to use existing tools until agencies decide to change

Technical Feasibility
  Datum: DoD has simulation noise model; Prudent (technically & financially) build sequence
  Alternative #1 (*): Based on proven noise and emissions models; EPA-preferred AERMOD basis for a single air dispersion model; Air quality, noise, and cost analysis already integrated
  Alternative #2: Success by others (DoD and FDOT have simulation noise models); Concepts proved by EC IMAGINE/Harmonoise projects; Much greater flexibility in applications; More detail available during model runs; Development from beginning permits desirable options to be included; All sources can be simulated
  Alternative #3: Multimodal products already exist and are in use; Professionally developed and maintained software; Highly modular structures able to accept US-approved code
  Alternative #4: Concepts proved by EC IMAGINE/Harmonoise projects; DoD AAM provides framework; Flexibility in noise estimation

Analytical Proficiency
  Datum (*): FICAN: simulation is better; Screening tools for secondary sources; End state can calculate any metric
  Alternative #1: Air quality, noise, and cost analysis already integrated
  Alternative #2 (*): FICAN: simulation is better; Calculate any metric; Sophisticated algorithms for accurate predictions; Selective use of computational complexity tied to fidelity; More detail available during model runs
  Alternative #3 (*): Air quality and noise analysis already integrated; Access to various calculation methods; Built-in features for mapping and report making; Customizable propagation and transmission calculations
  Alternative #4 (*): FICAN: simulation is better; Calculate any metric; Draws on EC IMAGINE guidance on accuracy tied to application

Scalability
  Datum: Lessons learned from MAGENTA and SAGE to follow scalable roadmap
  Alternative #1: Global modeling from FAA's SAGE and MAGENTA basis for regional modeling
  Alternative #2 (*): Simulation modeling based on first principles can be scaled to multiple sources and scenarios; More detail available during model runs; Development from beginning permits desirable options to be included
  Alternative #3: Already in use on a variety of projects, from EC noise mapping to taxiway noise analysis
  Alternative #4: End state designed for large, regional multimodal projects; Existing tools retained for smaller single-mode projects; Flexibility in noise estimation

TABLE B-11 Pro Statements from the Preliminary Assessments (concluded)

Responsiveness
  Datum (*): Each build gives users what they need
  Alternative #1: AEDT modular design easily adapted to further enhancements
  Alternative #2 (*): Circumvents developmental constraints of legacy approaches; All sources can be simulated; Open source code (understanding)
  Alternative #3: Highly modular structures with independent updates; Integration with other commercial software
  Alternative #4 (*): Separation of source from propagation allows for flexibility and adaptability

International Credibility
  Datum: Learn from some foreign projects; Knowledgeable on international standards
  Alternative #1: Noise computation core accepted worldwide
  Alternative #2: Success by others (concepts proved by EC IMAGINE/Harmonoise projects); Source code non-proprietary, so scientists/engineers can understand implementation
  Alternative #3: Commercial products already widely used worldwide; Similar to EC approach on noise mapping
  Alternative #4 (*): Incorporation of EC IMAGINE results; Collaboration with IMAGINE team members; Proceeding toward a more global model development

Cost Implications
  Datum: Draws from ongoing model development projects; Incremental, increased funding tied to priority needs
  Alternative #1 (*): Small additional costs on top of AEDT funding
  Alternative #2: Draws from ongoing simulation projects; Development efficiencies through use of professional software developers
  Alternative #3: Substantially reduced federal R&D funding, focused on source data generation and benchmarking
  Alternative #4: Draws from ongoing simulation projects; Application only required for complex, critical, multimodal projects; Existing tools retained for single-mode projects

TABLE B-12 Con Statements from the Preliminary Assessments
(An asterisk (*) marks the alternative(s) that received the lowest median rating, less than 3, for that criterion in Round 1; these are the shaded cells in the original table. For Cost Implications, the asterisk marks the worst, i.e., highest, median cost score.)

Agency Acceptance
  Datum: (none)
  Alternative #1: DoD moving away from integration to simulation modeling; No multimodal capability for some years
  Alternative #2 (*): Only DoD is currently funding simulation; No multimodal capability for some years; Dramatic change in modeling techniques when completed; Publicly available source code (uncontrolled changes)
  Alternative #3 (*): Agencies do not have complete control over source code; Known commercial models do not include US source databases; Eventual abandonment of current agency software development projects, like AEDT
  Alternative #4 (*): Only DoD is currently funding simulation; No true multimodal capability for some years; Use of foreign methodologies; Air quality models would need to be added and databases harmonized

Technical Feasibility
  Datum: Current lack of source data
  Alternative #1: (none)
  Alternative #2 (*): Current lack of source data; Simulation impossible for some scenarios (e.g., intersections); Lack of development by most U.S. agencies
  Alternative #3: Current lack of US source data; Criticality of detailed benchmarking for software approval
  Alternative #4: EC projects were noise only; Current lack of source data; Very different sound reference levels for many sources, requiring new database development; More validation is needed

Analytical Proficiency
  Datum: Computationally complex
  Alternative #1 (*): Accuracy and usability remain static; Integration model cannot accurately model many noise metrics, such as TA; AEDT has a simplistic approach to motor vehicles
  Alternative #2: Computationally complex; Substantially higher runtimes; Tedious input requirements
  Alternative #3: (none)
  Alternative #4: Computationally complex; Existing tools and all their faults retained for smaller single-mode projects; Very different sound reference levels for many sources, requiring new database development

Scalability
  Datum: Too complicated for smaller projects
  Alternative #4: Air quality models would need to be added and databases harmonized

TABLE B-12 Con Statements from the Preliminary Assessments (concluded)

Responsiveness
  Alternative #3 (*): Updates determined by commercial entities; Risk to projects if software developer goes out of business
  Alternative #4: Very different sound reference levels for many sources, requiring new database development

International Credibility
  Datum: No international coordination
  Alternative #1: No international coordination
  Alternative #2: No international coordination

Cost Implications
  Datum: Large, complex, and data-intensive end state; Requires specialized expertise to use
  Alternative #2: Substantial new funding; Development time and cost; Lack of development by most U.S. agencies; No leverage with FAA's AEDT development; New paradigm has user implications (training and operation); Large, complex, and data intensive
  Alternative #3 (*): Prohibitively high license fees for many users; New paradigm has user implications (training and operation); No leverage with FAA's AEDT development
  Alternative #4 (*): Substantial new funding on top of current levels; Large, complex, and data-intensive end state; Requires specialized expertise to use; Air quality models would need to be added and databases harmonized

Table B-11 compiles all the pro statements concerning the various designs. These statements have been taken from the preliminary assessments of designs prepared for the Round 1 evaluations. The cells shaded in green identify the alternative(s) that received the highest median rating for that criterion; for example, the shading of the Datum-Agency Acceptance cell. Similarly, Table B-12 compiles all the con statements concerning the various designs, again from the pre-Round 1 preliminary assessments. The cells shaded in red identify the alternative(s) that received the lowest median rating for that criterion; for example, the shading of both the Alternative #3-Cost Implications and Alternative #4-Cost Implications cells.

STEP 2. Examine Evaluators' Comments

The Round 1 evaluators' comments were examined for insights that might signal positive and negative aspects of the designs. Appendix H contains a compilation of the comments organized by design comparison (e.g., Datum vs. Alternative #1) and criterion (e.g., Agency Acceptance, Technical Feasibility).

STEP 3. Find Better Design Elements

Table B-13 constructs a rationale for better design elements on the basis of the ratings from Step 1, drawing upon both the pro and con statements in the preliminary assessments (Tables B-11 and B-12, respectively) and the related evaluators' comments (Appendix H). The column labeled "Most Desirable Attributes" identifies the alternative(s) that received the highest median rating for each criterion and assembles both the pro statements (from the green-shaded cells of Table B-11) and the comments that suggest the reasons for the rating. Comments describing positive aspects were selected because they are linked to the high rating; negative comments were not, because they are not.

The column labeled "Least Desirable Attributes" identifies the alternative(s) that received the lowest median rating for each criterion and assembles both the con statements (from the red-shaded cells of Table B-12) and the comments that suggest the reasons for the rating. Here, comments describing negative aspects were selected because they are linked to the low rating; positive comments were not.

The far right column of Table B-13 lists the design elements and characteristics that would maximize the most desirable attributes and minimize the least desirable attributes. Note that this selection of elements was done by collective consideration of the six performance criteria and the single cost criterion, rather than criterion by criterion; the latter, piecemeal approach would have produced a design of incongruent parts. The design alternatives from which each design element was taken are identified in brackets where possible.

STEP 4. Draft New Alternative Design Concept Paper

A new alternative design concept paper was drafted, drawing from the Round 1 papers associated with the best design elements identified in Table B-13.

STEP 5. Evaluate New Design vs. Round 1 Winner

In the second round, members of the project team were asked to evaluate the new design concept drafted under Step 4 against Alternative #1 from Round 1. As in Round 1, the evaluation was based on the modified Pugh Matrix. The results of Round 2 are discussed in Section B.6 as part of the justification for the recommendation of a final design, which is described in Section B.7.

TABLE B-13 Design Elements that Maximize the Most Desirable Attributes and Minimize the Least Desirable Attributes

Agency Acceptance

Most desirable attributes: The Datum received the highest rating. The preliminary assessment suggests these positive factors:
• Agency acceptance test for each build
• Multimodal capability available with first build (1 year)
The evaluators' comments related to this rating suggest that:
• Agencies would prefer a step-by-step build sequence to re-evaluate progress versus needs.
• Multimodal capability in the near term is more important than simulation capability in the near term, since many of the advantages of simulation modeling cannot be realized until adequate source data are available.
• Agencies will be hesitant to scrap ongoing projects, but should recognize the superiority of simulation modeling.

Least desirable attributes: Alternatives #2, #3, and #4 received the same lowest rating. The preliminary assessment suggests these negative factors:
• No true multimodal capability for some years
• Use of foreign methodologies
• Lack of control over software
The evaluators' comments related to this rating suggest that:
• Agencies will be hesitant to scrap ongoing projects, but should recognize the superiority of simulation modeling.
• Cannot imagine regulatory agencies agreeing to depend upon a commercial entity.
• Coming from a Federal agency, this option represents a loss of control. This is not a turf issue, but rather a regulatory compliance issue.
• The risk is far too great here should the vendor go out of business, unless licensing could be arranged to include the source code.
• Agencies would be unlikely to rely on the market to dictate performance and costs, particularly costs to the users.
• Agencies would be reluctant to accept a European approach.

Design elements (the table's merged right-hand column): Taking into consideration the most and least desirable attributes across the six performance criteria along with the cost implications, the design elements and characteristics for a better design concept are:
• Progressive build sequence [Datum & Alt#1]
• Screening tools [Datum]
• Multimodal capability right away [Datum]
• Calculate all required metrics [Datum & Alt#1]
• Leverage AEDT development [Datum & Alt#1]
• Simulation end state [Datum, Alt#2, & Alt#4]
• Official guidance from federal agencies on how to do multimodal environmental analysis [Alt#4]
• Learn from EC IMAGINE [Alt#4]
• Minimize user costs [not Alt#3]
• Scalable from regional to single-site projects
• Periodic agency acceptance test [Datum]

Technical Feasibility

Most desirable attributes: Alternative #1 received the highest rating. The preliminary assessment suggests these positive factors:
• Based on proven noise and emissions models
• Air quality, noise, and cost analysis already integrated
The evaluators' comments related to this rating suggest that:
• While the ability to run individual analyses is probably easier in the Datum alternative, the Alternative #1 configuration management of the software, databases, and results, as well as the information management of the input and output data, is significantly improved, making larger and more complex analyses more feasible.
• It is seemingly easier to implement and build upon existing methodologies or those already in development.

Least desirable attributes: Alternative #2 received the lowest rating. The preliminary assessment suggests these negative factors:
• Current lack of source data
• Simulation impossible for some scenarios (e.g., intersections)
• Lack of development by most U.S. agencies
The evaluators' comments related to this rating suggest that:
• I am severely concerned about the impossibility of modeling intersections; FHWA projects very often involve intersections. If this is not a possibility, then this option is much worse than the Datum.
• … the question comes down to sequencing, and due to data availability and other reasons, having the multimodal capability before the simulation capability makes more sense.

TABLE B-13 Design Elements that Maximize the Most Desirable Attributes and Minimize the Least Desirable Attributes (continued)

Responsiveness

Most desirable attributes: The three simulation-based concepts (Datum, Alternative #2, and Alternative #4) received the same highest rating. The preliminary assessments do not provide a clear indication of other positive factors in common. The evaluators' comments related to this rating suggest that:
• With a multimodal capability being the focus, the Datum gets us there more rapidly [than Alternative #1].
• By virtue of the build sequence, the Datum could be considered the most responsive. Also, AEDT may not allow for new noise metrics.
• The Datum approach facilitates responsiveness due to its 'loosely-coupled', highly modular approach.
• The [Datum and Alternative #2] end state in terms of scalability is very similar if not identical.
• With modularity and first-principles approaches being roughly equivalent between the Datum and Alt#4, there does not seem to be any differentiator in Responsiveness.

Least desirable attributes: Alternative #3 received the lowest rating. The preliminary assessment suggests these negative factors:
• Updates determined by commercial entities
• Risk to projects if software developer goes out of business
The evaluators' comments related to this rating suggest that:
• Mainly because so much is out of the control of US agencies.
• Commercial developers could give up any time they wished. …
• While the alternative appears to be more responsive technically, there are substantial risks that certain regulatory requirements would not be fulfilled if left to the determination of a commercial entity.
• The risk is far too great here should the vendor go out of business, unless licensing could be arranged to include the source code.

International Credibility

Most desirable attributes: Alternative #4 received the highest rating. The preliminary assessment suggests these positive factors:
• Incorporation of EC IMAGINE results
• Collaboration with IMAGINE team members
• Proceeding toward a more global model development
The evaluators' comments related to this rating uniformly emphasized collaboration with EC IMAGINE as a major plus. However, one of these evaluators suggested that there will always be a question about U.S. domain knowledge working off of a European design, and that the U.S. would not be in a position of global leadership with this approach.

Least desirable attributes: The other four alternatives were equally rated at 3.

TABLE B-13 Design Elements that Maximize the Most Desirable Attributes and Minimize the Least Desirable Attributes (concluded)

Cost Implications

Most desirable attributes: Alternative #1 received the best rating (lowest cost implications). The preliminary assessment suggests that this alternative would add only small costs on top of current funding for AEDT development. The evaluators' comments related to this rating suggest that:
• Initial Alt#1 implementation costs are higher, but maintenance and life cycle costs are lower.
• It [Alternative #1] will continue and expand an ongoing project. As such, past investments will continue to see returns.
• Cost comparison depends on how far along the build sequence the Datum would go. Alt#1 would be less expensive to achieve the same end state.
• Alternative #1 would probably have lower development costs over the entire cycle, but higher upfront costs and higher risks (due to tightly-coupled integration). User costs are probably greater for Alternative #1 in the short term (large, single model). As well, maintenance costs for Alternative #1 are probably higher (any advances in sub-models would have to be incorporated into the large model).

Least desirable attributes: Alternatives #3 and #4 received the same worst rating (highest cost implications). The preliminary assessment suggests these negative factors:
• Prohibitively high license fees for many users
• No leverage with FAA's AEDT development
• Substantial new funding on top of current levels
• Air quality models would need to be added and databases harmonized
The evaluators' comments related to this rating suggest that:
Alternative #3
• All government agencies are used to having zero license fees for current models. I seriously doubt that possibility for this alternative.
• Savings on future federal R&D funds are outweighed by high user fees and wasted expenditures on existing projects, such as AEDT.
• It will be much more expensive for the user. State DOTs would have to purchase their own licenses, as would every consultant that does this work for a state DOT. These costs could adversely hinder small, minority-, or women-led businesses.
• Substantial individual seat costs associated with these tools render this a non-viable option.
• Both the recurring costs to the user base and the maintenance of accessibility to results inventories will be costly for both the agencies and commercial vendors.
• Lower costs for DOT, but potentially high costs for users.
• Smaller development cost (to the government agencies) offset by higher user costs (purchase of licenses).
Alternative #4
• FHWA would still need to fund TNM to keep it operational.
• Working with foreign developers would be difficult and time consuming.

B.6. Results of the Round 2 Evaluation

In the second round, the members of the project team evaluated the winner of Round 1 (Alternative #1 – Build on AEDT) against the new design constructed from the scores and comments as described in the previous section. The new design concept became the Datum for the second round. The new concept is an amalgamation of good ideas from the alternative designs evaluated in the first round, including the alternative against which it was judged in Round 2. Like the Round 1 Datum, it is a road map. Because of the importance of the ongoing investment, the concept builds on AEDT development to bring in other modes. The design leads to a simulation model, but recognizes that it is not politically or economically practical to go straight to simulation (as espoused in Alternative #2 from Round 1), and instead takes progressive steps toward a simulation end state.

Table B-14 contains the median performance criteria ratings, median performance and cost scores, and median value scores as computed from the nine scorebooks. The shaded cells identify the highest median value for each criterion and score, except in the case of the cost score, where the lowest value is identified.

TABLE B-14 Round 2 Evaluation Scores (Round 2 Median Values)

Performance Evaluation Criteria   Weighting   Datum   Alternative #1
Agency Acceptance                 0.30        3.00    2.00
Technical Feasibility             0.20        3.00    3.00
Analytical Proficiency            0.20        3.00    2.00
Scalability                       0.15        3.00    2.00
Responsiveness                    0.10        3.00    2.00
International Credibility         0.05        3.00    2.00
Performance Score (P)                         3.00    2.35
Cost Score (C)                                3.00    2.00
Value Score (V = P/C)                         1.00    0.93

The Datum is superior to Alternative #1 on five of the six performance criteria (and achieved the same median rating on the remaining criterion, Technical Feasibility) and received the higher value score. Appendix H contains a compilation of the Round 2 evaluators' comments organized by criterion (e.g., Agency Acceptance, Technical Feasibility).
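The value scores in Table B-14 follow the same convention as Round 1: V is computed per scorebook and the median of those per-evaluator values is reported, so the median V (0.93 for Alternative #1) need not equal the ratio of the median P and C (2.35/2.00). A toy demonstration, with per-evaluator scores invented purely for illustration:

```python
from statistics import median

# Median-of-ratios vs. ratio-of-medians: value scores are computed per
# scorebook and the median of those values is reported. The three
# "scorebooks" below are invented purely for illustration.
P = [2.2, 2.35, 2.6]   # per-evaluator performance scores
C = [1.5, 2.0, 2.0]    # per-evaluator cost scores

V = [p / c for p, c in zip(P, C)]  # per-evaluator value scores
print(median(V))                   # median of the ratios -> 1.3
print(median(P) / median(C))       # ratio of the medians -> 1.175
```

The two numbers differ, which is exactly the effect the note under Table B-10 describes.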
The comments show that many of the evaluators recognized the incorporation of good design ideas from the first round. Constructed from the first-round evaluation and confirmed by the second-round evaluation, the recommended design concept is presented in the next section.

B.7. Recommended Model Design

B.7.1. Progressive Build in Sync with Agencies' and Users' Requirements

As the title indicates, the premise of this design concept is to build what the user community and agencies need when they need it. The principal characteristics of the design are:
● Provide a multimodal capability right away with combined output and screening tools;
● Leverage ongoing federal agency model development efforts, such as FAA's AEDT;
● Coordinate with regulatory agencies to ensure compatibility with official guidance on how to do multimodal environmental analysis;
● Look to the future with the stretch goal of a simulation end state;

● Adapt from other research, such as the EC's IMAGINE project; and
● Learn from applications and users' experiences.

Rather than initiating a single, large-scale effort to design and develop the end state, the design incorporates a build sequence toward the end state in a series of steps, each step providing an improvement to some facet of the overall model. The build sequence is predicated on giving the users and agencies the tools they need within an expandable system architecture. The model will draw upon ongoing model development projects sponsored by the federal government, such as FAA's Aviation Environmental Design Tool (AEDT) suite and DoD's Advanced Acoustic Model (AAM). AEDT is a suite of programs working together not only to perform environmental impact estimations, but also to allow policy decisions to be made in an informed way. This alternative explores continued development of AEDT into a true multimodal noise and emissions model for all modes of transportation.

The stretch goal is the eventual development (end state) of a source (airplane, automobile, truck, marine vessel, etc.) simulation model with a benefits evaluator to convert noise exposure and air quality changes into environmental costs. The model will simulate the sound propagation and air pollutant emissions of the moving sources. In May 2007, the European Commission (EC) completed its major noise modeling project, IMAGINE (Improved Methods for the Assessment of the Generic Impact of Noise in the Environment), which proved that it is technically feasible to build a noise model that can compute noise levels from a variety of sources. The results of the IMAGINE project fit well with the simulation modeling concept.

B.7.2.
Functional Specifications

The progressive build sequence of this concept is guided by the principle: "Think big; start small; act now." The first two builds are intended to provide a low-cost multimodal environmental analysis capability based on the existing tools required by the various federal agencies for single-mode analyses. Builds #3 and #4 leverage the ongoing AEDT development effort. The AEDT tool suite has various modules, including an economics model and a cost-benefit module in addition to the environmental impacts estimation module. The environmental impacts estimation module is based on four nationally and internationally proven noise and air quality models: INM, MAGENTA, EDMS, and SAGE. The noise models, based on the integrated approach, allow for a wide range of outputs, including a range of metrics for A-weighted, C-weighted, and tone-corrected perceived levels, and an approximation for time-above outputs. For air quality, all criteria pollutants plus carbon dioxide and speciated hydrocarbon outputs are available.

The noise computation modules in AEDT are very detailed, including spherical spreading, atmospheric absorption, terrain shielding, lateral attenuation, and ground effects in the noise propagation calculations. This would provide a solid platform for modeling the noise from other modes of transportation. The database would have to be expanded to include reference emission levels for the other modes. For air quality, a detailed emission inventory process is included for aircraft-related sources, motor vehicles, and some stationary sources, with local dispersion based on the accepted air quality dispersion model, AERMOD. AERMOD has been used for many source types and is now being considered by FHWA for motor vehicles, which would allow a single air dispersion model to be used, although greater detail would be required. As determined by need and affordability, the remaining builds (Builds #5 and beyond) are to create the simulation end state.
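The propagation mechanisms named above can be illustrated with a simplified single-band, point-source calculation. This is a sketch only: the function name, the fixed absorption coefficient, and the lumped ground term are illustrative assumptions, not AEDT's actual implementation.

```python
import math

def received_level(l_ref, r, r_ref=1.0, alpha=0.005, a_ground=0.0):
    """Simplified single-band received sound level in dB.

    l_ref    : reference emission level at distance r_ref (dB)
    r        : source-receiver distance (m)
    alpha    : atmospheric absorption coefficient (dB/m); band- and
               weather-dependent in a real model, held fixed here
    a_ground : lumped ground/terrain/shielding attenuation (dB)
    """
    spreading = 20.0 * math.log10(r / r_ref)  # spherical spreading
    absorption = alpha * r                    # atmospheric absorption
    return l_ref - spreading - absorption - a_ground
```

Doubling the distance adds 20·log10(2) ≈ 6 dB of spreading loss; a full model would evaluate absorption per one-third octave band and compute lateral attenuation and terrain shielding from geometry and meteorology rather than a fixed term.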
The ultimate requirement for noise would consist of a time-history of the one-third octave band spectrum produced by each source operation. When combined with the numbers of operations of the different sources, the simulation model would use common algorithms to:
● Calculate any noise metric for any transportation source;
● Propagate sound over any terrain, surface, or barrier, account for structural effects (e.g., urban canyon reverberation), and handle any meteorological condition;
● Compute that propagation with a precision proportional to the effort spent on terrain and meteorological input (which will vary by type of project);
● Include complete and validated transportation source databases;
● Integrate background noise estimation;
● Offer a level of accuracy that meets or exceeds any regulatory requirement; and
● Provide second-by-second noise levels.

For emissions, the simulation model would:
● Predict fuel consumption as a basis for estimating energy usage (taking into account the different fuel types);
● Provide emissions of both criteria pollutants and greenhouse gases (GHG);
● Predict emissions by specific operating mode (e.g., acceleration, takeoff, etc.) and equipment type (e.g., light-duty vehicles, Boeing 737-200, etc.); and
● Provide second-by-second emissions.

For air quality, the simulation model would:
● Generate second-by-second atmospheric concentrations;
● Model both transport and chemical transformations for characterized pollutants, including Hazardous Air Pollutants (HAPs) and particulate matter (PM); and
● Take into account structural effects such as building wakes, urban canyons, tunnels, etc.

B.7.3. Justification

Clearly, the development of a multimodal environmental model would require a major expenditure of funds and would take many years to complete.
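The "any noise metric" requirement in the functional specifications reduces to energy arithmetic on the second-by-second levels. A minimal sketch for two common metrics (equivalent level and sound exposure level), with illustrative function names:

```python
import math

def leq(levels_db):
    """Equivalent continuous level (Leq) from 1-second sound levels, dB.

    Levels are converted to energy, averaged, and converted back.
    """
    mean_energy = sum(10.0 ** (l / 10.0) for l in levels_db) / len(levels_db)
    return 10.0 * math.log10(mean_energy)

def sel(levels_db):
    """Sound exposure level (SEL): total energy normalized to 1 second."""
    total_energy = sum(10.0 ** (l / 10.0) for l in levels_db)
    return 10.0 * math.log10(total_energy)
```

With second-by-second spectra available, the same energy summation extends to any standard or supplemental metric, which is the point of the simulation end state.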
Rather than initiating a single, large-scale effort to design and develop the end state, a more realistic approach, consistent with feasible funding streams and practical stakeholder needs, is to approach the end state in a series of steps, each step providing an improvement to some facet of the overall model. The architecture of the model must be sufficiently flexible to allow for a scalable road map toward the future end state.

FAA has expended extensive resources to build AEDT. This initial expenditure has built a strong base for the aviation sources, which would significantly reduce cost in comparison with other options, since implementation time would be greatly reduced and only an expansion of the model would be required. The model has also overcome a large hurdle in that air quality, noise, and economic considerations have already been integrated into a single model. While database sharing would need to be improved, and the databases expanded for modes of transportation other than aviation, AEDT would provide a platform for the inclusion of the other modes. Additionally, the models used in AEDT have been promulgated and accepted by EPA for air quality, and the noise model is accepted internationally. Implementation for the other modes of transportation could likewise use the accepted modeling processes, again reducing the time requirements, since these models have already been accepted by other agencies. The modular design would allow for easy inclusion of other models, so that other modes of transportation could be included without extensive redesign of the basic model platform. Rail, water, and highway would have to be added as modes, but again, this could be done within the modular design. The database design would allow other reference levels for noise, and emission levels for air quality, to be included in the same way, again without extensive changes to the system architecture. The advantages are considerable and include:

● Use of established models, so that neither new development nor a long validation process is needed;
● A database design that allows modes of transportation to be included in a consistent way, so that model components can be reused;
● Inputs can be shared, so that repeated data entry is not required;
● The local and global modeling will work together;
● Algorithms can be reused, leading to a more streamlined design;
● Global and regional modeling could be done by expanding the model for other modes;
● Future expansion is enhanced by the modularity of the system; and
● The model is easily adapted to other cases, allowing mitigation measures and future enhancements to be analyzed using the same model structure, reducing the cost and time of implementation.

Integrated models such as AEDT have computational limitations for dynamic processes. The Federal Interagency Committee on Aviation Noise (FICAN) has already recognized simulation noise models as having the most potential for accuracy and precision in situations requiring sophisticated analysis.
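The modular design described above, a common platform into which per-mode models plug without redesign of the basic platform, might be sketched as a shared interface. All class and method names here are hypothetical, not AEDT's actual architecture:

```python
from abc import ABC, abstractmethod

class ModeModel(ABC):
    """Hypothetical plug-in interface implemented by each transportation
    mode (aviation, highway, rail, water)."""

    @abstractmethod
    def noise_level(self, source, receiver):
        """Sound level (dB) at the receiver for one source operation."""

    @abstractmethod
    def emissions(self, source):
        """Pollutant emissions for one source operation, pollutant -> grams."""

class MultimodalModel:
    """Platform that registers mode modules and aggregates their results."""

    def __init__(self):
        self._modes = {}

    def register(self, name, model):
        # New modes are added here without changing the platform itself.
        self._modes[name] = model

    def total_emissions(self, sources_by_mode):
        """Sum emissions across every registered mode's sources."""
        totals = {}
        for mode, sources in sources_by_mode.items():
            for src in sources:
                for pollutant, grams in self._modes[mode].emissions(src).items():
                    totals[pollutant] = totals.get(pollutant, 0.0) + grams
        return totals
```

The platform never inspects a mode's internals; it only calls the shared interface, which is what lets reference databases and algorithms be extended per mode without changes to the system architecture.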
Examples of the adoption of noise simulation include the National Park Service's adoption of NMSim (Noise Model Simulation), the development of AAM (Advanced Acoustic Model) through SERDP (the U.S. Department of Defense's Strategic Environmental Research and Development Program), and the adoption of RNM (Rotorcraft Noise Model) by NASA and NATO as the de facto standard for helicopter and tilt-rotor outdoor noise propagation. The international credibility of this approach is bolstered by the fact that the European Commission has undertaken a multinational research and development effort, the IMAGINE project, resulting in algorithms and technical guidance for a harmonized ground and air noise source and propagation methodology. The justification for incorporating these air quality and noise models into a simulation model capable of handling multiple modes of transportation lies in the fact that simulation modeling has already been proven to be more accurate and will provide a step forward in environmental modeling for analysts of all agencies. Considerable advantages include:

● The continued use of the most sophisticated algorithms to most accurately predict results at points of interest.
● Building on current simulation models will circumvent developmental constraints imposed by lower-fidelity legacy approaches.
● Ray-tracing algorithms for noise can be applied to any source regardless of transportation mode.

● Proper inclusion of meteorological effects, terrain, and other heterogeneous scenarios.
● Sufficient detail in the output will provide a thorough understanding of any scenario.
● Inputs represent sources more closely from first principles rather than from required assumptions or calculated metrics in a static case (as with AEDT).
● Knowledge and validation from existing simulation models will streamline development.
● Updates to propagation and dispersion algorithms can be made independently of source definitions.
● Sufficient detail in the output will allow any standard or supplemental metric to be calculated.
● Existing tools that model source movements may be used, and tracks may be translated into time-varying spatial and conditional source trajectories.
● The main drivers for noise and emissions, such as acceleration and power setting, can be directly listed or inferred from a sufficiently detailed trajectory file.
● Potential exists for a harmonized source definition file to contain noise and air quality data together, as well as rules for their interpolation and extrapolation.
● The ability to define multiple emission components emanating from a single source for a single mode (such as separate definitions for the main and tail rotors of a helicopter).

TRB’s Airport Cooperative Research Program (ACRP) Web-Only Document 11: A Comprehensive Development Plan for a Multimodal Noise and Emissions Model explores development of a tool that would allow for the assessment of the noise and air quality impacts on the population from multiple transportation sources, assess the total costs and impacts, and assist in the design and implementation of mitigation strategies. The availability of a multimodal noise and emissions model could help inform airport and policymakers charged with evaluating and making decisions on expanding transportation facilities.
