Modeling and Simulation in Manufacturing and Defense Acquisition: Pathways to Success (2002)

5 Modeling and Simulation Research and Development Topics

Recent advances in modeling and simulation (M&S) technologies make them increasingly appealing as a means of improving commercial manufacturing and defense acquisition. However, for these M&S technologies to support the desired applications in commercial manufacturing and defense acquisition, additional research and development (R&D) is needed. In its statement of task, the committee was asked to investigate emerging M&S technologies, assess ongoing efforts to develop them, and identify gaps that would have to be filled to make these emerging technologies a reality. The committee rephrased this task and sought to determine those M&S topics requiring R&D in order for M&S to be used effectively in commercial manufacturing and defense acquisition.

The committee identified the topics requiring R&D on the basis of the expertise of its members and information obtained from expert briefings. In addition, the committee surveyed literature calling for M&S R&D. The committee grouped these topics into four broad categories: (1) modeling methods, (2) model integration, (3) model correctness, and (4) standards, which are discussed in the sections that follow.

MODELING METHODS

Lack of adequate modeling methods is one of the most serious shortfalls in the use of M&S (MORS, 2000). To maximize the potential of M&S technologies for commercial manufacturing and defense acquisition, basic research must be undertaken to improve understanding of modeling methods and characteristics, including scalability, multiresolution modeling, agent-based modeling, semantic consistency, modeling complexity, fundamental limits of modeling and computation, and uncertainty.

Scalability

Scalability is the attribute of a system's architecture that pertains to the behavior and performance of the system as the size, complexity, and interdependence of its elements or applications increase. Difficulties in dealing with large-scale software systems are well documented (NRC, 2000). Techniques that work for small systems often fail markedly when the scale is increased significantly. To be upwardly scalable, a system must assure consistency in both the functionality and the quality of the services it provides as the number of its users increases indefinitely. To scale by a million, an application's storage and processing capacity would have to be able to grow by a factor of 1 million just by adding more resources (NRC, 2000). This implies that as a system expands or as performance demands increase, the underlying architecture must support the ability to reimplement the same functionality with more powerful or capable infrastructure, for example, replacing a single server with a high-performance server farm.

Traditional modeling and simulation have focused on microlevel components rather than on macrolevel integration of these components. However, with the advent of large-scale systems such as extended enterprises and distributed mission training, it is necessary to develop approaches for designing scalable M&S system architectures, including process specifications, linguistic support, granularity, and levels of abstraction to support system architecture design. This effort includes modularization, interconnectivity, and integration platforms, as well as the standardization of application programs, automatic installation of modules, and verification. Metrics for such designs include robustness, reliability, flexibility, and the ability of the system to adapt dynamically to changing conditions. Several levels of architectural scalability are illustrated in Figure 5-1.

[Figure 5-1. Levels of architectural scalability. Recoverable panel labels include extended enterprise interconnectivity, node capacity, and data management at the enterprise architecture and modeling and simulation levels, and the application layer, real-time event channel, and node capability at the IT infrastructure level.]

Current and foreseeable trends are to employ object-oriented technology to enable scalability attributes. In object-oriented terms, the scalability problem can be stated as designing a system with the appropriate interface definitions that allow the implementations behind the interfaces to be upgraded from single objects to multiple coordinated objects or to objects of more capable classes. Abstraction, modularity, and layering are the basis of such interface design concepts (Messerschmitt, 2000). Scalability designs must live within the communications bandwidth and computing power available from the underlying computing and network technologies. A practical approach to scalability also requires consideration of interoperability in order to address the problems of data heterogeneity that are due to a lack of accepted standards and the current multiplicity of approaches (IMTI, 2000). A sketch of this interface-based view of scalability follows.
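
The following minimal Python sketch illustrates the idea of upgrading the implementation behind a fixed interface, from a single object to multiple coordinated objects, without changing client code. All names here (SimulationService, LoadBalancedService, and so on) are hypothetical illustrations, not part of any cited system.

    from abc import ABC, abstractmethod

    class SimulationService(ABC):
        """Interface that clients depend on; implementations may scale up behind it."""
        @abstractmethod
        def run(self, scenario_id: int) -> str: ...

    class SingleNodeService(SimulationService):
        def run(self, scenario_id: int) -> str:
            return f"scenario {scenario_id} executed on one node"

    class LoadBalancedService(SimulationService):
        """Same interface, but work is spread over a coordinated pool of workers."""
        def __init__(self, workers: list[SimulationService]):
            self.workers = workers
        def run(self, scenario_id: int) -> str:
            worker = self.workers[scenario_id % len(self.workers)]  # round-robin dispatch
            return worker.run(scenario_id)

    # Client code is identical for both implementations:
    def execute_all(service: SimulationService, n: int) -> None:
        for sid in range(n):
            print(service.run(sid))

    execute_all(SingleNodeService(), 3)
    execute_all(LoadBalancedService([SingleNodeService() for _ in range(4)]), 3)

The design point is that clients bind to the interface, so the single-node implementation can later be replaced by a server farm without any change on the client side.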

Multiresolution Modeling

"Multiresolution modeling" and "multiresolution simulation" are defined as the representation of real-world systems at more than one level of resolution in a model or simulation, respectively, with the level of resolution dynamically variable to meet the needs of the situation. R&D into multiresolution modeling has been recommended (NRC, 1997). It is considered especially important for SBA because acquisition programs will need to move up and down the resolution hierarchy and use the proper level of models and simulations to support iterative trade-off analyses (Ewen et al., 2000). In addition, multiresolution simulation has the potential to improve the scalability and flexibility of simulation applications. A related concept is "multiviewpoint simulation." In this case, simulation takes place at a single level of resolution, but the execution events and results are presented at different levels of resolution, or viewpoints, as appropriate to the needs of the user.

Significant unresolved issues in implementing multiresolution models, however, account for the need for research in this area. A number of multiresolution simulations have been implemented (Stober et al., 1995; Franceschini and Mukherjee, 1999), but that work has approached the problem largely from an experimental and practical point of view. As yet, no complete and coherent theoretical framework exists for multiresolution models, although some work leading toward such a framework has been completed (Davis, 1993; Franceschini and Mukherjee, 1999). Some problematic issues arise in multiresolution models, including maintaining consistency between levels of resolution when aggregation and disaggregation operations occur (Davis, 1993; Franceschini and Mukherjee, 1999), dealing with "chain" or "spreading" disaggregation (Petty, 1995), allowing interactions between objects at different levels of resolution, and preserving consistency during reengagements. Some work has been done on each of these issues, but more is required. In addition, multiresolution modeling affects the architecture of the simulations that use it by requiring the ability to change object and event resolution dynamically during run time; those architectural issues are also the subject of ongoing work. One architectural approach that may resolve some of the problematic modeling issues just listed is to develop families of models, rather than single models, at various levels of abstraction (resolution) (Davis, 1995; NRC, 1997; Davis and Bigelow, 1998). Distributed simulation systems are being developed to support interoperation of such model families (Davis, 2001). The aggregation-disaggregation consistency problem is sketched below.
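
As a minimal sketch of the consistency issue, consider aggregating several entities into one aggregate unit and later disaggregating it. A conserved quantity (here, strength) must survive the round trip; the entity names and attribute choices are hypothetical.

    def aggregate(entities: list[dict]) -> dict:
        """Collapse high-resolution entities into one low-resolution unit."""
        return {
            "strength": sum(e["strength"] for e in entities),
            "count": len(entities),
        }

    def disaggregate(unit: dict) -> list[dict]:
        """Recover high-resolution entities; detail below the aggregate level is invented."""
        share = unit["strength"] / unit["count"]
        return [{"strength": share} for _ in range(unit["count"])]

    platoon = [{"strength": 10.0}, {"strength": 6.0}, {"strength": 8.0}]
    unit = aggregate(platoon)
    restored = disaggregate(unit)

    # Total strength is conserved, but the individual distribution is not:
    assert abs(sum(e["strength"] for e in restored) - 24.0) < 1e-9
    print(platoon)   # [{'strength': 10.0}, {'strength': 6.0}, {'strength': 8.0}]
    print(restored)  # [{'strength': 8.0}, {'strength': 8.0}, {'strength': 8.0}]

The information lost in the aggregate-then-disaggregate round trip is exactly what makes repeated or "spreading" disaggregation problematic.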

Agent-Based Modeling

Agent-based modeling is a modeling method based on the simulation of what are called low-level entities, such as individual people or aircraft, that have simple behaviors but that can produce complex and unexpectedly realistic collective, or emergent, behavior (Epstein and Axtell, 1996). As discussed earlier, such modeling methods are an important area of research for supporting realistic simulation of complex systems-of-systems (NRC, 1997; Ewen et al., 2000). A sampling of the open research issues in agent-based modeling includes achieving satisfactory run-time performance when simulating large numbers of agents, determining an adequate level of fidelity for individual agents' behavior, validating agent-based models (Balmann, 2000; Axtell and Epstein, 1994), and avoiding ad hoc assumptions during model development (Cederman, 1997). A toy example of emergent collective behavior is sketched below.
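
The following toy Python sketch, written for illustration only, shows the flavor of agent-based modeling: each agent follows one simple local rule (move a step toward the average position of its nearest neighbors), yet the population as a whole exhibits an emergent outcome (clustering) that no agent was programmed to produce.

    import random

    random.seed(1)
    positions = [random.uniform(0.0, 100.0) for _ in range(30)]  # 30 agents on a line

    def step(positions: list[float]) -> list[float]:
        new_positions = []
        for x in positions:
            # Local rule: look only at the 4 nearest neighbors, not the whole population.
            neighbors = sorted(positions, key=lambda y: abs(y - x))[1:5]
            target = sum(neighbors) / len(neighbors)
            new_positions.append(x + 0.5 * (target - x))  # move halfway toward them
        return new_positions

    for _ in range(50):
        positions = step(positions)

    # Emergent behavior: agents collapse into a small number of tight clusters.
    print(sorted(round(x, 1) for x in positions))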

Semantic Consistency

Semantic consistency, also known as substantive interoperability, refers to consistent phenomenological representations of real-world systems and processes among interacting distributed simulations. For example, two combat simulations must have consistent models of intervisibility or they will be unable to interoperate meaningfully in a distributed simulation (Dahmann et al., 1998). Research into semantic consistency and a general mathematical language for expressing models are recommended (NRC, 1997).

Dealing with Complexity and Errors

Abstraction is the process of extracting a relatively sparse set of entities and relationships from a complex reality to produce a valid simplification of that reality. Abstraction is a general process; it includes simplification approaches such as aggregation, omission of variables and interactions, linearization, replacing stochastic processes by deterministic ones (and conversely), and changing the formalism in which models are expressed (Zeigler et al., 2000). The complexity of a model is measured in terms of the time and space required to execute it as a simulation. The more detail included in a model, the greater the resources required of the development team to build it and to execute it as a simulation once it is built. Validity is preserved through appropriate morphism mappings at desired levels of specification. Thus, abstraction methods, such as aggregation, are framed in terms of their ability to reduce the complexity of a model while retaining its validity relative to the given modeling objectives.

Inevitable resource constraints require working with models at various levels of abstraction. As noted above, the complexity of a model depends on the level of detail, which in turn depends on the size/resolution product. The size/resolution product reflects the fact that increasing the size, or number of components, and the resolution, or number of states per component, leads to increasing complexity (Zeigler et al., 2000). Since complexity depends on the size/resolution product, complexity can be reduced by reducing the size of the model, its resolution, or both. Given fixed resources and a model complexity that exceeds those resources, a trade-off must be made between size and resolution. If some aspects of a system are represented very accurately, only a few components will be representable. Alternatively, a comprehensive view of the entire system can be provided, but only at a low resolution. This trade-off can be written compactly as shown below.
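
The size/resolution trade-off can be stated as a simple inequality. This is an illustrative formalization consistent with the discussion above, not notation from the cited sources: with S the number of components, R the number of states (or level of detail) per component, and B the available resource budget,

    C \;\propto\; S \times R, \qquad S \times R \;\le\; B,

so that doubling the resolution R while keeping the complexity C within the budget forces the representable size S to be halved.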

Several new approaches to modeling complexity are being developed. One of them is the notion of coordinated families of simulations at different levels of resolution, which was mentioned previously. This approach presupposes the existence of effective ways to develop and correlate the underlying abstractions. A second approach, exploratory analysis, attempts to overcome computational complexity by addressing the issue of optimization, or searching through large spaces of alternatives for the best solutions to a problem (Davis and Hillestad, 2000). This approach uses low-resolution models with a wide scope intended to capture the main features of an overall system or scenario. The approach seeks to exploit the reduction in the large space of alternatives that low-resolution, or highly abstracted, model structures may provide.

A third approach fundamentally reconsiders the issue of optimization as a search for the best among many alternatives. The fast, frugal, and accurate (FFA) perspective on real-world intelligence provides a framework for insight into this issue (Gigerenzer and Todd, 1999; Gigerenzer and Goldstein, 2000). FFA is taken from the domain of human decision making, in which full optimization is associated with unbounded rationality. This perspective recognizes that the real world is a threatening environment in which knowledge is limited, computational resources are bounded, and little time is available for sophisticated reasoning. Simple building blocks that steer attention to informative cues, terminate search processing, and make final decisions can be put together to form classes of heuristics that perform at least as well as more complex, information-hungry algorithms. Moreover, such FFA heuristics are more robust when generalizing to new data, since they require fewer parameters to identify. They are accurate because they exploit the way that information is structured in the particular environments in which they operate.

FFAs are a different breed of heuristics. They are not optimization algorithms that have been modified to run under computational resource constraints, such as tree searches that are cut short when time or memory runs out. Typical FFA schemes exploit minimal knowledge, such as object recognition and other one-reason bases for choice making under time pressure, elimination models for categorization, and "satisficing" heuristics for sequential search. In his radical departure from conventional rational-agent formulations, Simon asserted the bounded rationality hypothesis, namely, that an agent's behavior is shaped by the structure of its task environment and its underlying computational abilities (Simon and Newell, 1964). Fast and frugal heuristics are mechanisms that a mind can execute under limited time and knowledge availability and that could possibly have arisen through evolution. One illustration of Simon's "satisficing" alternative to optimization is the "take the first best" inferencing heuristic, which employs only a fraction of available knowledge and stops immediately when the first, rather than the best, answer is found. "Take the first best" does not attempt to integrate all available information into its decision. It is noncompensatory and nonlinear and can violate transitivity, the canon of rational choice. A small sketch of such a one-reason heuristic follows.
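
The sketch below illustrates a one-reason, "take the first best"-style heuristic in Python: cues are examined in order of assumed usefulness, and the first cue that discriminates between two options decides, ignoring all remaining information. The cue names and data are hypothetical.

    # Cues ordered by assumed usefulness; each maps an option to a value.
    CUES = ["population", "has_airport", "has_university"]

    cities = {
        "A": {"population": 500_000, "has_airport": 1, "has_university": 1},
        "B": {"population": 500_000, "has_airport": 0, "has_university": 1},
    }

    def take_the_first_best(a: str, b: str) -> str:
        """Decide which option is 'larger' from the first discriminating cue."""
        for cue in CUES:
            va, vb = cities[a][cue], cities[b][cue]
            if va != vb:                      # first cue that discriminates decides...
                return a if va > vb else b    # ...and all later cues are ignored
        return a  # no cue discriminates; guess

    print(take_the_first_best("A", "B"))  # population ties, airport decides: 'A'

Because the decision stops at the first discriminating cue, the heuristic is noncompensatory: no combination of later cues can reverse it.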

Fundamental Limits of Modeling and Computation

To satisfy the needs of M&S for increasingly complex systems and processes, the statistics-oriented approach to M&S research emphasized by the academic community must be integrated with the computer-science-oriented approach emphasized by DOD and industry in acquisition and manufacturing. The statistics-oriented approach deals with prediction and management of uncertainty, whereas the computer-science-oriented approach deals with interoperability, reusability, integration, distributed operation, and human/machine interfaces. The computer-science-oriented approach is necessary for the future operational success of defense acquisition and commercial manufacturing, but as processes and systems become increasingly complex, estimation and management of uncertainties will become increasingly important.

Some fundamental limitations of computation in dealing with complex systems must be recognized. The performance of any future complex system will unavoidably be stated in probabilistic terms. A suite of software and a collection of databases may be technically interoperable and can be used to calculate system performance under a given set of operating environments, but there is no way that these tools can estimate the percentage of time that the system will perform satisfactorily under different circumstances, what the expected performance will be under uncertainty, or what the confidence level of the estimate is. To answer these questions, Monte Carlo experiments must be run on the system. Here, one runs up against fundamental limitations of performance simulation involving uncertainties: "There are fundamental limitations to improve the simulation speed due to fact that confidence interval of performance estimate decreases at best at the rate of 1/n^(1/2) where n denotes the length of simulation."[1] The implication of this convergence rate is illustrated below.

[1] Y.C. Ho, Ordinal Optimization Teaching Module. Available at <http://hrl.harvard.edu/people/faculty/ho/DEDS/OO/ldea/SlideOl.html>. Accessed June 2002.
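
To make the convergence rate concrete (standard Monte Carlo arithmetic, not taken from the report): the half-width of a confidence interval for a simulated performance estimate shrinks as

    \text{CI half-width} \;\approx\; \frac{c\,\sigma}{\sqrt{n}},

where n is the number of simulation replications, \sigma the output standard deviation, and c a constant set by the confidence level. Halving the interval therefore requires 4n replications, and one more decimal digit of accuracy requires 100 times more simulation.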

This is a heavy computational burden that may become too much for complex systems. In addition, in order to improve the system performance estimate by adjusting or tuning various parameters in different phases of the acquisition process, dimensionality, or combinatorial explosion, must be dealt with. The search space of system design parameters is combinatorially huge.

The first fundamental limitation of computation states that each system performance evaluation via simulation is time consuming. The second limitation states that a very large number of such evaluations may be necessary. These difficulties are multiplicative. Finally, there is a third limitation:

    "No Free Lunch Theorem": Without specific structural assumptions, there exists no optimization or search algorithm that can perform better on the average than blind search in dealing with the first and second limitation. (Ho, 1999, p. 8)

These three limitations are fundamental limits on computation in dealing with complex systems. No amount of theoretical, hardware, or software advances can overcome them. Consequently, a strategic redirection is called for in dealing with them. Several emerging trends that directly or indirectly address the problem of system engineering of complex systems are outlined below. One or more of these topics may blossom into proven tools for dealing with the preceding difficulties and enable a more quantitative and optimizing approach.

Ordinal Versus Cardinal Optimization

Order is much easier to ascertain than value. If one holds two identical-looking boxes, one in either hand, it is easy to determine which one is heavier, but much harder to determine how much heavier one is than the other. In many complex decision problems, it is often sufficient to be able to determine which solution is better, or which is in the top 1 percent, rather than which is the absolute best. A theory of ordinal optimization is being developed that may enable quantitative measurements of such assertions via simulation modeling without having to confront the first and second fundamental limitations on computation of complex systems (NRC, 1999b). A small numerical sketch of this idea appears below.
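
The following Python sketch, an illustration of the ordinal idea rather than the formal theory, compares two designs through a deliberately crude (noisy) simulation. Even when the noise is large relative to the true performance gap, the ordering of the designs is recovered far more reliably than the value of the gap.

    import random

    random.seed(2)
    TRUE_PERF = {"design_A": 10.0, "design_B": 9.0}  # A is truly better by 1.0

    def noisy_eval(design: str, noise: float = 5.0) -> float:
        """A crude simulation: unbiased but very noisy performance estimate."""
        return TRUE_PERF[design] + random.gauss(0.0, noise)

    trials, order_correct, gap_estimates = 1000, 0, []
    for _ in range(trials):
        a = sum(noisy_eval("design_A") for _ in range(10)) / 10
        b = sum(noisy_eval("design_B") for _ in range(10)) / 10
        order_correct += a > b
        gap_estimates.append(a - b)

    print(f"order right {100 * order_correct / trials:.0f}% of the time")
    mean_gap = sum(gap_estimates) / trials
    spread = (sum((g - mean_gap) ** 2 for g in gap_estimates) / trials) ** 0.5
    print(f"estimated gap {mean_gap:.2f} +/- {spread:.2f} (true gap 1.00)")

With only 10 replications per design, the comparison picks the right design roughly two times in three, while the individual gap estimates remain dominated by noise: crude models can rank alternatives long before they can value them.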

Efficient Search Via Learning

Blind search in a large space is inefficient. Therefore, to deal with the large search spaces imposed by the second and third computational limitations discussed above, the structure of specific problems must be learned along the way. A number of automated learning approaches currently in vogue in artificial intelligence research, such as knowledge discovery, data mining, Bayesian networks, and tabu search, may be significant for developing M&S capabilities. Tabu search is a heuristic technique for search in combinatorial optimization problems (Glover, 1990).

Errors in Distributed Simulations

Given fixed resources and a model complexity that exceeds those resources, a trade-off must be made between size and resolution. If some aspects of a system are represented very accurately, only a few components will be representable. Alternatively, a comprehensive view of the entire system can be provided, but only at a low resolution. Such low resolution may introduce errors that pose particular problems in distributed simulations. In such complex, networked systems of models, each model will typically be in error to some degree owing to its low resolution. Therefore, it is natural to expect that in a complex system of many linked models, even if individual inaccuracies are small, such errors can accumulate, propagate, and reinforce each other, rendering the behavior of the aggregate significantly different from the behavior of the real system. Error propagation in distributed simulations plays an important role in verification, validation, and accreditation, and therefore is an important area of research that needs to be strengthened.

In the current state of the art, it is possible to suggest that such error propagation may, or may not, be a significant issue in distributed simulations. On the one hand, modeling errors in complex systems can be like noise that is more or less statistically independent. The cumulative effect of many independent errors behaves according to the central limit theorem and decreases with increasing complexity under some reasonable assumptions. A simple case is the law of large numbers, which improves accuracy by averaging many measurements. A second mitigating factor is the theory of ordinal optimization, mentioned above. Research here has shown that for the purpose of comparison (i.e., which is better?), very crude models are quite sufficient. Consider the metaphor of two bags of gold. You are free to choose the heavier bag. Every one of us can unfailingly tell the heavier bag, even with small differences. But most of us will have difficulty if we are asked to estimate accurately the difference in weight between the two bags. "Value" is much harder to estimate than "order." In most cases of simulation optimization, we only need to know the order or be able to locate the top 1 percent of the designs; it is not necessary to know the performance "value" accurately. Approximate simulation models are quite adequate for the former purpose. Once the top 1 percent have been located with high probability, we can lavish our attention and computing budget on this much smaller subset. A large volume of literature on the theory, and of success stories, has been built up on this subject during the past decade (Ho and Cassandras, 2001).

On the other hand, it is known from work on numerical analysis that numerical methods can introduce instabilities that greatly magnify errors even if the underlying models are stable. To obviate error-induced instabilities, criteria that enable the choice of time-step size and other controllable factors are well known for nondistributed simulations. However, the major difference between distributed simulations and their nondistributed counterparts is that control and data are encoded in time-stamped messages that travel from one computer to another over a bandwidth-limited network (Fujimoto, 2000a). Traditional analyses in the design of numerical methods consider trade-offs between accuracy and speed of computation (Isaacson and Keller, 1966). However, since distributed messaging requires that continuous quantities be coded into discrete packets and sent discontinuously, it is more appropriate to consider discrete event simulation as a natural means of addressing accuracy and bandwidth trade-offs. Recent work has shown that significant reductions of message bandwidth demands (number and size of messages) with controllable error and local computation costs are possible[2] (Zeigler et al., 1999). Finally, the issue of numerical stability in complex simulation is related to the problem of sample path continuity with respect to parameter and timing perturbation. Here again, literature exists (Ho and Cassandras, 1997). The contrast between independent and correlated model errors is sketched below.

[2] The interested reader may wish to consult Chapters 14 and 16 in Zeigler et al. (2000) for an extended discussion of error in modeling and distributed simulation.
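
The following Python sketch, purely illustrative, contrasts the two regimes described above: when the errors of linked models are independent and zero-mean, they tend to average out, but when they are correlated (for example, every model is biased the same way), they accumulate linearly.

    import random

    random.seed(3)
    N_MODELS, TRIALS = 50, 2000

    def federation_error(correlated: bool) -> float:
        """Total error of a chain of models, each contributing a small error."""
        if correlated:
            shared_bias = random.gauss(0.0, 0.1)          # same error in every model
            return sum(shared_bias for _ in range(N_MODELS))
        return sum(random.gauss(0.0, 0.1) for _ in range(N_MODELS))  # independent errors

    for correlated in (False, True):
        errs = [abs(federation_error(correlated)) for _ in range(TRIALS)]
        label = "correlated " if correlated else "independent"
        print(f"{label}: mean |total error| = {sum(errs) / TRIALS:.2f}")

    # With per-model sigma 0.1, independent errors scale like sqrt(50)*0.1 ~ 0.7,
    # while correlated errors scale like 50*0.1 = 5: the linkage structure matters.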

Theory of Complex Systems

Complex systems, such as the national electric power grid and worldwide communications networks, are vulnerable to attacks and catastrophic failures (Amin, 2000). A theory of complex systems is emerging that may shed light on the fundamental nature of such complex interconnected systems, why and how they fail, and the limits to and disadvantages of complexity (Ho and Pepyne, 2001). This is related to the problem of inferring total system performance from that of components. Any system is assembled or constructed from a set of components and/or suboperations. When broken down to the elemental constituent parts, each part or suboperation can be modeled, and its performance measured, even if probabilistically. However, each part's contribution to the overall performance, success or failure, of the entire system is different. For example, in an unmanned combat air vehicle, the performance of the automatic target detection subsystem is more important than the successful landing of the returning system. The former directly affects the success of the mission, while the latter may cause only the destruction of an expendable system. There is a need for an analysis technique for assessing the relative expected importance and contribution of each part or suboperation to the overall goal of a systems engineering project as a function of network architecture and hierarchy. Such a tool would enable managers to measure the critical elements of a systems engineering project and direct resources at those parts more systematically and quantitatively.

Uncertainty

Uncertainty is becoming increasingly important in modeling and simulation. Characterization of uncertainty refers to methods for tracking and quantifying the propagation, through a model's calculations, of the uncertainty that is inevitably present in the attribute values and interactions of components within a simulation. Decision making under uncertainty refers to models that assist in evaluating uncertainty and risk in situations in which incomplete information is available. Exploratory analysis under uncertainty is a process of searching the space of possible simulation outcomes as a function of the many assumptions in a scenario in order to find and delimit interesting or dangerous outcome regimes (NRC, 1997a,b).

MODEL INTEGRATION

The infrastructure for modeling consists of tools and capabilities that support the practice of modeling. This infrastructure must support model integration and interoperability in order for the M&S requirements of acquisition and manufacturing to be met. Important topics associated with model integration are interoperability, composability, integrating heterogeneous processes, and linking engineering with effectiveness simulations.

Interoperability

Interoperability is the ability of different applications to interoperate, or the ability of different simulations to simulate a common synthetic environment collaboratively. Simulation interoperability can be considered at two levels: (1) technical interoperability, at the level of the communications protocol, and (2) substantive interoperability, at the level of the databases, models, and behaviors of simulations (Hall et al., 2000). Both types of interoperability are prerequisites to integration of separate models and simulations into composite simulations. The DOD's high level architecture (HLA) for simulations was intended to support interoperable simulations by providing a common run-time infrastructure and simulation data definition method. Extensible markup language (XML) is a widely used format for structured documents and data on the World Wide Web. Although common data-interchange mechanisms and formats such as HLA and XML can support technical interoperability by enabling simulations to communicate, they do not guarantee substantive interoperability, because they do not ensure that the communicated data are correctly usable by the receiver. They are therefore necessary, but not sufficient, to produce interoperable simulations. Because models coexist at multiple levels of abstraction in the layered architecture, the problem of interoperability is not solved merely with common data formats; it also requires consideration of composability. The gap between technical and substantive interoperability is illustrated below.
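
A minimal Python sketch of the distinction, with invented field names: two simulations exchange a syntactically valid record, so technical interoperability holds, yet they disagree about the unit of the "altitude" field, so the data are not correctly usable by the receiver and substantive interoperability fails.

    import json

    # Sender encodes altitude in meters.
    message = json.dumps({"entity_id": 42, "altitude": 1000.0})  # meters

    # Receiver parses the message without error: technical interoperability holds.
    record = json.loads(message)

    # But the receiver's model assumes altitude is in feet:
    altitude_ft = record["altitude"]          # silently wrong by a factor of ~3.28
    print(f"receiver believes altitude is {altitude_ft} ft")

    # Substantive interoperability would require a shared semantic agreement, e.g.:
    # {"entity_id": 42, "altitude": {"value": 1000.0, "unit": "m"}}

No data format by itself can catch this error; the meaning of the field must be agreed on, which is what substantive interoperability demands.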

Composability

At the level of an abstract model, model composition is the creation of a model from a collection of reusable components, themselves models, in order to meet a specific set of objectives. A composition framework is a collection of theories, concepts, and associated tool sets that enables construction of model components and their synthesis into larger components or a final model. Component-based software engineering has been identified as a key enabler in the construction of complex software systems. This idea is called model or simulation composability. Model composability contributes to robust and integrated use by enabling repositories of model components that can be accessed in a collaborative environment. The problem lies in identifying components that can stand on their own as commercial commodities with reusability attributes.

Reusability is the capability of simulation components and databases to be reused for different applications. Object repositories and interface standards support the retention and interoperation of reusable object-based models and simulations. Models must be implemented in software or hardware and simulated in order to gain access to their behavior or predictions. Reuse at the model level therefore depends on a model composition framework that provides the theory for both model composability and the mapping of composite models into executable form. When developed in this manner, a wide range of simulation environments may be open to execute the synthesized models.

When discussing a model's particular form as implemented in computer code, as opposed to a model as abstractly specified, the problem of composability takes on a different character. In this case, where the intent is to have program code that is composable in a simulation, reuse must be limited to only those compositions in which matched components can interoperate with each other. As noted above, HLA provides a mechanism for effecting such interoperation through distributed federations. However, by itself, it does not assure that the resulting federations are meaningful in the sense that a well-defined dynamic system emerges capable of meeting its objectives. For that assurance, model composition at the level of model abstractions is required. When synthesizing distributed interactive simulations from a model base of reusable components, several issues must be addressed: determining which sets of components can interoperate together; determining which components and compositions are valid under the conditions of the current application; determining, of those that are valid, which are best for the given situation; and determining how to test a composition for completeness or adequacy for the given applications (Aronson and Wade, 2000). A sketch of the first of these checks appears below, after this discussion.

Several shortfalls must be overcome before a viable level of model composability can be achieved. These are the lack of a robust theory on which to base selection of the size and content of modules; the lack of a theory to guide the development of a methodology for simultaneously determining interdependencies between modules; and the lack of a means to constrain possible compositions based on knowledge of component interdependencies. Theory is needed to understand how modules might be related to specific requirements and groups of requirements and how modules can be properly combined to meet most closely the objectives of a given application. Finally, theory is lacking that can explain the extent to which prioritized requirements are met for one or more candidate compositions (Page, 1999).

Model composability presumes solutions to more fundamental problems, such as the existence of common frameworks and model/simulation/tool reusability. As it matures, SBA will require common frameworks for models with temporal dynamics that are used in a great variety of components within DOD systems, such as flight control systems and operator training systems. Such frameworks must be capable of expressing a large variety of model formalisms, including traditional continuous-state systems and discrete event systems that are increasingly employed both in control functions and in the efficient representation of physical systems. Such frameworks must have strong computational performance attributes, allowing simulation of models on a variety of platforms, yielding answers to system design problems, and supporting training exercises within problem-specific time lines. While many modeling frameworks and simulation systems have such expressiveness and performance capabilities to some extent, no single commercially available product can support both to the level demanded by the SBA concept.
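
The minimal Python sketch below illustrates the first issue, determining which sets of components can interoperate: each hypothetical component declares the ports it requires and provides, and a composition is accepted only if every required port is supplied by some other component.

    from dataclasses import dataclass, field

    @dataclass
    class Component:
        name: str
        provides: set[str] = field(default_factory=set)
        requires: set[str] = field(default_factory=set)

    def can_compose(components: list[Component]) -> bool:
        """Check that every required port is provided within the composition."""
        provided = set().union(*(c.provides for c in components))
        missing = [(c.name, r) for c in components for r in c.requires - provided]
        for name, port in missing:
            print(f"composition invalid: {name} requires unprovided port '{port}'")
        return not missing

    airframe = Component("airframe", provides={"position"}, requires={"thrust"})
    engine = Component("engine", provides={"thrust"}, requires={"fuel_flow"})
    print(can_compose([airframe, engine]))          # False: nothing provides fuel_flow
    fuel = Component("fuel_system", provides={"fuel_flow"})
    print(can_compose([airframe, engine, fuel]))    # True

Note that this checks only syntactic matching of ports; deciding whether the composed federation is valid for a given application is the deeper, unsolved problem the text describes.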

Integrating Heterogeneous Processes

Virtual environments offer a sense of immersion in reality, with true-to-life graphics and animation. However, to be truly effective in their support of decision making, design, or training exercises, such environments must draw on highly accurate model representations. Often such representations require the integration of heterogeneous types of processes and their models and simulations, such as real time with logical time, analog with digital, and continuous with discrete.

Real Time and Logical Time

Real-time systems design connotes an approach to software design in which timeliness is as important as the correctness of the outputs. Timeliness of response does not necessarily imply speed, but rather predictability of response and reliable conformance to deadlines. Real-time systems usually have periodic tasks, such as monitoring processes. They must accept input of real-world external events from sensors and respond with outputs such as commands sent to actuators. Often, real-time considerations arise for embedded systems, in which control is exerted through software modules built into, and distributed throughout, operational systems such as aircraft, nuclear reactors, chemical plants, and automated systems in buildings. Performance estimation and design to meet performance requirements are crucial in real-time systems. Performance analysis often involves checking the task schedule for feasibility or conformance with the required timing constraints. In distributed networked systems, quality-of-service characteristics of the network, such as the timely delivery of events between system components, must be included in performance evaluation.

Real-time considerations enter modeling and simulation in various ways. A real-time simulation is a real-time system in which some portion of the environment, or portions of the real-time system itself, are realized by simulations. When a simulation interacts with a surrounding environment, such as software modules, hardware components, or human operators, the simulation must handle external events from its environment in a timely manner. More generally, interfacing abstract models with real-world processes requires that the logical time base of the simulation be synchronized as closely as possible to the clock time of the underlying computer system. Work related to real-time simulation and control includes early research in DEVS-Scheme, the extension of the discrete event system specification (DEVS) formalism to the DEVS real-time formalism, and its application to process control. These projects have been extended to parallel, optimistic, real-time simulation (PORTS); operator training distributed real-time simulation (OPERA); Ptolemy, a concurrent discrete event simulation environment; time-triggered, message-triggered, object-based distributed real-time system development environments; and cluster simulation support for distributed development of hard real-time systems using time-division multiple access-based communication.

Interfacing real-time component models presents several challenges. First, the environment must execute the associated models in real time. Such a model usually handles two kinds of events, periodic events and reactive events, and the simulator must be able to schedule and process both in real time. Second, the environment must ensure that messages exchanged among basic models are delivered in real time no matter where the models are located on the network; the environment must also be able to schedule high-priority threads first. Third, components may be running synchronously as well as asynchronously. Fourth, a time service must guarantee that consistent readings of a global clock are obtained no matter where the reading is done in a distributed system; the simulations must use such a time service to stay in synchronization with each other. And finally, the environment must correctly handle multiple events arriving at the same simulation at the same time; network latency and jitter may make it difficult to know when all messages for a given time have been received. A minimal sketch of the first challenge follows.
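
The following Python sketch, illustrative only, shows the first challenge in miniature: a loop that serves a periodic task on a fixed schedule while also handling reactive events, and that detects when a deadline is missed.

    import time
    from collections import deque

    PERIOD = 0.05          # periodic task every 50 ms
    DEADLINE_SLACK = 0.01  # tolerate 10 ms of lateness

    reactive_queue: deque[str] = deque(["sensor_a_tripped"])  # filled externally in practice

    def periodic_task(now: float) -> None:
        print(f"[{now:.3f}] periodic monitor ran")

    def handle_reactive(event: str, now: float) -> None:
        print(f"[{now:.3f}] reacted to {event}")

    start = time.monotonic()
    next_release = start
    for _ in range(5):                       # run five periods, then stop
        now = time.monotonic() - start
        if now > (next_release - start) + DEADLINE_SLACK:
            print(f"[{now:.3f}] DEADLINE MISSED")  # a real system would take recovery action
        periodic_task(now)
        while reactive_queue:                # serve reactive events in the slack time
            handle_reactive(reactive_queue.popleft(), time.monotonic() - start)
        next_release += PERIOD
        time.sleep(max(0.0, next_release - time.monotonic()))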

Analog and Digital

Hardware description languages (HDLs) are indispensable for computer and digital design. Currently, the very high speed integrated circuit hardware description language (VHDL) and Verilog dominate the market and represent a total industry and defense investment of over $1 billion. VHDL was developed by IBM, Texas Instruments, and Intermetrics in 1983 under contract with DOD and became IEEE Standard 1076 in 1987 (Ghosh, 2000). Verilog is a less sophisticated HDL that was developed in 1983-1984 and became IEEE Standard 1364 in 1995 (Thomas and Moorby, 1991). It is easier to learn but lacks constructs to support system-level design.

Unfortunately, VHDL is limited in the representation of mixed analog and digital processing. A framework for the modeling and simulation of hybrid analog/digital systems has long been needed. Today, the need to design mixed-signal chips (MSCs) to support the growth in wireless devices and next-generation automotive electronics has brought this problem to the foreground. MSCs have been implemented as custom application-specific integrated circuits (ASICs), but must now be mass-produced for use in wireless technology. MSCs receive analog signals, process and manipulate them mainly in digital form, and reconvert them to analog form. The challenge for systems design is the high level of functionality of an MSC. It contains radio frequency components, such as receivers, antennas, filters, and amplifiers; analog components, such as digital-to-analog converters, battery and power supplies, and interfaces to sensors; and digital components, such as digital signal processors, microcontrollers, microprocessor memory, analog-to-digital converters, and interface buses.

Hybrid design has traditionally been tackled by mapping the input-output behavior through thresholding and interpolation. The fundamental difficulty is that, driven by the needs of accuracy and efficiency, the resolutions of time in the respective simulations for the analog and discrete subcomponents may be different. This translates into different units of time. While techniques such as lock-step, fixed time step, ping-pong, and Calaveras have been proposed in the literature, they are essentially arbitrary and lack a scientific basis that would yield a common notion of time. The difficulty is aggravated when analog and discrete subsystems occur in feedback loops. Current efforts to solve this problem merely extend the previous methodology by standardizing the input-output signals for exchange between the simulations of analog and discrete subsystems. New approaches are needed (Ghosh and Giambiasi, 2001).
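
The mismatch in time resolution can be made concrete with a small Python sketch (illustrative, not any of the published coupling schemes): a continuous subsystem is integrated with a fine step, while a discrete subsystem samples it in lock-step at a coarser period, so the reported threshold-crossing time depends on the coupling period.

    def simulate(coupling_period: float) -> float:
        """Analog RC charge integrated at 1 us; digital comparator samples in lock-step."""
        dt, tau, v, t = 1e-6, 1e-3, 0.0, 0.0
        next_sample = 0.0
        while t < 5e-3:
            v += dt * (1.0 - v) / tau          # analog part: dv/dt = (Vin - v)/tau
            t += dt
            if t >= next_sample:               # digital part sees v only at sample times
                if v >= 0.5:                   # comparator threshold
                    return t
                next_sample += coupling_period
        return float("inf")

    for period in (1e-5, 1e-4, 1e-3):
        print(f"coupling period {period:.0e} s -> crossing reported at {simulate(period):.6f} s")

    # The true crossing of v(t) = 1 - exp(-t/tau) at v = 0.5 is tau*ln(2) ~ 0.000693 s;
    # the coarser the lock-step period, the later (and more wrongly) the digital side sees it.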

Linking Engineering and Effectiveness Simulations

It is useful to distinguish between two broad classes of simulations. The first is product modeling, or engineering simulations, which simulate the physics of products or systems being designed with a high degree of detail and physical fidelity. The intent of these simulations is to assist design engineers in understanding the physical performance of the product or system as designed. They often simulate only one system or subsystem at a time and run slower than real time. They can be loosely defined as using M&S to determine how to build a system. The second class of simulations is performance modeling, or effectiveness simulations, which simulate products or systems that are assumed to exist and operate as designed. The intent of these simulations is to determine how effective the systems would be in use, or what performance parameters the systems must have in order to be effective in use. They often simulate scenarios involving many simulated systems and run in real time or faster. They can be loosely defined as using M&S to determine which system to build. The ability to link these two types of simulations is necessary for achieving the goals of defense acquisition, and the ability to reuse engineering models and simulations in effectiveness simulations would save time and money.

MODEL CORRECTNESS

Model correctness is the fundamental requirement of ensuring that the predictions of a simulation model can be relied upon (Zeigler, 1998). The vision of defense acquisition contained in SBA requires the development of accurate and reliable models of real-world systems. A prerequisite is an understanding of the real-world systems and objects to be modeled, their contextual domains, and the phenomenology of their operations and interactions, all at a level of detail sufficient to justify the model. Once the models have been implemented as simulations, their correctness must be rigorously evaluated.

Domain Knowledge

Improved understanding of the real-world basis for models is needed in the areas of phenomenology of warfare, physics-based modeling, and human behavior modeling.

Phenomenology of Warfare

The military domain is of special importance because it is the primary focus of SBA and because it is the domain in which human lives are most likely to be risked on the basis of decisions made using M&S. Lack of recent investment is not compensated for by previous investment, because of the rapidly changing nature of military technology, doctrine, and operations. For example, models are lacking in such emerging areas as information operations and operations other than war. Effort is needed to develop a deeper, more rigorous, and more quantitative understanding of the phenomenology of warfare, especially involving the complex, interconnected, and nonlinear military systems and systems-of-systems planned for the future. Relatively little recent investment has been made in understanding the phenomenology of military operations at the mission and operational levels (NRC, 1997a,b).

Physics-based Modeling

Mathematical models in which the equations that constitute the model are those used in physics to describe or define the physical phenomenon being modeled are referred to as physics-based models. For example, physics-based flight dynamics models use aerodynamics equations, such as the one shown below, rather than look-up tables to model the flight characteristics of a simulated aircraft. The physics of failure and the assessment of a potential system's durability and operational availability are of special interest. Such assessments would greatly benefit from accurate physical models that support predictions of the modes and times of failure of physical systems. Several studies have concluded that improvements in physics-based modeling are needed (Johnson et al., 1998; Hollis and Patenaude, 1999; Starr, 1998). Physics-based modeling is arguably more important for defense manufacturing and acquisition than for other simulation applications such as training.
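
As a standard illustration of the kind of equation such models use (textbook aerodynamics, not drawn from the report), a physics-based flight model computes lift from the flight state rather than looking it up:

    L \;=\; \tfrac{1}{2}\,\rho\, v^{2}\, S\, C_L,

where \rho is air density, v airspeed, S wing area, and C_L the lift coefficient. A look-up-table model would instead tabulate L against v, losing validity outside the tabulated regime.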

Human Behavior Modeling

Computer-generated forces are often used in training simulations to provide both opposing forces and supplemental friendly forces for human participants in a simulation. They are also often used to generate all of the entities in battlefield simulations employed for nontraining purposes, such as analysis, experimentation, and SBA. Automated or semiautomated entities are created, and their behavior is controlled by the computer system, perhaps assisted by a human operator, rather than by human participants in a simulator (Kerr et al., 1997; Petty, 1995). These automated behaviors are produced by algorithms based on models of human behavior, and the reliability of the results depends on the validity of the behavior-generation methods. While current behavior-generation methods are reasonably effective at producing behavior that is in accordance with straightforward tactical doctrine, they fall far short of producing realistically human behavior with all its unpredictability and sophistication. Several studies have concluded that a need exists for improvement in human behavior modeling (Ewen et al., 2000; NRC, 1998b; Hoagland et al., 2000; Starr, 1998; Johnson et al., 1998).
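
Behavior generation in accordance with simple doctrine is often implementable as a state machine; the Python sketch below (with hypothetical states and stimuli) suggests both why such methods produce doctrinally correct behavior and why the result is predictable rather than realistically human.

    # Doctrinal behavior as a fixed state-transition table: (state, stimulus) -> state.
    DOCTRINE = {
        ("patrol", "enemy_spotted"): "attack",
        ("attack", "taking_heavy_fire"): "withdraw",
        ("withdraw", "contact_lost"): "patrol",
    }

    def next_state(state: str, stimulus: str) -> str:
        return DOCTRINE.get((state, stimulus), state)  # no rule: keep doing the same thing

    state = "patrol"
    for stimulus in ["enemy_spotted", "taking_heavy_fire", "contact_lost"]:
        state = next_state(state, stimulus)
        print(f"after '{stimulus}': {state}")

    # The same stimuli always yield the same responses; a human opponent would not
    # be this predictable, which is the shortfall the text describes.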

Verification, Validation, and Accreditation

Verification is the process of determining that a model implementation, or simulation, accurately represents the developers' conceptual description and specifications. Validation is the process of determining the degree to which a model and its associated data are an accurate representation of the real world with respect to the model's intended use. Accreditation is the process of official certification that a model or simulation is acceptable for use for a specific purpose. Several studies have identified verification, validation, and accreditation as important topics for research and development (Johnson et al., 1998; Ewen et al., 2000; Hollenbach, 2000; SBATF, 1998).

A crucial step in the acquisition of a defense system is operational testing and evaluation, the final assessment of a system's effectiveness and suitability prior to fielding. Traditionally done using real-world testing of actual systems, operational testing has seen a gradual increase in the use of M&S to reduce time and costs. This application of M&S requires extremely accurate simulations and consequently highly reliable validation methods. As M&S is used more in operational testing, the demands on the validation of the simulations will increase. Several advances in statistical methods are relevant to the validation of simulations used for defense acquisition and may provide the basis for needed improvements in validation methods (NRC, 1998a). The limits of applicability of M&S to operational testing have been clearly asserted by the commanders of the services' operational testing organizations (Besal et al., 2001). Results generated by the models and simulations used may be the basis of decisions affecting human safety or expending large sums of money. Validation methods that quantify the bounds of validity and the risk of error in a model can help to establish the limits of M&S applicability in operational test and evaluation.

STANDARDS

Standards are at the intersection of technical and nontechnical issues. The ways in which standards are developed are complex, and standards are often more successful if developed from the ground up rather than from the top down. The M&S community has historically been resistant to setting standards. Because many M&S practitioners are self-taught or have had largely on-the-job training, there are many different methods of doing things. The variety of modeling methods is commensurate with the range of systems modeled.

Currently, a state-of-the-art, standardized external model representation is lacking. Moreover, modeling languages do not adequately support the structuring of large, complex models or the process of model evolution in general. The development and application of standards, however, are essential to the achievement of the level of interoperability, integration, and reuse envisioned for commercial manufacturing and defense acquisition. This section discusses existing modeling and simulation standards, general software standards, and higher-layer standards, and the needs for their development and integration.

Modeling and Simulation Standards

Limited interoperability exists among the modeling and simulation environments available today. However, several standards are emerging that are aimed at solving interoperability and model construction problems.

High Level Architecture

HLA is a general-purpose architecture for simulation reuse and interoperability. It was developed under the leadership of the Defense Modeling and Simulation Office (DMSO) for the purpose of supporting reuse and interoperability across the many different types of simulations developed and maintained by DOD. In 1996, HLA was approved as the standard technical architecture for all DOD simulations, and in 2000, it was approved as an open standard by the Institute of Electrical and Electronics Engineers (IEEE). DMSO sponsored the establishment of the Simulation Interoperability Standards Organization (SISO) as the organization responsible for the promulgation of applications of the HLA standard. Currently, HLA addresses technical interoperability, the standardization of data interchange among model components at run time. However, it does not address substantive interoperability, the ability to assure that data have common meanings among components so that a coherent federation emerges capable of meeting the objectives of its designers. This capability should be developed.

Modelica

An early attempt at M&S standardization, the Continuous System Simulation Language (CSSL), was first published in 1967. CSSL defined requirements for a standard continuous simulation modeling language but had limited impact. Modelica is the current manifestation of continuous system modeling standardization efforts (Elmqvist, 1999).

Modelica

An early attempt at M&S standardization, the Continuous System Simulation Language (CSSL), was first published in 1967. CSSL defined requirements for a standard continuous simulation modeling language but had limited impact. Modelica is the current manifestation of continuous system modeling standardization efforts (Elmqvist, 1999). The Modelica Association, a nonprofit, nongovernmental association consisting of the members of the original Modelica Design Group, was established in 2000 to promote the development and application of the Modelica computer language for modeling, simulation, and programming of physical and technical systems and processes.

The Modelica effort is based on recent research results. Object-oriented modeling languages have already demonstrated how object-oriented concepts can be used successfully to support hierarchical structuring, reuse, and evolution of large and complex models, independent of the application domain. Noncausal modeling has demonstrated that the traditional simulation abstraction can be generalized by relaxing the causality constraints, that is, by not committing ports to an input or output role too early. These results have the potential to enable both simpler models and more efficient simulation.
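The noncausal idea can be illustrated with Python's sympy library: a component is declared once as an equation, and the decision about which variable is solved for is deferred until the model is used. Real Modelica performs this kind of symbolic manipulation over entire systems of equations; this is a deliberately minimal analogue.

    # Minimal analogue of noncausal (equation-based) modeling using
    # the sympy symbolic mathematics library. A resistor is declared
    # once as Ohm's law; causality -- which variable is the unknown --
    # is decided only at the point of use.
    from sympy import symbols, Eq, solve

    v, i, r = symbols("v i r")
    ohms_law = Eq(v, i * r)        # one declaration, no input/output roles

    # Use 1: current and resistance known; solve for voltage (20 V).
    print(solve(ohms_law.subs({i: 2.0, r: 10.0}), v))

    # Use 2: the same equation, now solved for current (0.5 A).
    print(solve(ohms_law.subs({v: 5.0, r: 10.0}), i))

Because the resistor equation carries no built-in causality, the same declaration serves in both configurations, which is what makes component reuse simpler than in traditional input/output block modeling.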

Discrete Event System Specification

DEVS is a formal modeling and simulation framework based on generic dynamic systems concepts (Zeigler et al., 2000). DEVS provides well-defined concepts for the coupling of components; for hierarchical, modular model construction; for discrete event approximation of continuous systems; and for repository reuse with an object-oriented substrate. DEVS also contains important abstract concepts underpinning the representation of mixed-signal electronic designs. The concepts of system modularity and component coupling to form composite models are well defined. The closure-under-coupling property allows coupled models to be treated as components and therefore supports hierarchical model composition.

Advantages of the DEVS methodology for model development include a well-defined separation of concerns supporting distinct modeling and simulation layers that can be independently verified and reused in later combinations with minimal reverification. The resulting divide-and-conquer approach can greatly simplify and accelerate model development, leading to greater model credibility with less effort.

The DEVS methodology has been realized in high-level languages such as C++ and Java and has been extended for parallel and distributed execution. For example, DEVS-C++ models have been executed on parallel machines. Implementation of DEVS-C++ over message-passing interfaces affords parallel execution of models and thus supports efficient, high-performance simulation of large-scale models. Furthermore, DEVS-C++ is the basis for DEVS/CORBA, a distributed modeling and simulation environment formed by mapping the DEVS-C++ system onto the common object request broker architecture (CORBA) middleware. Models developed in DEVS-C++ or DEVS-JAVA can be directly simulated in parallel and/or distributed environments over any transmission control protocol/internet protocol (TCP/IP), asynchronous transfer mode (ATM), or other network. The DEVS formalism is both a universal and unique representation of discrete event dynamic systems. It has been combined with the differential equation formalism to form a composite formalism with well-defined semantics that can express hybrid digital/analog systems.
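The separation of the model layer from the simulator layer can be made concrete with a small sketch: an atomic DEVS model (a server with a fixed service time, defined by its time-advance, transition, and output functions) driven by a minimal abstract simulator. The class and method names below are this sketch's own, not those of any particular DEVS toolkit, and the model is simplified for brevity.

    # A minimal atomic DEVS model and abstract simulator, for
    # illustration; names are not from any existing DEVS library.
    INFINITY = float("inf")

    class Server:
        """Atomic DEVS: queues jobs, emits each after a fixed service time."""
        def __init__(self, service_time=2.0):
            self.service_time = service_time
            self.queue = []                       # the state S

        def time_advance(self):                   # ta(s)
            return self.service_time if self.queue else INFINITY

        def internal_transition(self):            # delta_int
            self.queue.pop(0)

        def external_transition(self, elapsed, job):   # delta_ext
            # Simplification: service restarts on arrival. A full DEVS
            # model would use `elapsed` to resume the interrupted job.
            self.queue.append(job)

        def output(self):                         # lambda(s)
            return ("done", self.queue[0])

    # The abstract simulator is a separate layer, as DEVS prescribes:
    # it advances time to the next internal event, injecting external
    # inputs at their scheduled times.
    def simulate(model, inputs, until=10.0):
        t = last = 0.0
        inputs = sorted(inputs)                   # (time, job) pairs
        while t < until:
            t_int = last + model.time_advance()
            if inputs and inputs[0][0] <= t_int:
                t, (_, job) = inputs[0][0], inputs.pop(0)
                model.external_transition(t - last, job)
            elif t_int < until:
                t = t_int
                print(f"t={t:.1f}: {model.output()}")
                model.internal_transition()
            else:
                break
            last = t

    simulate(Server(), inputs=[(1.0, "job-A"), (1.5, "job-B")])

Because the Server knows nothing about the simulator, the same model could be handed to a sequential, parallel, or distributed simulator without change, which is the reuse property the methodology promises.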

General Software Standards

M&S standards fall outside the category of general software standards, because the body of knowledge intrinsic to M&S generates additional requirements that are left open in more general standards. However, it is worth reviewing the state of M&S-related software standards, as M&S standards must eventually mesh with them.

Unified Modeling Language

Software engineering promotes systematic, disciplined, and quantifiable approaches to the development, operation, and maintenance of software-intensive systems. By applying engineering principles to software, it strives to bring together methods, processes, and tools in a unified fashion. While fundamentally different approaches to software engineering have emerged in recent years, the object-oriented approach has become widely accepted and practiced (Booch, 1994, 1997; Pressman, 1996; UML, 2000). In the object-oriented worldview, the software development process includes conceptualization, analysis, design, and evolution (Booch, 1997) and supports an architecture-driven paradigm based on a hybrid of spiral and concurrent software development processes. Adherents of the object-oriented approach consider it superior to other software development approaches, such as the functional and procedural approaches. Furthermore, the modular, architecture-driven approach can strongly support incremental, stepwise, iterative specification, design, and development of hardware and software components concurrently. Other advantages of the object-oriented approach include support for scalable, high-performance execution and model development; dynamic reconfiguration; systematic and incremental verification and testing; and team-oriented development. The adaptation of object orientation to software engineering has become increasingly indispensable for systems exhibiting heterogeneity and demanding flexibility in terms of both software and interoperability with multiple hardware components.

The unified modeling language (UML) has been managed by the vendor-neutral Object Management Group (OMG) since 1997. UML originated as a combination of approaches to software modeling developed by James Rumbaugh, Ivar Jacobson, and Grady Booch but has since evolved into a public standard. OMG committees are defining ways in which the next version of UML can facilitate activities such as the design of Web applications, enterprise application integration, real-time systems, and distributed platforms.

UML attempts to support a higher-level view of design and coding in terms of diagramming. However, the majority of developers still build in source code, working with linguistic rather than spatial intelligence. UML vendors are attempting to educate programmers to pay attention to design views, allowing users to decide which design view they want to see at any given time. The UML definition is still in a state of flux. For example, many proponents believe that its features should be reduced to a small core, or kernel. One proposal for such a kernel would include use cases, class diagrams, and interaction diagrams but would exclude the state charts and activity graphs that provide some of the richest semantics in UML.

UML is aimed at general software development, primarily for business applications, and is not simulation-aware. UML is the union of at least 10 techniques for diagramming notation. However, there is much more to consider than diagramming in the realm of software engineering, and in particular in software development for models and simulations. In addition to the factors relating to all software, which include software design principles, the exploitation of patterns, and scalable architecture, the M&S developer must understand the particular characteristics of dynamic systems, the error properties of numerical algorithms, and the intricacies of parallel and distributed simulation protocols. Although state diagrams are included in UML, they are not adequate to handle the variety of dynamic systems of interest in M&S. UML does not support model construction from dynamic system components or from reusable model components as required for SBA. Fundamentally, UML should be applied to the development of software that supports modeling and simulation, but not to the construction of dynamic system models.
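A small hybrid system shows why state diagrams alone fall short for M&S. In the thermostat sketched below, the discrete modes would fit comfortably in a UML state chart, but the continuous temperature trajectory that drives the transitions, the numerical integration step, and its error behavior are exactly the dynamic-system concerns the text says an M&S developer must also master. The model and its parameters are illustrative only.

    # Hybrid (mixed discrete/continuous) dynamics that a pure state
    # chart cannot capture: the thermostat's mode switches are driven
    # by a continuous temperature integrated numerically (forward
    # Euler). All parameters are illustrative.
    DT, AMBIENT, HEAT_RATE, LOSS_RATE = 0.1, 10.0, 2.0, 0.1

    temp, mode = 15.0, "HEATING"
    for step in range(600):
        # Continuous part: dT/dt depends on the current discrete mode.
        dTdt = -LOSS_RATE * (temp - AMBIENT) + (HEAT_RATE if mode == "HEATING" else 0.0)
        temp += DT * dTdt              # Euler step; halving DT reduces the error

        # Discrete part: mode transitions triggered by continuous state.
        if mode == "HEATING" and temp >= 21.0:
            mode = "IDLE"
        elif mode == "IDLE" and temp <= 19.0:
            mode = "HEATING"

        if step % 100 == 0:
            print(f"t={step*DT:4.1f}  T={temp:5.2f}  mode={mode}")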

Common Object Request Broker Architecture

Middleware technology evolved during the 1990s to provide interoperability in support of the move to client/server architectures. The most widely publicized middleware initiatives are OMG's CORBA, Microsoft's distributed component object model (DCOM), and DOD's HLA run-time infrastructure (RTI) (Dahmann et al., 1998). Middleware simplifies the integration of heterogeneous systems so that users can share information more efficiently, more cost-effectively, more flexibly, and more extensively. It will become more critical as the Web matures and systems become even more distributed.

Middleware services are sets of distributed software that sit between the application and the operating system and network services on a node in the network. Middleware services provide a more functional set of application programming interfaces (APIs) than the operating system and network services in order to allow an application to be located transparently across the network, be independent of network services, be reliable and available, and scale up in capacity without losing function.

The ability to operate in real time imposes additional stringent requirements on services that are not part of the middleware standard. Operating in real time implies not necessarily speed but consistency, or predictability, of response, as measured by small jitter, for example. Real-time object-oriented middleware attempts to provide parameterized objects that can be composed to provide quality-of-service guarantees to application-layer software. The ACE ORB (TAO), an extension of CORBA, is being developed to demonstrate the feasibility of using CORBA for real-time applications rather than direct socket-level programming (Schmidt et al., 1998). Real-time middleware under development includes real-time extensions to message-passing interfaces (MPI/RT) (Kanevsky et al., 1997) and the real-time dependable (RTD) channel. The latter is based on CactusRT (Hiltunen et al., 1999), developed at the University of Arizona in an effort to provide communication services with enhanced quality-of-service (QOS) guarantees related to dependability and real time in the context of distributed real-time computing. ARMADA is another set of communication and middleware services that provides support for fault tolerance and end-to-end guarantees for embedded real-time distributed applications (Abdelzaher et al., 1999).
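Because predictability rather than raw speed is the defining real-time property, it can be measured directly. The sketch below times repeated calls to a service and reports mean latency alongside jitter, taken here as the standard deviation of the latencies; the dummy workload is a stand-in for a request issued through a middleware layer.

    # Measuring jitter: the variability of response time that
    # real-time middleware must bound. The workload is a stand-in
    # for a middleware-mediated request.
    import time
    from statistics import mean, pstdev

    def service_request():
        total = 0
        for k in range(20_000):        # dummy deterministic workload
            total += k * k
        return total

    latencies = []
    for _ in range(200):
        start = time.perf_counter()
        service_request()
        latencies.append(time.perf_counter() - start)

    print(f"mean latency: {mean(latencies)*1e6:8.1f} us")
    print(f"jitter (std): {pstdev(latencies)*1e6:8.1f} us")
    print(f"worst case  : {max(latencies)*1e6:8.1f} us")
    # A hard real-time system cares about the worst case and the
    # spread, not the mean: small jitter means predictable response.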

Higher-Layer Standards

M&S is an enabling technology for the larger activities encompassed by systems engineering. Standards are emerging within this larger context as well, and it is important that these standards develop in a manner compatible with M&S.

Generalized Enterprise Reference Architecture and Methodology

As indicated earlier, GERAM is a developing standard for enterprise engineering, which is broadly concerned with designing and redesigning systems for industrial, administrative, and service applications (Vernadat, 1996). Different modeling methods to support enterprise engineering have been proposed for different applications, including integrated computer-aided manufacturing definition methods for functional modeling, entity-relationship techniques for information systems, object-oriented approaches, decision system analysis, and activity-based costing methods for economic evaluation. However, few integrated methods exist that cover all of the aspects of a business entity. CIMOSA provides full coverage of four fundamental aspects of enterprise modeling: (1) function, (2) information, (3) resource, and (4) organization. It also clearly differentiates and represents the three fundamental types of flow in any enterprise: (1) materials, (2) information/decision, and (3) control flows (Vernadat, 1998). However, current modeling and simulation tools are unable to support these modeling concepts fully, and standards are needed for such tools to support the growing use of the GERAM methodology.

Data Exchange Standards

Industry-based organizations have undertaken the development of several standards for data exchange that relate to, and can advance, the interoperability of models and simulations. The family of standards developed by the International Organization for Standardization known as the Standard for the Exchange of Product Model Data (STEP) aids in the exchange of computer-aided design (CAD), computer-aided manufacturing (CAM), and other types of product data. However, this family of standards has been more than a decade in development, and there remains some resistance to its adoption in some commercial tools. During the last several years, significant progress has been made on XML3 as a vehicle for data exchange. XML is applicable to the exchange of virtually any type of data, and a number of business and technical communities have developed associated standards using nomenclature common in those individual communities.

3 Further information is available at <http://www.w3.org/xml>.
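As a small illustration of XML's generality for data exchange, the sketch below serializes a product record with Python's standard xml.etree.ElementTree module and reads it back. The element names form a made-up vocabulary, standing in for the community-specific nomenclatures described above.

    # Round-tripping product data through XML with the Python standard
    # library. The element vocabulary is invented for illustration;
    # real communities standardize their own.
    import xml.etree.ElementTree as ET

    # Build an XML document describing a part.
    part = ET.Element("part", id="BRKT-0042")
    ET.SubElement(part, "name").text = "mounting bracket"
    ET.SubElement(part, "material").text = "6061-T6 aluminum"
    ET.SubElement(part, "mass", unit="kg").text = "0.35"

    xml_text = ET.tostring(part, encoding="unicode")
    print(xml_text)

    # Any other tool that understands the same vocabulary can parse it.
    parsed = ET.fromstring(xml_text)
    mass = parsed.find("mass")
    print(parsed.get("id"), mass.text, mass.get("unit"))

The format carries the data; agreement on the vocabulary, what "mass" means and in which units, is still a matter for the community standards the text describes.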

CONCLUSIONS

The complexity of planned and existing systems-of-systems is growing more rapidly than the power of the computational and modeling methodologies needed to simulate them. For example, multiresolution models that can reliably predict the effect of system design changes on the output of systems-of-systems operations do not exist. Achieving the comprehensive SBA vision requires an understanding of the fundamental limitations associated with the simulation and modeling of complex systems that does not currently exist. Those limitations cannot be overcome without advances in hardware and software and may require a basic reformulation of the SBA problem. Research is needed to determine the theoretical and practical limits of modeling and computation with respect to manufacturing and acquisition and to devise methods to work within and around those limits. To support the envisioned use of M&S, research is needed in modeling theory, especially multiresolution/multiviewpoint modeling, agent-based modeling, and semantic consistency, and in modeling methodologies for dealing with uncertainty.

Advances in technology, such as parallel computing, distributed computing, and distributed simulation, have begun to make the integration and interoperability of simulation systems practical. However, the breadth of the comprehensive SBA vision, including model integration across all of the SBA viewpoints, is beyond current hardware and software capabilities. Research is needed to expand current model integration and interoperation, including that between engineering and effectiveness simulations. Setting standards for simulation interfaces and interoperability for system design data, including file formats or format descriptors, is timely and appropriate and will allow improved interoperability and reuse. Standardization of tools may not be appropriate at this time.

To ensure the correctness of the models in use, research is needed in domain knowledge at a level of detail that can serve as the basis for models in domains relevant to manufacturing and acquisition. Research is needed in verification, validation, and accreditation, especially validation, and in human-behavior modeling, including the modeling of cognition and belief. Finally, standards for interfaces and operability must be developed and applied to modeling and simulation software, general software, and the frameworks being developed for integrating other software systems.
