
C
Composability

Paul Davis

INTRODUCTION

Composability is the capability to select and assemble components in various combinations to satisfy specific user requirements meaningfully. In modeling and simulation, the components in question are themselves models and simulations. Although terminology on such matters varies, the committee distinguishes composability from interoperability by the requirement that it be readily possible to assemble components differently for different purposes. Interoperability, on the other hand, may be achieved only for a particular configuration, perhaps in an awkward one-time lash-up. To put it differently, composability is associated with modular building blocks. It is also useful to think of composability as a property of the models rather than of the particular programs that happen to implement them in particular ways.1

A recent monograph reviewed the issues of composability in some depth (Davis and Anderson, 2003). It identified and discussed the impediments along four dimensions:

  • Complexity of the system being modeled.

  • Difficulty of the objective for the context in which the composite M&S will be used.

  • Strength of the underlying science and technology, including standards.

  • Human considerations, such as the quality of management, the existence of a common community of interest, and the skill and knowledge of the workforce.

The monograph had recommendations for actions in each category, but that material will not be repeated here. Instead, this discussion builds on the earlier work.

DISCUSSION OF SELECTED ISSUES

This section, which makes up the remainder of Appendix C, provides somewhat extended discussion of particularly important topics. It draws on the recent literature but offers some new ideas as well.

Focusing on Challenging But Feasible Objectives, Not Grails

As discussed in Chapter 1, the committee recommends moving away from the plug-and-play ideal toward a more feasible but still challenging definition and conception of composability. It is sometimes argued, however, that establishing ideals and then moving toward achieving them is useful even if the ideals are unachievable. We believe that to be a dangerous strategy for the DoD as it seeks to promote composability. Among the problems with such an ideal-focused strategy are the following:

  • Those writing proposals to the government tend to promise to deliver what the government demands, even if they know better. The proposal that “stretches” the most (i.e., is the most unscrupulous in its promises) may win because those reviewing the proposal see lesser proposals as insufficiently ambitious or as insufficiently responsive.

  • Those developing the models will have strong disincentives for being clear about the shortcomings of their approach. Thus, they will emphasize that they are delivering composability when in fact what they are delivering provides “engineered composability” within a narrow domain. The composability problem may seem to have been solved, when that is most certainly not the case.

  • Because the shortcomings are submerged, higher-level managers may not even recognize the need to be investing in methods that would mitigate the problems.

  • Overall, a kind of intellectual corruption can set in, in which public comments generally (and even, sometimes, informal discussions) skirt around known problems, to the point where the proverbial emperor’s lack of clothes is not even consciously recognized.

1

These definitions and distinctions are based on suggestions drawn from several recent studies—Petty and Weisel (2003); Davis and Anderson (2003); and Page et al. (2004). Although by no means universal, they appear to be sound and quite useful.

In time, such problems are usually resolved because people seek to do a good job, and facts do matter. Moreover, it is natural for scientists and engineers to be questioning and assertive. It is gratifying to note that the technical literature now has numerous candid and thoughtful articles on the subject of composability, articles that go well beyond those of a few years ago. Nonetheless, recovering from the promulgation of poorly conceived goals and initiatives can take many years.2

Syntax, Semantics, Pragmatics, Assumptions, and Validity

A continuing difficulty in the discussion of composability is distinguishing among the kinds of problems that arise. The usual discussion still refers to problems being either syntactic or semantic, but the situation is more complex than is conveyed by that description. The following items provide a tutorial and recommend distinctions that need to be made systematically. They deal with syntax, semantics, pragmatics, assumptions, and validity.

  • Syntax. In shorthand, consistency of syntax means that two models can operate together. That is, the digital output from one can be read as the digital input to the other. Protocols such as HLA are designed to assure syntactical consistency among models to be connected.

  • Semantics. Semantics is usually defined as “meaning.” Thus, if some data can flow from Model A to Model B, the semantic question is seen as whether those data are understood by both A and B to mean the same thing (e.g., the current personnel strength of a battalion). To computer scientists, however, the operational meaning of semantic consistency is often much narrower and computer oriented.3

  • Pragmatics. Consistency of meaning is not always straightforward because the same word means very different things depending on context. Moreover, key aspects of that context may not be explicit. This is the realm of pragmatics. The term “force ratio,” for example, may refer to the ratio of forces as measured at a theater level, an operational level, or a tactical level. Even if one knows that the tactical level is the one intended, the term remains ambiguous because it can refer to battalion-level conflict or something more microscopic, such as when individual fighting vehicles and infantry are engaged. The related force ratios are not the same. Another aspect of pragmatics involves ontology. One model may have a built-in concept that a squad is a component of a platoon, which is a component of a company, and so on. Another model may have no such assumed structure because, in the context for which it was built, such an assumption was not necessarily correct (as in a treatment of special forces operations, which often involve very small teams that are not effectively associated with higher units during their operations). For the two models to compose well, it may be necessary for any such discrepancy to be resolved.4

  • Assumptions. The difficulties continue. The data’s meaning may be well understood, but the way Model A calculates the data may not be suitable for what B needs. Sometimes this may be a matter of precision, but other times it may be considerably more subtle. “The temperature,” for example, might refer to a surface temperature, an average temperature over some path into the ocean relevant to a sensor, the ambient air temperature on a battlefield with very hot moving objects, etc. Still other times, the calculation reflects assumptions that are only sometimes valid. This is more than pragmatics as that term is usually used in linguistics; it involves “assumptions.”

  • Validity. And, finally, there is the question of whether the assumptions are correct. If composability includes the requirement that a composition be meaningful, that would seem to require adequate validity. Underlying assumptions, however, may be consistent but wrong.

2

3

Examples of these relatively straightforward semantic issues are lexicographic problems—i.e., the syntax may work, but the usage cannot make sense. Some homely examples are division by zero, providing an alleged value of an array that is inconsistent with the array’s dimensionality, or using a character not permitted by the language.

4

The issue of ontological assumptions is emphasized in recent work by Andreas Tolk and students (Turnitsa, 2005). Other authors regard such matters as more a matter of assumptions than of pragmatics, which they see as contextual meaning (Davis and Anderson, 2003; Hofmann, 2004). Most software engineers and computer scientists subsume pragmatics, assumptions, and validity under pragmatics (see, e.g., Szyperski, 2002).


The distinctions we have noted are closely related to, but not identical with, the conceptual levels of interoperability defined and discussed by Tolk and students at Old Dominion University (Turnitsa, 2005). Those range from no interoperability through physical, syntactic, semantic, pragmatic, dynamic, and conceptual interoperability, with the last corresponding to complete substantive agreement between the models in question.
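To make the distinctions concrete, consider the following minimal sketch (in Python; all names and metadata fields are hypothetical, not a proposed standard) in which each component model publishes a small interface specification, so that a composition check can report syntactic, semantic, and pragmatic mismatches separately. Assumptions and validity, as discussed above, are represented only as free text for human review.

```python
from dataclasses import dataclass, field

@dataclass
class PortSpec:
    """Declared properties of one output (or input) of a component model."""
    name: str            # variable name, e.g., "force_ratio"
    dtype: str           # syntactic level: machine representation
    meaning: str         # semantic level: what the value denotes
    context: str         # pragmatic level: the level/echelon at which it applies
    assumptions: list[str] = field(default_factory=list)  # for human review

def check_composition(produced: PortSpec, expected: PortSpec) -> list[str]:
    """Compare what model A produces against what model B expects,
    reporting problems level by level rather than as one opaque failure."""
    problems = []
    if produced.dtype != expected.dtype:
        problems.append(f"syntactic mismatch: {produced.dtype} vs {expected.dtype}")
    if produced.meaning != expected.meaning:
        problems.append(f"semantic mismatch: '{produced.meaning}' vs '{expected.meaning}'")
    if produced.context != expected.context:
        problems.append(f"pragmatic mismatch: '{produced.context}' vs '{expected.context}'")
    for a in produced.assumptions:
        problems.append(f"assumption to review: {a}")
    return problems

# A pragmatics failure from the text: both sides agree on syntax and meaning,
# but the "force ratio" is computed at different echelons.
out = PortSpec("force_ratio", "float64", "ratio of friendly to enemy force scores",
               "theater level", ["scores aggregated by firepower index"])
need = PortSpec("force_ratio", "float64", "ratio of friendly to enemy force scores",
                "battalion level")
for p in check_composition(out, need):
    print(p)
```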

Separating Conceptual Model, Implemented Model, Simulator, and Experimental Frame
Background

The desirability of distinguishing between a conceptual model and a program representing a particular implementation of that conceptual model has been emphasized for decades by thoughtful scholars such as Zeigler et al. (2000) but ignored by the vast majority of model builders, who leap directly into programming and often leave behind very little that might pass as respectable documentation. The result is not only a product that is difficult to understand or modify, but one that is linked in subtle and sometimes insidious ways to the particular implementation environment (programming language and simulator). Furthermore, the result is often not well designed because it was not adequately reviewed and iterated at an abstract level, where design plays such an important role. These matters are increasingly being appreciated, as reflected in the success of the Model Driven Architecture (MDA) effort and some modern textbooks that teach design and clarity of thought while remaining practical (Blaha and Rumbaugh, 2005).

Rethinking the Issue in the Light of Conflicting Considerations

There are, then, strong reasons for advocating separation of conceptual model, implemented model, simulator, and experimental context in model development and usage. At the same time, there are strong technological pressures working in the opposite direction. No one today would think of working out the detailed specifications of a model on a typewriter, to be handed over subsequently to a programmer to implement. Even those who favor designing in UML sometimes have mixed emotions because, in practice, so much is learned by iteration—early design notions represented in UML may prove foolish when someone gets into details. If the same person is designing and implementing, the iteration may be easy, but if formalized separations exist, then the person closest to the code may have to go through what amounts to an appeals process in order to change the design reflected in the UML. That may be good in the sense of maintaining discipline and avoiding ad hoc changes, but it may be bad in the sense of delaying and obstructing important improvements.

The tensions between top-down and bottom-up are longstanding and will surely continue. They are accompanied today by the reality of horizontal, distributed collaborations and by the improved efficiency of integrated environments that provide tools for everything from diagram sketching through programming and statistical analysis of simulation outcomes. It would seem most unwise (and probably most unfruitful) to argue that the strict separations suggested by the older academic writings be reimposed. What, then, might be done?

Tentative Suggestions

If we rethink what the purposes of the separations are, the desirable path becomes clearer. Those purposes include the following:

  • A conceptual model should exist because it displays the big picture, the design, and the linkage between application and model. It is the conceptual model that can best be communicated, in different forms, to clients, to other modelers, or to those concerned with composition. Realistically, the existence of good documentation depends on the existence of a conceptual model.

  • The virtues of a conceptual model disappear if it is cluttered by implementation details.

  • The ability to comprehend a conceptual model and conceive of alternative implementations depends upon the conceptual model being expressed in implementation-independent terms.

One way to deal with such considerations is to develop and maintain a rigorously independent conceptual model, such as might be expressed in UML diagrams augmented with other methods. However, no one who has worked with higher-level modeling environments such as Mathematica, Analytica, iThink, or Extend would regard that approach as the only way, or even necessarily the best way. Suppose, for example, one had designed a model using one of these systems. One would need all or most of the following: a visual representation of the model, a hierarchical text-based representation, a clear list of inputs and outputs, definitions, and probably many notes. If one wanted to implement the model in some other system, it would often be rather easy to do so. This would require recoding, not merely sending the electronic files from one system to another; coding itself, however, is not so time-consuming as are conceiving and designing.
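To illustrate, here is a minimal sketch (in Python; all names and fields are hypothetical) of what such an exported, implementation-independent description might contain. Nothing in it depends on the environment in which the model was built; a reader, or another developer, could work from this record alone.

```python
from dataclasses import dataclass, field

@dataclass
class Variable:
    name: str
    definition: str      # prose definition, independent of any tool
    units: str

@dataclass
class ConceptualModel:
    """An implementation-independent record of a model's design."""
    name: str
    purpose: str
    inputs: list[Variable]
    outputs: list[Variable]
    relations: list[str]                                # equations/rules as plain text
    hierarchy: list[str] = field(default_factory=list)  # component outline
    notes: list[str] = field(default_factory=list)

attrition = ConceptualModel(
    name="SimpleAttrition",
    purpose="Aggregate two-sided attrition for exploratory analysis.",
    inputs=[Variable("B", "Blue force strength", "units of combat power"),
            Variable("R", "Red force strength", "units of combat power"),
            Variable("k_b", "Red effectiveness against Blue", "1/day"),
            Variable("k_r", "Blue effectiveness against Red", "1/day")],
    outputs=[Variable("B_t", "Blue strength over time", "units of combat power")],
    relations=["dB/dt = -k_b * R", "dR/dt = -k_r * B"],
    hierarchy=["SimpleAttrition", "  attrition process", "  reporting"],
    notes=["Valid only while both sides remain engaged."],
)
```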

What matters most, then, is that the conceptual model can be viewed and comprehended separately, without being caught up in the details of implementation. If that conceptual model happens to have been developed simultaneously with implementation, that fact does not materially interfere with the purpose of the conceptual model. Moreover, having proven the feasibility of the conceptual model with an implementation has great advantages.

As for having implemented the model in a particular language, that may or may not be a problem. Higher-level languages such as those in Excel, Analytica, and similar tools are for the most part usable as a kind of pseudocode when discussing the conceptual model. The clutter associated with programming detail can be suppressed when doing so.5

Simulators. Separating the simulation model from the simulator is important because simulators often have built-in limitations that affect the validity of results. This could be something obvious, such as the time step permitted in time-stepped simulation, the inability to vary the size of time steps dynamically, or the inability to implement discrete-event simulation. It could be more insidious, however, such as when the “simulator” deals not only with time but also with terrain and environment, in which case the “simulator” is actually modeling part of the system, perhaps in ways that constrain or override intentions of the conceptual model. This can not only frustrate the intentions of the designer but also make verification and validation extremely difficult.
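The separation can be made concrete with a minimal sketch, assuming nothing beyond standard Python: the model is a pure rate-of-change description, and two interchangeable simulators execute it. A simulator limitation, such as the fixed time step, is then visible in the simulator rather than buried in the model.

```python
def cooling_model(state: float, t: float) -> float:
    """The model proper: returns d(state)/dt. It knows nothing about stepping."""
    ambient, rate = 20.0, 0.5
    return -rate * (state - ambient)

def fixed_step_simulator(model, state, t_end, dt):
    """One simulator: forward Euler with a fixed time step."""
    t = 0.0
    while t < t_end:
        state += dt * model(state, t)
        t += dt
    return state

def refining_simulator(model, state, t_end, dt):
    """Another simulator for the same model: crude error control that
    compares one full step against two half steps and refines as needed."""
    t = 0.0
    while t < t_end:
        full = state + dt * model(state, t)
        half = state + (dt / 2) * model(state, t)
        half += (dt / 2) * model(half, t + dt / 2)
        if abs(full - half) > 1e-3:   # step too coarse for this model; refine
            dt /= 2
            continue
        state, t = half, t + dt
    return state

print(fixed_step_simulator(cooling_model, 90.0, 10.0, 1.0))
print(refining_simulator(cooling_model, 90.0, 10.0, 1.0))
```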

It seems, however, that the built-in simulator function of an integrated modeling environment should not be seen as particularly troublesome, because if its approach to simulation (e.g., continuous rather than discrete-event) is a problem, that will likely be evident, and the developer can choose to reimplement the model once its basic design is frozen. Other reasons for doing so might also exist; they might involve efficiency, interoperability with other models, and so on. In contrast, it is indeed troublesome if a conceptual model has been implemented in a system that somehow locks in a particular concept of terrain (e.g., grids versus hexagons versus a vector approach), command and control, or other substantive features of the real world. Thus, special care should be taken to separate the conceptual model from those implementation-specific features. How to accomplish that in general is not clear, but the entanglement of models with their programming environment’s infrastructure (e.g., its treatment of terrain) can be very troublesome to composition efforts (Hofmann, 2004).
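One way to limit such lock-in, sketched below under the assumption that the model's terrain needs can be narrowed to a few queries, is to write the model against an abstract terrain interface and treat grid, hexagonal, or vector representations as interchangeable implementations behind it. The interface shown is illustrative only, not a proposed standard.

```python
from abc import ABC, abstractmethod

class Terrain(ABC):
    """The only view of terrain the model is allowed: a few queries, with
    the representation (grid, hex, vector) hidden behind them."""
    @abstractmethod
    def elevation(self, x: float, y: float) -> float: ...
    @abstractmethod
    def line_of_sight(self, a: tuple, b: tuple) -> bool: ...

class GridTerrain(Terrain):
    """One interchangeable implementation: a uniform elevation grid."""
    def __init__(self, cells, cell_size):
        self.cells, self.cell_size = cells, cell_size
    def elevation(self, x, y):
        i, j = int(y / self.cell_size), int(x / self.cell_size)
        return self.cells[i][j]
    def line_of_sight(self, a, b):
        # Simplified: blocked if the midpoint is higher than both endpoints.
        mx, my = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2
        return self.elevation(mx, my) <= max(self.elevation(*a), self.elevation(*b))

def detection_model(sensor, target, terrain: Terrain) -> bool:
    """A model written against the interface composes with any terrain."""
    return terrain.line_of_sight(sensor, target)

flat = GridTerrain([[0.0] * 10 for _ in range(10)], cell_size=100.0)
print(detection_model((50.0, 50.0), (850.0, 850.0), flat))
```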

Experimental Context. The concept of explicitly defining the assumptions, purpose, and plan of analysis in an experimental plan (what is often referred to as defining the experimental frame) is very important.6 It is also important to distinguish this aspect of the MS&A effort from model development, although thinking through use cases and the anticipated-and-plausible analytical requirements can strongly affect development. Since the tools available in the MS&A community are not generally well developed on this matter, workers tend to do much of what is required “on the side,” perhaps using Excel or some special scripts to help themselves organize and drive simulation and subsequent analysis. An exception to this, which can be seen as an existence proof, is Mathematica. Those who use Mathematica are able to write text, design, program, simulate, analyze, chart, and record without leaving the Mathematica environment. Further, they can choose to some extent how they wish to conduct dynamic simulation, and they can call upon a wide range of library functions, many of them subject-area-specific (economics, physics, and so on). There are both advantages and disadvantages to such an approach, but the advantages are considered persuasive by a great many people in the scientific and other communities (e.g., economics).
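A minimal sketch of an explicit experimental frame follows (in Python; the frame fields and the stand-in model are hypothetical). The point is that purpose, factors, measures, and replications are recorded apart from the model and then drive the runs, rather than living in ad hoc scripts on the side.

```python
import itertools
from dataclasses import dataclass

@dataclass
class ExperimentalFrame:
    """Assumptions, purpose, and plan of analysis, kept apart from the model."""
    purpose: str
    factors: dict        # input name -> list of levels to explore
    measures: list       # output names to record
    replications: int

def run_frame(frame: ExperimentalFrame, run_model):
    """Exercise the model over the full factorial design in the frame."""
    results = []
    names = list(frame.factors)
    for levels in itertools.product(*frame.factors.values()):
        case = dict(zip(names, levels))
        for rep in range(frame.replications):
            outputs = run_model(**case, seed=rep)
            results.append({**case, "rep": rep,
                            **{m: outputs[m] for m in frame.measures}})
    return results

# A stand-in model (hypothetical) so the frame can be demonstrated end to end.
def toy_engagement(detect_range, speed, seed):
    return {"exchange_ratio": detect_range / (speed + seed + 1)}

frame = ExperimentalFrame(
    purpose="Explore sensitivity of exchange ratio to sensing and mobility.",
    factors={"detect_range": [10, 20], "speed": [1, 2]},
    measures=["exchange_ratio"],
    replications=2,
)
for row in run_frame(frame, toy_engagement):
    print(row)
```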

The panel’s tentative conclusions on this are (1) that the move toward powerful integrated environments is technologically inexorable and unquestionably valuable; (2) that the question, then, is how to mitigate the entanglement problems caused by such an approach; and (3) that the solution likely lies in incorporating tools to generate and export implementation-independent characterizations of the conceptual model, the method of simulation, and the depiction of experimental context. The DoD should invest in understanding what is feasible here, what it might request or require, and what incentives would make such things feasible. Best-practices manuals might also prove quite useful, especially those with detailed examples.

Ontologies

Over the last decade, a great deal of research has gone into the development of ontologies. Applications include artificial intelligence (including for autonomous systems), decision support, and many others. Ontology work is likely to prove quite important in the advancement of composability as well, since it is a key element in addressing semantic issues—by standardization in some cases and by agile transformation of representations in others. There is a rich literature on the subject of model ontologies, but we mention here only one example, the Web Ontology Language (OWL), which is under development by the World Wide Web Consortium (www.w3.org/2004/OWL).
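The flavor of the problem can be conveyed without OWL machinery. In the following self-contained sketch (plain Python; the unit names and relations are illustrative), two models' echelon ontologies are compared through a part-of relation, exposing exactly the discrepancy described earlier, in which one model assumes a fixed echelon hierarchy and the other, built for special operations, does not.

```python
def ancestors(term, part_of):
    """Transitive closure of a part-of relation: everything `term` belongs to."""
    chain = []
    while term in part_of:
        term = part_of[term]
        chain.append(term)
    return chain

# Model A assumes a fixed echelon structure (squad within platoon within ...).
model_a = {"squad": "platoon", "platoon": "company", "company": "battalion"}
# Model B treats small teams as free-standing, with no assumed higher unit.
model_b = {"team": None}

def describe(term, ontology):
    """Report how one model's ontology situates a term, if at all."""
    if term not in ontology:
        return "unknown term"
    chain = ancestors(term, {k: v for k, v in ontology.items() if v})
    return "part of " + " > ".join(chain) if chain else "no assumed higher echelon"

for t in ["squad", "team"]:
    print(f"{t}: A says {describe(t, model_a)}; B says {describe(t, model_b)}")
```

For the two models to compose, any such divergence must be resolved by an explicit mapping; the check above merely makes the divergence visible.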

5

For expert programmers, even sketches of Java code can be effectively more like pseudocode than something locking people into a particular language.

6

Recently, suggestions have been made about treating “context” separately and formalizing it as an essential aid to efforts on composability (Yilmaz, 2004). Among the purposes is to increase the odds of recognizing important assumptions affecting sound use of a component. The article, and prior work with Tuncer Oren, also encourages further work on introspective, reflective models—i.e., models that have the ability to report on their own limitations and validity. Some of the most ambitious notions for such work have been discussed under the rubric of “wrapping” (Landauer and Bellman, 1996), in work aimed at giving models surprisingly strong concepts of themselves and the ability to report on them. This goes well beyond what is ordinarily meant by the concept of “wrapper.”

Standardization of Representations

Even in the autumn of 2003, it was evident that there were great opportunities for DoD to exploit recent commercial developments that are generating de facto representational standards. The best known of these is the Model Driven Architecture (MDA) work of the Object Management Group (http://www.omg.org). One recent manifestation of this is the release of the second edition of a well-known text for modeling and design (Blaha and Rumbaugh, 2005), which has been substantially rewritten so as to use UML2 rather than the diagrammatic notation of the original edition. Enthusiasm for UML representation is strong and growing; it is a trend that DoD should join, support, and either influence or augment. Many of the issues of simulation composability are not solved by UML as it now exists, but—as so often happens—enthusiasms run high and shortcomings are often not mentioned. The committee believes that the concepts embodied in UML methods should be augmented by the more detailed specification methods necessary in simulation, such as the DEVS methods developed at the University of Arizona (Zeigler et al., 2000) or the Systems Modeling Language (SysML) being developed by the Object Management Group, and that such augmentation will prove valuable to composability and interoperability. At a different level of detail, a strong base in both computer science and technology now exists for the mechanisms necessary to compose models. Much of this is associated with the Extensible Modeling and Simulation Framework (XMSF).7
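To indicate the kind of specification detail that DEVS adds beyond structural diagrams, the following is a minimal sketch of a DEVS atomic model and a toy coordinator in Python. The model, a one-job processor, is illustrative; the time-advance and transition functions are the standard elements of the formalism.

```python
INFINITY = float("inf")

class Processor:
    """A DEVS atomic model: a server that holds one job for a fixed time.
    State, time advance (ta), internal/external transitions, and output
    are specified explicitly, independent of any simulator."""
    def __init__(self, service_time=3.0):
        self.service_time = service_time
        self.job = None
        self.sigma = INFINITY           # time remaining to next internal event
    def ta(self):
        return self.sigma
    def delta_int(self):                # internal transition: job completes
        self.job, self.sigma = None, INFINITY
    def delta_ext(self, elapsed, x):    # external transition: job arrives
        if self.job is None:
            self.job, self.sigma = x, self.service_time
        else:
            self.sigma -= elapsed       # busy: new job lost, clock keeps running
    def output(self):                   # output emitted just before delta_int
        return self.job

def simulate(model, arrivals, t_end):
    """A tiny event-driven coordinator for one atomic model."""
    t, last = 0.0, 0.0
    arrivals = sorted(arrivals)
    while t < t_end:
        t_int = last + model.ta()
        t_arr = arrivals[0][0] if arrivals else INFINITY
        t = min(t_int, t_arr)
        if t >= t_end:
            break
        if t == t_int:
            print(f"t={t}: completed {model.output()}")
            model.delta_int()
        else:
            _, job = arrivals.pop(0)
            model.delta_ext(t - last, job)
        last = t

simulate(Processor(), [(1.0, "job-A"), (2.0, "job-B")], t_end=10.0)
```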

Retrodocumentation

A continuing challenge for DoD composability is that many (even most) of the relevant existing models are poorly documented legacy code. Although the temptation exists to postulate new models for everything, the reality is that legacy models will be around for a very long time. This suggests to the panel that the DoD should define and invest in a substantial program of retrodocumentation, with or without the cooperation of the original developers, who may have moved on or who may have proprietary concerns.

Conceptual models can be constructed after the fact, and modern representational techniques make it likely that the results would have enduring value if the efforts were done well. Doing the job well could include systematically uncovering deeply buried assumptions about appropriate contexts for the models’ use and about phenomena that may not be correctly described. Some of the methods that could be adopted here include these:

  • Developing and testing a best-practices guide on how to conduct reviews designed to uncover hidden assumptions.8

  • Developing pockets of expertise in providing independent consulting and advice on such matters. Analogies exist to current-day teams that provide independent verification and validation, and to “red teams” that conduct independent tests of organizations’ information systems.
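Done well, such a retrodocumentation effort could produce machine-readable artifacts as well as prose. The following sketch (in Python; all fields and entries are hypothetical, not drawn from any actual legacy model) indicates the kind of record that might result and that the wrappers discussed next could consume.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    """What a retrodocumentation effort might recover from legacy code."""
    name: str
    intended_contexts: list      # contexts for which use is defensible
    input_domains: dict          # input name -> (low, high) validated range
    buried_assumptions: list     # uncovered by review, with criticality flags
    known_misrepresentations: list = field(default_factory=list)

legacy_record = ModelRecord(     # all entries illustrative
    name="LegacyGroundCombat",
    intended_contexts=["brigade-level force-on-force, open terrain"],
    input_domains={"force_ratio": (0.2, 5.0), "frontage_km": (1.0, 40.0)},
    buried_assumptions=[
        ("attrition coefficients calibrated to one historical theater", "load-bearing"),
        ("no treatment of irregular forces", "load-bearing"),
    ],
)
```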

Improved Wrappers and Metadata

Building on the lessons from the retrodocumentation efforts, among others, it should be possible to develop wrappings for legacy models that will be significantly better than the interfaces currently developed for interoperability purposes and to include in them mechanisms for self-monitoring.9 For example, models could report when they are being fed inputs that are outside the realm of acceptable domain values or when they see internal state variables taking on values that are either implausible or indicative of operating regimes for which the models are unreliable. Such measures are potentially open-ended, of course, but the committee speculates that much could be accomplished with relatively modest effort. “The best” should not be the enemy of “the much-better-than-now.”
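A minimal sketch of such a self-monitoring wrapper follows (in Python; the legacy update rule and all names are hypothetical). The wrapper passes calls through to the legacy model but reports out-of-domain inputs and implausible internal states rather than failing silently.

```python
class MonitoringWrapper:
    """Wrap a legacy model's step function with the diagnostics described
    above: domain checks on inputs, plausibility checks on internal state."""
    def __init__(self, legacy_step, input_domains, state_checks):
        self.legacy_step = legacy_step
        self.input_domains = input_domains    # name -> (low, high) valid range
        self.state_checks = state_checks      # description -> predicate on state
        self.warnings = []
    def step(self, state, **inputs):
        for name, value in inputs.items():
            lo, hi = self.input_domains.get(name, (-float("inf"), float("inf")))
            if not lo <= value <= hi:
                self.warnings.append(f"input '{name}'={value} outside [{lo}, {hi}]")
        state = self.legacy_step(state, **inputs)
        for label, ok in self.state_checks.items():
            if not ok(state):
                self.warnings.append(f"state check failed: {label} (state={state})")
        return state

# A hypothetical legacy update rule, wrapped and then driven outside its
# validated domain; the wrapper reports rather than failing silently.
def legacy_step(state, force_ratio):
    return state * (1.0 - 0.3 * force_ratio)

w = MonitoringWrapper(legacy_step,
                      input_domains={"force_ratio": (0.2, 5.0)},
                      state_checks={"strength nonnegative": lambda s: s >= 0})
s = w.step(100.0, force_ratio=12.0)
print(w.warnings)
```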

Reprogramming

DoD should also be willing, in a few high-leverage cases, to pay for reprogramming legacy models that appear to be substantively valuable but technologically obsolete in troublesome ways. Unfortunately, reprogramming can be quite expensive,10 so it should not be undertaken lightly. Wrapping methods are often better for near- to midterm patches, and substantial redesign is often better if the result is to have enduring value.

7

See, for example, the Web site of the Naval Postgraduate School’s MOVES Institute and related papers (Brutzman et al., 2002).

8

Uncovering the unstated assumptions of an organization’s strategy or plan has become a well-developed methodology. Some of the same methods could be applied. For example, in assumptions-based planning (Dewar, 2003), one uncovers the assumptions and identifies those that are potentially most critical or “load-bearing.” Even if they currently appear well based, it is useful to create mechanisms for monitoring the situation so that warnings and adaptations can be made when and if conditions change. In the modeling context, we see the possibility of doing analogous things with metadata and wrappings.

9

Normal wrappings are merely interfaces that provide orderly and comprehensible mechanisms at the public interface for manipulating what may be complex internal mechanisms of older models or models from which only certain information is to be used in composition. More advanced wrappings, however, can include diagnostics (Landauer and Bellman, 1996, 1999).

10

It is worth noting, however, that the cost of reprogramming can sometimes be far less than originally estimated, at least if first-team programmers are enlisted. One example of this involved reprogramming the Army’s Janus system, which was done quickly and well and made it possible for the Army to have competition and choices of platform as it moved forward.


Rethinking the Ground Rules and Incentives

One vexing issue that frequently arises in seeking to achieve interoperability and composability is that of proprietary content. Companies that develop models and simulations invest a great deal of time and money in doing so, and the results reflect intellectual capital, which they commonly resist sharing. Programs become proprietary, with the compromise being that the programs have public interfaces necessary for required sharing, as in a federation of models. The remainder of the programs, however, is hidden. This practice, interestingly, extends even to organizations within large companies, where the various divisions may be only modestly more willing to share details with other company divisions than with outsiders.11 And, of course, the quality of model documentation is notoriously poor.

From a scientific perspective, this situation is appalling—the antithesis of open exchange—but the reasons for it are obvious. Much effort in recent years has gone into finding compromises that call for more of the details to be visible (gray-box approaches, rather than black-box approaches), but complete visibility and the opportunity to make modifications wherever needed are much rarer.

The gray-box approach can accomplish a great deal and is far superior to models being provided as black boxes with only narrow public interfaces. Nonetheless, the panel believes that DoD (and other government agencies) should rethink this entire subject and consider changes of policy and practice that would greatly increase openness. A great deal of empirical evidence exists on which to draw in such a reassessment. The open-source movement in software, with such triumphs as the UNIX and Linux systems, is one obvious example, but others exist as well. Models and simulations developed at national laboratories, universities, and not-for-profit organizations have often been much more open.12 In the business world, moreover, it is not uncommon for an organization that has a simulation developed for it to demand that the full source code and documentation be deliverable, and that it, as the recipient organization, have full ownership and rights. Again, there are many variations, as when a developer makes available good documentation and very extensive mechanisms for modification, but hides—and retains ownership of—underlying machinery that it uses whenever it develops a comparable simulation for a client. This is common practice for financial programs.

This report makes no general recommendations on what DoD should seek to do on this subject; any effort to impose a one-size-fits-all approach might be disastrous. Furthermore, decrees about openness would make no sense unless they were accompanied by fair and appropriate financial incentives and contractually binding legal language. These would require considerable thought based on experience as well as anticipation of behaviors. Nonetheless, the baseline reality is rather odious and is distinctly unhelpful if the DoD wishes to improve composability, reuse, and competition.

Theoretical Research

Based on the foregoing discussions, more theoretical work is needed to understand at least the following:

  1. Measures of potential composability that address the time and effort required and that consider the potential need for adaptations (Bartholet et al., 2005).

  2. Methods to estimate the reasonable cost of retrodocumentation, development of fewer and more intelligent wrappings, and even reprogramming to create suitable modularity.

  3. Methods to improve standardized representation of models and simulations, leaning as heavily as possible on the ongoing industry-sponsored activities but augmenting them as necessary. Currently, the representational methods are to some extent asserted to be good, without a solid description of their strengths and limitations.

  4. Achieving sound, mutually informed and calibrated families of models, simulations, and other sources of knowledge (Davis et al., 2005).

  5. Methods for formalizing in practical ways the issues of pragmatics and assumptions and for best assuring that as many such issues as possible are addressed well in documentation and metadata.

REFERENCES

Bartholet, R.G., D.C. Brogan, and P.F. Reynolds. 2005. “The computational complexity of component selection in simulation reuse.” Winter Simulation Conference 2005. Orlando, Fla.

Blaha, Michael, and James Rumbaugh. 2005. Object-Oriented Modeling and Design with UML, 2nd ed. Upper Saddle River, N.J.: Pearson Prentice Hall.

11

Even when the profit motive is not a consideration, organizations may refuse to share source code for several reasons. One is a desire to maintain tight configuration control so as to maintain quality and standardization. Another is the lack of enthusiasm for revealing imperfections: Computer programs are sometimes clumsy assemblages with less-than-first-rate coding. A third reason is simply that knowledge is power. Even if no direct financial benefit is to be had, there can be substantial indirect benefits from having unique knowledge and expertise.

12

A starting point for such an assessment might include old standby simulations such as Janus, TACWAR, EADSIM, and more recent systems such as MODSIM. The rules governing distribution of source code, access to it, and ability to make modifications have varied considerably, thereby producing different examples to consider.


Brutzman, Don, Michael Zyda, J. Mark Pullen, and Katherine L. Morse. 2002. “Extensible modeling and simulation framework (XMSF): Challenges for Web-based modeling and simulation: Findings and recommendations.” Technical Challenges Workshop, Strategic Opportunities Symposium. Monterey, Calif.

Davis, Paul K., and Robert H. Anderson. 2003. Improving the Composability of Department of Defense Models and Simulations. Santa Monica, Calif.: RAND.

Davis, Paul K., Jonathan Kulick, and Michael Egner. 2005. Implications of Modern Decision Science for Military Decision Support. Santa Monica, Calif.: RAND.

Dewar, James. 2003. Assumptions-Based Planning. London, England: Cambridge University Press.

Hofmann, Marco. 2004. “Challenges of model interoperation in military simulations.” Simulation 80(12):659-667.

Landauer, Chris, and Kirstie L. Bellman. 1996. “Constructed complex systems: Issues, architectures, and wrappings.” Proceedings EMCSR 96: Thirteenth European Meeting on Cybernetics and Systems Research, Symposium on Complex Systems Analysis and Design, Vienna, pp. 233-238.

Landauer, Christopher, and Kirstie L. Bellman. 1999. “Lessons learned from wrapping systems.” Fifth International Conference on Engineering of Complex Computer Systems (ICECCS). Los Alamitos, Calif.: IEEE Computer Society Press, pp. 132-142.

National Research Council (NRC). 1997. Modeling and Simulation. Vol. 9 of Technology for the United States Navy and Marine Corps: 2000-2035. Washington, D.C.: National Academy Press.

Page, Ernest H., Richard Briggs, and John A. Tufarolo. 2004. “Toward a family of maturity models for the simulation interconnection problem.” Proceedings of the Spring Simulation Interoperability Workshop. 04S-SIW-145.

Petty, Mikel D., and Eric W. Weisel. 2003. “A composability lexicon.” Proceedings of the Spring Simulation Interoperability Workshop. Orlando, Fla.

Szyperski, Clemens. 2002. Component Software: Beyond Object-Oriented Programming, 2nd ed. New York, N.Y.: Addison-Wesley.

Turnitsa, Charles. 2005. “Extending the levels of conceptual interoperability model.” Summer Simulation Conference. Cherry Hill, N.J.

Yilmaz, Levent. 2004. “On the need for contextualized introspective models to improve reuse and composability of defense simulations.” Journal of Defense Modeling and Simulation 1(3):141-151.

Zeigler, Bernard, Herbert Praehofer, and Tag Gon Kim. 2000. Theory of Modeling and Simulation, 2nd ed. San Diego, Calif.: Academic Press.
