Research Needs for Human Factors (1983)

2
HUMAN DECISION MAKING

Work organizations, and those who staff them, rise and fall by their ability to make decisions. These may be major strategic decisions, such as the deployment of forces or inventories, or local tactical decisions, such as how to promote, motivate, and understand particular subordinates. To list the kinds of decisions that need to be made and the stakes that sometimes ride on them would be to repeat the obvious. Decisions are made explicitly whenever one consciously combines beliefs and values in order to choose a course of action. They are made implicitly whenever one relies on a ritualized response (habit, tradition) to cope with a choice between options. Repetition of past decisions may result in suboptimal choices; however, it may also provide a ready escape from the difficulties and expense of explicit decision making. The reasons decision making often seems (and is) so difficult are quite varied, as are the opportunities for interventions and the needs for human factors research to buttress those interventions.

One problem is information overload: More things need to be considered than can be held and manipulated in one’s head simultaneously. Coping with such computational problems is an ideal task for computers, and a variety of software packages are available that in one way or another combine decision makers’ beliefs and values in order to produce a recommendation. Choosing among and using these decision aids forces one to face a second inherent difficulty of decision making: not knowing how to define (or structure) the decision problem and to assess one’s own values, that is, how to make trade-offs between competing objectives. Because analytic decision-making methods cannot operate without guidance on these issues, judgment is an inevitable part of the decision-making process, as is the need for judgment elicitation methods to complement the decision aid (see Chapter 3). A third difficulty is knowing when to stop analyzing and start acting. Taking that step requires one to assess the quality of the decision-making process and reconcile any remaining conflicts between the recommendation it produces and that produced by one’s own intuitions. To help one through this step, a decision aid must reveal its own limits in ways that are psychologically meaningful. A fourth difficulty is that in many interesting decisions one knows too little to act confidently. When uncertainty is a fact of life, the role of good design is to ensure that the best use is made of all that is known.

The principal author of this chapter is Baruch Fischhoff.

The existence of these four problems is common knowledge. Their resolution is complicated by a fifth difficulty whose identification requires research: People’s commonsense judgments are subject to robust and systematic biases. These biases make it difficult to rely on intuition as a criterion for the adequacy of decisions and the methods that produce them. Decision aids must accommodate these biases and may require supplementary training exercises lest their recommendations be adopted only when they affirm intuitions that are known to be faulty.

Given the multitude of decisions that are made, any research or design effort that made even a minute contribution to the quality of a minute proportion of all decisions would bear a large benefit in absolute terms. Proving that such a benefit had been derived would be as difficult as it is in most areas of human factors work. Whenever uncertainty is involved, better decisions produce better outcomes only over the long run. That makes it difficult to establish the validity of bona fide improvements and easy to fall prey to highly touted methods with good face validity, but little else. A sound research base is needed not only to develop better decision-making methods, but also to give users a fighting chance at being able to identify which methods are indeed better for their purposes.

BACKGROUND

Ad hoc advice to decision makers can be traced from antiquity to the Sunday supplements. Scientific study of decision making probably begins with the development of statistical or Bayesian decision theory by Borel, Ramsey, de Finetti, von Neumann, Morgenstern, Venn, Wald, and others. They showed how to characterize and interrelate the primitives of a general model of decision-making situations, highlighting its subjective elements. The development of scientific decision aids could be traced in the work of Edwards, Raiffa, Schlaifer, and others, who showed how complex real-world decision situations could be interpreted in terms of the general model. Essential to this model is the notion that decision-making problems can be decomposed into components that can be assessed individually, then combined into a general recommendation that reflects the decision makers’ best interest. Those components are typically described as options, beliefs, and values; or alternatives, opinions, and preferences; or some equivalent triplet of terms. They are interrelated by an integration scheme called a decision rule or problem structure (e.g., Fischhoff, et al., 1981; Sage, 1981).

More generally, decision-making models typically envision four interrelated steps.

  1. Identify all relevant courses of action among which the decision maker may choose. This choice among options (or alternatives) constitutes the act of decision; the deliberations that precede it are considered to be part of the decision-making process.

  2. Identify the consequences (advantages and disadvantages) that may arise as a result of choosing each option; assess their relative attractiveness. In this act the decision maker’s values find their expression. Although these values are essentially personal, they may be clarified by techniques such as multiattribute utility analysis and informed by economic techniques that attempt to establish the market value of consequences.

  3. Assess the likelihood of these consequences’ being realized. These probabilities may be elicited by straightforward judgmental methods or with the aid of more sophisticated techniques, such as fault tree and event tree analysis. If the decision maker knows exactly what will happen given each course of action, it then becomes a case of decision making under conditions of certainty and this stage drops out.

  4. Integrate all these considerations in order to identify what appears to be the best option. Making the best of what is or could be known at the time of the decision is the hallmark of good decision making. The decision maker is not to be held responsible if this action meets with misfortune and an undesired outcome is obtained.

These steps are both demanding and vague. Fulfilling them requires considerable attention to detail and may be accomplished in a variety of ways. Moreover, they may not even be followed sequentially, if insights gained at one step lead the decision maker to revise the analysis performed at a different step. This flexibility has produced a variety of models and methods of decision making whose interrelations are not always clearly specified.
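
For concreteness, the four steps can be reduced to a minimal computational sketch, given here in Python. All of the options, probabilities, and utilities are invented for illustration; in a real application each would have to be elicited from the decision maker.

    # A minimal sketch of the four-step model described above.
    # All options, probabilities, and utilities are hypothetical.

    # Step 1: identify the courses of action.
    options = ["launch now", "delay one month"]

    # Steps 2 and 3: for each option, list the consequences, the
    # decision maker's utility for each (step 2), and the judged
    # probability of each (step 3). Probabilities sum to 1 per option.
    consequences = {
        "launch now":      [("success", 0.6, 100), ("failure", 0.4, -50)],
        "delay one month": [("success", 0.8,  80), ("failure", 0.2, -30)],
    }

    def expected_utility(option):
        """Step 4: integrate beliefs and values into a single score."""
        return sum(p * u for (_label, p, u) in consequences[option])

    # The decision rule: recommend the option with the highest score.
    for option in options:
        print(f"{option}: EU = {expected_utility(option):.1f}")
    print("Recommended:", max(options, key=expected_utility))

Everything of substance, of course, lies in where those numbers come from; the sketch only shows how little machinery the integration step itself requires.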

The opportunity for routinizing and merchandising these decision-making procedures led to one of the academic and consulting growth industries of the 1970s. A wide variety of software packages and firms can now bring the fruits of these theoretical advances to practicing decision makers. Decision analysis, the most common name for these procedures, is part of the curriculum of most business schools. Although it has met considerable initial resistance from decision makers because of its novelty and because of the explicitness about values and beliefs that it requires, decision analysis seems to be gaining considerable acceptance (e.g., Bonczek, et al., 1981; Brown, et al., 1974; Raiffa, 1968). This acceptance seems, even now, to go beyond what could be justified on the basis of any empirical evidence of its efficacy. Figure 2–1 gives some examples of the contexts within which decision-aiding schemes relying on interactive computer systems have been operating and have been reported in the professional literature. Figure 2–2 is similar to the summary printout of one such scheme, which offers physicians on-line diagnoses of the causes of dyspepsia.

Behavioral decision theory (e.g., Einhorn and Hogarth, 1981; Slovic, et al., 1979; Wallsten, 1980) has taken decision aiding out of the realm of mathematics and merchandising into the realm of behavioral research by recognizing the role of judgment in structuring problems and in eliciting their components. Researchers in this field have studied, in varying degrees of detail, the psychological processes underlying these judgments and the ways in which they can be improved through training, task restructuring, and decision-aid design. A particular focus has been on the identification and eradication of judgmental biases. The research described below is that which seems to be needed to help behavioral decision research fulfill this role.


FIGURE 2–1 Examples of Operating Decision-Aiding Systems

An important development in this research over the last decade has been its liberation from the mechanistic models of behavior inherited from economics and philosophy. The result has been more process-oriented theories, attempting to capture how people do make and would like to make decisions (e.g., Svenson, 1979). This change was prompted in part by the realization that mechanistic models offer little insight into central questions of applications, such as how action options are generated and when people are satisfied with the quality of their decisions. These developments are reflected in the research described below.

There may seem to be a natural enmity between those purveying techniques of decision analysis and those studying their behavioral underpinnings, with the latter revealing the limits of the procedures that the former are trying to sell. In general, however, there has been rather good cooperation between the two camps. Basic researchers have often chosen to study the problems that practitioners find most troublesome, and practitioners have often adopted basic researchers’ suggestions for how to improve their craft. For example, in both commercial and government use, one can find software packages and decision-making procedures that have been redesigned in response to basic research. Established channels (e.g., conferences, paper distribution lists) exist for members of this community to communicate with one another. Many of the leading practitioners have doctoral-level training, usually in psychology, management science, operations research, or systems engineering, and maintain academic contacts. Indeed, the quantity of basic research has been reduced by the diversion of potential researchers to applied work, although its quality may have benefited from being better focused. Although problems remain, research in this area has a fairly good chance of being useful and of being used. In addition, none of the research issues discussed in the following sections appears to pose any serious methodological difficulties. The conventional experimental methods of the behavioral sciences are suitable for performing the recommended investigations.

FIGURE 2–2 Summary Printout of a Medical Decision-Aiding Scheme

Source: D.C. Barber and J. Fox (1981).

RESEARCH ON DECISION MAKING

Given the relatively good communication between decision-making researchers and practitioners, the primary focus of the recommendations that follow is the production of new research, as opposed to its dissemination. It seems reasonable to hope that the same communication networks that brought these applied problems to the attention of academics will carry their partial solutions back to the field. Research on decision making per se assumes that there are general lessons to be learned from studying the sorts of issues that recur in many decision problems and the responses typically made to them. In fact, the complexity of real decision problems is often so great as to prevent some lessons from being learned from direct study.

These recommendations are cast in terms of research needed to improve the use of computerized decision aids, referred to generically as decision analysis. These aids work in an interactive fashion, asking people to provide critical inputs (e.g., the set of actions that they are considering, the probability of those actions achieving various goals), combining those inputs into a recommendation of what action to take, and repeating the process until users feel that they have exhausted its possibilities. In order to be useful, an aid must: (a) deal with those aspects of decision making for which people require assistance, (b) ask for inputs in a language compatible with how people think intuitively about decision making, and (c) display its recommendations in a way that properly captures their implications and definitiveness. Achieving these goals requires understanding of (a) how people assess the quality of human performance in decision-making tasks, (b) the nature of decision-making processes, and (c) how people assess the quality of decision-making processes, both those they perform and those performed for them. The research described below is intended to contribute to all three of these aspects of systems design. It is also intended to facilitate the development of supplementary components of decision-support systems, such as exercises for improving judgment or for more creative option generation.

In this light, research that contributes to hardware or software design should also be a useful adjunct to any formal or semiformal decision-making process in which judgment plays a role. Even the devotee of decision analysis often lacks the time or resources to do anything but an informal analysis.

Decision Structuring

Decision making is commonly characterized as involving the four interrelated steps described earlier. The first three of these give the problem its structure, by specifying the options, facts, and value issues to be considered as well as their interrelations. Prescriptive models of decision making elaborate on the way these steps should be taken. Most descriptive theories hypothesize some deviation of people’s practice from a prescriptive model (Fischhoff, Goitein, and Shapira, 1981). These deviations should, in principle, guide the development of the prescriptive model. That is, they show how the prescriptive models fail to consider issues that people want to incorporate in their decisions. In practice, however, the flow of information is typically asymmetrical, with prescriptive models disproportionately setting the tone for descriptive research.

As a result, decision structuring is probably the least developed aspect of research into both prescriptive and descriptive aspects of decision making (von Winterfeldt, 1980). Prescriptive models are typically developed from the pronouncements of economists and others regarding how people should (want to) run their lives or from ad hoc lists of relevant considerations. Descriptive models tend more or less to assume that these prescriptions are correct. Neither seems to have explored fully the range of possible problem representations that people use when left to their own devices.

Paying more attention to the diverse ways in which people do make decisions would enable decision aiders to offer their clients a more diverse set of alternative ways in which they might make decisions, along with some elaboration on the typical strengths and weaknesses of each method. Some research projects that might serve this end follow.

  • Studies of dynamic structuring, allowing for iterations in the decision-making process, with each round responding to the insights gained from its predecessors (Humphreys and McFadden, 1980). Can people use such opportunities, or do they tend to stick to an initial representation? Are there initial structures that are less confining, which should be offered by the aids?

  • Studies of goals other than narrow optimization. In economic models, the goal of decision making is assumed to be maximizing the utility of the immediate decision. Recently attention has turned to other goals, such as reducing the transaction costs of making a decision, improving trust between the individuals involved in a decision, making do with limited decision-making expertise, imposing consistency over a set of decisions, or facilitating learning from experience. Theoretical studies are needed to clarify the consequences of adopting these goals (e.g., how badly do they sacrifice optimization?); empirical studies are needed to see how often people actually want to accept them (particularly after they have been informed of the results of the theoretical studies). A simple simulation of the optimization question is sketched at the end of this list.

  • Option-generation studies. Decision makers can only choose between the options they can think of. Each decision need not be a new test of their imaginations, particularly because research indicates that imagination often fails. Research can suggest better formulation procedures and generic options that can be built into decision analysis schemes (Gettys and Fisher, 1979).

  • Stand-alone use. Many decision analysis schemes are sold as stand-alone systems, to be used by decision makers without the help of a professional decision analyst. The validity of these claims should be tested, particularly with regard to decision structuring, the area in which the largest errors can occur (Pitz, et al., 1980). Research could also show ways to improve the stand-alone capability (e.g., with better introductory training packets).
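
As promised in the second bullet, here is a small simulation, with entirely arbitrary numbers, of how much expected utility a non-optimizing rule sacrifices. “Satisficing” is taken here to mean accepting the first option whose expected utility clears an aspiration level.

    # A hypothetical comparison of optimizing and satisficing.
    # All problem parameters are invented for illustration.
    import random

    random.seed(1)

    def random_problem(n_options=5, n_outcomes=3):
        """Each option is a list of (probability, utility) pairs."""
        problem = []
        for _ in range(n_options):
            weights = [random.random() for _ in range(n_outcomes)]
            total = sum(weights)
            probs = [w / total for w in weights]
            utils = [random.uniform(-100, 100) for _ in range(n_outcomes)]
            problem.append(list(zip(probs, utils)))
        return problem

    def eu(option):
        return sum(p * u for p, u in option)

    def satisfice(problem, aspiration=0.0):
        """Take the first option that clears the aspiration level."""
        for option in problem:
            if eu(option) >= aspiration:
                return option
        return problem[-1]  # nothing clears the bar; take the last one

    optimal = chosen = 0.0
    trials = 1000
    for _ in range(trials):
        problem = random_problem()
        optimal += max(eu(o) for o in problem)
        chosen += eu(satisfice(problem))

    print(f"mean EU, optimizing:  {optimal / trials:.1f}")
    print(f"mean EU, satisficing: {chosen / trials:.1f}")

Such toy runs can only suggest orders of magnitude; the theoretical studies called for above would characterize the sacrifice analytically.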


Measuring Preferences

Unless one is fortunate enough to find a dominating alternative, one that is better than all competitors in all respects, making decisions means making trade-offs. When one cannot have everything, it is necessary to determine the relative importance of different goals. Such balancing acts may be particularly difficult when the question is new and the goals that stand in conflict seem incommensurable (Fischhoff, et al., 1980). Dealing with hazardous technologies, for example, leads us daily to face questions such as whether the benefits of dyeing one’s hair are worth a vague, minute increase in the chances of cancer many years hence. Decision analysis schemes seem to complicate life by making these inherent conflicts apparent (McNeil, et al., 1978). They actually complicate it when they pose these questions in cumbersome, unfamiliar ways in order to elicit the information needed by their models—e.g., how great an increase in your probability of being alive in five years’ time would exactly compensate for the .20 probability that you will not recover from the proposed surgery—and does this trade-off depend on other factors?

Such questions are difficult in part because their format is dictated by a formal theory or the programmer’s convenience, rather than by the decision maker’s way of thinking. They are also difficult because of the lack of research guiding their formulation. Research on the elicitation of values has lagged behind research on the elicitation of judgments of fact (Johnson and Huber, 1977). Although there are many highly sophisticated axiomatic schemes for posing value questions, few have been empirically validated for difficult, real-life issues. In practice, perhaps the most common assumption is that decision makers are able to articulate responses to any question that is stated in good English.
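
To see the kind of arithmetic that lies behind the surgery question quoted above, consider a hypothetical worked version. The survival figures are invented, and a utility of 1 for being alive in five years and 0 for death is assumed.

    # A hypothetical worked version of the surgery trade-off.
    # Utilities: 1 for being alive in five years, 0 for being dead.
    # All survival figures are invented for illustration.

    p_survive_op = 0.80   # probability of surviving the surgery itself
    p5_without   = 0.40   # five-year survival without surgery

    # Indifference point: p_survive_op * (p5_without + delta) = p5_without
    delta = p5_without / p_survive_op - p5_without
    print(f"Five-year survival must rise by {delta:.2f} "
          f"(from {p5_without:.2f} to {p5_without + delta:.2f}) "
          f"before surgery compensates for its .20 operative risk.")

Whether patients can actually answer such questions in this form, rather than having the answer imputed to them, is precisely the research issue.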

The projects described below may help solve problems that currently are (or should be) worrying practitioners. Some similar needs have been identified by the National Research Council’s Panel on Survey-Based Measures of Subjective Phenomena (Turner and Martin, in press).

  • No opinion. In most behavioral decision research, as in most survey research, economics, and preference theory, people are typically assumed to know what they want. Careful questioning is all that is needed to reveal the decision maker’s implicit trade-offs between whatever goals are being compared. Some response is typically required for the analysis to continue. Knowing how to discover when decision makers have no opinions and how to cope with that situation would be of great value. Studies of “no opinion” in survey research (Schuman and Presser, 1979) would provide a useful base to draw on, although they often show that people have a disturbing ability to manufacture opinions on diverse (and even fictitious) topics.

  • Interactive value measurement. One possible response to situations in which decision makers’ values are poorly articulated (or nonexistent) is for the decision aider to engage in a dialogue with the client, suggesting alternative ways of thinking about the problem and the implications of various possible resolutions. Although there are obvious opportunities for manipulating responses in such situations, research may show how they could be minimized; at any rate they may be rendered no worse than the manipulation inherent in not confronting the ambiguity in respondents’ values. Of particular interest is the question of whether people are more frank about their values and less susceptible to outside pressures when interacting with a machine than with another human being. Again, some good leads could be found in the survey research literature, particularly in work dealing with the power and prevalence of interviewer effect.

  • Specific topics. In order to interact constructively with their clients, decision aiders should be able to offer a comprehensive, balanced description of the perspectives that one could have on a problem. The provision of such perspectives may be enhanced by a combination of theoretical and empirical work on how people could and do think about particular issues (Jungermann, 1980). For example, to aid decision problems that involve extended time horizons, one would study how people think about good and bad outcomes that are distributed over time. One might discover that people have difficulty conceptualizing distant consequences and therefore tend to discount them unduly; such a tendency could be countered by the use of scenarios that reify hypothetical future experiences. Medical counseling and the setting of safety standards are two other areas with specific problems that reduce the usefulness of decision technologies (e.g., the difficulty of imagining what it would be like to be paralyzed or on dialysis, unwillingness to place a value on human life).

  • Simulating values. One obvious advantage of computerized systems is the ability to work quickly through calculations using alternative values of different parameters. A possible didactic use would be to help people clarify what they want, by simulating the implications of different sets of preferences (“If those were your trade-offs, these would be your choices”), both on the problem in question and on sample problems; a minimal version of such a simulation is sketched at the end of this list. Work along this line was done at one time in the context of social judgment theory (Hammond, 1971). Completing it and making it accessible to the users of other decision aids would be useful.

  • Framing. Recent research has demonstrated that formally equivalent ways of representing decision problems can elicit highly inconsistent preferences (Kahneman and Tversky, 1979; Tversky and Kahneman, 1981). Because most decision-aiding schemes have a typical manner of formulating preference questions, they may inadvertently be biasing the results they produce. This work should be continued, with an eye to characterizing and studying the ways in which decision analysis schemes habitually frame questions.
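
The value-simulating exercise promised above might, in minimal form, look as follows; the options, attributes, and weights are all invented.

    # "If those were your trade-offs, these would be your choices":
    # apply alternative importance weights to the same (hypothetical)
    # options and show how the recommendation shifts.

    options = {               # attribute scores on a 0-100 scale
        "job A": {"salary": 90, "security": 40, "interest": 60},
        "job B": {"salary": 60, "security": 90, "interest": 50},
        "job C": {"salary": 50, "security": 60, "interest": 95},
    }

    weight_sets = {
        "salary mattered most":   {"salary": 0.6, "security": 0.2, "interest": 0.2},
        "security mattered most": {"salary": 0.2, "security": 0.6, "interest": 0.2},
        "interest mattered most": {"salary": 0.2, "security": 0.2, "interest": 0.6},
    }

    def value(option, weights):
        """Weighted additive value of one option."""
        return sum(weights[a] * score for a, score in options[option].items())

    for label, weights in weight_sets.items():
        best = max(options, key=lambda o: value(o, weights))
        print(f"If {label}, your choice would be {best}.")

Each weight set picks out a different job here, which is the didactic point: the display lets users see their tentative trade-offs carried to their conclusions.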

Evaluation

The decision maker looking for help may be swamped by offers. The range of available options may run from computerized decision analysis routines to super-soft decision therapies. Few of these schemes are supported by empirical validation studies; most are offered by individuals with a vested interest in their acceptance (Fischhoff, 1980). A comprehensive evaluation program would help decision makers sort out the contenders for their attention and to use those selected judiciously, with a full understanding of their strengths and limitations (Wardle and Wardle, 1978). Such a program might involve the following elements:

  • Collecting and characterizing the set of existing decision aids with an eye to discerning common behavioral assumptions (e.g., regarding the real difficulties people have in making decisions, the ways in which they want to have problems structured, or the quality of the judgment inputs they can provide to decision-making models).

  • Examining the assumptions identified above. This might include questions like: Can people separate judgments of fact from judgments of value? When decision makers are set to act in the name of an institution, can they assess its preferences, unencumbered by their own? Can people introspect usefully about beliefs that have guided their past decisions, free from the biasing effects of hindsight?

  • Developing methods for evaluating the quality of decisions (such as are produced by different methods). For example, what weights should be placed on the quality of the decision process and on the quality of the outcome that arises? What level of successful outcomes should be expected in situations of varying difficulty? This work would be primarily theoretical (Fischer, 1976).

  • Clarifying the method’s degree of determinacy. To what extent do arbitrary changes (i.e., ones regarding which the method is silent) in mode of application affect the decisions that arise (Hogarth and Makridakis, 1981)? Similarly, one would like some general guidance on the sensitivity of the procedure to changes in various aspects of the decision-making process, in order to concentrate efforts on the most important areas (e.g., problem structuring or value elicitation). Conversely, one wants to know how sensitive the method is to the particulars of each problem and user. That is, does it tend to render the same advice in all circumstances? A small sketch of one such determinacy check follows this list.

  • Assessing the impact of different methods on “process” variables, such as the decision maker’s alertness to new information that threatens the validity of the decision analysis or the degree of acceptance that a procedure generates for the recommendation it produces (Watson and Brown, 1978). Such questioning of assumptions has been the goal of much existing research, which should provide a firm data base for new work (although many questions, such as the first two of the three raised, have yet to be studied).
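
The determinacy check mentioned above can be sketched simply: jitter the judged inputs by small, arbitrary amounts and count how often the recommendation changes. The options, payoffs, and the size of the perturbation below are all assumptions for illustration.

    # A hypothetical determinacy check: perturb judged probabilities
    # and see whether the recommended option survives.
    import random

    random.seed(0)

    # (p_success, utility of success, utility of failure), all invented
    base = {"option 1": (0.6, 100, -50),
            "option 2": (0.8,  70, -20)}

    def eu(p, u_good, u_bad):
        return p * u_good + (1 - p) * u_bad

    def recommend(probs):
        return max(base, key=lambda o: eu(probs[o], base[o][1], base[o][2]))

    baseline = recommend({o: base[o][0] for o in base})
    flips, trials = 0, 1000
    for _ in range(trials):
        jittered = {o: min(1.0, max(0.0, base[o][0] + random.gauss(0, 0.05)))
                    for o in base}
        if recommend(jittered) != baseline:
            flips += 1

    print(f"Baseline recommendation: {baseline}")
    print(f"Recommendation flipped in {flips} of {trials} perturbed runs.")

A method whose advice flips under perturbations smaller than the precision of the judgments feeding it is, for practical purposes, indeterminate.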

Improving Realism

The simplified models of the world that decision analysis software packages use to represent decision problems are in at least one key respect very similar to the models generated by flight or weapons simulators. Their usefulness is constrained by the fidelity of their representations to the critical features of the world they hope to model. Although there is much speculation about process effects, it points in inconsistent directions and is seldom substantiated by empirical studies (either in the laboratory or in operating organizations). Although these topics have been studied very little in this context, research could draw on whatever analogous studies have been conducted with other kinds of simulators. Some suggested research topics follow.

  • Hot and cold cognition. Decision analysis schemes are cold and calculating, and they expect the decision maker to be so as well. It is not clear how well their putative advantages survive when decision makers shift from “cold” to “hot” cognition. Such a shift occurs with emotional involvement, such as might happen when the stakes increase or the topic is arousing (Janis and Mann, 1977). The use of decision aids for medical patients pondering possible treatments assumes that decision quality will not deteriorate in such situations—or at least no more than it deteriorates without the aid. Another such shift involves time pressures, such as might arise in crisis decision making (Wright, 1974). Many proponents of decision analysis claim that time constraints actually enhance the usefulness of their tool, rather than threaten it, arguing that a quick-and-dirty analysis is often the most cost-effective way to use the technology. Evidence is needed regarding whether this is true, both when quickness is chosen and when it is imposed.

  • Contingency planning. Many of the most important uses of decision aids are for the sake of contingency planning. The essence of such planning is anticipating future situations and prescribing the actions needed should they actually occur. In principle, preplanning responses should allow a more leisurely and thoughtful analysis with better utilization of experts and decision aids than would be possible if one waited until a situation demanding an immediate response developed. The success of such efforts depends on the planner’s ability to imagine in advance how various contingencies will appear should they come about. If the actual contingency does not resemble its image, then the (preplanned) decisions based on that image will seem inappropriate. In such cases, the decision maker must decide on short notice whether to adhere to the plan (and assume that his or her immediate impression is faulty) or come up with a new plan on the spot (and assume that the event that was anticipated is not the event that occurred). Although the stakes riding on contingency plans are often very large, we have little systematic knowledge about the correspondence between actual and planned contingencies. Research is needed on (1) when and why situations look (or feel) different when they occur than they did during planning and (2) what to do when plans made at an earlier time seem inappropriate.

  • Overriding recommendations. The moment of truth for the decision aid comes when the decision maker must decide to follow its recommendations or override them. Analogous moments face the users of most other human-machine systems, suggesting that the study of overriding would have broad implications. The research questions are: When do people even think about overriding? How valid are the cues that lead them to do so? How much better than the aid are their intuitive judgments? Does protracted reliance on decision aids increase or decrease intuitive decision-making ability? Existing research on the acceptance of computerized diagnoses in medicine, clinical psychology, and meteorology would provide a good basis for this research.

  • Better displays. Decision analysts have shown considerable ingenuity in translating formal decision theory into terms that may be understood by less sophisticated decision makers. More work needs to be done in this area, particularly if decision aids are to have stand-alone capacity. The features that the models capture are a mixture of those that are easy to capture and those that designers intuitively feel are important to include. Each of the four topics just described in this section is a factor that may affect the realism of decision aids and, if so, should be considered in their design and utilization. Research efforts to date have hardly begun to tap the potential of recent work in computer graphics for developing superior displays (e.g., to facilitate interpretation of how robust a recommendation is by showing its susceptibility to change with variation in the values of the input parameters). A particular problem is that both questions and recommendations typically appear without any indication of their rationale. As a result, decision makers may have little feeling for where the questioning is leading or how robust the concluding recommendations will be (or how they can be explained to others). Collaborative efforts might increase both the overall acceptance of decision analysis and the realism of its recommendations when it is used.
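
As a crude illustration of the robustness displays suggested in the last bullet, a decision aid could sweep one judged input across its plausible range and show where the recommendation changes hands. The payoffs below are invented.

    # A hypothetical one-way sensitivity display: option A is a gamble,
    # option B a sure thing; sweep p(success) and show the switch point.

    u_success, u_failure = 100, -50   # option A's payoffs (invented)
    u_safe = 20                       # option B: a sure thing (invented)

    print("p(success)  recommendation")
    for i in range(11):
        p = i / 10
        eu_a = p * u_success + (1 - p) * u_failure
        choice = "option A" if eu_a > u_safe else "option B"
        bar = "#" * int((eu_a + 50) / 10)   # crude graphic of EU(A)
        print(f"   {p:.1f}      {choice:9s} {bar}")

Even so spartan a display tells the user that the recommendation turns on whether p(success) is judged to be above roughly .47, something a bare recommendation conceals.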


Aiding Diffuse Decisions

Common to most decision-making models is the assumption that decisions are made by an identifiable individual at an identifiable point in time. Clearly, however, this idealization often is not realized in practice: there may be many parties to a decision; some decisions just evolve over time (or at least are made to seem that way); other decisions are made by people who do not think of themselves as decision makers (e.g., supervisors monitoring and directing the behavior of subordinates or systems); some decisions are made by people who are not officially recognizable as decision makers (e.g., aides to a senior official). Rather different forms of research are needed to improve decision making in each setting; a number of them are outlined below.

  • Multiperson decisions. Decision theory methods are typically designed to explore and aggregate the beliefs and preferences of a single individual. One approach to dealing with multiple decision makers is a computational scheme for aggregating their beliefs and preferences prior to using them in a common decision model (Rohrbaugh, 1979). Theoretical work has suggested a variety of analytical aggregation schemes. Although this work should continue, it could be usefully complemented by empirical studies (using simulations and experimentation) of how greatly the results of these various schemes differ and how well they are accepted by users. Another approach is to have the parties aggregate their perspectives through some structured interaction (Sackman, 1975; Steiner, 1972). This approach, well worked by students of the risky shift and of the Delphi method, might benefit from research using computerized systems that allow participants (perhaps at different sites) to go through many rounds of interactions with varying communication channels and protocols. For example, will decisions be reached more quickly and adopted more enthusiastically if the parties can observe visual images of one another, not just printed summary statements?

  • Evolving decisions. Insofar as decisions represent choices between alternative courses of action, any decision may be expressed as a statement of action (“I [or we] will do X”). Such translation of a complex decision process to its procedural implications can have drawbacks. One is that the underlying rationale of an action is lost, making it difficult to understand why things are done the way they are, how to respond to new contingencies, and when it is time to rethink the whole decision. A second potential drawback is that those decisions that still have to be made are not addressed directly, leaving crucial steps to guesswork (e.g., an operator may be told something to the effect of “Figure out what is going on and then follow steps S1 to Sn”). A third possibility is that procedures may have internal inconsistencies or be at cross-purposes, and people either do not realize it or they realize it but do not quite know what is wrong. Systems that add rules over time may be particularly prone to this problem (the social security system is an example). Some combination of artificial intelligence, decision modeling, and experimental work might help people to diagnose the logic of the systems that they deal with and that they are called on to redesign (Corbin, 1980; Klein and Weitzenfeld, 1978).

  • Unwitting decision makers. Just as any decision may be thought of as an action, so may each action be thought of as a decision. Most students of decision making would probably agree with the hypothesis that people would be better off if they realized the decisions implicit in their actions and structured them as such. For example, a supervisor contemplating the shutdown of a plant because of a malfunction would make wiser choices with even a rudimentary decision analysis (i.e., listing all possible courses of action, sketching out possible consequences and contingencies, crudely working through the expected utility of each action); a sketch of such an analysis appears after this list. Such structuring has become part of the training of some medical students. The user of computerized information retrieval systems (e.g., Prestel, Teletext) might be usefully seen as making a series of decisions (such as: These alternatives are ambiguous—which gives me the best chance of getting the information I need? Is it worth my time and money to use the system on this problem? Is the answer I got complete enough or should I keep working?). A useful way to exploit existing research would be to translate it into crude aids, adapted to the conditions and problems of particular work settings (along with an evaluation of their efficacy).

  • Unofficial decision makers. Senior officials in many organizations are too busy to make deliberative analyses of the many decisions they must consider. A common (and sensible) defense is to have aides conduct the analyses. For this stratagem to work, the senior official must communicate well enough with the aide to ensure that the appropriate problem is addressed; the aide must communicate well enough with the senior official to ensure that the rationale behind the decision-making method and the implications of its conclusions are understood well enough to be properly represented and afforded due consideration. Communication problems are likely to be particularly great when the official must present the conclusions to some larger public or when the training of official and aide are quite different. Consider, for example, the difficulties experienced by public officials enunciating the policies devised by economists, or by junior executives trying to sell decision analyses to old-line senior executives. Better methods of communication (and of recognizing when it has broken down) would be a useful addition to the software accompanying any decision-making method. These methods could apply to the front end of an analysis (e.g., training films, practice exercises) or after it is complete (Federico, et al., 1980).
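
As promised in the unwitting-decision-makers bullet, here is a hypothetical version of the supervisor’s rudimentary analysis; every probability and cost is invented for illustration.

    # A rudimentary decision analysis of a plant-shutdown choice.
    # All probabilities and costs are hypothetical.

    actions = {
        "shut down now": [
            # (outcome, judged probability, cost in dollars)
            ("malfunction was serious; damage avoided", 0.3,   200_000),
            ("malfunction was minor; production lost",  0.7,   250_000),
        ],
        "keep running": [
            ("malfunction was serious; major damage",   0.3, 2_000_000),
            ("malfunction was minor; no loss",          0.7,         0),
        ],
    }

    def expected_cost(action):
        return sum(p * cost for (_outcome, p, cost) in actions[action])

    for action in actions:
        print(f"{action}: expected cost ${expected_cost(action):,.0f}")
    print("Recommended:", min(actions, key=expected_cost))

Even this crude a structure forces the supervisor to make the probability of a serious malfunction explicit, which is the point of the exercise.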

CONCLUSION

Decision aiding appears to be increasingly viable and popular. A variety of software packages are currently being marketed and used, each offering somewhat different operationalizations of the basic model. If their promises are not to outstrip their capabilities, they will need to be accompanied by behavioral research regarding how best to design and use that software. The five problem areas described in this chapter represent topics for which research is likely to be particularly useful and usable.

These projects require primarily experimental methods, building on the theory and hardware already available. To be most effective they need a context that affords ready contact with decision theorists and practicing decision analysts. The former can solve the questions of theory to which they are most suited; the latter can provide access to their machines (and perhaps to their clients) and facilitate the translation from research to practice.

REFERENCES

Bonczek, R.H., Holsapple, C.W., and Whinston, A.B. 1981 Foundations of Decision Support Systems. New York: Academic Press.

Brown, R.V., Kahr, A.S., and Peterson, C. 1974 Decision Analysis for the Manager. New York: Holt, Rinehart & Winston.

Corbin, R. 1980 Decisions that might not get made. In T.S. Wallsten, ed., Cognitive Processes in Choice and Decision Behavior. Hillsdale, N.J.: Lawrence Erlbaum.

Einhorn, H.J., and Hogarth, R.M. 1981 Behavioral decision theory: processes of judgment and choice. Annual Review of Psychology 32:53–88.

Federico, P.A., Brun, K.E., and McCalla, D.B. 1980 Management Information Systems and Organizational Behavior. New York: Praeger.

Fischer, G.W. 1976 Multidimensional utility models for risky and riskless choice. Organizational Behavior and Human Performance 17:127–146.

Fischhoff, B. 1980 Clinical decision analysis. Operations Research 28:28–43.

Fischhoff, B., Goitein, B., and Shapira, Z. 1981 The experienced utility of expected utility approaches. In N. Feather, ed., Expectancy, Incentive and Action. Hillsdale, N.J.: Lawrence Erlbaum.

Fischhoff, B., Lichtenstein, S., Slovic, P., Derby, S., and Keeney, R. 1981 Acceptable Risk. New York: Cambridge University Press.

Fischhoff, B., Slovic, P., and Lichtenstein, S. 1980 Knowing what you want: measuring labile values. In T.S. Wallsten, ed., Cognitive Processes in Choice and Decision Behavior. Hillsdale, N.J.: Lawrence Erlbaum.

Gettys, C.F., and Fisher, S.D. 1979 Hypothesis plausibility and hypothesis generation. Organizational Behavior and Human Performance 24:93–110.

Hammond, K.R. 1971 Computer graphics as an aid to learning. Science 172:903–908.

Hogarth, R.M., and Makridakis, S. 1981 Forecasting and planning: an evaluation. Management Science 27:115–138.

Humphreys, P., and McFadden, W. 1980 Experiences with MAUD: aiding decision structuring versus bootstrapping the decision maker. Acta Psychologica 45:51–69.

Janis, I.L., and Mann, L. 1977 Decision Making. New York: Free Press.

Johnson, E.M., and Huber, G.P. 1977 The technology of utility assessment. IEEE Transactions on Systems, Man, and Cybernetics SMC-7:311–325.

Jungermann, H. 1980 Speculations about decision-theoretic aids for personal decision making. Acta Psychologica 45:7–34.

Kahneman, D., and Tversky, A. 1979 Prospect theory: an analysis of decision under risk. Econometrica 47:263–292.

Klein, G.A., and Weitzenfeld, J. 1978 Improvement of skills for solving ill-defined problems. Educational Psychologist 13:31–41.

McNeil, B.J., Weichselbaum, R., and Pauker, S.G. 1978 Fallacy of the 5-year survival rate in lung cancer. New England Journal of Medicine 299:1397–1401.

Pitz, G.F., Sachs, N.J., and Heerboth, M.T. 1980 Structure for individual decision analysis. Organizational Behavior and Human Performance 26:65–80.

Raiffa, H. 1968 Decision Analysis. Reading, Mass.: Addison-Wesley.

Rohrbaugh, J. 1979 Improving the quality of group judgment. Organizational Behavior and Human Performance 24:73–92.

Sackman, H. 1975 Delphi Critique. Lexington, Mass.: Lexington Books.

Sage, A.P. 1981 Behavioral and organizational considerations in the design of information systems and processes for planning and decision support. IEEE Transactions on Systems, Man, and Cybernetics SMC-11:640–678.

Schuman, H., and Presser, S. 1979 Assessment of no opinion in attitude surveys. Sociological Methodology 10:241–275.

Slovic, P., Fischhoff, B., and Lichtenstein, S. 1979 Behavioral decision theory. Annual Review of Psychology 28:1–39.

Steiner, I.D. 1972 Group Process and Productivity. New York: Academic Press.

Svenson, O. 1979 Process descriptions of decision making. Organizational Behavior and Human Performance 23:86–112.

Turner, C., and Martin, E. in press Surveying Subjective Phenomena. Panel on Survey-Based Measures of Subjective Phenomena, Committee on National Statistics, National Research Council. New York: Russell Sage.

Tversky, A., and Kahneman, D. 1981 The framing of decisions and the psychology of choice. Science 211:453–458.

von Winterfeldt, D. 1980 Structuring decision problems for decision analysis. Acta Psychologica 45:71–93.

Wallsten, T.S., ed. 1980 Cognitive Processes in Choice and Decision Behavior. Hillsdale, N.J.: Lawrence Erlbaum.

Wardle, A., and Wardle, L. 1978 Computer-aided diagnosis—a review of research. Methods of Information in Medicine 17:15–28.

Watson, S.R., and Brown, R.V. 1978 The valuation of decision analysis. Journal of the Royal Statistical Society Series A 141:69–78.

Wright, P. 1974 The harassed decision maker: time pressures, distractions, and the use of evidence. Journal of Applied Psychology 59:555–561.
