Research Needs for Human Factors (1983)

3
ELICITING INFORMATION FROM EXPERTS

Many formal and informal processes in working organizations hinge on the effective communication of “expert information.” Risk analyses may require a metallurgist to assess the likelihood of a valve’s fracturing under an anticipated stress or a human factors expert to assess the likelihood of its failing to open due to faulty maintenance. Strategic analyses may require substantive experts to assess the growth rate of the Soviet economy or the proportion of its expenditures directed to arms. Tactical planning in marketing or the military may demand real-time reports by field personnel of what seems to be happening “at the front.” Air traffic control typically requires succinct, unambiguous status reports from all concerned. Computerized career-counseling routines or procedures for establishing entitlement to social benefits assume that lay people can report on those aspects of their own lives about which they are the ranking experts. The U.S. Census Bureau makes similar assumptions when asking people about their employment status, as a step toward directing federal policies and jobs programs. In product liability trials technical experts give evidence in a highly stylized manner.

As can be seen from these examples, experts may talk to the consumers of their advice directly, to elicitors who then translate what they say into a form usable by a computer, or to a computer. Insofar as computers have been designed by people, all of these communication modes assume some fairly high level of interpersonal understanding. The elicitors must ask questions that people can sensibly answer. The recipients of those answers must interpret them with an appreciation of the errors and ambiguities they may conceal. The quality of that communication is likely to depend on the novelty of the problems, the historic level of interaction between questioner and answerer, and the quickness with which miscommunications produce diagnostic signs. Poor elicitation by air traffic controllers may become visible very quickly, whereas employment surveys may (and have) elicited biased responses and misdirected economic planning for years without the error’s being detected. Particularly clumsy elicitation may lead users to reject the eliciting system, thereby avoiding mistakes but also wasting the resources that have been invested in its design.

The principal author of this chapter is Baruch Fischhoff.

New research about elicitation and the translation of existing research findings into more usable form could benefit a wide variety of enterprises. As this chapter discusses, elicitation is not a field of inquiry or application in and of itself, but a function that recurs in many problems. This creates special difficulties for the accumulation and dissemination of knowledge about it.

BACKGROUND

Perhaps because elicitation is a part of many problems but all of none, it has emerged neither as a discipline nor as an area that is seen to require special expertise. The typical assumption is that elicitation is not a particular problem, as long as things stay fairly simple and one uses common sense. The validity of that assumption may not be questioned until some egregious problem has clearly arisen from a particular failure. When problems arise, the lack of a coherent body of knowledge may encourage ad hoc solutions, with little systematic testing or accumulation of knowledge. Solutions are generated from the resources of those working on a particular problem and viewed from their narrow perspective.

One reason for aggregating these elicitation issues into a single chapter is to keep them from being orphaned, as parts of many problems for which there is no focus of responsibility. Another reason is to suggest that there are enough recurrent themes to generate a coherent body of knowledge, thereby reducing the degree to which each system designer faced with an elicitation problem must start from scratch. Although work may still focus on specific problems, conceptualizing them in a general way may increase both the pool of talent they draw on and the breadth of perspective with which their solutions are interpreted and reported. Because a common element of these projects is dealing with substantive experts, their cumulative impact should be to generate a better understanding of the judgmental processes of experts.

The research bases for the following projects are sufficiently diverse that further details are given within each context. In some cases, there is a distinct research literature on which new projects can be based. In others, the proposed topic does not exist as a separate pursuit, or at least not within the context of human factors; the literature cited is suggestive of the kinds of approaches that have proven useful in other fields or related problems that might be drawn on.

RESEARCH ON ELICITATION

Ensuring a Common Frame of Reference

An obvious precondition for communication is ensuring that elicitor and respondent are talking about the same thing. In ordinary conversation the participants have some opportunity for detecting and rectifying misunderstandings. If questions are set down once for all respondents, then misunderstandings must be anticipated in advance. Some implicit theory of potential (mis)interpretations must guide the question composers for management systems, accident report forms, or automatic diagnostic routines that rely on expert judgment.

These problems are not, of course, unique to human factors. They are probably best understood by professionals whose central concern for the longest periods of time has been asking questions; these include anthropologists (Agar, 1980), linguists, historians (Hexter, 1971), survey researchers (Payne, 1952), philosophers, and some social psychologists (Rosenthal and Rosnow, 1969). Two general conclusions that one can derive from their work are that the opportunities for misinterpretation are much greater than most people would suppose and that the nature of possible specific misinterpretations is hard to imagine intuitively.

The chances for miscommunication are likely to increase to the extent that elicitor and respondent come from different cultures and have had little opportunity to interact. Systems designed by technical experts for lay users often fall into this category, especially when the elicitation is far removed physically or temporally from the design effort. Consider, for example, a computerized job search program that requires unemployed workers to characterize their experience in terms of one of the 12,000 categories of the Dictionary of Occupational Titles (DOT) code (e.g., handkerchief presser). Although a considerable intellectual effort has gone into imposing a semblance of order on the world of work, that order may be very poorly matched to the way in which applicants conceptualize their experience. Indeed, even those who elicit such information from job applicants and translate it into the DOT code on a full-time basis may have considerable difficulty. Similar problems may face a system designed to clarify entitlement to social services or a computerized system for diagnosing car or radio problems on the basis of a user’s description of presenting symptoms. These problems may persist even with the clearest display and the most lucid users’ manual.

Although the details of each problem are unique, seeing their common elements can enable designers to exploit a larger body of existing research and research methods. One strategy is literature reviews that make accessible the methods used by fields such as anthropology to uncover misunderstandings. Using these methods with small samples of users prior to designing systems or in the early stages of design could effectively suggest minor changes or even major issues (such as whether the system could ever stand alone, or whether it will always need an interpreter between it and the actual user). Such strategies are increasingly being used in survey design; they may even lead to some revision in the categories of Justice Department statistics so as to make them more compatible with the ways in which victims of crimes think about their experience (National Research Council, 1976).

Another research strategy is to review existing case studies of mishaps (e.g., in diplomacy, survey research, police work, or software design) for evidence of problems due to questioners and respondents unwittingly speaking different languages (Brooks and Bailar, 1978). Such studies would help establish the prevalence of such problems and create a stock of cautionary tales for educational and motivational purposes.

A third strategy involves experimental and observational studies of groups of individuals who regularly communicate with one another, in order to see how well they understand one another’s perspectives. Software designers and less educated users, engineers and machine operators, and market researchers and consumers are a few such dyads. The intuitive beliefs of the elicitors in each of these dyads regarding the perspectives of their respondents might provide some productive hypotheses and reveal some misconceptions worthy of correction.

Better ways of eliciting information should also suggest better ways of presenting it. Informing and counseling patients about medical risks is one area in which these problems are currently under active study (see Chapter 2).

Matching Questions to Mental Structures

A presumption of many elicitation efforts is that the respondent has an answer to any question that the elicitor can raise (Turner and Martin, in press). One contributing factor to this belief is the fact that elicitors often cannot accept “no answer” for an answer, needing some best guess at the answer in order to get on with business. A second contributing factor may be the tendency, long known to surveyors, for respondents to offer opinions on even nonexistent issues, perhaps reflecting some feeling that they can, should, or must have opinions on everything. A third factor may be the elicitors’ (intuitive or scientific) models of memory that presume a coherent store of knowledge waiting to be tapped by whatever question proves most useful to the elicitor (Lindley, et al., 1979).

Coping with situations in which the respondent has little or no knowledge about the topic in question is dealt with in the next section, on how to elicit assessments of information quality. Alternatively, the respondent may have the needed information, but not in the form required by the question. Whenever there is incompatibility between the way in which knowledge is organized and the way in which it is elicited, the danger arises that the expert may not be used to best advantage, may provide misleading information, or may be seduced into doing a task to which his or her expertise does not extend. For example, risk assessment programs often require the designers of a technical system to describe it in terms of the logical interrelationships between various components (including its human operators, repair people, suppliers, etc.) and to assess the probability of these components’ failing at various rates, perhaps as a function of several variables (Jennergren and Keeney, 1981).


Given these judgmental inputs, these programs may perform miraculous simulations and calculations; however, the value of such analyses is contingent on the quality of the judgments. The processes by which experts are recruited may or may not take into consideration the need for these special skills. In some situations, no one may have them.
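To make the flavor of these judgmental inputs concrete, the following sketch (a minimal illustration, not any particular risk-analysis package) combines hypothetical expert-assessed component failure probabilities through the kind of AND/OR logic such programs rely on. All component names and numbers are invented, and a real analysis would also have to confront dependence among components, which this independence-based sketch ignores.

```python
# Minimal fault-tree-style sketch: combine expert-assessed component
# failure probabilities through AND/OR gates. Names and numbers are
# hypothetical; independence is assumed throughout.

def p_or(*probs):
    """Probability that at least one independent event occurs."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

def p_and(*probs):
    """Probability that all independent events occur."""
    p = 1.0
    for q in probs:
        p *= q
    return p

# Hypothetical expert judgments (per-demand failure probabilities).
valve_fractures    = 1e-4   # metallurgist's assessment
maintenance_error  = 5e-3   # human factors assessment
backup_valve_fails = 2e-3

# The subsystem fails if the primary valve fails for either reason
# AND the backup valve also fails.
primary_fails   = p_or(valve_fractures, maintenance_error)
subsystem_fails = p_and(primary_fails, backup_valve_fails)

print(f"Primary valve failure probability: {primary_fails:.2e}")
print(f"Subsystem failure probability:     {subsystem_fails:.2e}")
```

The calculation itself is trivial; the point of the example is that every input is a judgment that must be elicited from someone, and the output is only as good as those judgments.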

Research designed to improve the compatibility of questions with the way in which knowledge is stored should be guided by substantive theories about that storage as well as practical knowledge of the information needed. The citations given here represent different approaches to conceptualizing such mismatches between precise questions and differently organized or unorganized knowledge. As an example of the kinds of testable hypotheses that emerge from these literatures, consider the possibility that many experts experience the topics of their expertise one by one, whereas elicitors often need a summary (e.g., of the rate of target detections by sonar operators, the conditional probability of misreading an altimeter given a particular number of hours of flying experience, the distribution of hearing deficits associated with various noise levels). If experts are not accustomed to aggregating their experience, then they will respond differently to procedures that request aggregate estimates immediately and those that focus first (and perhaps entirely) on the recall of individual incidents (Fischhoff and Whipple, 1981). This particular research could build somewhat on probability learning studies or attempts to distinguish between episodic and semantic memory.
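The contrast between aggregate-first and incident-first procedures can be made concrete with a small sketch. The incident labels, exposure figure, and direct estimate below are invented; the computation simply shows how an elicitor might aggregate recalled incidents on the expert’s behalf rather than asking for the summary outright.

```python
# Two hypothetical routes to the same summary statistic, sketched to
# show why they need not agree. All names and numbers are invented.

# Route (a): ask for the aggregate directly.
direct_estimate = 3.0            # "about 3 misreadings per 1,000 flight hours"

# Route (b): ask the expert to recall individual incidents first,
# then aggregate on the expert's behalf.
recalled_incidents = [           # incidents the expert can actually recall
    "night approach, 1979",
    "training flight, 1980",
]
exposure_hours = 1_500           # flight hours the recall is meant to cover

incident_based_estimate = len(recalled_incidents) / exposure_hours * 1_000

print(f"Direct aggregate estimate: {direct_estimate:.1f} per 1,000 h")
print(f"Incident-based estimate:   {incident_based_estimate:.1f} per 1,000 h")
# A systematic gap between the two routes would be evidence that the
# expert's knowledge is stored episodically rather than as a running summary.
```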

Efforts to design the best response mode assume that respondents have the knowledge that the elicitor needs, but not organized in the most convenient form. A more troublesome situation arises when they do not have it organized at all. In that case the elicitor’s task becomes to evoke all of the relevant bits and pieces, then devise some scheme for interpreting them. Doing so first requires discovering that incoherence exists, which may not be easy, insofar as a set of questions may elicit consistent responses simply because it has consistently imposed one of several possible perspectives. Although sensitive elicitors may already be poking around creatively, there are few codified and tested procedures. Such procedures might involve standard sets of questions designed to produce diverse perspectives, which the respondent would then integrate to provide a best guess (or set of best guesses) for the problem at hand. For example, one might always ask about case-by-case and aggregate estimates, in that order. Such efforts might also prompt and be helped by the development of memory models allowing for multiple, incoherent representations.

Clarifying Information Quality

Before taking action on an expert’s opinion, one wants to know how good that best guess is. Great uncertainty might prompt one to try to uncover its sources or to take alternative courses of action (e.g., hedging one’s bets). Although explicit assessments of uncertainty are becoming a greater part of enterprises such as risk analysis (Fairley, 1977), weather forecasting (Murphy and Winkler, 1977), and strategic assessment (Daly and Andriole, 1980), such experiences are rare for most people. As one would expect in novel elicitation situations, the responses that people give are not always to be trusted. Assessments of information quality (or confidence or probability) have been the subject of extensive research over the last decade (Lichtenstein, et al., 1982). It has produced a fairly robust set of methods for eliciting uncertainty and a moderately good understanding of human performance in this regard. The clearest finding is that people have a partial but not complete appreciation of the extent of their own knowledge. Most commonly, this partial knowledge expresses itself in overconfidence, which seems quite impervious to most attempts at debiasing, except for intensive training (Fischhoff, 1982).
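As an illustration of what calibration assessment involves, the sketch below tabulates, for a set of invented confidence judgments, how often answers endorsed at each stated probability actually turn out to be correct. Overconfidence of the kind described above would appear as observed hit rates that fall consistently short of the stated values.

```python
# Minimal calibration check: compare stated probabilities with observed
# relative frequencies of being correct. The judgments are invented.
from collections import defaultdict

# (stated probability that the answer is correct, whether it was correct)
judgments = [
    (0.6, True), (0.6, False), (0.6, True), (0.6, False),
    (0.8, True), (0.8, False), (0.8, True), (0.8, False),
    (1.0, True), (1.0, True), (1.0, False), (1.0, True),
]

by_level = defaultdict(list)
for stated, correct in judgments:
    by_level[stated].append(correct)

for stated in sorted(by_level):
    outcomes = by_level[stated]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {stated:.1f}  observed {hit_rate:.2f}  n={len(outcomes)}")
# Overconfidence shows up as hit rates below the stated probabilities
# (e.g., only 75 percent correct when "100 percent" sure).
```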

Many practical problems could be solved in this area with a moderate investment in completing the research that has already been started. This research could use the stock of elicitation techniques already available to understand better the range and potency of overconfidence biases, to clarify how worrisome they are, and to determine the most effective training and how far it can be generalized. Of particular interest is the extent to which experts are prone to these problems when making judgments in their areas of expertise; current evidence suggests that they are, but it is still inconclusive given the importance of the question (Spetzler and Stael von Holstein, 1975).

The practical steps that can be taken subsequent to such research are developing and testing training procedures, identifying the least bias-prone elicitation methods for situations in which training is impossible or ineffective, and anticipating the extent of bias with different methods and situations in order to apply ad hoc corrections. Choosing between these steps and implementing them efficiently will require a more detailed understanding of the cognitive processes involved in representing and integrating probabilistic information. Although existing research covers much of the ground between basic cognitive psychology and field applications, it has not quite touched base with either extreme. Coping with this practical problem might provoke some interesting theoretical work in the representation of knowledge.

Eliciting Systems

In the examples used in the preceding sections, the knowledge that experts were asked to provide dealt with the components of some large system (e.g., a failure probability, a job choice, a burnout rate). At times, however, experts are required to describe the entire system (Hayes-Roth, et al., 1981). Software packages that attempt to elicit a big picture include some of those used in decision structuring, failure probability modeling (U.S. Nuclear Regulatory Commission, 1981), map making, route planning, and economic analysis. Once such systems have been programmed well enough to work at all, one must ascertain the degree of fidelity between the representations they produce and the conceptual or physical systems they are meant to model; attempts to develop better elicitation methods or to cope with known limits or errors should follow (Brown and Van Lehn, 1980). The research strategies outlined below, based in part on the initial work already begun and in part on discussions with troubled system elicitors, may shed some light on these problems. In each case one would want to know whether a change in procedure made a difference and, if so, whether one method would be preferred in some or all situations. Because so little systematic knowledge is available on how results may vary with different elicitation procedures, generalizing the existing research findings should be done cautiously.

  • Determining whether formally equivalent ways of eliciting the same information produce different responses. For example, a category of events may be judged differently when considered as a whole and when disaggregated into component categories.

  • Evaluating the effectiveness of methods that require more and less “deep” (or analytical or inferential) judgments about system operation. For example, if a process produces a distribution of events (e.g., failure rates), one could assess that distribution directly or judge something about the data-generating process.

  • Varying the amount of feedback provided about how the elicited system operates. For example, when a simulation of an industrial process is designed according to an expert’s judgment, it may be run a few times, just to see if it produces more or less sensible results. The expert could then introduce apparently needed adjustments. Such tinkering should lead to successive improvements in the model; however, it can also prevent simulations from producing nonintuitive (i.e., surprising, informative) results. It also threatens the putative independence of the models created by different experts in areas such as climatology and macroeconomics. The convergence of these models’ predictions (about the future of the economy, for example) is used as a sign of their validity. In practice, however, econometricians monitor one another’s models and adjust theirs if they produce outlying predictions.

  • Assessing experts’ ability to judge the completeness of a representation. How well can they tell whether all important components have been included? Available evidence suggests that considerations that are out of sight are also out of mind; once experts have begun to think about a model in a particular way, the accessibility of other perspectives is appreciably reduced (Fischhoff, et al., 1978). If this is generally true, an elicitor might try to evoke a variety of perspectives on the system superficially before pursuing any in depth (as a sort of intra-expert brainstorming).

Estimating Numerical Quantities

A common form of uncertainty is knowing something about a topic, but not a necessary fact. If that fact is a number (e.g., the number of tanks an enemy has or the percentage of those tanks that are in operating order), it may be possible to use the related facts in a systematic way if one can devise a rule or algorithm for composing them (Armstrong, 1977). The validity of such estimates depends on the appropriateness of the algorithms, the quality of the component estimates, and the accuracy of their composition. Used appropriately, algorithms can make otherwise impenetrable judgmental processes explicit and subject both to external criticism and to self-improvement, as one can systematically update one’s best estimate whenever more is learned about any component (Singer, 1971).
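A minimal sketch of such an algorithm, applied to the tank example above with invented component judgments, shows how a decomposed estimate can be criticized and updated piece by piece.

```python
# Decomposition-style estimation sketch. Every number is a hypothetical
# component judgment, not data; the point is that each component can be
# criticized and updated separately.

# Component judgments
battalions          = 40      # estimated armored battalions
tanks_per_battalion = 31      # nominal strength per battalion
manning_rate        = 0.9     # fraction of nominal strength actually fielded
operational_rate    = 0.7     # fraction in operating order at any time

fielded_tanks     = battalions * tanks_per_battalion * manning_rate
operational_tanks = fielded_tanks * operational_rate

print(f"Estimated fielded tanks:     {fielded_tanks:,.0f}")
print(f"Estimated operational tanks: {operational_tanks:,.0f}")

# If new information revises a single component (say, operational_rate
# drops to 0.6), only that judgment changes and the total is recomputed,
# keeping the reasoning explicit and open to criticism.
```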

Although there are many advocates of algorithmic thinking and anecdotal evidence of its power, there do not seem to be many empirical studies of its usefulness (Hogarth and Makridakis, 1981). Such studies of algorithm efficacy as do exist seem concentrated on the solving of deterministic logical problems for which all relevant evidence is presented to the respondent and a clear criterion of success exists, rather than estimation tasks in which the accuracy of the estimate will be unclear until some external validation is provided. Like any other judgmental technique, algorithmic thinking could be more trouble than it is worth if it increases confidence in judgment more than it improves judgment.

A primary research project here would be to compile a set of plausible and generally applicable algorithmic strategies. Process tracing of the judgmental processes of expert estimators might be one source. The algorithms discovered in the study of logical problem solving might be another. A subsequent project could attempt to teach people to use these algorithms, then, looking at the fidelity with which they can be applied, measure the accuracy of their results and their influence on confidence. The use of multiple algorithms and people’s ability to correct the results of imperfect algorithms are also worth study. The best algorithms could then become part of management information systems, decision support systems, and the like.

Two interpretive literature reviews might provide useful adjuncts to this research. One would look at work on mental arithmetic of the sort required when people must execute algorithms in their heads. Although computational devices should be able to eliminate the need for such exercises, judges may still be caught without their tools or may use unwritten mini-algorithms in order to produce component estimates (once they’ve gotten the general idea). The second review would summarize, in a form accessible to designers, the psychophysics literature on stimulus-presentation and response-mode effects (Poulton, 1977). That literature shows the degree of variability in magnitude estimation that can arise from “artifactual” changes in procedure (e.g., order of alternative presentation, kind of numbers used).

Detecting Reporting Bias

The preceding sections have assumed that elicitor and respondent are engaged in an honest, unconflicted attempt to produce a best estimate of some quantity or relationship. When research identifies difficulties, one assumes a mutual good faith effort on the part of elicitors and experts to eliminate them. In the real world, however, many wrong answers are deliberate; their producers do not wish to have them either detected or corrected. If the citations given here are at all representative, systematic misrepresentation has been of greatest interest to those concerned with the social and economic context within which behavior takes place. Such misrepresentations may be usefully divided into two categories. The first includes deliberate attempts to deceive in order to gain some advantage. For example, economists chronically mistrust verbal reports of people’s preferences (i.e., surveys) for fear that respondents engage in strategic behavior, trying to “put one over” on the questioner and distort the survey’s results (Brookshire, et al., 1976). Some critics of survey research are even advocating that respondents do so deliberately so as to stop the survey juggernaut (see Turner and Martin, in press), as do some people in organizations who feel threatened by computerized information systems and wish to see them fail.

The second category of misreports reflects cultural or subcultural norms. In a business or military unit, for example, optimism (or grousing) may be the norm for communication between members of some ranks (Tihansky, 1976). Or there may be a norm of exaggerating one’s wealth or weight. Those who share the norms know how to recode the spoken word to gain a more accurate assessment; however, mechanical systems designed by people outside the culture may take those reports at face value and thereby introduce systematic errors into their workings.

Although investigating misreporting is likely to be quite difficult, identifying it is part of systems design. One way to start is to review the relevant literature in fields that have dealt with these questions (e.g., sociology, economics). A second is to interview experts off the record about how (and how often) they try to manipulate systems that pose questions to them. A third is to observe ongoing elicitations for which it is possible to validate responses.

Difficulties, once identified, must still be treated. One method is to institute penalties for misreporting. A second is to make consistency checks to detect errors. A third is to eliminate the reasons for misreporting (e.g., ensuring confidentiality). A fourth is to correct misreports for known biases. For example, the Central Electricity Generating Board in Great Britain discovered that it could quite accurately predict the time needed to return a power station to operation by doubling the time estimates reported by the chief plant engineers. One difficulty with such adjustments is that people may change their reporting practices if they find out about them (Kidd, 1970).
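The fourth treatment can be sketched in a few lines, assuming (hypothetically) that a stable multiplicative bias can be estimated from past pairs of reported and actual values; the numbers below are invented and simply echo the doubling rule described above.

```python
# Sketch of correcting reports for a known, stable bias. The past
# reports, actual outcomes, and new report are all hypothetical.

past_reported = [10, 14, 20, 8]    # engineers' estimated days to return to service
past_actual   = [22, 26, 41, 15]   # days actually taken

# Fit a single multiplicative correction factor from past cases
# (about 2.0 here, matching the doubling rule quoted in the text).
correction = sum(past_actual) / sum(past_reported)

new_report = 12                    # a fresh estimate from the same source
adjusted   = new_report * correction

print(f"Correction factor: {correction:.2f}")
print(f"Reported {new_report} days -> adjusted estimate {adjusted:.0f} days")
# As noted in the text, such corrections can break down once reporters
# learn that their estimates are being adjusted.
```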

Reporting Past Events

Many planning and design activities are heavily guided by reports of past events, particularly accidents or other failures (Pezoldt, 1977; Rasmussen, 1980). One reconstructs the way in which a system should have operated, contrasts that with the way in which it actually operated, and uses that comparison to improve future design (perhaps assigning guilt and enacting penalties along the way).

Such retrospections are inevitably colored by the reporter’s knowledge of what has happened. As common sense suggests and the citations below partially document, that coloring can be the source of needed detail or of systematic distortion. It has been found, for example, that people seem to exaggerate in hindsight what could have been (and was) known in foresight; they use explanatory schemes so complicated and so poorly specified as to defy empirical test; they remember people as having been more like their present selves than was actually the case; they fail to remember crucial acts that they themselves performed. These problems seem to afflict both the garden-variety retrospections evoked in laboratory studies and those of professional historians, strategic analysts, and eyewitnesses (Fischhoff, 1975).

One needed project is to make these studies available to those engaged in eliciting or using retrospective reports. Another is to attempt to replicate them in human factors domains. Of particular interest are cases in which the direction of bias has been documented sufficiently to allow recalibration of biased retrospections. In cases in which distortions are less predictable, techniques should be developed to help experts reconstruct their view of the situation before, during, and after the event. For example, such research may show that people exaggerate the probability they assigned (or would have assigned) to past events before they occurred by about 20 percent, on the average. That knowledge may make it possible to adjust retrospective probability assessments, but not to eliminate distortions in the way particular events and causal links are drawn.
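If such a stable exaggeration factor were ever documented, the numerical adjustment itself would be simple; the sketch below assumes the hypothetical 20 percent figure mentioned above and applies it mechanically.

```python
# Sketch of the recalibration the text envisions, assuming the
# (hypothetical) finding that retrospective probability reports run
# about 20 percent higher than the corresponding foresight judgments.

HINDSIGHT_INFLATION = 1.20   # assumed average exaggeration factor

def deflate(retrospective_p):
    """Back out an approximate foresight probability from a hindsight report."""
    return retrospective_p / HINDSIGHT_INFLATION

for reported in (0.30, 0.60, 0.90):
    print(f"reported in hindsight {reported:.2f} -> adjusted {deflate(reported):.2f}")
# Such an adjustment addresses only the numerical exaggeration; it does
# not repair distorted recall of particular events or causal links.
```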

For assigning blame or understanding how an accident situation looked to an operator just before things started to go wrong, strict (accurate) reconstruction is essential. For understanding how the system actually operates, one needs to be wary of the danger that experts have learned too much from a particular event, thereby misinterpreting the importance and generality of the causal forces involved. Generals who prepare for the last war may fit this stereotype, as may the operators of supervisory control systems who respond to each mishap by ensuring that it will not happen again, then rest confident that the system as a whole is now fail-safe.

Three research strategies appear to offer some promise for clarifying these questions. One is to review the reports of historians, judges, journalists, and others about how they detect and avoid biases. A second is to do theory-based experiments, looking at how memory accommodates new information, particularly to see which processes are reversible. The third is research on debiasing, looking at the effect of directly warning people, of raising the stakes riding on a decision, or of instructing them to change the structure of the task to one that uses their intellectual skills to better advantage.

CONCLUSION

Eliciting information from experts successfully is important to a variety of systems and organizations. The care taken in elicitation varies greatly, from detailed studies of the elicitation of some specific recurrent judgments, to careful deliberations unsupported by empirical research, to casual solutions. Even though elicitation is not a discipline per se, research such as that suggested in this chapter could focus more attention on it and make a body of knowledge accessible to designers. In part, that knowledge would be borrowed from related fields (with suitable translations); in part it would be created expressly to solve human factors problems. Some of these projects could be undertaken in their own right; others would be best developed as part of ongoing projects, with more emphasis on elicitation than might otherwise be the case. The interdisciplinary aspect of many projects may generate interest in human factors problems on the part of workers in other fields (e.g., memory representation, workplace culture), and their expertise could contribute to human factors research.

REFERENCES

Agar, M. 1980 The Professional Stranger. New York: Academic Press.

Armstrong, J.S. 1977 Long Range Forecasting. New York: Wiley.


Brooks, A., and Bailar, B. A. 1978 An Error Profile: Employment as Measured by the Current Population Survey. Statistical Policy Working Paper 3. Washington, D.C.: U.S. Department of Commerce.

Brookshire, D.S., Ives, B.C., and Schulze, W.D. 1976 The valuation of aesthetic preferences. Journal of Environmental Economics and Management 3:325–346.

Brown, J.S., and Van Lehn, K. 1980 Repair theory: a generative theory of bugs in procedural skills. Cognitive Science 4:379–426.


Daly, J.A., and Andriole, S.J. 1980 The use of events/interaction research by the intelligence community. Policy Sciences 12:215–236.


Fairley, W.B. 1977 Evaluating the “small” probability of a catastrophic accident from the marine transportation of liquefied natural gas. In W.B. Fairley and F. Mosteller, eds., Statistics and Public Policy. Reading, Mass.: Addison-Wesley.

Fischhoff, B. 1982 Debiasing. In D. Kahneman, P. Slovic, and A. Tversky, eds., Judgment Under Uncertainty: Heuristics and Biases. New York: Cambridge University Press.


Fischhoff, B. 1975 Hindsight ≠ foresight: the effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance 1:288–299.

Fischhoff, B., Slovic, P., and Lichtenstein, S. 1978 Fault trees: sensitivity of estimated failure probabilities to problem representation. Journal of Experimental Psychology: Human Perception and Performance 4:330–344.

Fischhoff, B., and Whipple, C. 1981 Risk assessment: evaluating errors in subjective estimates. The Environmental Professional 3:272–281.


Hexter, J.H. 1971 The History Primer. New Haven: Yale University Press.

Hogarth, R.M., and Makridakis, S. 1981 Forecasting and planning: an appraisal. Management Science 27:115–138.


Jennergren, L.P., and Keeney, R.L. 1981 Risk assessment. In Handbook of Applied System Analysis. Laxenburg, Austria: International Institute for Applied Systems Analysis.


Kidd, J.B. 1970 The utilization of subjective probabilities in production planning. Acta Psychologica 34:338–347.


Lichtenstein, S., Fischhoff, B., and Phillips, L.D. 1982 Calibration of probabilities: state of the art to 1980. In D. Kahneman, P. Slovic, and A. Tversky, eds., Judgment Under Uncertainty: Heuristics and Biases. New York: Cambridge University Press.

Lindley, D.V., Tversky, A., and Brown, R.V. 1979 On the reconciliation of probability assessments. Journal of the Royal Statistical Society, Series A 142(Part 2):146–180.


Murphy, A.H., and Winkler, R. 1977 Can weather forecasters formulate reliable probability forecasts of precipitation and temperature? National Weather Digest 2:2–9.


National Research Council 1976 Surveying Crime. Panel for the Evaluation of Crime Surveys, Committee on National Statistics. Washington, D.C.: National Academy of Sciences.


Payne, S.L. 1952 The Art of Asking Questions. Princeton, N.J.: Princeton University Press.

Pezoldt, V.J. 1977 Rare Event/Accident Research Methodology. Washington, D.C.: National Bureau of Standards.

Poulton, E.C. 1977 Quantitative subjective assessments are almost always biased, sometimes completely misleading. British Journal of Psychology 68:409–425.


Rasmussen, J. 1980 What can be learned from human error reports. In K.D.Duncan, M.Gruneberg, and D.Wallis, eds., Changes in Working Life. New York: Wiley.

Rosenthal, R., and Rosnow, R. 1969 Artifact in Behavioral Research. New York: Academic Press.


Singer, M. 1971 The vitality of mythical numbers. The Public Interest 23:3–9.

Spetzler, C.S., and Stael von Holstein, C-A. 1975 Probability encoding in decision analysis. Management Science 22:340–358.


Tihansky, D. 1976 Confidence assessment of military air frame cost predictions. Operations Research 24:26–43.

Turner, C. and Martin, E., eds. in press Surveying Subjective Phenomena. Panel on Survey-Based Measures of Subjective Phenomena, Committee on National Statistics, National Research Council. New York: Russell Sage.


U.S. Nuclear Regulatory Commission 1981 Fault Tree Handbook. Washington, D.C.: U.S. Nuclear Regulatory Commission.


Waterman, D., and Hayes-Roth, F. 1982 An Investigation of Tools for Building Expert Systems. Report prepared for the National Science Foundation. Santa Monica, Calif: Rand Corporation.
