11 Structuring Accountability Systems in Organizations: Key Trade-Offs and Critical Unknowns -- Philip E. Tetlock and Barbara A. Mellers
Pages 249-270

The Chapter Skim interface presents what we've algorithmically identified as the most significant single chunk of text on each page of the chapter.


From page 249...
... The research literature on accountability spans work in social psychology, organization theory, political science, accounting, finance, and microeconomics (agency theory) -- and offers us an initially confusing patchwork quilt of findings guaranteed to frustrate those looking for quick fixes.
From page 250...
... One could hold governments accountable for policy miscalculations; governments could hold intelligence agencies accountable for flawed guidance; agency heads could hold their managers accountable for failure to check errors; and managers could hold individual analysts accountable for making the initial errors. To make this chapter manageable, we focus on the accountability pressures operating on individual analysts in their immediate working environment.
From page 251...
... The official answer for intelligence agencies in the early 21st century -- providing timely and accurate information that enables policy makers to advance our national interest -- is too open-ended to be of much practical value. We need specific guidance on the types of "criterion variables" that proposed accountability systems are supposed to be maximizing or minimizing.
From page 252...
... The second is very much under the control of intelligence agencies -- how they should structure their internal accountability systems for defining and facilitating excellence -- and we devote most attention to this topic.

#1 Balancing Clashing Needs for Professional Autonomy and Political Responsiveness

How dependent should those who preside over intelligence analysis be on the approval or disapproval of their democratically elected political masters in Congress and the Executive Branch of government?
From page 253...
... Advocates of process accountability maintain that the best way to reach the optimal forecasting frontier -- and stay there -- is to hold analysts responsible for respecting certain logical and empirical guidelines. As we can see from the list of standards for analytic tradecraft contained in Intelligence Community Directive No.
From page 254...
... Indeed, strong arguments can be made for stressing process over outcome accountability. Proponents of process accountability warn that it is unfair and demoralizing to hold analysts responsible for outcomes palpably outside their control -- and doing so may stimulate either risk-averse consensus forecasts (herding of the sort documented among managers of mutual funds -- whereby individuals believe "they can't fire all of us" -- Bikhchandani et al., 1998; Scharfstein and Stein, 1990)
From page 255...
... process accountability can readily ossify into bureaucratic rituals and mutual backscratching -- Potemkin-village facades of process accountability and rigor designed to deflect annoying questions from external critics (Edelman, 1992; Meyer and Rowan, 1977)
From page 256...
... Again, whether the current system has found a sound compromise between competing accountability design templates is beyond the purview of this chapter. But there should be little doubt about the need to factor these trade-offs into organizational design, and little doubt that where one comes down on this continuum of accountability design options will be influenced by one's implicit or explicit assumptions about two relative risks: the risk of process accountability being corrupted and degenerating into a bureaucratic formality, versus the risk of holding people unfairly accountable for the inherently unforeseeable and thereby prompting them to engage in ever more elaborate forms of trickery designed to inflate their accuracy scores.
From page 257...
... Figure 11-1 formalizes these demands, drawing on signal detection theory, a mainstay of behavioral science (Green and Swets, 1966; see also this volume's McClelland, Chapter 4, and Arkes and Kajdasz).

[FIGURE 11-1: A world that permits modest predictability. Overlapping distributions of evidence strength for proliferators and nonproliferators, plotted as probability of occurrence against strength of evidence for proliferation risk, with error thresholds labeled A, B, and C. SOURCE: Generalized from Green and Swets (1966).]
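To make the threshold trade-off concrete, here is a minimal sketch in Python of the signal detection setup the figure depicts. The unit-variance Gaussian evidence distributions, their means, and the specific cutoff values standing in for thresholds A, B, and C are illustrative assumptions, not numbers from the chapter.

# Signal detection sketch: two overlapping evidence distributions and
# decision thresholds, as in Figure 11-1. All parameters are hypothetical.
from scipy.stats import norm

NONPROLIFERATOR_MEAN = 0.0  # assumed mean evidence strength, non-proliferators
PROLIFERATOR_MEAN = 1.5     # assumed mean evidence strength, proliferators

def rates_at_threshold(threshold):
    """Hit and false-alarm rates when every case whose evidence strength
    exceeds the threshold is flagged as a proliferation risk."""
    hit_rate = norm.sf(threshold, loc=PROLIFERATOR_MEAN)             # P(flag | proliferator)
    false_alarm_rate = norm.sf(threshold, loc=NONPROLIFERATOR_MEAN)  # P(flag | non-proliferator)
    return hit_rate, false_alarm_rate

# Lower thresholds (like A) buy extra hits at the price of extra false
# alarms; higher thresholds (like C) do the reverse.
for label, cutoff in [("A", 0.25), ("B", 0.75), ("C", 1.25)]:
    h, fa = rates_at_threshold(cutoff)
    print(f"threshold {label}: hit rate {h:.2f}, false-alarm rate {fa:.2f}")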
From page 258...
... Perfect forecasters would achieve a 100 percent hit rate at 0 percent cost in false alarms, falling at the point in the top left corner. Forecasters with no ability, who simply guessed, would fall along the main diagonal, at a point reflecting their understanding of the system's tolerance for hits versus false alarms.
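The same toy Gaussian model from the sketch above makes this geometry explicit: sweeping the decision threshold traces the forecasting frontier (an ROC curve) between (1, 1) and (0, 0) in (false-alarm rate, hit rate) coordinates, while random guessing lands on the diagonal. The separation between the distributions remains an assumption.

# Tracing the forecasting frontier by sweeping the threshold of the
# illustrative Gaussian model; all parameters are hypothetical.
import numpy as np
from scipy.stats import norm

PROLIFERATOR_MEAN = 1.5  # assumed separation between the two groups

for t in np.linspace(-2.0, 4.0, 7):
    hit_rate = norm.sf(t, loc=PROLIFERATOR_MEAN)
    false_alarm_rate = norm.sf(t, loc=0.0)
    print(f"threshold {t:+.1f}: (FA {false_alarm_rate:.2f}, hit {hit_rate:.2f})")

# A perfect forecaster would sit at (FA 0.00, hit 1.00), the top left
# corner; a pure guesser who flags a random fraction p of cases gets
# hit rate = false-alarm rate = p, i.e., a point on the main diagonal.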
From page 259...
... The solid curve in Figure 11-4 shows that the corresponding forecasting frontier has been pushed toward the top left corner. As a result, analysts can now achieve the same hit rate with a much lower
From page 260...
... false alarm rate, or the same false alarm rate with a much higher hit rate.
From page 261...
... This mix of deep ignorance with high stakes makes a strong case for conducting low-cost studies designed to explore the likely yield from developing sophisticated accuracy metrics and then institutionalizing level-playing-field competitions that pit different analytical mindsets/methods against each other repeatedly across domains and time.
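As one concrete candidate for such an accuracy metric -- the excerpt does not commit to any particular one -- here is a minimal Python sketch of the Brier score, a standard measure of probability-forecast accuracy that competitions of this kind could score against.

# One candidate accuracy metric for such competitions: the Brier score
# (Brier, 1950), the mean squared error of probability forecasts.
def brier_score(forecasts, outcomes):
    """0.0 is perfect; an unvarying 0.5 forecast scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Two hypothetical analysts scored on the same five events:
outcomes  = [1, 0, 0, 1, 0]
analyst_a = [0.8, 0.2, 0.1, 0.7, 0.3]  # calibrated and discriminating
analyst_b = [0.5, 0.5, 0.5, 0.5, 0.5]  # hedges everything at 50/50
print(brier_score(analyst_a, outcomes))  # 0.054
print(brier_score(analyst_b, outcomes))  # 0.25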
From page 262...
... To check this possibility -- and the acceptability of the price -- researchers need to conduct comparisons of forecasting accuracy for both low and high base-rate outcomes, and managers need to communicate to analysts the value-weighted accuracy functions that they want analysts to maximize (e.g., "I am willing to tolerate dozens of false alarms to avoid a single miss for these rare events, but I attach equal importance to avoiding false alarms and misses for these more common events")
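A minimal sketch of how such a value-weighted accuracy function could be made explicit and scored against; the warn/no-warn format and all cost numbers below are hypothetical illustrations of the quoted policy, not the chapter's specification.

# Value-weighted accuracy: misses and false alarms carry different costs,
# and the cost ratio differs by event type, per the manager's statement.
RARE_EVENT_COSTS   = {"miss": 50.0, "false_alarm": 1.0}  # dozens of false alarms per miss
COMMON_EVENT_COSTS = {"miss": 1.0,  "false_alarm": 1.0}  # equal importance

def value_weighted_cost(warnings, outcomes, costs):
    """Total cost of binary warn/no-warn calls against binary outcomes."""
    total = 0.0
    for warned, occurred in zip(warnings, outcomes):
        if occurred and not warned:
            total += costs["miss"]
        elif warned and not occurred:
            total += costs["false_alarm"]
    return total

# Under the rare-event weights, a trigger-happy analyst (three false
# alarms, no misses) outscores a cautious one who misses the one event:
outcomes      = [0, 0, 0, 1, 0, 0]
trigger_happy = [1, 1, 0, 1, 1, 0]
cautious      = [0, 0, 0, 0, 0, 0]
print(value_weighted_cost(trigger_happy, outcomes, RARE_EVENT_COSTS))  # 3.0
print(value_weighted_cost(cautious, outcomes, RARE_EVENT_COSTS))       # 50.0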
From page 263...
... THE EVALUATIVE STANDARD

The signal detection theory framework, embodied in Figures 11-1 through 11-4, assumes that what matters, when holding analysts accountable, is their ability to reduce the risks of false negatives and false positives, weighted by the costs of each type of error. Such a long-term, large-sample perspective protects analysts who have had bad luck on an issue -- and protects policy makers from analysts who get lucky.
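A small simulation can illustrate why the large-sample perspective does this protective work: over a handful of events a guesser can beat a genuinely skilled analyst on cost-weighted error by sheer luck, but over many events their average costs separate. The skill levels, base rate, and error costs below are invented for illustration.

# Luck washes out of cost-weighted error only in large samples.
# All parameters are hypothetical.
import random

random.seed(0)
MISS_COST, FALSE_ALARM_COST = 10.0, 1.0

def average_cost(hit_rate, false_alarm_rate, n_events, base_rate=0.2):
    """Simulated cost per event for an analyst with the given hit and
    false-alarm rates, facing events with the given base rate."""
    total = 0.0
    for _ in range(n_events):
        if random.random() < base_rate:            # event occurs
            if random.random() >= hit_rate:        # ...and is missed
                total += MISS_COST
        elif random.random() < false_alarm_rate:   # false alarm
            total += FALSE_ALARM_COST
    return total / n_events

for n in (10, 10_000):
    skilled = average_cost(0.8, 0.2, n)   # real discrimination ability
    guesser = average_cost(0.5, 0.5, n)   # coin-flipping
    print(f"n={n}: skilled {skilled:.2f} vs guesser {guesser:.2f} per event")
# With n=10 the ranking can flip by chance; with n=10,000 the skilled
# analyst's lower expected cost (0.56 vs 1.40 here) shows through.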
From page 264...
... Applying agency theory runs into the same problem as applying accountability schemes. There is no unbiased measure of y, the output of
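For readers who want the notation spelled out, the canonical linear principal-agent model is sketched below in LaTeX. Only the output symbol y appears in the excerpt; the linear-normal form and the remaining notation are standard textbook assumptions (in the spirit of Holmstrom, 1979), not the chapter's own formulation.

% Assumed textbook form; only the symbol y comes from the excerpt.
\[
  y = e + \varepsilon, \qquad \varepsilon \sim N(0, \sigma^{2}),
\]
\[
  w(y) = \alpha + \beta y, \qquad
  \text{agent chooses } e \text{ to maximize } \mathbb{E}[w(y)] - c(e).
\]

The standard machinery presumes the principal can at least observe y without systematic bias; the excerpt's point is that intelligence analysis lacks such a measure, so the contract-design results cannot be imported wholesale.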
From page 265...
... In the intelligence context, achieving this solution requires identifying process or outcome metrics that correlate with rigorous, open-minded analyst behavior that maximizes the chances of political assessments that are useful to analysts' clients. Unfortunately, agency theory does not offer off-the-shelf solutions.
From page 266...
... For instance, critics of corporate America tend not to trust corporate personnel managers to implement affirmative action programs rigorously and tend to suspect that companies' process-accountability systems for ensuring equal employment opportunity are mere Potemkin-village facades of compliance.
From page 267...
... Imagine that 10 years from now, various private-sector and prediction-market initiatives start reliably outperforming intelligence agencies in certain domains -- and, during his or her daily intelligence briefing, the President turns to the Director of National Intelligence and says: "I can get a clearer sense of the odds of this policy working by averaging public sources of probability estimates."

Step #3: Preempt Politicization and Clarify Arguments

Third, we can do a better job of preempting politicization and clarifying where the factual-scientific arguments over enhancing intelligence analysis should end and the value-driven political ones should begin. Once the scientific community has enumerated the organizational design trade-offs and key uncertainties, and has compared the opportunity costs of inaction with the tangible costs of the needed research, policy makers must set value priorities, asking: Do the net potential benefits of undertaking the embedded organizational experiments and validity research sketched here outweigh the net observed benefits of not rocking the bureaucratic boat and continuing to insulate current policies and procedures from scientific scrutiny and challenge?
From page 268...
... Organizational Behavior and Human Decision Processes 38:230–256. Director of National Intelligence.
From page 269...
... Organizational Behavior and Human Decision Processes 60(1)
From page 270...
... Accountability, agency, and ideology: Exploring managerial preferences for process versus outcome accountability. Unpublished manuscript, Wharton School of Business, University of Pennsylvania.

