
6
Individual Reasoning

Barbara A. Spellman


The job of an analyst is to make sense of a complicated mass of information—to understand and explain the current situation, to reconstruct the past that led to it, and to use it as the basis of predictions for the future.1 To do so requires many types of sophisticated reasoning skills.

This chapter first describes a prominent historical characterization of overall individual human reasoning—that reasoning is filled with “irrationalities.” The chapter then remarks on more recent characterizations of reasoning that try to uncover the judgment mechanisms that produce these irrationalities, including recognizing that human reasoning might best be thought of as involving both unconscious and conscious components that have different strengths and weaknesses. Finally, it describes two important characteristics of reasoning abilities: that people seek coherence, and that people are particularists (i.e., that we tend to emphasize the uniqueness of each situation). The chapter illustrates how these characteristics apply in several general tasks involved in analysis, including interpreting questions, searching for information, assessing information, and assessing our own judgments.

CHARACTERIZATIONS OF REASONING

Views about human rationality have differed widely over the years. In the mid-20th century, psychologists were optimistic about human

1

Or, as Fingar states (this volume, Chapter 1), “to evaluate, integrate, and interpret information in order to provide warning, reduce uncertainty, and identify opportunities.”


rationality and claimed that people were “intuitive statisticians” and “intuitive scientists.” The heuristics and biases research program changed that perspective; current views that incorporate research on emotion, culture, and the unconscious have changed it yet again.

Heuristics and Biases Approach

Since at least the 1970s, psychologists and decision theorists have been documenting the many fallibilities and “irrationalities” in individual judgment. Countless examples show that people do not reason according to the rules of logic and probability, that we fail to recognize missing information, and that we are overconfident in our judgments. That list is just a small sample of what was discovered by the “Heuristics and Biases Program” (for an anthology of the classic works, see Kahneman et al., 1982; for a more recent update, see Gilovich et al., 2002). Lists of reasoning fallacies can be found in many places and, indeed, Heuer’s (1999) classic work, Psychology of Intelligence Analysis, was an attempt to interpret those findings with respect to the intelligence analyst. Among the better known irrationalities are the availability and representativeness heuristics and the hindsight and overconfidence biases (all discussed below). However, creating lists of fallacies is not very useful; more are likely to be found, and when attempting to “repair” one such leak, others may emerge. To better understand, predict the occurrence of, and, perhaps, remedy such irrationalities, it is useful to understand when, why, and how they arise.

Perhaps the most important thing to know about reasoning errors is that the errors are not random. That observation (Tversky and Kahneman, 1974)—that the errors are systematic (or, as in Ariely’s 2008 clever book title, Predictably Irrational)—is what makes such errors interesting, informative, and sometimes treatable. If such irrationalities are built into our reasoning, what in our cognitive system causes them?

Some theorists argued that many of the “irrationalities” were just laboratory tricks—specific to the presentation of the problems and the populations routinely used in such studies. Indeed, some errors may be reduced when minor changes are made to the presentation (e.g., when information is presented in frequency rather than probability format or when people see that a random generator was at work; e.g., Gigerenzer and Hoffrage, 1995). However, most errors cannot be made to disappear and most are (1) present in experts as well as novices and (2) resistant to debiasing attempts.

Attribute Substitution

One compelling account of many of these errors is that they are the result of “attribute substitution”—a kind of reasoning by proxy. People often have to make a judgment about some attribute—perhaps an external attribute such as how frequently some event occurs, or an internal attribute such as how happy you are. When the attribute is complicated because important information about it is unknown, the information is difficult to assess, or too much information is available, people substitute a judgment that is simpler to make, typically one based on a related, but different, attribute (Kahneman and Frederick, 2005). Take, for example, the “availability heuristic.” Suppose you are asked: Do more countries in the United Nations begin (in English) with the first letter P or I or N? Because you do not have the current list embossed in your memory, and going through your mental map of the world would be tedious, you decide to think up the names of countries that begin with those letters and guess whether they are in the United Nations. Some examples easily pop into mind because of recent news stories; others might come to mind after cueing your memory with a question like: “From which countries do many Americans originate?” Note that in many situations, this technique will work because often things for which you can think of examples are actually more likely (e.g., Are more Americans named John or Nandor?). However, for the United Nations problem, substituting what you can think up for what is true is likely to lead you to fail.2

The Inside View

“Attribute substitution” explains many other reasoning biases. A common and important type of attribute substitution is the use of the “inside view”—that when asked to make judgments about various qualities, we query our own phenomenological experiences, or run our own mental simulations of events, and provide that as the “answer.”


Imagining versus doing Consider this oft-told story by the Nobel Prize-winning psychologist Danny Kahneman.3 (It is an example of the “Planning Fallacy.”) Kahneman was part of a group trying to develop a high school

2

It is easy to think of the eight countries beginning with I: Iraq, Iran, India, and Israel are related to current U.S. issues in the Middle East and South Asia; many American families originate from Ireland and Italy; Iceland and Indonesia might also come to mind for various current events reasons. However, nine countries begin with “N” and another nine with “P.” See http://www.un.org/en/members/index.shtml for a current list of United Nations member states [accessed August 2009].

3

Kahneman won the Nobel Prize in Economics; there is no Nobel Prize in Psychology.


course and textbook on judgment and decision making. The group had been meeting for about a year and had written a few lessons and chapters. One day, Kahneman asked each group member to privately estimate how much more time each one thought would be needed to finish the book. The estimates ranged from 1.5 to 2.5 years. Then he asked the curriculum expert how long other groups like this one had taken to finish a textbook. The expert seemed chagrined. He reported that about 40 percent of such groups never actually finished their books, and of those that did, completion times ranged from 7 to 10 years. (Completion ended up taking 8 years.) Such misestimates occur because when we consider how things will pan out, we think about how much work we could possibly get done in a period of time, and we think of the best case scenario (and forget to expect the usual unexpected types of distractions and delays). Judgments about the time needed to do a task are important to both the analysts’ own work and in predicting the abilities and actions of others. For our own planning, we are usually better off with the “outside view”—comparing our situation to similar past situations.

If we have actually performed a task ourselves, we may be good at judging how long others will take to do it—but the usefulness of that judgment can be destroyed. For example, suppose you are asked: How difficult is it to find the anagram for FSCAR? People who need to find the answer themselves are better at judging the relative difficulty of an anagram problem than people who solve it after having seen the answer4 (Kelley and Jacoby, 1996). When you need to work out problems for yourself, you can use your own subjective difficulty as a good predictor for the subjective difficulty of others. However, if you have previously seen the answer, the informativeness of your subjective difficulty is ruined. In the study, those who had earlier seen the answer in a list of words (but didn’t necessarily remember seeing it) solved the anagram faster. They then used their own speed as the basis for their judgments—making them bad at predicting the difficulty other people would have. Those who had seen the anagram and answer presented right next to each other knew not to rely on their own subjective experience in solving the anagram. Instead, they came up with hypotheses about why some anagrams should be more difficult to solve than others and made good predictions of other people’s performance.


Hindsight bias The FSCAR example is related to the hindsight bias (or “Monday morning quarterbacking”)—once we know something, we are bad at judging what we would have thought or done without that knowledge. In many studies (see Fischhoff, 2007, for a historical review), people read about the prelude to an obscure battle between the British and the

4

Spoiler: Something you wear around your neck in winter.


Gurkhas. Some people were told that the British won, others that the Gurkhas won, and others were not told who was victorious. Then all were told to ignore the victor information. Later, when asked to judge the probability of various outcomes, or to judge what others who did not know the outcome would think, people who had read a particular outcome were more likely to respond that that outcome was the one that would have occurred, or the one that others would guess.

This inability to forget or ignore what we know can be a pernicious problem in the courtroom. For example, judgments of negligence should reflect whether an injury was “foreseeable”; that is, whether someone should have known beforehand that the injury might have occurred. However, once an injury has occurred, the hindsight bias comes into play. Thus, although it might seem unlikely that people would badly misuse a consumer product, once it happens, jurors are likely to conclude that the use, and resulting injury, were foreseeable (Kamin and Rachlinski, 1995).

Indeed, once something has occurred, accusations of how something was “obvious” or could easily have been discovered or stopped beforehand are rife in the world of law enforcement and intelligence.


Assessing ourselves A very important judgment that analysts (and others) commonly have to make is how confident they are in what they know or in the predictions they have made. As described by Arkes and Kajdasz (this volume, Chapter 7, Intuitive Theory #2), people are typically overconfident in their judgments. For example, predictions made with 90 percent confidence are likely to happen less than 90 percent of the time. In addition, people are not always good at discriminating between events that should be believed with high confidence and those that should not.

Why might overconfidence occur? Correlations between beliefs (like predictions) and actuality typically go awry when the factors affecting the judgment are different from the factors affecting the reality. To make predictions about the likelihood of an event, we typically use the “inside view”—we run mental simulations and try to think of scenarios that will, or will not, lead to the predicted outcome. Like other mental processes that rely on the availability of “what comes to mind,” we are likely to miss relevant information and be affected by ideas or events that are more recent or obvious. We thus end up with more confidence in outcomes that come to mind more easily.

A related problem in assessing ourselves is that we view ourselves as more fair and less biased than others. When we think about how we came to a conclusion, we don’t feel ourselves being biased. We don’t feel like we have been affected by our prior beliefs or by what we (or our boss) wanted the answer to be, or by the order in which information has been presented


to us or by how difficult it was to get. But we are, and all of those affect our predictions. For a good review of the above work, see Dunning (2007).

Incorporating the Unconscious, Emotion, and Culture

During the past several decades, researchers have broadened their investigations regarding the inputs to our reasoning, including examining the effects of unconscious knowledge, emotion, and culture. Emotion was long considered to be a detriment to reasoning, but current thinking suggests that emotion might give us accurate information and change our thinking strategies in ways appropriate to the situation (Clore and Palmer, 2009). Research on cultural effects on reasoning demonstrates a variety of differences in what might have been thought to be common human reasoning processes. An important recent article on that topic points out that nearly all of the research in psychology journals (including most of what is cited in this chapter) was conducted with U.S. participants, typically university undergraduates (although increasingly less so). The reasoning of these “WEIRD” people (Western, educated, industrialized, rich, and democratic) is different from that of people from other regions, groups, and cultures in many ways (Henrich et al., 2010). Thus, the research described herein is likely to characterize the reasoning of analysts themselves, but it might not characterize individuals from the various populations that analysts may consider.

Two Systems of Reasoning

A huge amount of research has been conducted during the past two decades on the role of unconscious thought, or “intuition,” in reasoning. Malcolm Gladwell’s (2005) bestselling book, Blink, described some of that research. Unfortunately, many people took from the book the lesson that intuition is always good and reliable. A better lesson is that sometimes intuition is good—but only when conditions are right. Gladwell did not specify what those conditions were, but a recent “debate” between Kahneman and Klein (2009) attempts to do so. On the surface, these authors had seemed to disagree—Kahneman demonstrated that intuition (heuristics) can often give the wrong result, whereas Klein demonstrated that, especially in the hands of experts, intuition often yields the correct result. What Kahneman and Klein agree on is that intuition can be a good tool when: (1) the environment is predictable (so what happened previously is a good predictor of what is likely to happen again); and (2) the person has had the “opportunity to learn the regularities of the environment” through repeated exposure and feedback. They also agree that a person’s confidence in an intuitive judgment is not, by itself, a good indicator of its accuracy.

Definitions of System 1 and System 2

The mind can be thought of as having two reasoning systems, often labeled System 1 and System 2 (Stanovich and West, 2002).5 In broad strokes, System 1 is the “intuitive system”—it works unconsciously, reaches conclusions fast, engages emotion, and relies on heuristics—whereas System 2 works consciously and deliberately, comes to conclusions slowly, and uses logic.6 When presented with a problem or decision, both systems engage. But System 1 comes up with an answer more quickly. Then System 2 might check and either approve or override that answer (Evans, 2008).

Consider, for example, the following problem:

A bat and a ball cost $1.10 in total. The bat costs $1 more than the ball. How much does the ball cost?

Most people will initially think the answer is 10 cents; the 10 cents pops out of the $1.10 and seems about the right size. However, if System 2 is engaged to check the math, seeing how that answer is wrong is simple.7 Yet most people, most of the time, including students at the best universities, will report the answer as 10 cents. Note, however, that how people answer depends somewhat on various features of the situation—such as the time available for making the decision and the way the information is presented—and on various features of the individual—such as IQ and statistical training (Kahneman and Frederick, 2005).

At a global level, analysis is more a System 2 than a System 1 process. Even when decisions need to be made quickly, they do not need to be made instantly; there is time for System 2 to check the work of System 1. Still, the thoughts generated quickly by System 1 may serve as inputs (for better or worse) to later reasoning.

Interaction of reasoning systems

System 2 can play the “overriding” role in many ways. So, for example, in the classic irrationality findings in which System 1 makes an attribute substitution (e.g., substituting ease of retrieval for systematic counting), System 2 can slow things down to reach the correct answer (e.g., in the United Nations example above). Making people conscious of attribute substitutions that affect their judgments can often change judgments for

5

Theorists debate not only whether there are two (or more) reasoning systems, but also whether the two are really a dichotomy or represent ends of a continuum (see Evans, 2008).

6

When I try to remember which system is which, my mnemonic is that ONE came first—it is thought to be evolutionarily older and shared with animals—whereas TWO is thought to be newer and require language.

7

Spoiler: The answer is 5 cents. Check your work.


the better because people might then be able to use the real rather than the substituted attribute. For example, when researchers phone people and ask, “How happy are you?” the answers are affected by the weather at the time—when the weather is better, people report being happier. However, if the researchers preface the happiness question with a seemingly banal question about the weather, the weather—an irrelevant factor—no longer affects mood judgments; that is, people eliminate its influence (Schwarz and Clore, 1983). This result is similar to the FSCAR example above: When people are aware of something that could be throwing off their judgment, they may be able to set it aside and rely on different (possibly better) information when making the judgments.

However, just because System 2 has the labels “conscious” and “logical” as opposed to System 1’s “unconscious” and “heuristic” does not mean that System 2 is always better. Becoming conscious of a factor that is relevant to an answer can cause that factor to be overweighted. So, when college students were asked the following two questions—“How happy are you with your life in general?” and “How many dates did you have last month?”—the order in which they answered the questions made a huge difference in the relation between the answers. When the “general” question was answered first, the two answers showed little relation; however, when the “date” question was answered first, there was a huge positive correlation between the answers, suggesting that the students used the simple numerical answer to the dating question as a proxy for the answer to the more amorphous question about happiness (Strack et al., 1988). Indeed, with complex multidimensional problems, System 1 may be valuable for considering multiple factors proportionally and finding the most coherent story.8

Embodied Cognition

An even more recent line of theorizing broadens the factors that influence thinking to include the human body. This line points out that reasoning is not a disembodied activity; rather, it takes inputs from human sensory systems, occurs in brains molded by evolution to fit human needs, and serves the goal of facilitating human action. The range of findings shows how our moods and emotions, our bodily states (e.g., being tired), our physical environment (e.g., being hot or cold), and our social environment

8

Much debate is happening about the “deliberation without attention” effect—the finding that when solving complex problems, people whose attention was distracted made better choices and decisions than people who were continuously focused on the problem (see Lassiter et al., 2009, for a critique).


(e.g., in the presence of friends or enemies) can affect how we reason (see Spellman and Schnall, 2009, for a review).

CHARACTERISTICS OF REASONING I: PEOPLE SEEK COHERENCE

People actively try to understand and make sense of the world. Among the important relevant properties of human reasoning are that we seek patterns and explanations, that we use both top-down and bottom-up processing, and that our imaginations are often constrained by reality. These characteristics of reasoning have important implications for various analytic tasks.

People Seek Patterns

People are adept at finding patterns in the world, even when such patterns are not “real.” These days we look up at the constellations Ursa Major and Ursa Minor9 and wonder, what were second-century astronomers thinking? Did they really see bears in those patterns of stars? Yet giving a name to what would otherwise be a scattered collection helps us to identify, describe, and use it when it is helpful.

Although people are good at finding patterns, we are bad at both detecting and generating randomness. For example, it is commonly believed that basketball players have “hot streaks”—short bursts when their performance is better than what would be predicted by chance and their baseline level of ability. Yet Gilovich et al. (1985) showed that such streaks in performance are no more common than what would be generated by chance.

People also typically think randomness should “look more random” than it actually does. In a classic demonstration of the representativeness heuristic, people are asked to decide which is the more likely string of tosses of a fair coin (where H = heads and T = tails): HHHTTT or HTTHTH. People more often choose the second string even though the two are equally likely. Similarly, people commonly commit the “gambler’s fallacy”—believing that after a coin is tossed and comes up H, H, and H again, the chance of tails on the next toss is much greater than 50 percent when, in fact, it is the same 50 percent as always. Randomness sometimes generates long sequences of the same thing. When people are told to generate something at random themselves—for example, to write down “heads” and “tails” as if flipping a fair coin—they will have more switching back and forth

9

In Latin, Ursa Major and Ursa Minor mean Great Bear and Little Bear, respectively. The seven brightest stars of Ursa Major form the Big Dipper and of Ursa Minor form the Little Dipper.


between heads and tails and fewer long sequences than an actual fair coin. Note that this inability to be random can yield important information when one is trying to detect whether something has happened by chance (e.g., a series of fires, several train crashes) or by human design. (See Oskarsson et al., 2009, for a review.)
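Both points are easy to verify numerically. The sketch below is a minimal illustration in Python (the helper name and trial count are my own choices, not from the studies cited): it shows that any specific string of six fair tosses is exactly as likely as any other, and it estimates how often a genuinely random sequence of 20 tosses contains a run of four or more identical outcomes, the kind of run that human “random” generators tend to avoid.

```python
import random

def longest_run(seq):
    """Length of the longest run of identical consecutive outcomes."""
    best = cur = 1
    for prev, nxt in zip(seq, seq[1:]):
        cur = cur + 1 if prev == nxt else 1
        best = max(best, cur)
    return best

# Any specific string of 6 fair tosses has probability (1/2)**6,
# so HHHTTT and HTTHTH are exactly equally likely.
print((1 / 2) ** 6)  # 0.015625 for either string

# Long runs are normal: estimate how often 20 fair tosses include
# a run of 4 or more identical outcomes.
trials = 100_000
hits = sum(
    longest_run([random.choice("HT") for _ in range(20)]) >= 4
    for _ in range(trials)
)
print(hits / trials)  # roughly 0.77
```

Hand-generated “random” sequences contain such runs far less often, which is one way a series of events produced by chance can be distinguished from one produced by human design.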

Patterns in Deception

The patterns that come from deceptive sources of information are likely to be different from the patterns that come from honest ones—but those differences are likely to be difficult to detect. There is a vast literature on “detecting deception”—the cues that people use when trying to determine whether someone is lying to them while speaking (e.g., shifting gaze or fidgeting) (Bond and DePaulo, 2006). But there is little psychology research on how people determine whether a pattern of behavior is likely to be deceptive and how information about such suspected behavior is used.

Suppose you are given a choice: You can take the advice of someone who, in the past, has always accurately said that five coins were behind door A, or take the advice of someone who has been correct only 80 percent of the time in saying that seven coins are behind door B. People vary on which they choose, but that is not what is at issue.10 Suppose you believe the person with the 80 percent accuracy rate is really trying to help you—he or she does not benefit from your errors and apologizes when he or she cannot deliver. Contrast that person (“uncertain”) with another person (“deceptive”) who has also been 80 percent reliable in the past, but whom you know benefits from your errors and who takes delight when you wrongly choose his door. Although overall the odds are the same with the uncertain and deceptive informants, people are much more cautious about taking the gamble (whether or not it is rational to do so) when the 80 percent informant is deceptive rather than uncertain.

Of course, truly deceptive people would never advertise themselves as such. But can individuals pick up on patterns of deception? Suppose you can now choose between believing someone who has been 70 percent accurate in the past or someone who has been 80 percent accurate in the past. Whom do you choose? The answer should be: It depends. Table 6-1 depicts the accuracy of information provided by two informants (e.g., 9/10 means that of 10 pieces of information, 9 were accurate). You can see that for both low- and high-value information, Informant A is more accurate (in terms of percentages) than Informant B. Yet overall, Informant B is more accurate. (This seeming contradiction is called Simpson’s paradox and is explained in

10

Yes, the expected value of the “sure thing” is 5 coins and of the “gamble” is 0.8 × 7 = 5.6 coins. So it is rational to take the gamble. But that is not the important comparison here.


TABLE 6-1 Information and Accuracy of Two Informants

                Low-Value Information   High-Value Information      Overall
                (easy to uncover)       (difficult to uncover)      Total

Informant A     9/10 = 90%              5/10 = 50%                  14/20 = 70%
Informant B     87/100 = 87%            1/10 = 10%                  88/110 = 80%

another context by Zegart, this volume, Chapter 13.) Informant B is exhibiting a deceptive pattern—giving away lots of low-stakes information, but being deceptive on high-stakes issues. However, in a study with a similar structure, unwary participants thought Informant B was more reliable and less deceptive than Informant A—presumably because he was correct more often overall.
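Recomputing the rates in Table 6-1 makes the reversal concrete. Below is a minimal Python sketch (the data layout and variable names are my own); it reproduces the within-category and overall percentages from the table.

```python
# Accuracy counts from Table 6-1: (correct, total) per information type.
informants = {
    "A": {"low": (9, 10), "high": (5, 10)},
    "B": {"low": (87, 100), "high": (1, 10)},
}

for name, counts in informants.items():
    low_ok, low_n = counts["low"]
    high_ok, high_n = counts["high"]
    overall = (low_ok + high_ok) / (low_n + high_n)
    print(f"Informant {name}: "
          f"low-value {low_ok / low_n:.0%}, "
          f"high-value {high_ok / high_n:.0%}, "
          f"overall {overall:.0%}")

# Informant A: low-value 90%, high-value 50%, overall 70%
# Informant B: low-value 87%, high-value 10%, overall 80%
# A wins within every category, yet B wins overall, because B's totals
# are dominated by the easy, low-value items he answered in bulk.
```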

Of course, analysts are wary of the possibility of deception. Having the motivation to look for such patterns or the belief that they might exist (i.e., top-down knowledge; see the next section) will help one to discover such patterns (Spellman et al., 2001). But if information about source behavior is not effectively collected, collated, and provided, trying to discriminate deceptive from uncertain sources will be difficult.

People Use Both Top-Down and Bottom-Up Processing

People do not come to every new situation with a blank mind; we obviously already know or believe many things about how the world is likely to work. Thus, when perceiving something new, we use two kinds of information: “bottom-up” information is the information contained in the stimulus and “top-down” information is what we already know. Top-down and bottom-up processing work in parallel to help us make sense of information. A simple perceptual example is found in Figure 6-1. Assume that some blobs of ink have fallen on a manuscript and you have to decide what it says. What most English-speaking people see are the words THE CAT. Now look more carefully at the middle letters in each word. They are printed exactly the same, but we interpret them differently—one as an H and one as an A—because of our top-down knowledge of English words.

FIGURE 6-1 Ink blobs.


Users of other writing systems or non-English speakers might see them as the same. The fact that top-down knowledge affects interpretation means (among other things) that two people with the same information—be it a low-resolution satellite photograph or the incomplete facts surrounding a death—can logically interpret it differently given different prior knowledge.

People Seek Explanations and Causes

From telling stories about the gods of Mount Olympus to examining the tiniest bits of matter, people try to make sense of the world by figuring out the causes of events. Causal stories allow us to explain our world and, we hope, to predict or even control it.

The desire to find patterns and the use of top-down knowledge combine in the quest for causal explanations. When events co-occur we may “see” a cause–effect pattern that really is not there. For example, when people are asked to push buttons and then decide whether they are causing a variable light to turn on and off, they often overestimate the control they have over the light (see Alloy and Tabachnik, 198411). When people read stories in which a sequence of events occurs, but no causal words are used (e.g., “John held the glass” and “The glass broke”), they are likely to misremember the stories as containing causal links that were never stated (e.g., “John broke the glass”; see Fletcher et al., 1990). Furthermore, when people hear complex, competing information about how an event occurred—the kind of information a juror (or analyst) might hear—they try to extract the most complete and coherent explanation they can from the information. Once they are set on one story, however, they tend to devalue and misremember information that is inconsistent with the explanation they believe is best (Pennington and Hastie, 1986).

Analysts often try to assess causation. They have the important tasks of seeking information, reaching and explaining judgments, and assessing the quality of the information and their confidence in those judgments.

Searching for Information

Whether to answer a specific question or to keep abreast of current conditions (and thus know whether there is something that should be told), analysts must be aware of vast amounts of information. Years ago, analysts often suffered from a dearth of information; now, there is often too much information—and it becomes difficult to sift what is relevant and reliable out of all the noise.

11

This article provides an old, but excellent, review of the findings regarding the interacting influences of top-down and bottom-up knowledge on causal judgments.


Looking at a mass of information and making sense of it is nearly impossible without a question in mind. Yet when people are too focused on one question, they may miss important information that is right in front of them. Everyone has been to a restaurant, interacted with a waiter, then, later, when it was relevant, failed to remember what the waiter looked like. A fabulous demonstration of a failure to notice things can be seen at http://viscog.beckman.illinois.edu/flashmovie/15.php [accessed August 2009]. The watcher is supposed to count the number of times that the players in white shirts pass the basketball. When people are intent on doing that, they miss the unusual event in the scene. (Try it before reading this footnote.12)

When people have a particular answer to a question or a particular hypothesis in mind, they may suffer from “confirmation bias.” Much has been written about confirmation bias in the analysis literature and, indeed, many analytic tools have been developed to address different aspects of it. The term has been used to describe various flaws in reasoning that, although often lumped together, are distinct. They include (1) only searching for information that is consistent with one’s favored hypothesis, and (2) devaluing, ignoring, or explaining away information that is not consistent with one’s favored hypothesis.

The suggestion that people only search for information that is consistent with their hypotheses, even if true, may not be as bad as it appears. When searching for information to support a hypothesis, you are also likely to find information that will undermine your hypothesis (Klayman and Ha, 1987). Suppose, for example, you suspect a country is developing various types of weapons of mass destruction (WMDs). You search, but find no evidence of anything related to creating nuclear weapons. However, you do find evidence for some enhanced biological research activities. Thus, in looking for evidence to support your broader theory (of developing all types of WMDs), you have disconfirmed it. With the new evidence you might decide to revise and narrow your theory to believing the country is only creating biological WMDs. (Of course, you might form a new theory that it is trying to upgrade its medical technology, or you might keep your initial theory but add the assumption that it has managed to hide the other evidence.)

Therefore, whether looking for information to confirm a hypothesis is bad depends on the relationship between the hypotheses and the true state of the world (which is, of course, unknown). However, other processes that fall under the term “confirmation bias” have more insidious effects, as described below.

12

Spoiler: There is a person dressed in a gorilla suit walking through the game. Once you know it, you can’t fail to see it.

Revaluing and Rejecting Information

Sometimes information that is discovered must be revalued or ignored. The following are two examples of real-world situations that cause people to revalue or ignore information.


Duplicate sources A problem that arises when there is too much information comes from the duplication of information from the same, rather than independent, sources. Information that is repeated will be overweighted, even when the repetition comes from a redundant source and thus adds no independent verification. When people learn, for example, that three pieces of information come from the same source, they can devalue it appropriately, but only if they learn that it comes from the same source before they are exposed to the information. Once it is integrated with other knowledge it is difficult to devalue. (See Ranganath et al., 2010, for a review regarding information sources.)


Hidden information Consider the (classic television) courtroom situation in which a witness blurts out some incriminating evidence and the judge instructs the jury to disregard it. Results from numerous studies on this issue are consistent with intuitions—typically jurors don’t fully disregard that information. But why? Some explanations are cognitive (e.g., that jurors can’t forget information that has been woven into the causal explanation of the case); other explanations are more social (e.g., they don’t want to let a guilty person go free).13 Sometimes jurors who are told to disregard a piece of information pay even more attention to it than jurors who are not told to disregard it. (See Steblay et al., 2006, for a review.) An additional hypothesis suggests that jurors pay more attention to information they believe people are trying to hide (Walker-Wilson et al., unpublished).

Regardless of whether sources are trying to hide information, it is likely that people treat information that takes longer to find as more valuable than information that is obvious or easy to find. In addition, anecdotal evidence from analysts suggests that more highly classified information may be treated as more valuable information—despite not necessarily being either more relevant or reliable. (This effect sounds like an attribute substitution effect.)

Explaining Judgments

After searching and evaluating information, an analyst must come to a conclusion—often before he or she feels ready to do so. It has long been

13

But note that even judges (who could be considered “experts”) may be influenced by information they know they should not consider (Wistrich et al., 2004).


known that when there are no good reasons for a decision (or equally good reasons for all decisions), people will make up reasons. For example, when people are presented with four products of identical quality and asked to pick which one they prefer, they will pick one (most often the right-most one) and proclaim that it was best because of some made-up difference (Nisbett and Wilson, 1977). More importantly, even when there are good reasons for a decision, people often cannot explain why they made a judgment, and they make up explanations. Worse yet, by articulating some reasons, they may overweight those reasons and lose access to other reasons.

The examples described earlier of how asking about dating or the weather first influences subsequent judgments of happiness illustrate how thinking about some reasons causes overweighting of those reasons. Illustrations of how articulating only some knowledge or reasons can impair decision making come from the “verbal overshadowing” literature. Suppose you and a friend have witnessed a truck bombing and suspects running from the scene. You are asked to describe the suspects’ faces, but your friend is not. Later you are both shown pictures of faces similar to the suspects. Who will be more accurate at picking out the suspects? Your friend. Faces are made up of many features, and face perception is partly holistic. When you described the faces, you described some features and not others; your later memory is biased toward the features you described. This problem may be eliminated if you talk to someone else who gave a different description (Dodson et al., 1997).

Of course, in intelligence, not explaining one’s reasoning is not an option. But we should be aware that explaining after the fact is not always complete or accurate and that the act of explanation can change memory; previous potential influences on judgment should therefore be recorded and considered.

Assessing Information

Another important judgment that analysts must make is about the quality of information and its value in supporting a potential conclusion.


Fluency Consistent with the research on the “inside view,” people believe that answers or judgments that come to mind more easily are more sound. But ease (also known as “fluency”) can be easily manipulated, and the confidence based on such ease is not warranted. So, for example, recall the bat and ball problem for which people often give the incorrect answer of 10 cents. People who see the problem in a hard-to-read font (thus making the problem seem more difficult) are more likely to get the correct answer of 5 cents—presumably because they did not have the same sense of ease in solving the problem, so they are more likely to check their own work (Oppenheimer, 2008).
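To spell out the check that fluency discourages: let b be the cost of the ball in dollars. The problem’s two conditions give

\[
b + (b + 1.00) = 1.10 \;\Rightarrow\; 2b = 0.10 \;\Rightarrow\; b = 0.05,
\]

so the ball costs 5 cents; the intuitive 10-cent answer would make the pair cost $1.20 rather than $1.10.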


Disconfirming hypotheses The second cluster of confirmation biases (devaluing, ignoring, or explaining away conflicting information) occurs when assessing information and can have disturbing effects. See Arkes and Kajdasz (this volume, Chapter 7, end of Intuitive Theory #5). For example, people take more seriously (and find fewer flaws in) information consistent with their own hypotheses, regardless of whether those hypotheses are ultimately correct (Lord et al., 1979).

It is commonly believed both that a hypothesis can never be proven, no matter how many confirming instances are found (this is accurate), and that a single disconfirming instance can disprove a hypothesis (this is generally inaccurate). For example, in high school physics, our class was told to drop a ball in a vacuum tube, measure how long it took to drop, then calculate the earth’s gravitational acceleration. Our answer was 7.8 m/s² (rather than the “more traditional” 9.8 m/s²). Did we tell the teacher that on a Tuesday afternoon, in a small town in New York, gravity had changed? No, my lab partner and I checked the stopwatch, checked the vacuum, remeasured the tube, redid our calculations, blamed each other, and duly reported our answer as 9.4 m/s².14 One moral of this story is that disconfirming information in itself is often not sufficient to disprove a theory (particularly a well-established theory—and rightly so). Information comes with “auxiliary assumptions” (Quine, 1951), and when those are attacked so is the value of the information. Another moral, however, is that people are more likely to find such problems in the information when they are motivated to search for them—that is, when the information is inconsistent with their preferred hypothesis. If we had originally gotten an answer we wanted (of about 9.8 m/s²), we would never have checked for faulty equipment or faulty logic.

Note that in the physics classroom, the answer we wanted to get was also the answer we knew (from reading ahead in the textbook) was objectively correct. When people reason they usually have one of two goals: one is to try to find the most accurate answer, and the other is to find a particular answer. The goal will affect the reasoning strategies chosen; the strategies chosen will affect what is concluded (see Kunda, 1990, for an excellent description of the strategies and processes involved). However, when people are motivated to find a particular answer (whether by internal or external pressures), they are more likely to do so, regardless of the accuracy of that answer.

Of course, sometimes people do not have an initial preferred answer. When choosing between two equally novel and appealing products—or hypotheses—people are more influenced by information learned early

14

Of course, we didn’t report the exact answer. Then the teacher would have been sure we had faked it.


rather than later. Then, once there is a preferred alternative, people bias their interpretation of later incoming information to be consistent with that initial preference, perhaps to maintain consistency (Russo et al., 2008).

Imaginations Stick Close to Reality

Our reasoning and imagination often stick very closely to reality. For example, when children are asked to draw creatures from another planet, the aliens almost always have an even number of limbs and exhibit bilateral symmetry (as do most terrestrial animals) (Karmiloff-Smith, 1990). When adults read a story in which a bad outcome occurs and they are asked to change the story so that the outcome would be different, the responses (“counterfactuals”) they generate tend to converge on certain minimal changes of reality. For example, people may read about Mr. Jones, who decides to take the scenic route home from work one day, brakes hard to stop at a yellow light, and is hit by a drunk driver. When asked to change the story so that the outcome would be different, most people suggest not taking the unusual route, or not braking at the light, or the driver not being drunk. Few people suggest considering what would have happened if Mr. Jones had not gone to work that day at all or if the driver did not own a car. No one suggests considering what if cars were made of rubber or gravity had been suspended (e.g., Mandel and Lehman, 1996).

Generating counterfactuals helps us figure out the causes of events. The fact that our counterfactuals tend to be narrow can impede our considerations of the possible consequences of actions—an important skill for individuals and analysts (see papers in this volume by Fingar, Chapter 1, and Bueno de Mesquita, Chapter 3).

In addition, people often display “functional fixedness”—the inability to break free of conventional procedures or uses of objects. A standard functional fixedness task asks how one might attach a lighted candle to a wall so it does not drip on the floor. You are given a candle, a box of matches, and some thumbtacks.15 One must overcome functional fixedness to see fertilizer as an ingredient for explosives or to view airplanes as potential missiles—or to stay one step ahead of people who will do so.

CHARACTERISTICS OF REASONING II: PEOPLE ARE PARTICULARISTS

A second important characteristic of human reasoning is that people can be both generalizers and particularists, depending on context. What

15

The task is difficult because it requires the objects to be used unconventionally: empty the matchbox, thumbtack the bottom of the box to the wall, and stand the lit candle in it.


does that mean? As generalizers we see things as alike and treat them as alike. So, for example, we have general knowledge structures such as categories (groups of objects that we know are different, but treat as the same for some purposes) and scripts (general outlines of what to do in a situation similar to previous situations). When we meet a new dog, we know it might bite; when we enter a new fast food restaurant, we know to go to the counter to order, pay, and pick up our food.16

Yet, of course, we also know how to distinguish members of categories from each other. As particularists we see individuating characteristics in objects of interest, and often use those to make judgments in individual cases. Yes, some pit bulls are easily provoked, but not the one lovingly raised by your sister (or so you thought). Which takes precedence in decision making—treating things as similar and relying on category information, or treating things as different and relying on particular information? As for so many types of reasoning, what people do depends on the context. But in many analysis-related contexts, people may too often focus on the unique features of a situation and fail to rely on important similarities to other situations.17

Seeing Similarities and Differences

On first thought it seems as though similarity and difference are simply opposites—the more things are similar, the less they are different (and vice versa). However, whether, how, and how much things are judged as similar or different depends on both the context and the judge.

In a classic cold war example, Tversky (1977) asked some people which pair was more similar: (1) West Germany and East Germany, or (2) Ceylon and Nepal; 67 percent picked the former. Other people were asked which pair was more different; 70 percent picked the former. How can the same pair be both more similar and more different? Tversky argued that people knew more about the former countries than the latter countries. When asked about similarity, they weighted the similar features more; when asked about differences, they weighted the different features more—thus resulting in the seemingly contradictory answers.

Of course, the most important issue in assessing similarity is keeping

16

See Arkes and Kajdasz (this volume, Chapter 7, Intuitive Theory #3) for a discussion of schemas, which are another type of generalized knowledge.

17

Relevant findings include the classic research on the failure to use base rates when given individuating information (e.g., Tversky and Kahneman, 1973) and on the superiority of actuarial to clinical judgments (see Arkes and Kajdasz, this volume, Chapter 7, Intuitive Theory #3, and the discussion of experts versus algorithms). People are also particularists in legal settings when they agree that some law is good in general, but don’t like it to be applied to a particular case at hand.


the question in mind: Similar with respect to what? Which pair is more similar: (1) United States and China, or (2) United States and Italy? The answer differs depending on whether the question is about production capabilities or how the government is likely to respond to protests against it.

Thus, how we judge similarity (of people, events, situations) depends on time, context, the question asked, and the judge’s knowledge (Spellman, 2010). How similar we judge two situations to be affects how relevant we will believe one is to understanding or predicting the other. Assumptions about the level of analysis (categories or individuals) and about the relative importance of similarities and differences are key to how an analyst might interpret and answer questions.

Interpreting Questions

As described by Fingar (this volume, Chapter 1), one job of analysts is to answer direct questions from customers. Other jobs mentioned include providing warning and assessing current developments and new information. Each of those is also like answering a question—but the less specified question: “Is there something that should be told?”18


Level of categorization Answering a question depends very much on understanding why the question was asked and the appropriate level of categorization. The level at which a question is asked and answered can affect estimations. For example, experiment participants were told that each year in the United States, about 2 million people die. Some were asked to estimate the probability of people dying from “natural causes”; the average was 58 percent. Others were asked to estimate the probability of dying from “heart disease, cancer, or some other natural cause”; the average was 73 percent. These questions asked for exactly the same information—the difference was whether the category (“natural causes”) was decomposed into subcategories (Tversky and Koehler, 1994). Of course, in this example, numbers can be looked up, but often they cannot be (e.g., how many insurgents does Country Z have?). This phenomenon (“subadditivity”—in which the whole is judged to be less than the sum of its parts) is exhibited even by political historians when asked about potential counterfactual outcomes of the Cuban missile crisis (Tetlock and Lebow, 2001). It illustrates the important point that when people think about a general category, they often don’t “unpack” it into all of the relevant subcategories.

18

That broad question includes the following types of other questions: Have we learned something new and important? Has something important changed? Has something unexpected happened? Is something now relevant to the current situation that wasn’t relevant in the past?


Compared to what? The issue of “compared to what” is implicit in nearly every question. An American colleague who recently took a prestigious year-long British fellowship was often asked: “Why are you here?” Depending on who asked, he would answer: “Because I could take time off this year rather than next year,” or “Because it’s nicer here than at that other British university,” or “Because they couldn’t get Antonin Scalia.” Each is an answer to the question—but to a different implicit part of the question (why now rather than in the future, why here rather than there, why him rather than someone else). In analysis it is essential to get the “compared to what” correct (e.g., “Why is this country taking that action now?” could be answered with regard to the country, the action, or the timing).

Note that when people ask themselves multiple questions, the order in which the questions are asked can affect the answers because different questions will bring to mind different features and comparisons (recall the weather, dating, and happiness examples). Like the other processes described earlier, how people answer a question will depend not only on what they already know and what the context is, but also on assumptions about what is relevant to the questioner. Those assumptions can easily be wrong.

Analogy: Using the Past to Understand the Present

An important and useful reasoning skill for analysis that uses similarities and difference is analogy. People use analogies when trying to make sense of what to do with a new problem; they find past similar problems that are better understood (“source analogs”) and may use the relevant similarities to help them solve the current problem. Analogies can be used to help understand a situation, to predict what will happen, and to persuade others. For example, nearly every time the United States considers a new global intervention, the two great source analogs emerge in popular debate: World War II (the necessary, well-fought, victorious war) and Vietnam (the questionable, poorly fought, lost war).19 When the first President Bush considered military action against Iraq in 1991, those two analogies were often discussed. The World War II analogy won (Saddam Hussein was like Hitler, he had already annexed Kuwait, he would move against other countries in the region, he needed to be stopped) and the United States went to war. The situation in Iraq in 2003 did not provide a good analogy to World War II and Persian Gulf War II did not garner the public and international support of Persian Gulf War I.

What makes a good analogy? The key is finding relevant similarities—typically similarities that matter to causal relationships—between the two situations.

19

These are the popular press characterizations.


For example, in drawing an analogy, does it matter whether countries are large or small, have the same climate or geography, or are similar in population, major religion, or type of government? That depends on the question. An important distinction to make when using analogies is between superficial and structural similarities. Superficial similarities are usually observable attributes (e.g., that a country is poor) that situations have in common. Structural similarities are underlying relationships (e.g., that one country attacked another) that situations have in common. These relational similarities are typically more important when using analogy for understanding and prediction (Holyoak and Koh, 1987).
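To make the superficial/structural distinction concrete, the toy sketch below (Python) scores two hypothetical situations by simple set overlap. The feature sets, and the choice of Jaccard overlap as the similarity measure, are illustrative assumptions of this sketch, not a model from Holyoak and Koh (1987) or any other work cited here.

```python
# Toy sketch: two situations compared on superficial attributes versus
# structural relations. All features and the similarity measure are
# invented for illustration.

situation_a = {
    "attributes": {"poor", "large population", "coastal"},
    "relations": {("attacked", "neighbor"), ("allied with", "major power")},
}
situation_b = {
    "attributes": {"poor", "landlocked"},
    "relations": {("attacked", "neighbor"), ("allied with", "major power")},
}

def jaccard(x, y):
    """Set overlap: 1.0 means identical feature sets, 0.0 means disjoint."""
    return len(x & y) / len(x | y) if (x | y) else 0.0

superficial = jaccard(situation_a["attributes"], situation_b["attributes"])
structural = jaccard(situation_a["relations"], situation_b["relations"])

# Fast, memory-driven retrieval tends to weight the attribute overlap;
# deliberate mapping weights the relational overlap.
print(f"superficial similarity: {superficial:.2f}")  # 0.25 in this example
print(f"structural similarity:  {structural:.2f}")   # 1.00 in this example
```

In this toy example the two situations look different on the surface yet share all of their relational structure, which is the pattern that matters most for analogical understanding and prediction.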

When retrieving potential source analogs from memory, people typically first think of situations that share superficial similarities. For example, in 1991 college students were asked to “fill in” the analogy: “People say Saddam is like Hitler. If so, who/what would they say is: Iraq? George H. W. Bush? The United States? Kuwait?” Most students said that President Bush was like Franklin Delano Roosevelt and the United States (in 1991) was like the United States of World War II (U.S. WWII). That analogy has a high degree of superficial similarity—U.S. 1991 is in many ways like U.S. WWII and a current president is like a past one. However, when students read brief passages about World War II before filling in the analogy, depending on the passage, many of them preferred to say that U.S. 1991 was like Great Britain of World War II and President Bush was like Churchill.20 That mapping has less superficial similarity, but more structural similarity in that it captures the relations and forces at work (Spellman and Holyoak, 1992).21

Note that under time pressure, the more obvious superficial features of a situation are processed more quickly and may form the basis of similarity and analogy judgments (Medin et al., 1993), even though structural similarity is usually more important to understanding a situation.

Experts’ Use of Analogy

When using analogies, experts are better at ignoring superficial similarities and using structural similarities; indeed, part of developing expertise is learning about the important underlying structures of information. For example, novice and expert physicists were given cards with illustrations of physics problems and asked to sort them. Novices sorted the problems by the simple machines involved (e.g., pulleys, axles, levers), whereas experts sorted them by the underlying principles involved (e.g., conservation of momentum) (Chi et al., 1981).

20

The various passages were all historically accurate, but emphasized different aspects of World War II.

21

For example, Great Britain actively went to war over Germany’s actions in Eastern Europe, whereas the United States did not declare war against Germany until after Pearl Harbor.


Thus, an important part of expertise in any field is having a base of experiences from which to extract the information relevant to the present situation.

Analogies at War (Khong, 1992) describes the many analogies the United States considered informative as it became involved in Vietnam in the 1960s. The book illustrates how what one sees as the important similarities between two situations affects not only the judgments based on those similarities, but also the lessons learned. The war in Vietnam has left us with two contradictory lessons that continue to frame foreign policy debates. The phrase “no more Vietnams” meant for some people “that the United States should abstain from intervening in areas of dubious strategic worth, where the justice of both the cause and of the means used are likely to be questionable, and where the United States is unlikely to win” (p. 258); for others it meant “that it was the imposition of unrealistic constraints on the military by civilians unschooled in modern warfare that led to the defeat in Vietnam” (p. 259), and that in the future the military should be allowed to do whatever it needs to do to win.

Whether any lessons learned are applied to future situations depends, however, on whether the past examples are viewed as sufficiently similar to be relevant. Experts may be especially prone to particularizing situations rather than generalizing them, precisely because of their extra knowledge, information, and expertise. On the one hand, the more potential source analogs someone is aware of, the more easily he or she can access them from memory and find one that seems relevant to the present situation. On the other hand, the more one knows about the past and present, the easier it is to find features that distinguish the current situation from other situations. It is well documented that experts in a variety of fields rely too much on what they see as the special circumstances of the present case rather than on its common features.22

Especially important in the context of analysis, there may be a reward structure in place that values characterizing current situations as different from past ones. For example, an expert might “get credit” for expertise by pointing out how a new situation differs from the past rather than by saying, as any non-expert could, that it is the same as the past. Such a reward structure would accentuate looking for, and more highly weighting, differences—which will then be found—rather than the relevant similarities.

Thus, experts are best poised to use analogies (because they can identify the important underlying structural similarities between situations), but also best poised to (mistakenly) dismiss them even when they are relevant.

22

See Arkes and Kajdasz (this volume, Chapter 7, Intuitive Theories #3 and #4).


CONCLUSION

Just as much has changed in the world of intelligence in the past 25 years, so has much changed in theorizing about how humans think and reason. The list of “irrationalities” has grown longer, but we now have more insights into what they have in common and how they arise. Such knowledge can help us design ways to improve the products of individual reasoning.

REFERENCES

Alloy, L. B., and N. Tabachnik. 1984. Assessment of covariation by humans and animals: The joint influence of prior expectations and current situational information. Psychological Review 91:112–149.

Ariely, D. 2008. Predictably irrational. London, UK: HarperCollins.

Bond, C. F., Jr., and B. M. DePaulo. 2006. Accuracy of deception judgments. Personality and Social Psychology Review 10(3):214–234.

Chi, M. T. H., P. J. Feltovich, and R. Glaser. 1981. Categorization and representation of physics problems by experts and novices. Cognitive Science 5:121–152.

Clore, G. L., and J. Palmer. 2009. Affective guidance of intelligent agents: How emotion controls cognition. Cognitive Systems Research 10:21–30.

Dodson, C. S., M. K. Johnson, and J. W. Schooler. 1997. The verbal overshadowing effect: Why descriptions impair face recognition. Memory and Cognition 25(2):129–139.

Dunning, D. 2007. Prediction: The inside view. In A. W. Kruglanski and E. T. Higgins, eds., Social psychology: Handbook of basic principles, 2nd ed. (pp. 69–90). New York: Guilford Press.

Evans, J. S. B. T. 2008. Dual-processing accounts of reasoning, judgment, and social cognition. Annual Review of Psychology 59:255–278.

Fischhoff, B. 2007. An early history of hindsight research. Social Cognition 25:10–13.

Fletcher, C. R., J. E. Hummel, and C. J. Marsolek. 1990. Causality and the allocation of attention during comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition 16:233–240.

Gigerenzer, G., and U. Hoffrage. 1995. How to improve Bayesian reasoning without instruction: Frequency formats. Psychological Review 102:684–704.

Gilovich, T., R. Vallone, and A. Tversky. 1985. The hot hand in basketball: On the misperception of random sequences. Cognitive Psychology 17:295–314.

Gilovich, T., D. Griffin, and D. Kahneman. 2002. Heuristics and biases: The psychology of intuitive judgment. New York: Cambridge University Press.

Gladwell, M. 2005. Blink: The power of thinking without thinking. New York: Little Brown.

Henrich, J., S. J. Heine, and A. Norenzayan. 2010. The weirdest people in the world? Behavioral and Brain Sciences 33:61–83.

Heuer, R. J., Jr. 1999. Psychology of intelligence analysis. Washington, DC: Center for the Study of Intelligence, Central Intelligence Agency.

Holyoak, K. J., and K. Koh. 1987. Surface and structural similarity in analogical transfer. Memory and Cognition 15:332–340.

Kahneman, D., and S. Frederick. 2005. A model of heuristic judgment. In K. J. Holyoak and R. G. Morrison, eds., The Cambridge handbook of thinking and reasoning (pp. 267–294). New York: Cambridge University Press.

Kahneman, D., and G. Klein. 2009. Conditions for intuitive expertise: A failure to disagree. American Psychologist 64:515–526.


Kahneman, D., P. Slovic, and A. Tversky, eds. 1982. Judgment under uncertainty: Heuristics and biases. New York: Cambridge University Press.

Kamin, K. A., and J. J. Rachlinski. 1995. Ex post ≠ ex ante: Determining liability in hindsight. Law and Human Behavior 19:89–104.

Karmiloff-Smith, A. 1990. Constraints on representational change: Evidence from children’s drawing. Cognition 34:57–83.

Kelley, C. M., and L. L. Jacoby. 1996. Adult egocentrism: Subjective experience versus analytic bases for judgment. Journal of Memory and Language 35(2):157–175.

Khong, Y. F. 1992. Analogies at war: Korea, Munich, Dien Bien Phu, and the Vietnam decisions of 1965. Princeton, NJ: Princeton University Press.

Klayman, J., and Y.-W. Ha. 1987. Confirmation, disconfirmation, and information in hypothesis testing. Psychological Review 94:211–228.

Kunda, Z. 1990. The case for motivated reasoning. Psychological Bulletin 108:480–498.

Lassiter, G. D., M. J. Lindberg, C. Gonzalez-Vallejo, F. S. Bellezza, and N. D. Phillips. 2009. The deliberation-without-attention effect: Evidence for an artifactual interpretation. Psychological Science 20:671–675.

Lord, C. G., L. Ross, and M. R. Lepper. 1979. Biased assimilation and attitude polarization: The effect of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology 37:2098–2109.

Mandel, D. R., and D. R. Lehman. 1996. Counterfactual thinking and ascriptions of cause and preventability. Journal of Personality and Social Psychology 71:450–463.

Medin, D. L., R. L. Goldstone, and D. Gentner. 1993. Respects for similarity. Psychological Review 100(2):254–278.

Nisbett, R. E., and T. D. Wilson. 1977. Telling more than we can know: Verbal reports on mental processes. Psychological Review 84(3):231–259.

Oppenheimer, D. M. 2008. The secret life of fluency. Trends in Cognitive Sciences 12:237–241.

Oskarsson, A. T., L. Van Boven, G. H. McClelland, and R. Hastie. 2009. What’s next? Judging sequences of binary events. Psychological Bulletin 135:262–285.

Pennington, N., and R. Hastie. 1986. Evidence evaluation in complex decision making. Journal of Personality and Social Psychology 51:242–258.

Quine, W. V. O. 1951. Two dogmas of empiricism. The Philosophical Review 60:20–43. Reprinted in W. V. O. Quine. (1953). From a logical point of view: Nine logico-philosophical essays. Cambridge, MA: Harvard University Press.

Ranganath, K. A., B. A. Spellman, and J. A. Joy-Gaba. 2010. Cognitive “category-based induction” research and social “persuasion” research are each about what makes arguments believable: A tale of two literatures. Perspectives on Psychological Science 5(2):115–122.

Russo, J. E., K. A. Carlson, M. G. Meloy, and K. Yong. 2008. The goal of consistency as a cause of information distortion. Journal of Experimental Psychology: General 137(3):456–470.

Schwarz, N., and G. L. Clore. 1983. Mood, misattribution, and judgments of well-being: Informative and directive functions of affective states. Journal of Personality and Social Psychology 45(3):513–523.

Spellman, B. A. 2010. Judges, expertise, and analogy. In D. Klein and G. Mitchell, eds., The psychology of judicial decision making (pp. 149–164). New York: Oxford University Press.

Spellman, B. A., and K. J. Holyoak. 1992. If Saddam is Hitler then who is George Bush? Analogical mapping between systems of social roles. Journal of Personality and Social Psychology 62:913–933.

Spellman, B. A., and S. Schnall. 2009. Embodied rationality. Queen’s Law Journal 35(1):117–164.


Spellman, B. A., C. M. Price, and J. M. Logan. 2001. How two causes are different from one: The use of (un)conditional information in Simpson’s paradox. Memory and Cognition 29:193–208.

Stanovich, K. E., and R. F. West. 2002. Individual differences in reasoning: Implications for the rationality debate? In T. Gilovich, D. Griffin, and D. Kahneman, eds., Heuristics and biases: The psychology of intuitive judgment (pp. 421–440). New York: Cambridge University Press.

Steblay, N., H. M. Hosch, S. E. Culhane, and A. McWethy. 2006. The impact on juror verdicts of judicial instruction to disregard inadmissible evidence: A meta-analysis. Law and Human Behavior 30(4):469–492.

Strack, F., L. L. Martin, and N. Schwarz. 1988. Priming and communication: The social determinants of information use in judgments of life satisfaction. European Journal of Social Psychology 18:429–442.

Tetlock, P. E., and R. N. Lebow. 2001. Poking counterfactual holes in covering laws: Cognitive styles and historical reasoning. American Political Science Review 95(4):829–843.

Tversky, A. 1977. Features of similarity. Psychological Review 84(4):327–352.

Tversky, A., and D. Kahneman. 1973. On the psychology of prediction. Psychological Review 80:237–251. Reprinted in D. Kahneman, P. Slovic, and A. Tversky, eds. (1982). Judgment under uncertainty: Heuristics and biases (pp. 48–68). New York: Cambridge University Press.

Tversky, A., and D. Kahneman. 1974. Judgment under uncertainty: Heuristics and biases. Science 185(4157):1124–1131. Reprinted in D. Kahneman, P. Slovic, and A. Tversky, eds. (1982). Judgment under uncertainty: Heuristics and biases (pp. 3–20). New York: Cambridge University Press.

Tversky, A., and D. J. Koehler. 1994. Support theory: A nonextensional representation of subjective probability. Psychological Review 101(4):547–567.

Walker-Wilson, M., B. A. Spellman, and R. M. York. (unpublished manuscript). Beyond instructions to disregard: When objections backfire and interruptions distract. Available: https://public.me.com/bobbie_spellman [accessed January 2011].

Wistrich, A. J., C. Guthrie, and J. J. Rachlinski. 2004. Can judges ignore inadmissible information? The difficulty of deliberately disregarding. University of Pennsylvania Law Review 153:1251–1345.
