
8
Group Processes in Intelligence Analysis

Reid Hastie

WHAT DO INTELLIGENCE TEAMS DO?

“The mission of intelligence analysis is to evaluate, integrate, and interpret information in order to provide warning, reduce uncertainty, and identify opportunities,” Fingar writes in Chapter 1 of this volume. Intelligence analysis encompasses a vast variety of intellectual tasks and aims to achieve these objectives. Most analyses are performed in a social context with analysts interacting face to face or electronically in formal or informal teams to create estimates, answer questions, and solve problems that serve the interests of diplomatic, political, military, and law enforcement customers (see this volume’s Fingar, Chapter 1, and Skinner, Chapter 5).

To idealize some role assignments, analysts occupy an organizational niche located between collectors and policy makers. Collectors are responsible for acquiring and initially processing “raw” intelligence information, described by a veritable dictionary of acronyms (e.g., HUMINT, SIGINT, MASINT). One reason for the separation of roles between collector and analyst is that collection often involves highly specialized technical skills (e.g., monitoring a telecommunications channel or maintaining an electronic system that transmits satellite images). Another reason is to protect the original sources from exposure in case, for example, the product of an analysis is acquired by an adversary. On the other side of the chain, analysts and policy makers are separated to protect the analyst’s objectivity and single-minded focus on “what is true,” without consideration of what is desirable or politically expedient. This unusual, insulated role is
central to intelligence analysis, and there are no other close organizational analogues (Zegart, this volume, Chapter 13). Of course, these distinctions are not quite as sharp in practice as they sound from this description because analysts are often involved in the collection process and work in a close relationship with policy makers in order to provide the most relevant information and to communicate effectively.

The typical product of an analysis is a written document that describes the conditions in a politically significant situation, sometimes with evaluations of more than one interpretation of the true situation. The best-known products of American intelligence analysis, the President’s Daily Brief and National Intelligence Estimates, often look like news reports. However, they are likely to be more forward-looking and include predictions of significant events, dissenting views, and confidence assessments (customarily expressed on a verbal scale indexed by terms such as “remote, unlikely, even chance, probably/likely, and almost certainly”). Some estimates provide answers to specific questions (e.g., How many armed Taliban insurgents are present today in Kabul?), and many aim to provide a more comprehensive understanding of a situation (e.g., How is Israel likely to respond to Iran’s increased nuclear weapons capacity?).

Analytic activities vary along many dimensions. Some involve immediate, in-person interactions among analysts, while others involve indirect, usually electronically mediated, interactions among individuals in remote geographical locations; some involve one-shot, time-intensive interactions, while others involve sustained, long-term interactions; some involve integrating information from several sources into a summary description, while others involve complex inferences about events that might occur under alternate uncertain scenarios; and still others require the generation of innovative responses to diplomatic, economic, or political problems. This heterogeneity creates a challenge for someone who attempts to give prescriptive advice to improve the many different processes. I address that challenge by focusing on one idealized analysis task and then generalizing from that example to other analysis tasks.

Distinguishing among three idealized, truth-seeking analytic tasks is useful, with the following scenarios provided as examples:

  1. Judgment and estimation tasks involve integrating information from several sources into a unitary quantitative or qualitative estimate or descriptive report of a specific or general situation: Provide a best estimate of the date when Iran will have the capacity to launch a nuclear warhead missile strike on Israel (if its development of nuclear capacities continues at the current rate);

  2. Detection tasks involve the detection of a signal that a change has occurred, that there is a pattern of interrelated events occurring, or
that “something funny” is happening: Has the opium production rate changed in Faizabad during the past few months? Has Kim Jong-Il’s control of the government of North Korea changed at all during the past week?

  3. Complex problem-solving tasks require generating and applying novel solutions in a specified context: Will the current regime in Pakistan stay in power for the next 12 months? What is the likeliest scenario that would result if the current regime fails?

WHAT IS DISTINCTIVE ABOUT INTELLIGENCE ANALYSIS?

Of course, these dimensions also describe aspects of many other important team performance situations in business, science, and government settings. But several conditions converge in intelligence analysis to create a distinctive, if not unique, situation:

  • First, as noted above, analysts have a special, indirect connection to many sources of their intelligence—the front line of collectors acquires information, then passes it on to the analysts. This means there are special challenges in evaluating the validity and credibility of information because the analyst is not directly involved in the initial acquisition (see Schum, 1987, for a discussion of the special problems of cascaded and hierarchical inference that arise in intelligence and forensic contexts).

  • Second, more than in any other domain, denial and deception must be considered when evaluating the credibility and validity of information. In many analytic situations, adversaries are present and trying to undermine and defeat the analysis.

  • Third, many outcomes of intelligence analysis involve low-probability, high-impact consequences that can mean life or death for thousands of people. Furthermore, analysts must anticipate and infer what policy makers will want to know and even how they are likely to weight multifaceted outcomes, including the inevitable trade-offs between false alarms (e.g., weapons of mass destruction) and misses (e.g., 9/11) that are inherent in every policy decision.

  • Fourth, the organizational relationship between the analysts and their customers can include the temptation to bias answers to fit what the customer wants to hear.

  • Fifth, as in any complex collection of interdependent organizations, some of these activities occur in the intelligence community’s fragmented, “siloed” organizational terrain with 16 loosely connected agencies attempting to cooperate while they simultaneously pursue sometimes conflicting and nonaligned objectives.

  • Finally, feedback is especially rare and unreliable. For many important analytic estimates, outcomes remain unknown for a long time or cannot ever be known. Furthermore, often the U.S. government itself or another party will take an action that changes the outcomes that were the subject of the original analysis, making learning from feedback even more difficult.

The difficulty of learning from feedback is compounded by the intense scrutiny and criticisms in hindsight of every visible intelligence failure, while successes are rarely attributed to the analysts and, under many conditions, are unobserved (see Bruce, 2008, for a catalog of publicized failures, but see Jervis, 2006, for a defense of achievements of the intelligence community). There will always be room for improvement, but there is ample evidence for the high levels of professionalism and dedication in intelligence analysis (cf., Dawes, 1993; Fischhoff, 1975; Gladwell, 2003). One essential means to improving intelligence analysis is to develop systematic methods to evaluate the validity and accuracy of estimates (cf., Tetlock, 2006; Arkes and Kajdasz, this volume, Chapter 7; McClelland, this volume, Chapter 4) and then to apply these criteria to identify and reward best practices.

In this paper, I will focus on short-range, tactical intelligence estimates in the international domain, made by small teams of three to seven analysts working together face to face or through electronic communication. I will restrict the discussion to tasks for which the goal is to achieve the highest possible levels of accuracy in describing or forecasting a state of the external world. Our knowledge of how teams perform such tasks comes from all of the social sciences (sociology, social psychology, economics, political science, and anthropology), as well as from composite fields of study, such as management science and cognitive science, although social psychology is the primary source for the current conclusions about truth-seeking group judgments.

FOUR ESSENTIAL CONDITIONS FOR EFFECTIVE TEAMWORK

In the most general terms, four basic conditions must be met if a team is to perform effectively in a larger organizational context (Hackman, 2002; Wageman, 2001). First, the team must have an identity as a distinct social unit in the larger organization (Tinsley, this volume, Chapter 9). It must be recognized as autonomous and be given a well-defined, organizationally significant set of objectives. It must be given the essential resources to achieve those objectives, including effective channels of communication with other units in the larger organization, especially the agent outside the team who oversees the team’s activities. Under some conditions, the team should have a distinctive identity and even a “subculture” appropriate for its task within
the larger organization (Tinsley, this volume, Chapter 9). In general terms, the more distributed and independent the team’s later working procedures will be, the more important it is to establish a distinctive identity at the beginning (Moreland and Levine, 1982).

Second, the team must have a compelling direction, with clear, challenging, and consequential objectives. Its members should be autonomous, and individual activities should not be micromanaged by team leaders or organizational authorities outside of the team. Each member’s personal goals must, to some extent, be subordinate to and aligned with the team’s organizationally defined objectives. This means that both tangible incentives (e.g., financial or status rewards) and intrinsic incentives (e.g., social recognition, positive internal feelings) should be conditional on achievements relevant to the team’s goals.

Third, the team must have an “enabling design” that provides the proper individual composition (skills, diversity, size), specialized role assignments if appropriate to the larger task, and plans and technological support for intermember communication, coordination, and a “group memory” of task-relevant information (Fiore et al., 2003).

Finally, the team must have a self-conscious, meta-level perspective that is constantly monitoring and correcting member motivations; refining operating procedures; and providing short-term feedback and eventual evaluation to allow members and the team to learn from experience performing the task.

BREAKING THE OVERARCHING ANALYTIC TASK INTO SUBTASKS

Each of these four conditions is essential for teams performing any task, but the specific manner in which each is accomplished depends on the task type. Each of the analytic tasks—integration, detection, and problem solving—can be described in terms of a stylized process model that breaks the larger task down into its component subtasks. This conceptual breakdown describes the task as it might be performed by an individual, a team, or even by an automated software system. What is distinctive about the performance of a team is the collection of special motivation and coordination problems that arise when independent agents collaborate on the task. Two closely related tensions describe the essential dilemma for effective teamwork: (1) individualistic-selfish motives versus collective-organizational motives; and (2) promotion of diversity and independence versus promotion of consensus and interdependence. Good team performance depends on addressing these tensions flexibly and effectively. The first requires the design of explicit incentives that will motivate individual members to work for the good of the team and the organization in
which it is embedded. Implicit incentives, often attributed to the team and organizational “culture,” are also important. The second requires careful oversight by the team’s leader (or external manager) so that when certain subtasks are performed, independence is promoted; in other subtasks, consensus-conformity is promoted, appropriate to the local objectives of each subtask. (The motivational problem is what economists call the principal-agent problem. There is a large literature on the subtle solutions to the problem, including discussions of conditions that seem to have no known theoretical solution; see Baron and Kreps, 1999, and Chen et al., 2009, for discussions of methods of motivating individuals in teams.)

Judgment and simple estimation tasks can be described as an ideal analytic process in terms of five component activities: Subtask 1, define the problem; Subtask 2, acquire relevant information; Subtask 3, terminate the information acquisition process; Subtask 4, integrate the information into a summary statement (estimate of a state of the past, present, or future world; descriptive summary report); and Subtask 5, generate an appropriate response (see Hinsz et al., 1997, for a similar discussion of “groups as information processors”; see Lee and Cummins, 2004, for a similar task analysis). (In the case of intelligence analysis, the “response” is nearly always the provision of information to a policy maker or a military actor, who decides on an appropriate action based on the intelligence.) The primary advantages of teams over individuals in performing such tasks are the teams’ capacity for acquiring and pooling more information than any individual can contribute, and the teams’ ability to “damp errors,” as different views counterbalance one another, yielding a central consensus belief in discussion when integrating information and opinions from several sources.
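
To make the "damp errors" intuition concrete, here is a minimal simulation (illustrative numbers only, not from the chapter) comparing the typical error of a single unbiased but noisy estimate with the error of a five-person average of independent estimates.

```python
# A minimal simulation of "error damping": averaging several independent,
# unbiased estimates tends to land closer to the truth than a typical
# individual estimate. All numbers are illustrative, not from the chapter.
import random
import statistics

random.seed(1)

TRUE_VALUE = 100.0   # hypothetical quantity being estimated
TEAM_SIZE = 5        # analysts per team
TRIALS = 10_000

single_errors = []
team_errors = []
for _ in range(TRIALS):
    # Each analyst's estimate = truth + independent, unbiased noise.
    estimates = [random.gauss(TRUE_VALUE, 20.0) for _ in range(TEAM_SIZE)]
    single_errors.append(abs(estimates[0] - TRUE_VALUE))
    team_errors.append(abs(statistics.mean(estimates) - TRUE_VALUE))

print("mean absolute error, single analyst  :", round(statistics.mean(single_errors), 2))
print("mean absolute error, 5-person average:", round(statistics.mean(team_errors), 2))
# The averaged estimate's error shrinks roughly as sigma / sqrt(team size),
# but only when individual errors are independent and unbiased.
```

The advantage shrinks or disappears when members' errors are correlated or share a common bias, which is one reason independence and diversity during information acquisition matter so much in what follows.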

The potential advantages of performing tasks requiring information integration and estimation with a team are derived from the greater store of information (including analytic skills) available to a team of several people and from the capacity of the group to leverage diverse perspectives to damp errors and converge on a sensible central value or solution. This implies that in the early stages of the team process, care must be taken to promote diversity in information acquisition; in the middle stages, coordinated information pooling; and in the later stages, convergence on a unitary “solution” or consensus response. Let’s look at the requirements for effective team performance of each component subtask of the larger judgment process (for complementary analyses, also see Heuer, 2007; Kozlowski and Ilgen, 2006; and Straus et al., 2008).

Team Composition

Several affirmative suggestions can be made about how to design effective teams before they begin work on their analytic tasks (see Hackman,
2002, for similar advice). First, there are organizational issues: The team needs to be embedded appropriately in the larger organization in which it functions. This means effective lines of communication must define the team’s operational goals in terms of the organizational objectives. In other words, the team needs to know what its task, goals, and performance criteria are in terms of what would help the organization. The team also needs resources from the larger organization and needs to be insulated from interference from the larger organization (e.g., to prevent micromanagement or undue influence from the organizational manager to whom the team reports).

Teams are usually composed of members from a larger organization or individuals recruited by that organization to support the team’s performance (Kozlowski, this volume, Chapter 12). Team composition is obviously significant, although it is difficult to specify useful selection criteria that are general across tasks. Three conditions seem essential: (1) task-relevant diversity of knowledge and skills; (2) a capacity for full, open, and truthful exchange (i.e., communication skills); and (3) a commitment to the team’s goal (the capacity or willingness to align one’s own interests with the team goal to produce an accurate estimate). Composition depends on the task contents, so formulating more specific prescriptions for good practice is difficult.

Two generalities emerge from the behavioral literature: In practice, teams are usually too large (Hackman and Vidmar, 1970) and not diverse enough (Page, 2007). Of course, there is a paradox posed by the fact that smaller teams (e.g., an implication of much of the behavioral literature is that a typical analysis team should be composed of about five members) must be less diverse than larger teams. Part of the paradox arises from the fact that larger teams have more resources of all types than smaller teams, but larger teams also suffer from more “process losses” than smaller teams (Steiner, 1972). Process losses include the variety of conditions that impede group productivity in any goal-directed task: difficulties in communication and coordination; within-group social conflicts; lower cohesion; and confusions about group identity, to name the most obvious problems.

Note that the term “diversity” refers to task-relevant diversity in terms of knowledge, skills, perspectives, and opinions that promote variety in the types of task-relevant information and solutions that contribute to the team’s performance. This kind of task-relevant diversity is likely to be correlated with differences in gender, cultural background, or personality, but not necessarily so. Page (2007) has provided the most comprehensive research-based argument for the advantages of task-relevant diversity over raw expertise in team problem solving. Some of his proofs take the form of abstract theoretical analyses of the capacities for multiple idealized interacting agents to solve mathematical problems. These results are abstract,
but support strong claims for the advantages of task-relevant diversity. He also reviews sociological analyses of diverse versus homogeneous groups in behavioral experiments and natural settings, and again finds support for the value of diversity. Mannix and Neale (2005) have also reviewed the behavioral literature and reach pessimistic conclusions with regard to the effects of increased social diversity (race, gender, age) on team performance. Like Page, they note the potential value of task-relevant diversity (knowledge, skills, social-network resources), especially in performing tasks that involve information seeking, information evaluation, and creative thinking. But they also conclude that social diversity inevitably increases process losses through interpersonal conflict, communication problems, and lowered cohesion. Another aspect of this trade-off was pointed out by Calvert (1985) in a theoretical analysis of how a rational decision maker should weight biased information. One of the counterintuitive implications of his rational analysis was that, under many conditions, teammates who are biased to agree with you are more reliable sources of divergent information than those who are biased to disagree with you.

On the basis of current scientific results, it is impossible to spell out specific prescriptions for recruiting members with productively diverse characteristics without knowing something about the details of the team’s task and the context in which it performs. Nonetheless, a good practice is always to oversample for diversity when a team is composed because the common tendency is to err in the direction of uniformity. At a minimum, a priori differences of opinion on the correct solution improve the performance of most problem-solving groups (Nemeth, 1986; Schulz-Hardt et al., 2006; Winquist and Larson, 1998). Several behavioral studies demonstrate the importance of member diversity, but also of the necessity that members know the specialties of other members, so that appropriate role assignments and coordination are supported (Austin, 2003; Moreland et al., 1996; Stasser et al., 1995). Hackman and colleagues (2008) provide a thoughtful discussion of team composition in intelligence analysis that promotes the design of teams that balance members’ diverse cognitive skills (see also Pashler et al., 2008, for a discussion of the concept of cognitive styles). They also report a behavioral study that demonstrates the importance of aligning individual differences in skill sets (visual versus verbal thinking styles) with matching role assignments (navigation versus acquisition of targets) to maximize the contribution of member diversity to team performance.

To repeat, subtle trade-offs are always present between independence and conformity with the ultimate impact on team productivity (Mannix and Neale, 2005). With too much independence and diversity, team performance suffers because of loss of identification, decreased motivation, and simple coordination problems. Too much dependence and uniformity
undermine the team’s ability to perform components of the overall task that require divergent thinking. This balancing problem has no simple “fixes.” This problem, of course, highlights the need for more rigorous research on analytic teamwork, based on objective measures of team performance.

Subtask 1:
Defining the Problem

When the team initiates its performance on an analytic task, an essential step is to thoughtfully execute each of the subtasks of the overarching task. Completion of each subtask, in some manner, is necessary to produce a good solution, but many teams perform component subtasks in a perfunctory manner. Many teams fail to verify that every member understands and agrees on the target of the estimate, including criteria for a successful solution and a sense of cost–benefit trade-offs. The decision to terminate information search is next most likely to be performed in a careless manner; the most common postmortem evaluation of a poor team judgment is that information was not acquired or pooled effectively.

The first subtask of team performance, defining the problem, requires a mixture of independence and consensus (cf., Eisenhardt et al., 1997). During this stage, each team member grasps the goal state or target of the judgment and other relevant criteria for a successful or accurate response. This discussion should include consideration of the costs and benefits associated with potential errors (over- and underestimates or false alarms and misses). These criteria need to be shared with other team members; as the old saying goes, the team will fail if some members are headed for Los Angeles, when the primary destination is San Francisco. Each member also assesses “the givens,” the information that is in hand or needs to be acquired to make a good estimate. At this point, independence and member diversity are probably best in the sources of information or evidence that will be used. The notion here is that “triangulation” based on independent sources of information (given a shared judgment objective) will promote innovation, error damping, and robustness in the final estimate.

Subtask 2:
Information Acquisition

The second subtask, information acquisition, is the one for which independence and diversity of perspectives count the most. Team judgments have two major advantages (compared to individual judgments): Teams have more information than any one member and teams can damp errors in individual judgments and converge on an accurate “central tendency” (Sunstein, 2006, and Surowiecki, 2004, provide popularized accounts of these principles). Several devices can be used to achieve independence and diversity: recruiting a diverse set of perspectives and expertise sets when
team members are selected; working anonymously and in dispersed settings during the information acquisition (and pooling) subtask; and cycling back and forth between searching for and pooling information, so that information from other members can stimulate new directions in search for each member.

Information acquisition (Subtask 2) and information pooling (Subtask 4) are probably most effectively promoted by careful design of the team’s composition—by having a good mix of members with diverse information, backgrounds, and skill sets. At least two negative conditions, discussed below, need to be avoided (also see the discussion of Groupthink, below).

Association Blocking

If members interact with one another when they seek or pool information, association blocking can occur. Association blocking refers to a condition that occurs when individual team members get “locked into” a whirlpool of similar associations, and individual capacities for divergent thinking are impaired as they naturally respond associatively to one another’s communications. For example, when a first interpretation concludes that certain aluminum tubing is likely to be used for uranium enrichment, then the mind is primed automatically to retrieve and interpret additional information as relevant to nuclear weapons, rather than, for example, ordinary military rockets. The phenomenon is most apparent when people try to generate unrelated, novel solutions to an innovation problem while interacting in person (Diehl and Stroebe, 1987; Nijstad et al., 2003; Paulus and Yang, 2000).

Several interaction process solutions to association blocking involve isolating members and promoting independent thinking. One method is to cycle between independent individual analysis and social interaction, having individuals acquire information separately or, in the case of pooling, having each individual contribute information separately. The best practice is to start independently, share ideas, then return to independent search or generation, then back to social interaction. Several “unblocking” techniques, borrowed from group brainstorming practices, are available to promote novel search and generation by introducing haphazard or new directions (Kelley and Littman, 2001). Another method is to vary the composition of the group by adding new members (Choi and Thompson, 2005).

Information Pooling and the Common-Knowledge Effect

Beyond association blocking there is also a tendency to focus discussion on shared information and its implications, while neglecting to pool
unshared information. This phenomenon has been observed most dramatically in “hidden profile” tasks (Larson et al., 1994; Stasser and Titus, 1985, 2003) and was dubbed the “Common Knowledge Effect” by Gigone and Hastie (1993, 1997). The Hidden Profile method was invented by Stasser and Titus and provides a powerful test bed to evaluate team performance on elementary inference and judgment tasks. The basic method involves designing a judgment task that provides an opportunity for high levels of achievement by individuals and groups who have been provided with full information relevant to the judgment. However, to create hidden profiles, the researcher distributes the information to members of the to-be-tested team in a way that no member has sufficient information to perform at a high level in isolation, although the team has all of the relevant information—albeit dispersed in a manner that provides a stiff challenge to the information-pooling capacity of the team. Of course, cases of widely distributed and vastly unshared information are the norm in intelligence analysis. Adding to the difficulty is the fact that often analysts with different regional or technical specialties must communicate with one another to converge on the truth. For example, regional experts, satellite image technicians, and nuclear scientists were all involved in the effort to determine if Saddam Hussein was developing nuclear weapons.

In its most diabolical form, the Hidden Profile method capitalizes on two fundamental human weaknesses to create a nearly insurmountable challenge. First, in the extreme form of the task, each member has an incorrect impression of the correct solution. The full set of information is distributed, so that the individual member subsets each favor a nonoptimal solution—in other words, a reasonable person begins the task with the wrong answer in mind. This creates a strong cognitive bias toward confirmatory thinking, and many naïve teams begin discussion by eliminating the correct solution because, after all, no individual member believes it might be the solution. Intelligence analysis, which involves many verified cases in which one party attempts to deceive another party by seeding communications with false and misleading information, represents one situation in which the diabolical forms of “hidden profiles” occur in naturally occurring contexts (others are cases of corporate strategic deception and some personnel matters in which individuals attempt to deceive others about professional qualifications). Furthermore, there are the social biases to underpool unshared information and overpool shared information, which if not resisted, amplify the bias against the correct solution. Finally, time pressure increases the negative effects of the confirmatory thinking and information-pooling challenges (Lavery et al., 1999).
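
A toy sketch may make the construction of a hidden profile concrete. The diagnostic weights and analyst labels below are hypothetical, chosen only so that each member's private subset favors the wrong answer (B) while the fully pooled set favors the correct one (A).

```python
# A toy "hidden profile": positive weights favor the correct answer A,
# negative weights favor B. Pooled, the evidence favors A, but each analyst's
# private subset (shared items plus own unshared items) favors B.
# The weights and labels are hypothetical.
from itertools import chain

shared_items = [-2, -2, -2]        # everyone sees these; together they favor B
unshared_items = {                  # privately held items; together they favor A
    "analyst_1": [+3, +1],
    "analyst_2": [+2, +2],
    "analyst_3": [+4],
}

def verdict(items):
    return "A" if sum(items) > 0 else "B"

for name, private in unshared_items.items():
    print(name, "working alone concludes:", verdict(shared_items + private))  # B

pooled = shared_items + list(chain.from_iterable(unshared_items.values()))
print("fully pooled evidence concludes:", verdict(pooled))                    # A
```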

Qualitative analysis of the content of group discussions shows that when shared information is mentioned, it is likely to be followed by affirmative statements and relevant discussion (Larson et al., 1994). When
unshared information is mentioned, reactions are usually less responsive and the subject of discussion is likely to shift to another topic. Finally, the problem of pooling unshared information is exacerbated when other stages of the judgment process are mixed with information acquisition. For example, when members are both acquiring and sharing information and proposing answers to the current problem or estimate, the acquisition process is undermined by confirmatory thinking, and sharing disconfirming information is inhibited.

Several social procedures can increase the chances that a team will solve a hidden profiles problem. First, as noted before, if different members favor different solutions at the outset of discussion, dissent can promote more effective information pooling (Nemeth, 1986; Schulz-Hardt et al., 2006; Winquist and Larson, 1998). Second, if individual team members are assigned task-relevant roles (e.g., one is the HUMINT expert, one is the SIGINT expert, etc.), the team is likelier to succeed (Stasser et al., 1995; and see discussion of “Shared Mental Models” below). Finally, any other method that promotes more vigorous discussion is likely to improve performance to some degree, such as creating adversarial subteams or assigning one member to the social facilitator role (Kramer et al., 2001; Oxley et al., 1996).

Another well-defined method for promoting effective pooling is the Nominal Group Technique, which involves alternating between isolated individuals and interacting groups for task performance. The first cycle of information acquisition and recording is carried out individually, in isolation, followed by group information pooling in a round-robin procedure or by facilitated pooling (e.g., over a local network) to ensure that everyone is prompted to fully share their individual contributions. In some applications, this cycle is repeated several times. Rohrbaugh (1981) conducted an evaluation of the Nominal Group Technique in a simple estimation task (predicting the outcomes of horse races) and found that the Nominal Group Technique performed at about the level of the most proficient team member, who might not prevail in an interacting face-to-face group. Plous (1995) found that Nominal Groups were better calibrated, assigning more appropriate confidence intervals around quantitative estimates than individuals or interacting groups. Another method with a demonstrated record of success, the Advocacy Method, involves assigning members to roles to advocate one solution or another; this method is most likely to be successful if the roles are reassigned several times and if the team has practiced the advocacy method before (Greitemeyer et al., 2006).

Versions of these methods have been applied in intelligence analysis and are taught as “tradecraft” at the Sherman Kent School (U.S. Government, 2009). Furthermore, high-tech, electronic network-based facilities such as A-Space and Intellipedia were designed to solve these types of
information pooling and networking problems and are receiving good reviews from practitioners (Yasin, 2009). Although primarily journalistic, other accounts support these methods in industry settings as remedies for the same information-pooling problems (e.g., Sunstein, 2006; Surowiecki, 2004; Tapscott and Williams, 2006).

Subtask 3:
Terminating Information Acquisition

The third subtask involves a decision to terminate information acquisition and move to the information integration subtask. In practice, this subtask is often not explicitly recognized and acquisition simply stops when time runs out or when the flow of new information runs dry. In a well-defined mathematical estimation problem, it is possible to prescribe optimal stopping rules, but this requires exact knowledge (or assumptions) of the costs and benefits of the solution (and errors) and the value and probability of acquiring information items (De Groot, 1970). But information to compute optimal stopping solutions is usually lacking in practical analytic tasks. What can be done is to recognize that a decision to terminate acquisition is implicitly or explicitly inevitable and to deliberately plan a team process with that limit in mind.
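
As a hedged numerical illustration of that logic (a toy sketch, not a procedure from De Groot, 1970), the following compares the expected value of acquiring one more information item against its cost for a simple proportion estimate with a Beta posterior; the counts, the cost figure, and the use of squared error as the value scale are all assumptions.

```python
# A toy version of the stopping logic: acquire one more item only if its
# expected value of information exceeds its cost. Here the "value" of an
# estimate is the reduction in expected squared error of a proportion
# estimated with a Beta posterior. All numbers are hypothetical.

def beta_variance(a, b):
    # posterior variance of the proportion under a Beta(a, b) posterior
    return a * b / ((a + b) ** 2 * (a + b + 1))

def expected_variance_after_one_more(a, b):
    # average the next-step posterior variance over the two possible
    # observations, weighted by their predictive probabilities
    p_success = a / (a + b)
    return (p_success * beta_variance(a + 1, b)
            + (1 - p_success) * beta_variance(a, b + 1))

a, b = 8, 4                    # evidence so far: 8 "confirming," 4 "disconfirming" items
cost_of_one_more_item = 0.002  # hypothetical cost, in the same squared-error units

value_of_information = beta_variance(a, b) - expected_variance_after_one_more(a, b)
print("expected error reduction from one more item:", round(value_of_information, 4))
print("acquire another item?", value_of_information > cost_of_one_more_item)
```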

Subtask 4:
Information Integration

The process in the fourth subtask, information integration, depends on the nature of the product format, originally learned in the first subtask. If the estimate is a unitary numerical or category-membership judgment, the process often takes the form of an oral discussion, perhaps with calculation or voting on proposals for the solution. For example, a team might review members’ estimates for a quantity such as the number of troops massed at a border location or a “category” such as the voting intention of a United Nations Security Council member and then select an answer based on an informal average or vote. If the product is a summary report, the process usually takes the form of drafting a written document, often with subpart assignments to member subject matter experts, followed by discussion to combine the pieces into a unitary product.

Assuming that information acquisition and pooling have been executed effectively, information integration is best served by vigorous discussion and debate. The basic problem is to avoid overconformity to an early solution that interferes with thorough evaluation of alternate solutions the team has generated (i.e., avoid confirmatory thinking). After Congress reviewed the National Intelligence Estimate that was in error on the extent to which weapons of mass destruction were available to Saddam Hussein in Iraq
around 2003, there has been an obsession with avoiding Groupthink and confirmatory thinking in the analytic process.

The most basic precept is that all discussions should focus on tasks, and be uninhibited and vigorous. Many methods can be used to “have a good fight” in team discussions (Tinsley, this volume, Chapter 9). Eisenhardt and colleagues (1997; see also Okhuysen and Eisenhardt, 2002) studied several problem-solving teams in engineering- and biotechnology-oriented companies and identified some differences that predicted the performances of the more successful and less successful teams. They list some of the most important conditions for vigorous team problem solving: First, shared goals; second, a rich (diverse) information acquisition and pooling process; third, emphasis on data-driven analysis and dispute resolution; fourth, well-defined role assignments so it is always clear how discussion will proceed and how contingent decisions will be made; and finally, a willingness to decide with dissent or based on “consensus with qualification.” Of course, group facilitation techniques, where one member focuses mostly on promoting an effective process (and is usually disengaged from substantive contributions), are helpful, even simply requiring a team to pause and deliberately plan a process (Larson et al., 1996). All of the effective solutions are enhanced if the team is embedded in a productive organizational culture that promotes candid, but not ad hominem evaluations of proposed solutions.

Subtask 5:
Response Generation

The final subtask in performing a judgment task is to express the response in an appropriate format to satisfy the original objectives of the assignment. For most intelligence products, this means designing a summary of information—evidence and conclusions—in a form that is readily comprehended by the customer. The primary concerns are clarity, completeness, and transparency of expression (Fischhoff, this volume, Chapter 10). At this time, the costs of possible errors (over or under; false alarms or misses) should be considered and expressed.

The fifth subtask often involves compromises, where individuals with divergent views agree to subordinate their opinions to the consensus opinion. The report should include a summary of the degree of consensus among analysts on the team, including, if appropriate, a “dissenting opinion.” The expectation that dissent will be reported can increase the efficacy of the analytic process in some prior stages of the task—especially the rigor of the information integration process (Hackman, 2002). Furthermore, the major conclusions in the report should be accompanied by systematic, ideally quantitative expressions of the team’s confidence in those conclusions.
Again, simply requiring such an assessment can enhance performance of earlier stages in the overall team process.

Groupthink and Overconformity

One condition that definitely undermines the team analysis process is overconformity or Groupthink. Irving Janis popularized the term Groupthink in an influential book, Victims of Groupthink (1972), in which he reviewed several American policy decisions that led to bad outcomes (e.g., Bay of Pigs invasion of Cuba; failure to anticipate the surprise attack on Pearl Harbor; escalation of military commitment in the Vietnam War, see also Janis, 1982). The term appears to have been introduced into popular culture by William H. Whyte in 1952 (see also William Safire’s editorial comment on the Senate Select Committee on Intelligence’s Report on the U.S. Intelligence Community’s Prewar Assessments on Iraq, 2004). Janis proposed that a systematic social pathology could explain these fiascos. The central explanatory concept was overconformity to the course of action favored by a charismatic or dominating leader, especially when other conditions produced a high degree of social cohesion among team members.

The details of Janis’s analysis have not fared well as a coherent scientific claim. For example, experimental tests have not found that his “recipe” for Groupthink consistently produced the effects he attributed to his historical examples. But Janis’s basic insight that overconformity to powerful leaders or to the wishes of a customer can undermine good judgment seems indisputable and important (Baron, 2005; Kerr and Tindale, 2004; Paulus, 1998; Turner and Pratkanis, 1998; Whyte, 1998).

This general sense of the term “Groupthink” was referred to in the 2004 Senate Select Committee on Intelligence report (p. 18): “… [a] group think dynamic led intelligence community analysts, collectors, and managers to both interpret ambiguous evidence as conclusively indicative of a WMD [weapons of mass destruction] program as well as to ignore or minimize evidence that Iraq did not have active and expanding weapons of mass destruction programs.” Political scientist Robert Jervis (2006, pp. 20–21) comments: “Taken literally, this is simply incorrect. Groupthink is, as its name implies, a small group phenomenon…. Intelligence on Iraq was not developed by small groups, however. A great deal of work was done by individuals, and the groups were large and of shifting composition. In fairness to the SSCI [Senate Select Committee on Intelligence], it is using the term Groupthink in a colloquial rather than a technical sense. What is claimed to be at work are general pressures of conformity and mutual reinforcement. Once the view that Iraq was developing WMD was established there not only were few incentives to challenge it, but each person
who held this view undoubtedly drew greater confidence from the fact that it was universally shared.”

In intelligence analysis, the pressure to conform to a premature conclusion can come from many sources, from a team leader or other senior analyst, from a coalition of team members who share a commitment to an answer or to an analytic approach that leads myopically to one answer, or from the wishes of a customer who favors a particular answer to the analytic question (cf., Davis, 2008). Good leaders seem to anticipate the problem and design team processes to avoid it, especially at the earliest stages of performance when selecting members and instilling a “team culture” (e.g., Goodwin, 2005). Groups in which there are sincere differences of opinion, represented in coalitions with more than one member, do seem to be less likely to exhibit signs of confirmatory thinking or overconformity (Schwenk, 1988). But introducing contrived dissent, using methods like assigning an individual to the role of a devil’s advocate, has not produced consistent improvements in team judgments or decisions (e.g., Nemeth et al., 2001; Schweiger et al., 1986; Schwenk, 1990). Unfortunately, there seem to be no scientific evaluations of more vigorous adversarial role assignment methods such as red team/blue team exercises (e.g., where three-member subteams are created to develop alternate perspectives on a solution or strategy), although there is considerable informal enthusiasm for such exercises (Gold and Hermann, 2003). Another method to remedy Groupthink, with considerable face-validity (though untested scientifically), is to adapt Structured Analytic Techniques, such as the Analysis of Competing Hypotheses, for collaborative applications (see Heuer, 2008, for the argument in favor of collaborative Structured Analysis).

Polarization and Overconfidence

Two general properties of the solutions generated by groups making simple estimates were not reviewed in detail in this chapter: attitude polarization (Tinsley, this volume, Chapter 9) and overconfidence (Arkes and Kajdasz, this volume, Chapter 7). Several commentators have expressed concern about the nearly universal tendency for group discussions (or just individual expressions of opinions, without discussion) to produce polarization of individual opinions (e.g., Sunstein, 2009). For example, when a group of like-minded citizens meet and discuss a controversial issue such as affirmative action policies, gay civil unions, or the right to own guns, their individual postdiscussion views are more extreme in the direction of the average (or median) initial inclination. Thus, a group of liberals discussing those issues would conclude “more liberal” after discussion; a group of conservatives would shift away from neutrality and become “more conservative” (e.g., Schkade et al., 2007). Thoughtful political analysts, like
Sunstein, are concerned that these very general perceptual and behavioral tendencies will produce an “enclaving” phenomenon in a large heterogeneous society. Subgroups of like-minded citizens will form (the tendency to associate with similar others is also a universal tendency), they will discuss political issues, and individuals will move toward consensus and toward the extremes on opinion dimensions (again, the two movements—toward one another and toward the extreme—are universal tendencies). Furthermore, these potentially negative intragroup effects typically are accompanied by tendencies to view an opposing group as more extreme and inferior (cf., discussion of intergroup dynamics by Tinsley, this volume, Chapter 9). The result would be many local groups of extremist, antagonistic citizens—leading to indecision, intergroup conflict, and a degraded democratic process at the societal level.

Polarization is certainly likely to degrade intergroup relationships, such as those among intelligence agencies. However, at the team level, little behavioral evidence shows polarization in truth-seeking groups (e.g., intelligence teams whose primary goal is to make accurate estimates). Virtually all demonstrations of polarization involve bipolar attitude or evaluative dimensions. (But there are a few suggestive results, and polarization is expected to occur when the group is assessing beliefs as well as values.) It is also unclear that polarization in a truth-seeking group is a bad property of the process if the group is doing its task effectively and correctly zeroing in on the truth. In fact, the repeated advice to compose teams and design procedures to preserve task-relevant diversity is the best practice known to avoid mindless polarization. The common result of discussion in groups that include diverse attitudes or beliefs is depolarization, not polarization.

Overconfidence would seem more worrisome than polarization, although the author is unaware of any published studies that clearly demonstrate groups are more overconfident than individuals (see Sniezek, 1992, for a thoughtful discussion of hypotheses about group confidence). A frequently cited study by Puncochar and Fox (2004) measured confidence in student answers to questions about psychology course materials. The study found that groups of three to four students were more confident than individual students on both correct and incorrect answers. However, this study does not actually demonstrate relative overconfidence because groups were also more likely to be correct, so the higher confidence ratings might represent the same degree of calibration as for individuals. (Furthermore, participants answered individually and then in small groups in all experimental conditions. Thus, the study did not provide a clean individual versus group performance comparison, as the group task was confounded with performing the task for the second time.) In another frequently cited study, Zarnoth and Sniezek (1997) found that more confident individuals within a group have a greater impact on the group’s answer to general-knowledge
questions, but this also does not demonstrate relative overconfidence for groups. Rather, it reaches a conclusion about individual impact on group solutions. As with polarization, it is not clear that the overconfidence effect does not occur in teams, only that reliable research has not yet demonstrated such an effect. Also as with polarization, I believe the prescriptions outlined above for effective team performance include advice on the best practices currently supported by behavioral research.
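
A back-of-the-envelope illustration of the calibration point may help: overconfidence is the gap between mean confidence and proportion correct, so a group can report higher confidence than individuals without being more overconfident. The numbers below are hypothetical, merely in the spirit of the Puncochar and Fox design.

```python
# Overconfidence as a calibration gap: mean confidence minus proportion
# correct. Hypothetical numbers in the spirit of the Puncochar and Fox (2004)
# design show a group that is more confident AND more accurate, with the same
# calibration gap as the individuals.

def overconfidence(mean_confidence, proportion_correct):
    return mean_confidence - proportion_correct

individual = {"confidence": 0.65, "accuracy": 0.55}
group = {"confidence": 0.80, "accuracy": 0.70}

print("individual overconfidence:",
      round(overconfidence(individual["confidence"], individual["accuracy"]), 2))
print("group overconfidence:",
      round(overconfidence(group["confidence"], group["accuracy"]), 2))
# Both gaps are +0.10, so the group's higher confidence does not by itself
# demonstrate worse calibration.
```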

“Group Cognition”

Cognitive scientists, usually working in multidisciplinary teams of engineers, psychologists, and mathematicians, have made a substantial contribution to our understanding of teamwork, with a focus on distributed workgroups that do not meet in person, and on the selection and training of team members (Kozlowski, this volume, Chapter 12; Fiore et al., 2003; Paris et al., 2000). The aspirations of these researchers are high, to create a practical theory that synthesizes most of the topics covered in the present chapter, adding selection and training of team members and the design of software systems to support and enhance teamwork. But the achievements are still modest. Much of the research involves pioneering observational studies (e.g., Klein and Miller, 1999), and many conclusions are in the form of useful conceptual frameworks (e.g., Bell and Kozlowski, 2002; Fiore et al., 2003; Klein et al., 2003). These foundations are critically important for the development of a comprehensive scientific analysis, but are in their infancy; they are useful as the source of hypotheses and research questions, but not a fount of practical advice or empirically verified conclusions.

For present purposes, the major contribution of these research programs has been the development of the concept of shared cognition or shared mental models (see Rouse and Morris, 1986; Wilson and Rutherford, 1989, for background on the concept of mental models). These are concepts about “interrelationships between team objectives, team mechanisms, temporal patterns of activity, individual roles, individual functions, and relationships among individuals” (Paris et al., 2000, p. 1055). As implied by this broad definition, it is difficult to provide a precise specification for a theoretical representation of a shared mental model, and the operational measurement of shared mental models appears to be ad hoc and varies from study to study. Nonetheless, the notion of a shared mental model and practices that will support effective mental representations of “the team” seem to be an important element of any effort to improve team performance.

For example, Mathieu et al. (2000) studied the performance of college student dyads completing missions “flying” a simulated F-16 fighter plane. Mathieu and colleagues measured individual mental models as ratings of the perceived relationships between operational components of operating
the aircraft (e.g., banking and turning, selecting and shooting weapons), then used a correlation coefficient as the index of the degree to which mental models of the situation (not team member interrelationships, as in the definition quoted above) were shared. The shared mental model index was correlated at moderate levels with performance of the flying missions (correlations ranging from 0.05 to 0.38), increasing over time on the task.
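
The following minimal sketch shows the general kind of sharedness index described above; the flight-task components and the 1-9 ratings are hypothetical stand-ins, and the published studies use more elaborate elicitation and similarity measures.

```python
# A minimal "sharedness" index: each member rates how strongly pairs of task
# components are related, and the correlation between the two rating vectors
# indexes how similar the mental models are. Components and ratings are
# hypothetical; requires Python 3.10+ for statistics.correlation.
from statistics import correlation

component_pairs = [
    ("banking", "turning"),
    ("banking", "selecting weapons"),
    ("turning", "shooting"),
    ("selecting weapons", "shooting"),
]
member_a = [9, 3, 4, 8]   # relatedness ratings on a 1-9 scale
member_b = [8, 2, 5, 9]

for (first, second), ra, rb in zip(component_pairs, member_a, member_b):
    print(f"{first} / {second}: member A = {ra}, member B = {rb}")

print("shared mental model index r =", round(correlation(member_a, member_b), 2))
```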

The most tangible advice, based on the notion that enhancing shared mental models will improve team performance, is the suggestion to train teammates together (Hollingshead, 1998; Moreland and Myaskovsky, 2000). Providing specific role assignments and fully informing team members of one another’s primary capacities and duties in performing a collective task is the most effective remedy for information-pooling inefficiencies in Hidden Profiles problems (Stasser and Augustinova, 2007; Stasser et al., 1995; discussed above in the section on “Information Pooling and the Common-Knowledge Effect”).

High-Tech Alternatives to Face-to-Face Teamwork

Importantly, several techniques, usually web-based, are available for performing simple estimation and categorization tasks. Surowiecki (2004) and Sunstein (2006) review several of these methods, all of which have been used in intelligence analysis (Kaplan, this volume, Chapter 2). The simplest methods involve mechanically combining individual judgments into a summary solution—usually some kind of average value or election winner.

Delphi Method

The Delphi Method, invented at the RAND Corporation in the 1950s by Helmer, Dalkey, Rescher, and others (see Rescher, 1998, for a review of the method and its invention; cf., Linstone and Turoff, 1975), relies on a systematic social interaction process to find a central tendency in individual estimates. In its simplest form, Delphi Method participants (usually selected for subject area expertise) make a series of estimates and reestimates anonymously, with a requirement to adjust on each round toward the center of the distribution of estimates from the prior round (e.g., each estimate must be within the interquartile range of the previous estimates). Some versions of the method also require participants to provide reasons for their estimates and adjustments. Although the method has been widely used in the intelligence community, few rigorous evaluations of its merits have been conducted. It does seem to outperform simple statistical aggregation methods (e.g., taking averages or even averages weighted by estimators’ confidence; e.g., Rowe and Wright, 1999). But there are no definitive comparisons of the Delphi Method against the performance of

Suggested Citation:"8 Group Processes in Intelligence Analysis--Reid Hastie." National Research Council. 2011. Intelligence Analysis: Behavioral and Social Scientific Foundations. Washington, DC: The National Academies Press. doi: 10.17226/13062.
×

expert in-person teams, although it compares favorably with procedures based on statistical learning with feedback (a version of Social Judgment Theory; Cooksey, 1996; Hammond et al., 1977) and with prediction markets (Green et al., 2008; Rohrbaugh, 1979).
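The round-by-round adjustment rule can be expressed as a short simulation. The sketch below assumes one common variant in which each re-estimate must fall within the interquartile range of the previous round; the expert estimates are invented, and a real Delphi exercise would collect fresh human judgments on every round rather than mechanically clipping the old ones.

    # Minimal sketch of a Delphi-style convergence rule: on each round, every
    # estimate is pulled inside the interquartile range (IQR) of the prior round.
    # The "re-estimate" step is simulated by clipping, purely to illustrate how
    # the distribution of estimates narrows across rounds.
    from statistics import median, quantiles

    def delphi_rounds(estimates, n_rounds=3):
        current = list(estimates)
        for r in range(n_rounds):
            q1, _, q3 = quantiles(current, n=4)               # IQR of the prior round
            current = [min(max(e, q1), q3) for e in current]  # constrain re-estimates to that IQR
            print(f"round {r + 1}: estimates now within ({q1:.0f}, {q3:.0f}), "
                  f"median {median(current):.0f}")
        return median(current)

    # Toy expert estimates (illustrative numbers only).
    print("final estimate:", delphi_rounds([1200, 950, 1500, 1100, 4000]))

The property being illustrated is that the interquartile-range constraint damps extreme outliers while preserving the central tendency of the panel’s judgments.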

Prediction Markets

Another popular method, prediction markets, has participants buy and sell shares in an estimate (usually a forecast) that pays off when the true outcome is revealed (e.g., Hanson et al., 2006; Wolfers and Zitzewitz, 2004). In applications that predict the outcomes of events (e.g., elections, sports contests), the prices of the shares can be converted into probability-of-occurrence assessments. The method is used in many business and popular culture applications (e.g., predicting the outcomes of media awards and political elections), and the evidence for its accuracy is largely journalistic. Nonetheless, a prediction market is just a market, and markets were designed to assess aggregate values, not true states of the world. Markets have many demonstrated weaknesses, even as “evaluation devices.” Most published evaluations of prediction markets are theoretical, making arguments for the efficacy or limits of the method based on economic models rather than empirical data (e.g., Manski, 2006; see Erikson and Wlezien, 2008, for an empirical evaluation of political election markets). Graefe and Weinhardt (2008) provide a “soft” evaluation that concludes that prediction markets and the Delphi Method perform at comparable levels of accuracy.
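To illustrate how share prices can be read as probability assessments, the sketch below implements one standard automated market-maker rule, the logarithmic market scoring rule associated with Hanson. The rule itself is well documented in the prediction-market literature, but the liquidity parameter and trades shown are invented for illustration and are not drawn from this chapter.

    # Minimal sketch of a logarithmic market scoring rule (LMSR) market maker
    # for a binary event. Prices lie between 0 and 1 and can be read directly
    # as the market's implied probability that the event occurs.
    # The liquidity parameter B and the trades below are illustrative only.
    import math

    B = 100.0                      # liquidity: larger B means prices move less per trade
    shares = {"yes": 0.0, "no": 0.0}

    def cost(q):
        return B * math.log(math.exp(q["yes"] / B) + math.exp(q["no"] / B))

    def price(outcome):
        exp_yes = math.exp(shares["yes"] / B)
        exp_no = math.exp(shares["no"] / B)
        return (exp_yes if outcome == "yes" else exp_no) / (exp_yes + exp_no)

    def buy(outcome, amount):
        before = cost(shares)
        shares[outcome] += amount
        return cost(shares) - before   # what the trader pays the market maker

    print("implied P(event) before trading:", round(price("yes"), 3))   # 0.5
    paid = buy("yes", 40)   # a trader who believes the event is likely buys 40 shares
    print("trade cost:", round(paid, 2), "new implied P(event):", round(price("yes"), 3))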

Following the negative public reaction to the Defense Advanced Research Projects Agency–sponsored Policy Analysis Market, the use of prediction markets in government agencies has been reduced, but not eliminated. (The original Policy Analysis Market was attacked by some members of Congress for promoting betting on assassinations and terrorist events, and the project was cancelled. See Congressional Record, 2003, and Hulse, 2003, for more information.) Note that prediction markets are restricted to applications in which a well-defined outcome, expected to occur in the near future, can be verified. Furthermore, no market can be expected to perform efficiently without a substantial number of participants with different views on the “values” of the commodities being traded. Prediction markets are yet another tool for intelligence analysis that merits further exploration, accompanied by hard-headed evaluations of efficacy (Arrow et al., 2007).

The Delphi Method does not share this restriction to verifiable outcomes and is more generally applicable; the requirement for verification is especially restrictive in intelligence applications. One caveat is that users of a partly mechanical system need to think carefully about the method’s impact on information pooling. Recall that a major failing of socially interacting teams is the failure to thoroughly acquire and pool relevant information. A method is therefore needed that encourages participants to share information relevant to the estimates as well as opinions on the correct solution. Some versions of the Delphi Method partially achieve this by requiring that, on each round, each participant report an estimate and provide at least one item of information that he or she believes is an important cue to the solution. Similarly, prediction markets are often accompanied by chat rooms or bulletin boards on which participants are encouraged to share relevant arguments about the information they used. (Note that some market mechanisms, such as posted-bid double auctions, promote information sharing because participants want others to value the investments they themselves have chosen, whereas others, such as parimutuel betting markets, promote secrecy.)

Detection and Problem-Solving Tasks

To summarize, the first general admonition for good performance is to make solid plans and be self-conscious about the team process, to understand the nature of the task you are performing, and to deliberately balance subtask demands for independence and consensus. Second, for estimation tasks, many research-supported suggestions are available on how to execute each subtask most effectively. Early subtasks tend to demand more independence and to profit most from task-relevant diversity. Later subtasks demand more interdependence, coordination, and even conformity. But what if a team is performing another task type? The best advice is to begin by analyzing the task, breaking it down into subtasks, and then figuring out what properties of the team process are demanded by the subtasks. Below are two additional subtask breakdowns for the next most commonly performed analytic tasks.

The second major task performed by intelligence teams is the detection of informative signals in the vast spectrum of noise produced by collectors and sources at an incredible rate. Probably the most common individual analyst task is to forage through the morning’s incoming flood of electronic and other media. For a prototypical analyst, this usually involves searching various e-mail and news sources for something on a specific topic (e.g., Is there anything relevant to the objective of detecting a local terrorist plan to attack a major U.S. target during the visit of a head of state?), or just for something out of place, strange, or anomalous (e.g., What does the sudden appearance of references to “nail polish remover” in e-mails intercepted between two suspected conspirators mean?). For such detection tasks, the research supports a six-subtask process model: (1) sample information; (2) construct an image or mental model of the “normal” or “status quo” conditions; (3) sample more information; (4) detect a difference (or not) that is “large enough” or “over criterion” to explore further; (5) interpret the difference as important or not; and (6) generate an appropriate response.

The analysis and performance of detection tasks are helped greatly by the availability of an optimal model for the detection decision, such as Signal Detection Theory (McClelland, this volume, Chapter 4). Even if the actual Signal Detection calculations cannot be performed, the model provides a useful organizing framework. Hundreds of concrete applications of the model have been reported in well-defined, real-world detection problems in medicine, meteorology, and other domains of practical activity. (Research by Sorkin and his colleagues is at the cutting edge of knowledge on team performance of detection tasks, e.g., Sorkin et al., 2001, 2004.)
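As a reminder of what the basic Signal Detection calculations involve, the sketch below computes the standard equal-variance sensitivity (d') and criterion measures from hit and false-alarm rates; the trial counts are invented for illustration.

    # Minimal sketch of basic Signal Detection Theory indices for a detection task.
    # The counts below are invented; the formulas are the standard equal-variance
    # Gaussian ones: d' = z(hit rate) - z(false-alarm rate),
    # criterion c = -0.5 * (z(hit rate) + z(false-alarm rate)).
    from statistics import NormalDist

    z = NormalDist().inv_cdf   # inverse of the standard normal CDF

    hits, misses = 42, 8                        # trials on which a real signal was present
    false_alarms, correct_rejects = 12, 138     # noise-only trials

    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejects)

    d_prime = z(hit_rate) - z(fa_rate)             # sensitivity: separating signal from noise
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias: > 0 means conservative reporting

    print(f"d' = {d_prime:.2f}, criterion c = {criterion:.2f}")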

For problem-solving and decision-making tasks, there is also an idealized subtask breakdown (although no model of optimal performance): Subtask 1, comprehension of the problem and immersion in the relevant knowledge domains; Subtask 2, hypothesis (solution) generation; Subtask 3, solution evaluation and selection; and Subtask 4, solution application and implementation. Again, a team can bring a far greater volume and diversity of information to bear on a solution than any individual can. The immersion, selection, and implementation subtasks can all be enhanced as more team members are included in a project. Something analogous to error damping can occur in the selection subtask, when diverse critical perspectives are focused on selecting the best generated solution. Furthermore, effectively deployed teamwork can increase the variety and quantity of solutions produced in the innovative solution-generation subtask. (Laughlin’s research on “collective induction” is the best starting place, e.g., Laughlin, 1999.)

Learning from Experience in Teams

Including opportunities to learn from experience is essential for team performance. Effective leaders make sure that individuals receive feedback and coaching to improve both individual problem-solving skills and social teamwork skills. Ideally, when a team completes a task (e.g., by successfully executing the five subtasks that compose an information-integration estimation task), a final subtask would be executed to evaluate the team’s achievements and to extract lessons, at both the team and the individual level, to improve future performance. Objective feedback on the quality of the product (e.g., the accuracy of an estimate) will be of some use, but outcome feedback provides only indirect and partial information about the quality of the team process.

WHY TEAMWORK IS IMPORTANT IN INTELLIGENCE ANALYSIS

Why have teams perform judgment, problem-solving, or decision-making tasks at all? Why not simply find the best individuals and have them perform all of the tasks? This question is often asked in the academic literature on small-group performance. A common answer is that there is no good reason to use teams, or at least face-to-face teams (e.g., Armstrong, 2006). The reasoning is that in most controlled laboratory studies that provide clear comparisons of group versus individual performance, groups perform at lower levels than the best individuals. Loosely speaking, teams perform between the median member and the best member, usually closer to the median (Gigone and Hastie, 1997; Hastie, 1986). So, why not focus on methods to identify the most effective individuals or, at least, move to software-supported collaboration systems that do not require face-to-face meetings? The problem with this advice is that it is unrealistic: it is derived from scientifically valid studies, but studies of relatively simple, controlled tasks that can be performed effectively by both individuals and groups. In the real world of intelligence analysis, many tasks cannot be performed by one individual acting alone. There is no plausible comparison between individual and team performance, because unaided individuals cannot do the tasks. In many areas of intelligence analysis, teamwork is not an option; it is a necessity.

REFERENCES

Armstrong, J. S. 2006. How to make better forecasts and decisions: Avoid face-to-face meetings. Foresight: The International Journal of Applied Forecasting 5:3–8.

Arrow, K. J., S. Sunder, R. Forsythe, R. E. Litan, M. Gorham, E. Zitzewitz, R. W. Hahn, R. Hanson, D. Kahneman, J. O. Ledyard, S. Levmore, P. R. Milgrom, F. D. Nelson, G. R. Neumann, M. Ottaviani, C. R. Plott, T. C. Schelling, R. J. Shiller, V. L. Smith, E. C. Snowberg, C. R. Sunstein, P. C. Tetlock, P. E. Tetlock, H. R. Varian, and J. Wolfers. 2007. Statement on prediction markets. Pub. No. 07-11. Washington, DC: Brookings Institution.

Austin, J. R. 2003. Transactive memory in organizational groups: The effects of content, consensus, specialization, and accuracy on group performance. Journal of Applied Psychology 88(5):866–878.

Baron, J. S., and D. M. Kreps. 1999. Strategic human resources: Frameworks for general managers. New York: John Wiley and Sons.

Baron, R. S. 2005. So right, it’s wrong: Groupthink and the ubiquitous nature of polarized decision making. Advances in Experimental Social Psychology 37:219–253.

Bell, B. S., and S. W. J. Kozlowski. 2002. A typology of virtual teams: Implications for effective leadership. Group and Organizational Management 27(1):12–49.

Bruce, J. B. 2008. The missing link: The analyst–collector relationship. In R. Z. George and J. B. Bruce, eds., Analyzing intelligence: Origins, obstacles, and innovations (pp. 191–210). Washington, DC: Georgetown University Press.

Calvert, R. L. 1985. The value of biased information: A rational choice model of political advice. Journal of Politics 47(2):530–555.

Chen, G., R. Kanfer, R. P. DeShon, J. E. Mathieu, and S. W. J. Kozlowski. 2009. The motivating potential of teams: Test and extension of Chen and Kanfer’s (2006) cross-level model of motivation in teams. Organizational Behavior and Human Decision Processes 101(1):45–55.

Choi, H.-S., and L. Thompson. 2005. Old wine in a new bottle: Impact of membership change on group creativity. Organizational Behavior and Human Decision Processes 98(2):121–132.

Congressional Record. 2003. (Senate), July 29, pp. S10082–S10083. Available: http://www.fas.org/sgp/congress/2003/s072903.html [accessed June 2010].

Cooksey, R. W. 1996. Judgment analysis: Theory, methods, and applications. San Diego, CA: Academic Press.

Davis, J. 2008. Why bad things happen to good analysts. In R. Z. George and J. B. Bruce, eds., Analyzing intelligence: Origins, obstacles, and innovations (pp. 157–170). Washington, DC: Georgetown University Press.

Dawes, R. M. 1993. Prediction of the future versus understanding of the past: A basic asymmetry. American Journal of Psychology 106(1):1–24.

De Groot, M. H. 1970. Optimal statistical decisions. New York: McGraw-Hill (reprinted, 2004, Wiley Classics Library).

Diehl, M., and W. Stroebe. 1987. Productivity loss in brainstorming groups: Toward a solution of a riddle. Journal of Personality and Social Psychology 53(3):497–509.

Eisenhardt, K. M., J. L. Kahwajy, and L. J. Bourgeois, III. 1997. How management teams can have a good fight. Harvard Business Review 75(4):77–85.

Erikson, R. S., and C. Wlezien. 2008. Are political markets really superior to polls as election predictors? Public Opinion Quarterly 72(2):190–215.

Fiore, S. M., E. Salas, H. M. Cuevas, and C. A. Bowers. 2003. Distributed coordination space: Toward a theory of distributed team process and performance. Theoretical Issues in Ergonomic Science 4(3):340–364.

Fischhoff, B. 1975. Hindsight is not equal to foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance 1(3):288–299.

Gigone, D. M., and R. Hastie. 1993. The common knowledge effect: Information sharing and group judgment. Journal of Personality and Social Psychology 65:959–974.

Gigone, D., and R. Hastie. 1997. The proper analysis of the accuracy of group judgments. Psychological Bulletin 121:149–167.

Gladwell, M. 2003. Connecting the dots: The paradoxes of intelligence reform. The New Yorker (March 10):83–89.

Gold, T., and B. Hermann. 2003. The role and status of DoD Red Teaming activities. Technical Report, September, No. A139714. Washington, DC: Storming Media USA.

Goodwin, D. K. 2005. Team of rivals: The political genius of Abraham Lincoln. New York: Simon and Schuster.

Graefe, A., and C. Weinhardt. 2008. Long-term forecasting with prediction markets—A field experiment on applicability and expert confidence. Journal of Prediction Markets 2(2):71–91.

Green, K. C., J. S. Armstrong, and A. Graefe. 2008. Methods to elicit forecasts from groups: Delphi and prediction markets compared. Foresight: The International Journal of Applied Forecasting 8:17–20.

Greitemeyer, R., S. Schulz-Hardt, F. C. Brodbeck, and D. Frey. 2006. Information sampling and group decision making: The effects of an advocacy decision procedure and task experience. Journal of Experimental Psychology: Applied 12(1):31–42.

Hackman, J. R. 2002. Leading teams: Setting the stage for great performances. Boston, MA: Harvard Business School Press.

Hackman, J. R., and N. Vidmar. 1970. Effects of size and task type on group performance and member reactions. Sociometry 33(1):37–54.

Hackman, J. R., S. M. Kosslyn, and A. W. Woolley. 2008. The design and leadership of intelligence analysis teams. Unpublished Technical Report No. 11. Available: http://groupbrain.wjh.harvard.edu [accessed February 2010].

Hammond, K. R., J. Rohrbaugh, J. Mumpower, and L. Adelman. 1977. Social judgment theory: Applications in policy formation. In M. F. Kaplan and S. Schwartz, eds., Human judgment and decision processes in applied settings (pp. 1–30). New York: Academic Press.

Hanson, R., R. Oprea, and D. Porter. 2006. Information aggregation and manipulation in an experimental market. Journal of Economic Behavior and Organization 60(4):449–459.

Hastie, R. 1986. Experimental evidence on group accuracy. In B. Grofman and G. Owen, eds., Information pooling and group decision making (pp. 129–157). Greenwich, CT: JAI Press.

Heuer, R. J., Jr. 2007. Small group processes for intelligence analysis. Unpublished manuscript, Sherman Kent School of Intelligence Analysis, Central Intelligence Agency. Available: http://www.pherson.org/Library/H11.pdf [accessed February 2010].

Heuer, R. J., Jr. 2008. Computer-aided analysis of competing hypotheses. In R. Z. George and J. B. Bruce, eds., Analyzing intelligence: Origins, obstacles, and innovations (pp. 251–265). Washington, DC: Georgetown University Press.

Hinsz, V. B., R. S. Tindale, and D. A. Vollrath. 1997. The emerging conception of groups as information processors. Psychological Bulletin 121(1):43–64.

Hollingshead, A. B. 1998. Group and individual training: The impact of practice on performance. Small Group Research 29(2):254–280.

Hulse, C. 2003. Pentagon abandons plans for futures market on terror. New York Times. July 2.

Janis, I. L. 1972. Victims of Groupthink: A psychological study of foreign-policy decisions and fiascos. Boston, MA: Houghton Mifflin.

Janis, I. L. 1982. Groupthink: Psychological studies of policy decisions and fiascos, 2nd ed. Boston, MA: Houghton Mifflin.

Jervis, R. 2006. Reports, politics, and intelligence failures: The case of Iraq. Journal of Strategic Studies 29(1):3–52.

Kelley, T., and J. Littman. 2001. The art of innovation: Lessons in creativity from IDEO, America’s leading design firm. New York: Random House.

Kerr, N. L., and R. S. Tindale. 2004. Group performance and decision making. Annual Review of Psychology 55:623–655.

Klein, G., and T. E. Miller. 1999. Distributed planning teams. International Journal of Cognitive Ergonomics 3(3):203–222.

Klein, G., K. G. Ross, B. M. Moon, D. E. Klein, and E. Hollnagel. 2003. Macrocognition. IEEE Intelligent Systems May–June:81–85.

Kozlowski, S. W. J., and D. R. Ilgen. 2006. Enhancing the effectiveness of work groups and teams. Psychological Science in the Public Interest 7(3):77–124.

Kramer, T. J., G. P. Fleming, and S. M. Mannis. 2001. Improving face-to-face brainstorming through modeling and facilitation. Small Group Research 32(5):533–557.

Larson, J. R., Jr., P. G. Foster-Fishman, and C. B. Keys. 1994. Discussion of shared and unshared information in decision-making groups. Journal of Personality and Social Psychology 67(3):446–461.

Larson, J. R., Jr., C. Christensen, A. S. Abbot, and T. M. Franz. 1996. Diagnosing groups: Charting the flow of information in medical decision-making teams. Journal of Personality and Social Psychology 71(2):533–557.

Laughlin, P. R. 1999. Collective induction: Twelve postulates. Organizational Behavior and Human Decision Processes 80(1):50–69.

Lavery, T. A., T. M. Franz, J. R. Winquist, and J. R. Larson, Jr. 1999. The role of information exchange in predicting group accuracy on a multiple judgment task. Basic and Applied Social Psychology 21(4):281–289.

Lee, M. D., and T. D. R. Cummins. 2004. Evidence accumulation in decision making: Unifying the “take the best” and the “rational” models. Psychonomic Bulletin and Review 11(2):343–352.

Linstone, H. A., and M. Turoff, eds. 1975. The Delphi Method: Techniques and applications. Reading, MA: Addison-Wesley Educational. Available: http://www.is.njit.edu/pubs/delphibook/ [accessed February 2010].

Mannix, E., and M. A. Neale. 2005. What differences make a difference? The promise and reality of diverse teams in organizations. Psychological Science in the Public Interest 6(2):31–55.

Manski, C. F. 2006. Interpreting the predictions of prediction markets. Economics Letters 91(4):425–429.

Mathieu, J. E., T. S. Heffner, G. F. Goodwin, E. Salas, and J. A. Cannon-Bowers. 2000. The influence of shared mental models on team process and performance. Journal of Applied Psychology 85:273–283.

Moreland, R. L., and J. M. Levine. 1982. Socialization in small groups: Temporal changes in individual-group relations. Advances in Experimental Social Psychology 15:137–192.

Moreland, R. L., and L. Myaskovsky. 2000. Exploring the performance benefits of group training: Transactive memory or improved communication. Organizational Behavior and Human Decision Processes 82(1):117–133.

Moreland, R. L., L. Argote, and R. Krishnan. 1996. Socially shared cognition at work: Transactive memory and group performance. In J. L. Nye and A. M. Browker, eds., What’s social about social cognition (pp. 128–141). Thousand Oaks, CA: Sage.

Nemeth, C. J. 1986. Differential contributions of majority and minority influence. Psychological Review 93(1):23–32.

Nemeth, C. J., K. Brown, and J. Rogers. 2001. Devil’s advocate versus authentic dissent: Stimulating quantity and quality. European Journal of Social Psychology 31(6):707–720.

Nijstad, B. A., W. Stroebe, and H. F. M. Lodewijkx. 2003. Production blocking and idea generation: Does blocking interfere with cognitive processes? Journal of Experimental Social Psychology 39(4):531–548.

Okhuysen, G. A., and K. M. Eisenhardt. 2002. Integrating knowledge in groups: How formal interventions enable flexibility. Organization Science 13(4):370–386.

Oxley, N. L., M. T. Dzindolet, and P. B. Paulus. 1996. The effects of facilitators on the performance of brainstorming groups. Journal of Social Behavior and Personality 11(4): 633–646.

Page, S. E. 2007. The difference: How the power of diversity creates better groups, firms, schools, and societies. Princeton, NJ: Princeton University Press.

Paris, C. R., E. Salas, and J. A. Cannon-Bowers. 2000. Teamwork in multi-person systems: A review and analysis. Ergonomics 43(8):1,052–1,075.

Pashler, H., M. McDaniel, D. Rohrer, and R. Bjork. 2008. Learning styles: Concepts and evidence. Psychological Science in the Public Interest 9(3):105–119.

Paulus, P. B. 1998. Developing consensus about Groupthink after all these years. Organizational Behavior and Human Decision Processes 73(2/3):362–374.

Paulus, P. B., and H.-C. Yang. 2000. Idea generation in groups: A basis for creativity in organizations. Organizational Behavior and Human Decision Processes 82(1):76–87.

Plous, S. 1995. A comparison of strategies for reducing interval overconfidence in group judgments. Journal of Applied Psychology 80(4):443–454.

Puncochar, J. M., and P. W. Fox. 2004. Confidence in individual and group decision making: When “two heads” are worse than one. Journal of Educational Psychology 96(3):582–591.

Rescher, N. 1998. Predicting the future. Albany: State University of New York Press.

Rohrbaugh, J. 1979. Improving the quality of group judgment: Social judgment analysis and the Delphi technique. Organizational Behavior and Human Performance 24(1):73–92.

Rohrbaugh, J. 1981. Improving the quality of group judgment: Social judgment analysis and the nominal group technique. Organizational Behavior and Human Performance 28(2): 272–288.

Rouse, W. B., and N. M. Morris. 1986. On looking into the black box: Prospects and limits in the search for mental models. Psychological Bulletin 100(3):349–363.

Rowe, G., and G. Wright. 1999. The Delphi technique as a forecasting tool: Issues and analysis. International Journal of Forecasting 15(3):353–375.

Safire, W. 2004. On language: Groupthink. Available: http://www.nytimes.com/2004/08/08/magazine/the-way-we-live-now-8-8-04-on-language-groupthink.html?sec=&spon=&pagewanted=1 [accessed February 2010].

Schkade, D., C. R. Sunstein, and R. Hastie. 2007. What happened on deliberation day? California Law Review 95(3):915–940.

Schulz-Hardt, S., F. C. Brodbeck, A. Mojzisch, R. Kerschreiter, and D. Frey. 2006. Group decision making in hidden profile situations: Dissent as a facilitator for decision quality. Journal of Personality and Social Psychology 91(6):1,080–1,093.

Schum, D. A. 1987. Evidence and inference for the intelligence analyst (2 vols.). Lanham, MD: University Press of America.

Schweiger, D. M., W. R. Sandberg, and J. W. Ragan. 1986. Group approaches to improving strategic decision making: A comparative analysis of dialectical inquiry, devil’s advocacy, and consensus. Academy of Management Journal 29(1):51–71.

Schwenk, C. R. 1988. The essence of strategic decision making. Lexington, MA: Lexington Press.

Schwenk, C. R. 1990. Effects of devil’s advocacy and dialectical inquiry on decision making: A meta-analysis. Organizational Behavior and Human Decision Processes 47(1):161–176.

Senate Select Committee on Intelligence. 2004. Report of the Select Committee on Intelligence on the U.S. intelligence community’s prewar intelligence assessments on Iraq. Available: http://www.gpoaccess.gov/serialset/creports/iraq.html [accessed June 2010].

Sniezek, J. A. 1992. Groups under uncertainty: An examination of confidence in group decision making. Organizational Behavior and Human Decision Processes 52(2):124–155.

Sorkin, R., C. Hays, and R. West. 2001. Signal-detection analysis of group decision making. Psychological Review 108(1):183–203.

Sorkin, R., S. Luan, and J. Itzkowitz. 2004. Group decision and deliberation: A distributed detection process. In D. J. Koehler and N. Harvey, eds., Blackwell handbook of judgment and decision making (pp. 464–484). Malden, MA: Blackwell.

Stasser, G., and M. Augustinova. 2007. Social engineering in distributed decision-making teams: Some implications for leadership at a distance. In S. P. Weisband, ed., Leadership at a distance: Research in technologically supported work (pp. 151–168). Mahwah, NJ: Erlbaum Associates.

Stasser, G., and W. Titus. 1985. Pooling of unshared information in group decision making: Biased information sampling during discussion. Journal of Personality and Social Psychology 48(6):1,467–1,478.

Stasser, G., and W. Titus. 2003. Hidden profiles: A brief history. Psychological Inquiry 14(3/4):304–313.

Stasser, G., D. D. Stewart, and G. M. Wittenbaum. 1995. Expert roles and information exchange during discussion: The importance of knowing who knows what. Journal of Experimental Social Psychology 31:244–265.

Steiner, I. D. 1972. Group process and productivity. San Diego, CA: Academic Press.

Straus, S. G., A. M. Parker, J. B. Bruce, and J. W. Dembosky. 2008. The group matters: A review of the effects of group interaction processes and outcomes in analytic teams. Working Paper WR-580-USG. Santa Monica, CA: RAND Corporation. Available: http://www.rand.org/pubs/working_papers/2009/RAND_WR580.pdf [accessed February 2010].

Sunstein, C. R. 2006. Infotopia: How many minds produce knowledge. New York: Doubleday.

Sunstein, C. R. 2009. Going to extremes: How like minds unite and divide. New York: Oxford University Press.

Surowiecki, J. 2004. The wisdom of crowds. New York: Doubleday.

Tapscott, D., and A. D. Williams. 2006. Wikinomics: How mass collaboration changes everything. New York: Penguin.

Tetlock, P. E. 2006. Expert political judgment: How good is it? How can we know? Princeton, NJ: Princeton University Press.

Turner, M. E., and A. R. Pratkanis. 1998. Twenty-five years of Groupthink theory and research: Lessons from the evaluation of a theory. Organizational Behavior and Human Decision Processes 73(2–3):105–115.

U.S. Government. 2009. Tradecraft primer: Structured analytic techniques for improving intelligence analysis. Available: https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/Tradecraft%20Primer-apr09.pdf [accessed February 2010].

Wageman, R. 2001. How leaders foster self-managing team effectiveness: Design choice versus hands-on coaching. Organization Science 12(5):559–577.

Whyte, G. 1998. Recasting Janis’s Groupthink model: The key role of collective efficacy in decision fiascoes. Organizational Behavior and Human Decision Processes 73(2–3): 185–209.

Whyte, W. H., Jr. 1952. Groupthink. Fortune Magazine 45(March):6–7.

Wilson, J. R., and A. Rutherford. 1989. Mental models: Theory and application in human factors. Human Factors 31(5):617–634.

Winquist, J. R., and J. R. Larson, Jr. 1998. Information pooling: When it impacts group decision making. Journal of Personality and Social Psychology 74(2):371–377.

Wolfers, J., and E. Zitzewitz. 2004. Prediction markets. Journal of Economic Perspectives 18(2):107–126.

Yasin, R. 2009. National security and social networking are compatible. Government Computer News, July 23. Available: http://www.gcn.com/Articles/2009/07/23/Social-networking-media-national-security.aspx?Page=1 [accessed February 2010].

Zarnoth, P., and J. A. Sniezek. 1997. The social influence of confidence in group decision making. Journal of Experimental Social Psychology 33(4):345–366.
