
Human-AI Teaming: State-of-the-Art and Research Needs (2022)

Chapter: 3 Human-AI Teaming Processes and Effectiveness

Suggested Citation:"3 Human-AI Teaming Processes and Effectiveness." National Academies of Sciences, Engineering, and Medicine. 2022. Human-AI Teaming: State-of-the-Art and Research Needs. Washington, DC: The National Academies Press. doi: 10.17226/26355.

3

Human-AI Teaming Processes and Effectiveness

WHAT DOES IT MEAN FOR AI TO BE A TEAMMATE?

A team is an interdependent group of members, each with their own roles and responsibilities, who come together to address a particular goal (Salas et al., 1992). An AI system can be a member of a team if it takes on roles and responsibilities and can function interdependently. In the committee’s opinion, the word teammate does not imply humanness; human-animal pairs make good teams. An AI team member need not replicate actions that humans can already perform. AI is a very different sort of intelligence from human intelligence, with different strengths and limitations. As discussed in Chapter 2, the human-animal team metaphor may, for this reason, be better suited than that of human-human teaming (see Forbus, 2016). AI ought to do what AI does best (e.g., high computational speed, expansive memory) or what humans would rather not do (e.g., work that is dull, dirty, and dangerous, in the case of embodied AI) (Wojton et al., 2021).

In the committee’s judgment, human-AI teaming is a step beyond human-AI interaction. The terms team and teammate describe a system expanded from one human and one machine (e.g., a human-AI interaction or a human-robot interaction) to a team of more than two heterogeneous entities, each with their own roles and responsibilities (technically, two members can form a team; however, the team literature tends to involve teams of three or more). Researchers can look to the human team literature, as well as to the human-animal team literature, for novel methods to improve human-AI team effectiveness. In general, the teamwork and teammate concepts are useful for extending the science of teamwork into the field of human-centered AI.

In the committee’s opinion, considering an AI system to be a teammate does not indicate that the AI system is a human, human-like, or on the same level as humans. Humans tend to anthropomorphize machines of all types (e.g., Roombas, cars, Alexa) and AI is no exception. However, in the committee’s judgment, given that AI differs from humans in many ways, it is misleading to encourage anthropomorphism by designing an AI system with human-like features (Salles, Evers, and Farisco, 2020). Additionally, an AI system as a teammate does not imply loss of human control. Control structure is independent of the team concept, and the control that teammates exert over other teammates is dependent on the mission or specific task. Finally, designing an AI system to be an effective teammate does not imply that the AI system is not human-user centered. Designing an AI system to work well as a teammate increases human-centeredness, based on the results of more than three decades of teamwork literature providing extensive guidance for effective teaming (Wojton et al., 2021). Ultimately, an effective human-AI team augments human capabilities and raises performance beyond that of either entity.


In the committee’s judgment, AI developers who are unfamiliar with the science of team effectiveness too often presume to know what good human-AI teaming is and what it means for AI to be a good teammate. The committee finds that the science of team effectiveness needs to be better translated to AI development. Research is also needed on specific mechanisms for human-AI teaming, which may or may not resemble the methods of human-human or human-animal teaming. The remainder of this chapter explores the state of the art in the science of team effectiveness, the implications for human-AI teams, and the research needed to fill the gaps in effective human-AI teaming.

PROCESSES AND CHARACTERISTICS OF EFFECTIVE HUMAN-AI TEAMS

How can we achieve effective human-AI teaming by drawing on what we know about human teaming and human-animal teaming? Cuevas and colleagues (2007) developed a framework for understanding how the introduction of machine teammates can influence both individual and team cognition, implying that models of effective teaming need to be adapted to reflect the introduction of this new type of teammate.

Within the broader discussion of social units and types of tasks, McGrath (1984) describes eight types of tasks, which could be used to guide the design of appropriate human-AI teams. Task type, however, is not the only important focus of team interactions; aspects of task coordination, information flow, and role support are also vital elements (Riley et al., 2006; Salas, Bowers, and Cannon-Bowers, 1995). These tasks, including planning and creative idea generation, persuasion and conflict negotiation, and competitions and psychomotor performances, require different types of team structures, functional roles, and allocations of tasks over the duration of team interactions. A team may also exist in consistent form for multiple cycles of performance or may reconstitute itself with different members for each distinct task cycle. Regardless of task performance demands, it can be assumed that interdependent management of activities, goals, knowledge, roles, and task constraints is a critical component of team interactions. Beginning in the 1980s, studies of military teams have emphasized team performance outcomes, processes, and the effectiveness of training protocols (i.e., methods for improving outcomes and processes) (Salas, Bowers, and Cannon-Bowers, 1995; Sottilare et al., 2017). Less is known from the team literature about the types of long-term, distributed, and agile teams that will be needed to function in military multi-domain operations (MDO).

Team Heterogeneity

In the committee’s judgment, heterogeneity, coupled with the interdependence of teammates, is the main feature distinguishing teams from groups. Teammates each have their own roles and responsibilities, which can be at the taskwork or teamwork level. For instance, one teammate may be responsible for flying the plane and another for navigation; this is taskwork heterogeneity. In addition, the pilot teammate may be in command and responsible for making final decisions (i.e., teamwork heterogeneity). In the committee’s opinion, this same heterogeneity is also advantageous in an AI teammate. In a good team design, the AI system will do what AI does best (e.g., tasks that require high computational speed or expansive memory) or what humans do not want to do, and humans will do what humans do best (e.g., key decision making, adaptive planning) (Nadeem, 2021). This differentiation implies that an AI system will not replicate human capabilities and limitations and will instead specialize in narrow tasks, like the animal in a human-animal team. Exceptions may exist in rare cases of team training, in which synthetic teammates stand in for human counterparts (Myers et al., 2018), and potentially in social robotics, in which AI performs human care-taking roles (Lee et al., 2017). Centaur teams, in which the human and machine serve as perfect complements of each other, have the potential to operate at levels that exceed the capability of either the human or the machine alone (Case, 2018).

It is important to note that proper team composition goes beyond simple function allocation based on a men-are-better-at/machines-are-better-at approach (Roth et al., 2019). The interdependencies are also of critical importance. Heterogeneity is an element of team structure, but interdependencies reflect team process. Johnson and colleagues (2014) have developed a method of co-active team design that puts interdependencies at the forefront. Further, responsibilities, such as the control structure of a team, may depend on context. In the committee’s judgment, it is also important for long-term, distributed, and adaptive teams to have a degree of overlap in roles
and responsibilities, so that teammates can back each other up or take over responsibilities when a teammate is absent. In the committee’s judgment, assembling long-term, distributed, agile teams that exhibit function allocation, interdependency management, and sufficient overlap of responsibilities is a challenge and represents a research gap. In addition, because of the increasing complexity of teams, AI may be useful in the role of team assembler (see Chapter 9).
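To make the interplay of function allocation, interdependency management, and overlapping responsibilities concrete, consider the minimal sketch below. It is purely illustrative: the capability labels, team composition, and the rule that every task should have at least two capable members are assumptions invented for the example, not methods drawn from the cited literature.

```python
"""Illustrative sketch: capability-based function allocation with a
redundancy check, so that every task has a potential backup teammate."""

from dataclasses import dataclass, field


@dataclass
class Teammate:
    name: str
    kind: str  # "human" or "AI"; marks team heterogeneity
    capabilities: set[str] = field(default_factory=set)


def allocate(tasks: dict[str, str], team: list[Teammate]) -> dict[str, list[str]]:
    """Map each task to every teammate capable of it (primary plus backups)."""
    assignment: dict[str, list[str]] = {}
    for task, needed in tasks.items():
        capable = [m.name for m in team if needed in m.capabilities]
        if len(capable) < 2:  # overlap of responsibilities is missing
            print(f"warning: no backup for task '{task}'")
        assignment[task] = capable
    return assignment


team = [
    Teammate("pilot", "human", {"command", "adaptive_planning", "navigation"}),
    Teammate("copilot", "human", {"navigation", "communication"}),
    Teammate("agent", "AI", {"navigation", "sensor_fusion", "route_search"}),
]
tasks = {
    "fly_route": "navigation",
    "final_decision": "command",
    "fuse_sensors": "sensor_fusion",
}
print(allocate(tasks, team))
# "fly_route" has three capable members (overlap supports backup behavior);
# "final_decision" and "fuse_sensors" have only one each, the kind of
# redundancy gap that makes long-term, agile teams hard to assemble.
```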

Shared Cognition

The study of internal processes of team members (i.e., mental models) to identify, refine, and improve both team performance and the relevant measures of processes and outcomes is a distinct area of research (Rouse, Cannon-Bowers, and Salas, 1992). Mental models are “mechanisms whereby humans are able to generate descriptions of system purpose and form, explanation of system functioning and observed system states, and predictions of future states” (Rouse and Morris, 1985, p. 7).

One area of team research focuses on whether teammates hold a shared mental model. A shared mental model is a consistent understanding and representation, across teammates, of how systems work (i.e., the degree of agreement of one or more mental models). A shared mental model includes models of the technology and equipment, models of taskwork, models of teamwork, and models of teammates (i.e., teammates’ knowledge, skills, attitudes, and preferences) (Cannon-Bowers, Salas, and Converse, 1993). Relatedly, a team mental model is a mental model of one’s teammate(s) that provides an understanding of teammates’ capabilities, limitations, current goals and needs, and current and future performance (Cannon-Bowers, Salas, and Converse, 1993). The similarity of team mental models and task mental models among team members, as well as their accuracy, directly contributes to effective team processes, which significantly affect overall team performance (DeChurch and Mesmer-Magnus, 2010; Mathieu et al., 2000). Shared mental models within teams also contribute to the development of shared situation awareness (Cooke, Kiekel, and Helm, 2001; Endsley and Jones, 2001; Endsley, 2020b) (see Chapter 4). In addition, it should be noted that knowledge in teams can be emergent, developing dynamically with experience (Grand et al., 2016).
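Researchers often operationalize mental-model similarity by comparing teammates’ judgments of how related key task concepts are. The sketch below illustrates one such comparison; the ratings and the use of Pearson correlation as the similarity index are assumptions for illustration, not a measure prescribed by this literature.

```python
"""Illustrative sketch: quantifying mental-model similarity as the Pearson
correlation between teammates' concept-relatedness ratings. The ratings and
the choice of correlation are assumptions made for illustration."""

from itertools import combinations
from statistics import correlation  # available in Python 3.10+

# Each member rates how related each of six concept pairs is (1-5 scale).
ratings = {
    "pilot":     [5, 4, 2, 1, 3, 4],
    "navigator": [5, 3, 2, 1, 3, 5],
    "ai_agent":  [2, 5, 4, 1, 2, 3],
}


def sharedness(ratings: dict[str, list[int]]) -> dict[tuple[str, str], float]:
    """Pairwise similarity of mental models across the team."""
    return {
        (a, b): correlation(ratings[a], ratings[b])
        for a, b in combinations(ratings, 2)
    }


for pair, r in sharedness(ratings).items():
    print(pair, round(r, 2))
# A high value for (pilot, navigator) suggests a shared task model; lower
# values for pairs involving the AI agent flag a model-alignment gap.
```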

Effective teammates need to have goals that are aligned; however, the true meaning of goal alignment is unclear. It is possible, especially in multi-team systems like those found in MDO, that goals are tied to tasks, roles, and responsibilities, and so may also diverge (Zaccaro, Marks, and DeChurch, 2012). Effective teammates understand the team’s overarching goal and have individual goals that may be disparate but do not conflict with those of their fellow teammates.

Knowledge specialization is expected within many MDO teams due to high levels of heterogeneity. A teammate’s knowledge of the task or team is generally tied to that teammate’s roles and responsibilities. Thus, on a heterogeneous team, one should expect knowledge diversity (Cooke et al., 2013). Knowledge sharing is required when team members each hold unique information that is critical for the task and team (i.e., unique situation awareness requirements) (Endsley and Jones, 2001). Transactive memory systems represent another form of shared cognition (Brandon and Hollingshead, 2004). In a transactive memory system, knowledge of the task and team is distributed among interdependent team members, which increases the need for coordination and communication. See Chapter 4 for a discussion of team processes, mechanisms, and devices used for information sharing in teams.
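A transactive memory system can be pictured as a "who knows what" directory that routes each query to the member holding the relevant knowledge. The sketch below is a minimal illustration; the expertise domains and routing rule are assumptions invented for the example.

```python
"""Illustrative sketch: a transactive memory directory ("who knows what")
that routes a query to the member holding the relevant expertise. The
domain labels and routing rule are assumptions made for illustration."""

expertise = {
    "weather": "ai_agent",   # fast-changing sensor data held by the AI
    "airspace": "navigator",
    "mission": "commander",
}


def route(query_domain: str) -> str:
    """Knowing who knows what turns a broadcast into a targeted request."""
    owner = expertise.get(query_domain)
    return f"ask {owner}" if owner else "broadcast to team"


print(route("weather"))    # -> ask ai_agent
print(route("logistics"))  # -> broadcast to team (no known holder)
```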

Alignment of all types of information, including goals, is a form of coordination (Caldwell, Palmer, and Cuevas, 2008). In long-term, distributed, agile teams, increasing complexity may create an increased need for dynamic goal alignment, as well as alignment of teamwork and taskwork models. MDO teams can be considered complex, long-term, distributed teams that must be agile in their deployment and problem-solving abilities. In the committee’s opinion, research is needed on mechanisms of goal and mental-model alignment in human-AI teams, and on the potential role of AI in facilitating this alignment. The alignment of goals and mental models is one of many communication and coordination challenges covered in the next section (see Chapter 4).


Communication and Coordination

Communication and coordination are essential for teamwork, given teamwork’s interdependent nature. Because team cognition involves more than just knowledge, it can be characterized in terms of communication and coordination processes in addition to knowledge or shared models (Cooke et al., 2013). Research on group communication extends back to the 1950s and includes Leavitt’s 1951 work describing circle, chain, and other configurations of people communicating with each other in a group. This research addresses not only the flow of task procedures in specific circumstances, but also the stability and robustness of communication patterns in response to changes in situation, resolution of error, and updates in plans (Gorman et al., 2020).

Communication can be verbal or nonverbal and can take place through various modalities, such as voice or text. Much progress has been made toward the creation of AI that understands natural human language; however, natural language processing remains a challenge for human-AI teaming. Moreover, natural language, with all its ambiguities, may not be the language of choice for effective teaming. For instance, humans and animals team effectively by signaling and by observing behavioral cues, without natural language communication. Similarly, in military contexts and aviation, various forms of signaling and brevity code are used (Achille, Gladwell Schulze, and Schmidt-Nielsen, 1995). In addition, it may be important to identify various communication modalities (e.g., visual, auditory, tactile) with the goal of balancing the load on each. Communication also needs to take place implicitly when direct communication is not possible. Research is needed on the language of effective human-AI teams, especially for those that are long-term, distributed, and agile.
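The goal of balancing load across modalities can be sketched as a simple channel-selection rule. In the illustration below, the channel names, message costs, and least-loaded heuristic are all assumptions invented for the example rather than a design from the literature.

```python
"""Illustrative sketch: routing messages over the least-loaded communication
modality to balance visual, auditory, and tactile demands. The channel names,
message costs, and least-loaded rule are assumptions made for illustration."""

load = {"visual": 0.0, "auditory": 0.0, "tactile": 0.0}  # current demand, 0..1


def send(message: str, cost: float) -> str:
    """Deliver a message over whichever channel currently has spare capacity."""
    channel = min(load, key=load.get)  # least-loaded channel; ties go in order
    load[channel] = min(1.0, load[channel] + cost)
    return f"{channel}: {message}"


print(send("TARGET AT WAYPOINT 3", 0.4))  # visual (all channels start idle)
print(send("FUEL LOW", 0.4))              # auditory
print(send("TURN LEFT", 0.4))             # tactile
```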

Communicating in a common language is just one requirement for effective teamwork. Communication also needs to be accurate and directed to the right team member at the right time or, in other words, coordinated. Effective teamwork requires “orchestrating the sequence and timing of interdependent actions” (Marks, Mathieu, and Zaccaro, 2001, p. 363). Recognizing “the right team member” and “the right time” can be subtle and may only be apparent with significant experience (Demir et al., 2018). In a study of three-agent remotely piloted aircraft control, a synthetic teammate succeeded in communicating with its human teammates in restricted natural language, but failed at coordination (Demir, McNeese, and Cooke, 2016; McNeese et al., 2018). Specifically, the synthetic teammate did not anticipate the information needs of human teammates (Entin and Serfaty, 1999), who consequently had to request necessary information, which delayed target processing. Interestingly, the human teammates entrained on the behavior of the synthetic teammate, and coordination ultimately broke down across the team. The level of coordination and teamwork needed for high-performing teams (e.g., players on a basketball team) requires that the AI system have a very deep model of its human teammates, including day-to-day variations in their status. This is likely an optimistic goal (Rasmussen, 1983).
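The failure mode just described, in which information arrives only after a request, can be contrasted with an anticipatory push rule. The sketch below is loosely inspired by the remotely piloted aircraft task, but the event types and information-need profiles are assumptions invented for the example, not details of the cited studies.

```python
"""Illustrative sketch: anticipatory information push. Rather than waiting
to be asked (the coordination failure described above), the agent forwards
an event to every teammate whose known information needs match it. The
event names and need profiles are assumptions made for illustration."""

# Which teammates need to hear about which event types.
needs = {
    "photographer": {"target_coordinates", "altitude_restriction"},
    "pilot": {"altitude_restriction", "airspeed_restriction"},
}


def on_event(event_type: str, payload: str) -> list[str]:
    """Push to everyone whose needs match, instead of awaiting a request."""
    return [
        f"push to {who}: {payload}"
        for who, needed in needs.items()
        if event_type in needed
    ]


for line in on_event("target_coordinates", "target at N35.1 W106.6"):
    print(line)  # -> push to photographer: target at N35.1 W106.6
```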

On the other hand, the same study found that a synthetic teammate could model good coordination behavior and subtly coach the team’s coordination. This coordination coaching was also effective at improving team process in mock code-blue resuscitation exercises (Hinski, 2017). Imbuing AI with coordination capabilities along with communication capabilities is essential for effective teaming. The need for effective coordination behaviors is even greater in long-term, distributed, agile teams, as Caldwell (2005) found for space-operations teams with distributed expertise. In the committee’s judgment, AI could also play a role in coordination coaching, guiding a team’s effective coordination.

Social Intelligence

Human teammates can make use of social intelligence for effective teaming. They can understand the beliefs, desires, and intentions of fellow teammates by developing a theory of mind (i.e., by observing their teammates’ behaviors and ascribing mental states to them) (Premack and Woodruff, 1978; Rabinowitz et al., 2018; Wimmer and Perner, 1983). Humans can rely on theory of mind to make sense of teammate behavior and to assist with teamwork as needed. Theory of mind is also important in understanding deception. It is less clear how important theory of mind is for effective teaming. Animals are thought to have not a theory of mind but rather a theory of behavior (Schünemann et al., 2021). That is, animals understand the behavior of their human partners in context and can draw on this information to understand human intent. There have been recent efforts directed toward imbuing AI with social intelligence (e.g., Dautenhahn, 2007), such as the Defense Advanced Research Projects Agency’s ASIST program, though this may resemble a theory of behavior more than a full theory of mind (Sandberg, 2021). Further, there is considerable overlap between theory of mind and team mental models. In the committee’s opinion, there is a gap in the knowledge base in terms of understanding the limitations of teaming with AI systems that possess a theory of behavior and not a theory of mind.
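The distinction between a theory of behavior and a theory of mind can be made concrete with a small sketch: a behavior-based predictor maps observed actions in context to likely intent, without representing the teammate’s beliefs or desires. The behavior-to-intent mappings below are invented for illustration.

```python
"""Illustrative sketch: a theory-of-behavior predictor. Observed behavior in
context is mapped to likely intent, with no representation of the teammate's
beliefs or desires. All behavior-to-intent mappings are invented."""

# (observed behavior, context) -> inferred intent, accumulated from experience.
behavior_model = {
    ("slows_down", "near_waypoint"): "preparing_to_photograph",
    ("slows_down", "low_fuel"): "conserving_fuel",
    ("circles", "near_target"): "awaiting_clearance",
}


def infer_intent(behavior: str, context: str) -> str:
    """Contextual lookup only; no reasoning about mental states."""
    return behavior_model.get((behavior, context), "unknown")


print(infer_intent("slows_down", "near_waypoint"))  # preparing_to_photograph
print(infer_intent("slows_down", "mid_transit"))    # unknown: an unmodeled
# context defeats the predictor, whereas a theory of mind could still reason
# about what the teammate believes and wants.
```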

Other Features of Effective Teams

Interpersonal trust and trust in the team as a whole are important in human teams and human-animal teams. Literature pertaining to trust in machine teammates is covered in depth in Chapter 7. In addition, teams do not begin as effective teams the moment they come together; instead, teams need to train together on individual and team skills. The same is true for human-animal teams. This is covered in depth in Chapter 9.

KEY CHALLENGES AND RESEARCH GAPS

The committee finds seven key gaps in the human-AI teamwork research base.

  • It is not clear whether the models of human teaming or human-animal teaming, and the methods of making these teams more effective, are appropriate for human-AI teams.
  • The teamwork literature has traditionally focused on teams that come together for short durations (i.e., hours, not days or weeks), are most often co-located, and are rigid in their structures. Less is known from the team literature about the types of long-term, distributed, and agile teams that will be needed to function in military MDO.
  • Very little is known about how to assemble long-term, distributed, and agile teams in terms of function allocation, management of interdependencies, and assuring sufficient redundancy.
  • There is limited knowledge of mechanisms of goal alignment for long-term, distributed, and agile teams with high complexity.
  • Little work has been done to develop a human-AI language to replace natural language.
  • It is not clear how AI can learn to coordinate across complex teams, as this is also a difficulty for human teams.
  • There is a need to understand the limitations of teaming with AI systems that possess a theory of behavior and not a theory of mind.

RESEARCH NEEDS

The committee recommends addressing two major research objectives for the development of effective teamwork processes for human-AI teams.

Research Objective 3-1: Human-AI Teamwork Skills in Multi-Domain Operations.

Research is needed on improving team effectiveness in long-term, distributed, and agile human-AI teams, in the areas of team assembly, goal alignment, communication, coordination, social intelligence, and a new human-AI language. Note that these areas also pose challenges for all-human teams, especially in complex environments. In human-AI team contexts, the ability of AI systems to exhibit important teamwork skills needs to be addressed, including: (1) providing support, which includes the ability of the AI system to proactively provide relevant, operation-related information, as well as to confirm and improve the confidence in other team members’ understandings and task selection; and (2) answering questions within the context of other team members’ expertise domains and operational constraints. Assessments of human-AI team performance need to include assessments of AI contributions in the areas of “provide support” and “answer questions,” not only in interactions with other human team members, but also in interactions with non-human team members such as dogs, sea mammals, or other AI systems in the same or different modalities (e.g., air, ground, space, water).


Research Objective 3-2: Support for Human-AI Teaming in Multi-Domain Operations.

Given some success in situations in which AI guided the coordination of a team, it would be useful to explore the possibility of AI serving multi-domain systems as a coordinator, orchestrator, or human resource manager (Demir et al., 2018; Hinski, 2017). AI may be well suited to manage human teams or human-AI teams by serving as team assembler, swapping team members in and out as needed. AI may also help to manage goal alignment and alert the team in cases of conflicting goals. AI might also serve as a communication and coordination hub, clarifying miscommunication, prioritizing messages, and connecting team members. Research is needed on this type of managerial role for AI.
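As a purely illustrative sketch of this managerial role, the toy hub below prioritizes queued messages and flags goals that claim the same exclusive resource. The priority levels, message format, and conflict rule are assumptions invented for the example, not designs recommended by the committee.

```python
"""Illustrative sketch: an AI hub that prioritizes queued messages and flags
goals claiming the same exclusive resource. The priority levels and conflict
rule are assumptions, not designs recommended by the committee."""

PRIORITY = {"emergency": 0, "task": 1, "status": 2}  # lower delivers sooner


def deliver(messages: list[tuple[str, str, str]]) -> list[str]:
    """Order (kind, sender, text) messages by priority, preserving FIFO ties."""
    queued = sorted(
        (PRIORITY[kind], i, sender, text)
        for i, (kind, sender, text) in enumerate(messages)
    )
    return [f"{sender}: {text}" for _, _, sender, text in queued]


def goal_conflicts(goals: dict[str, set[str]]) -> list[str]:
    """Alert when two members' goals claim the same exclusive resource."""
    claims: dict[str, str] = {}
    alerts = []
    for member, resources in goals.items():
        for r in resources:
            if r in claims:
                alerts.append(f"conflict over '{r}': {claims[r]} vs {member}")
            else:
                claims[r] = member
    return alerts


print(deliver([
    ("status", "agent", "battery 80%"),
    ("emergency", "pilot", "engine out"),
    ("task", "navigator", "new waypoint"),
]))  # pilot first, then navigator, then agent
print(goal_conflicts({"uav1": {"corridor_A"}, "uav2": {"corridor_A"}}))
# -> ["conflict over 'corridor_A': uav1 vs uav2"]
```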

SUMMARY

Designing an AI system to work well as a teammate is a means of increasing human-centeredness that draws on more than three decades of teamwork literature that provides extensive guidance on effective teaming. An effective human-AI team ultimately augments human capabilities and raises performance beyond that of the component entities. Another consideration is for AI to be used to aid teaming in multi-domain systems by acting as a coordinator, orchestrator, or human resource manager (Demir et al., 2018).
