Human-AI Teaming: State-of-the-Art and Research Needs (2022)

Chapter: 9 Training Human-AI Teams

Suggested Citation:"9 Training Human-AI Teams." National Academies of Sciences, Engineering, and Medicine. 2022. Human-AI Teaming: State-of-the-Art and Research Needs. Washington, DC: The National Academies Press. doi: 10.17226/26355.

9

Training Human-AI Teams

Training has long been a hallmark of both DOD operations and teamwork. Team training seeks to leverage knowledge, skills, and attitudinal competencies to improve general team processes (Salas et al., 2008). The reasons for team training are abundant: it improves performance (Salas et al., 2008), improves adaptive response (Gorman, Cooke, and Amazeen, 2010), decreases error (Wiener, Kanki, and Helmreich, 1993), and develops team cognition (Marks et al., 2002). For these reasons, team training has a long history spanning multiple decades (Delise et al., 2010; Salas et al., 2008; Tannenbaum and Cerasoli, 2013). Over the years, many types of training mechanisms and modalities have been developed and implemented in multiple contexts to unleash the full capabilities of teams. However, to date, team training has focused primarily on human-human teaming interactions, because teams have traditionally consisted only of human team members, with very few exceptions (one being human-animal teams; see Chapter 3). This is changing with the advent of human-AI teaming (see O’Neill et al., 2020 for a complete review).

In response to human-AI teaming, the committee finds that training needs to adapt to account for both perceptual and procedural teaming changes. Humans perceive AI teammates as fundamentally different from human teammates (Zhang et al., 2021) and work with AI teammates differently than they do human teammates (McNeese et al., 2018). Human-AI team training can benefit significantly by leveraging current and past human-human team-training standards to inform and jumpstart its own standards. However, it is essential to consider that human-human teaming is different from human-AI teaming and, for that reason, new methods, mechanisms, and modalities will need to be developed and introduced to fully leverage human-AI teaming capabilities. This chapter will review human-human team-training methods with an eye to how they can inform human-AI team training, identify key challenges, and present relevant research needs.

HUMAN-HUMAN TEAM TRAINING TO INFORM HUMAN-AI TEAM TRAINING

As noted, the concept of team training is well established, and its impacts on improved teaming are well founded. The foundational knowledge associated with team training is focused on training humans to collaborate with other human team members to effectively work toward a shared goal. Many types of team training have been proposed.


Strategies for Team Training

The strongest theme in the human-human team-training literature is the use of various training strategies. Although teams can be trained using various methods, the focus of this chapter is limited to the three main types of training found most frequently in the literature: procedural training, cross-training, and adaptive or perturbation training. These training methods have shown consistently positive results over the decades (Delise et al., 2010; Salas et al., 2008; Tannenbaum and Cerasoli, 2013).

Procedural training is a traditional methodology that focuses on repeated introduction of team members to task-related stimuli, with positive reinforcement provided in a standardized or procedural manner (Gorman, Cooke, and Amazeen, 2010). This type of team training is often used in environments with high workloads and stressors, in which deviating from the standard work protocol can have serious adverse consequences.

Cross-training is defined as “an instructional strategy in which each team member is trained in the duties of his or her teammates” (Volpe et al., 1996, p. 87). Cross-training seeks to introduce individual team member responsibilities and tasks to other team members, so each team member has a shared understanding of all aspects of team-related taskwork. In general, cross-training has a net positive effect on team effectiveness, especially in terms of shared understanding and interaction dynamics (Marks et al., 2002).

Adaptive or perturbation training is based on the idea of purposefully perturbing or manipulating a team-related task, which then requires adaptation at the team level, either through communication or coordination. Perturbation training has been shown to be effective and, in some cases, even more effective than cross-training when the two are directly compared (Gorman, Cooke, and Amazeen, 2010). Perturbation training within teams allows more flexibility and on-demand coordination, which may increase team resiliency. Additionally, perturbation training has been utilized in human-robot teaming, leading to the development of computational models that allow for joint strategies in coordination (Ramakrishnan, Zhang, and Shah, 2017).
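The qualitative contrast between procedural and perturbation training can be illustrated with a toy sketch. Everything here is a deliberate simplification for illustration (the condition names and the one-line learning rule are hypothetical, not the protocols used in the cited studies): a team trained on a single repeated condition handles only that condition, while a team whose training schedule injects perturbations covers more of the off-nominal test conditions.

```python
def train_team(schedule):
    """Toy learning rule: the team learns the correct response for every
    condition it experiences during training (a deliberate simplification)."""
    learned = {}
    for condition in schedule:
        learned[condition] = condition  # correct response = the condition itself
    return learned

def evaluate(policy, test_conditions):
    """Fraction of test conditions the trained team handles correctly."""
    hits = sum(1 for c in test_conditions if policy.get(c) == c)
    return hits / len(test_conditions)

# Procedural training repeats the nominal task; perturbation training
# injects varied disruptions (hypothetical condition names).
procedural_schedule = ["nominal"] * 6
perturbed_schedule = ["nominal", "sensor_loss", "nominal",
                      "comms_delay", "nominal", "role_swap"]

test_conditions = ["nominal", "sensor_loss", "comms_delay"]
procedural_score = evaluate(train_team(procedural_schedule), test_conditions)
perturbed_score = evaluate(train_team(perturbed_schedule), test_conditions)
```

The sketch reproduces only the qualitative finding that varied exposure during training yields more resilient performance under off-nominal conditions; it says nothing about the mechanisms of real adaptive-training protocols.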

It is important to note that there is no universally accepted team-training method, as context and team personnel can influence team-training effectiveness. Thus, to develop human-AI team training, the committee finds that it will be necessary to experiment with many types of team training, to ascertain their effectiveness within this new paradigm. Furthermore, research may find that many team-training strategies are not well aligned with the logistics of human-AI teams.

Are training strategies that are effective for human-human teaming adequate and appropriate for human-AI teaming tasks and environments? This question remains to be answered, and research is needed to translate team-training methods and validate their utilization in this new paradigm.

The Use of Simulation

Simulations are at the heart of most team-training initiatives because training becomes more effective when it is grounded within a meaningful context (Marlow et al., 2017). Teams are inherently linked to the context they operate within; context dictates the tasks teams work on, their environment, and the tools they utilize. Thus, for teams to train realistically, simulation environments that represent real-world operations are needed. For this reason, simulation-based team training (SBTT) is viewed as integral to effective team training. SBTT is an instructional technique that develops skills within representative simulations of real-world environments, and it is most often utilized and cited within the healthcare community (Owen et al., 2006; Weaver et al., 2010). Simulations and simulation task environments (STEs) are used in all types of contexts and are oriented for both the physical and digital worlds (Gray, 2002). Many STEs are digital representations of the real-world environment, lowering the cost of training while still promoting significant acquisition of domain and task-relevant knowledge. Another advantage of SBTT is that trainees can experience events that would rarely occur in the real world.

The committee finds that, as human-AI teaming advances, digital STEs will be critical to training both human and autonomous team members in meaningful environments. The representation of the autonomous agent (i.e., physical, as a robot, or digital, as a synthetic agent) will dictate the type of SBTT and/or the STE. In the committee’s judgment, the standardized environment of many STEs (1) will help humans to train with autonomous team members; (2) can help autonomy to train with human team members, if properly designed; and (3) can help autonomy to train itself within the environment. In many cases, synthetic environments are the main environments in which AI teammates will be deployed.
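A minimal sketch of what such a digital STE interface might look like, assuming a shared observe/act loop (the class and method names are illustrative assumptions, not an existing platform): because human interfaces and autonomous agents act through the same calls, one environment can support humans training with autonomy, autonomy training with humans, and autonomy training itself.

```python
class SyntheticTaskEnvironment:
    """Sketch of a digital STE: a shared task queue that any teammate,
    human-driven or autonomous, observes and acts on through the same
    interface. Illustrative only; not a standard API."""

    def __init__(self, tasks):
        self.tasks = list(tasks)
        self.completed = []

    def observe(self):
        # Human and AI teammates receive the same view of team state.
        return {"pending": list(self.tasks), "completed": list(self.completed)}

    def act(self, teammate_id, task):
        # Identical action interface for every teammate, so the same
        # environment serves human-AI and AI-AI training runs.
        if task in self.tasks:
            self.tasks.remove(task)
            self.completed.append((teammate_id, task))
            return True
        return False

env = SyntheticTaskEnvironment(["scan", "classify", "report"])
env.act("human_1", "scan")
env.act("agent_1", "classify")
```

The design choice to route every teammate through one interface is what makes the standardized environment reusable across the three training foci listed above.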

Training Content: Taskwork and Teamwork

Teaming is generally composed of two interrelated foci: teamwork- and taskwork-related understandings and actions. The teamwork component generally focuses on understanding how team members should work together to accomplish shared goals, whereas taskwork focuses on team task-specific knowledge (Mohammed, Ferzandi, and Hamilton, 2010). Both aspects of teaming are interdependent and overlap, but it is essential to focus on specific aspects of teaming when implementing team training. Training a team generically, without a focus on specific components, will not translate into meaningful performance outcomes. Instead, in the committee’s judgment, team training is best when it directly targets teamwork or taskwork, or a combination of the two. At a fundamental level, teams need to be trained to understand (1) their teammates; (2) teamwork-related processes; and (3) the team task.

Training human-AI teams on both teamwork and taskwork is a challenge, particularly training for the aspects of teamwork. Autonomous team members simply do not understand teamwork-related concepts, making training them on such concepts exceptionally difficult. In the committee’s judgment, fundamental teaming concepts first need to be embedded into the AI system so that a foundation of teamwork knowledge can be established and built upon in the future. Until AI advances to the point of understanding basic teaming concepts, the focus will need to be on training taskwork-related aspects of teaming.

KEY CHALLENGES AND RESEARCH GAPS

The committee finds 10 key research gaps related to training human-AI teams that need to be addressed.

  • Human-AI teams require training at multiple levels—humans teaming with humans, humans teaming with autonomous agents, and autonomous agents teaming with autonomous agents. However, within each of these foci, the team still needs to train together at the team level. Research is needed on these various levels of teaming.
  • In human-AI teaming, the human currently needs to train on and understand (1) his or her role; (2) the AI teammate; (3) how to interact with the AI teammate; and (4) how to interact with human teammates. Assimilating these aspects for training purposes, in a way that does not overwhelm the human operator, is challenging and needs further investigation.
  • Team training is difficult when the autonomous agent cannot fully understand natural language or deploy effective natural language processing, and more research is needed in this area.
  • Team training of teamwork-related components is difficult when the autonomous agent does not understand basic teaming concepts, so more work is needed in this area.
  • There is a current gap in understanding the impact that training has on human-AI trust in the team setting.
  • To allow for resilient teamwork in the case of AI system failure, team training needs to be conducted with both accurate and inaccurate AI. Perturbation training for human-AI teaming may be indicated.
  • Simulation environments need to be built to allow for team-level human-AI training.
  • The concept of the autonomous teammate as a team trainer for purposes such as coordination coaching (e.g., McNeese et al., 2018) needs to be explored.
  • Training will need to be constantly reassessed for potential updating, due to the autonomous teammate’s ability to train and update its skills and capabilities continually; work is needed in this area.
  • There is a gap in understanding when to train human-AI teams to a diversity of experiences (perturbation/adaption) versus when to train to standardization (procedural).

RESEARCH NEEDS

The committee recommends addressing six major research objectives for improving human-AI team training.

Research Objective 9-1: Developing Human-Centered Human-AI Team-Training Content.

There is a great deal of knowledge about, and resources for, training human-human teams, but none explicitly devoted to human-AI teams. Authentic training content materials and mechanisms need to be developed for human-AI teams. The human-AI teaming paradigm includes many potentially different foci, ranging from individual team responsibilities to understanding and interacting with both human and AI team members. This multi-level focus presents a challenge in terms of knowing not only what to focus and train on but also how to train on each of these areas. Directed research is needed to outline areas of focus and the content to be highlighted in training methods. For example, does a human team member need to be trained on what an AI system is and what it can do? If so, what training content is needed to impart that information? Similarly, given the issues related to training human-AI teams on teamwork-related content, how should this aspect of training best be approached?

Research Objective 9-2: Testing and Validating Traditional Team-Training Methods to Inform New Methods.

As previously noted, there is a long and rich history of team-training strategies that have been successful in the human-human context. Strategies such as procedural training, cross-training, and adaptive/perturbation training all need to be adapted and translated for the human-AI teaming environment. Then, each strategy needs to be empirically validated to understand (1) if it is feasible for the human-AI team paradigm; and (2) the impact of these strategies on overall human-AI teaming performance and related teamwork outcomes, such as team cognition, shared situation awareness, and communication and coordination. Through this understanding, existing training strategies can be explicitly adapted, or new strategies can be developed for human-AI teaming.

Research Objective 9-3: Training to Calibrate Human Expectations of Autonomous Teammates.

Recent work by Zhang and colleagues (2021) investigating humans’ perceptions relating to human-AI teaming highlighted that, in many cases, humans have unrealistic expectations and requirements regarding autonomous teammates. Specifically, humans often indicate that they want autonomous teammates to be as good as or better than human teammates. This requirement is problematic because autonomous teammates currently have inherently limited capabilities that prevent them from performing many basic teamwork-related behaviors. Thus, there seems to be a perceptual misalignment between what humans expect from AI teammates and what AI teammates can do. Specific content is needed to set adequate expectations of autonomous teammates, related to Research Objective 9-1. In other words, training materials should focus not only on teaming procedures but also on the expectations and capabilities of the autonomous system.

Research Objective 9-4: Designing Platforms for Human-AI Team Training.

Human-AI teaming needs research platforms in which to develop and test teamwork procedures, especially platforms that allow for the testing of team-training strategies and methods. The explicit design of simulated task environments that allow humans and AI systems to work together is necessary. Rather than starting from the ground up, researchers could use existing videogame platforms that inherently contain both teaming and AI capabilities.

Research Objective 9-5: Adaptive Training Materials Based on Differing Team Compositions and Sizes.

There is no standard composition or size of human-AI teams, and training materials need to reflect that. A human-AI team may consist of 2–10 human or autonomous entities with differing ratios, for example. McNeese and colleagues (2021b) examined various human-AI teaming compositions and found performance differences between teams with differing ratios of humans and AI teammates. Thus, it is critical that training materials be developed for various types of human-AI teams.
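The space of compositions that training materials would need to cover can be enumerated directly. A short sketch, assuming for illustration only that teams of 2–10 members contain at least one human and one autonomous teammate (a constraint this report does not impose):

```python
def team_compositions(min_size=2, max_size=10):
    """Yield candidate human/AI mixes for each team size. The
    at-least-one-of-each constraint is an illustrative assumption,
    not a requirement stated in the report."""
    for size in range(min_size, max_size + 1):
        for humans in range(1, size):  # leaves at least one autonomous agent
            yield {"size": size, "humans": humans, "agents": size - humans}

compositions = list(team_compositions())
```

Even this toy enumeration yields 45 distinct mixes for sizes 2 through 10, each a candidate condition both for training-material development and for composition-effects studies like that of McNeese and colleagues (2021b).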

Research Objective 9-6: Training That Works Toward Trust in Human-AI Teaming.

As outlined in Chapter 7, trust is central to effective human-AI interactions and both human-human teaming (Salas, Sims, and Burke, 2005) and human-AI teaming (McNeese et al., 2021a). Thus, team-training materials need to specifically account for the explicit development of team-level human-AI trust and calibration of that trust. It would be beneficial for training to focus not only on the human’s trust in the AI teammate, but also on the human’s trust in other human teammates. More work is needed to develop and test training methods designed to engender trust, and it would be useful for methods such as explanations and transparency to be a key focus when developing trust-related team-training material.

SUMMARY

Training human-AI teams is different from training human-human teams. Despite some similarities, human-human teams and human-AI teams are fundamentally different. The manners and methods by which human-AI teams conduct work and do procedural teamwork are, and will continue to be, fundamentally different from those of human-human teams. The tasks and environments in which human-AI teamwork occurs will also be different. Thus, a great deal of work is needed to create training strategies and methods to support human-AI teaming. The research community would undoubtedly benefit from exploring traditional human-human team-training techniques to inform human-AI team training, but it would also be best served to remain open to creating entirely new methods based on research outcomes. Significant research is needed to develop empirically driven training initiatives for human-AI teams.
