9 Training Human-AI Teams
Pages 63-68

Chapter Skim presents the single passage algorithmically identified as most significant on each page of the chapter.


From page 63...
... Humans perceive AI teammates as fundamentally different from human teammates (Zhang et al., 2021) and work with them differently than they do human teammates (McNeese et al., 2018).
From page 64...
... Are training strategies that are effective for human-human teaming adequate and appropriate for human-AI teaming tasks and environments? This question remains open, and research is needed to translate team-training methods and validate their use in this new paradigm.
From page 65...
... can help autonomy to train itself within the environment. In many cases, synthetic environments are the main environments in which AI teammates will be deployed.
From page 66...
... Thus, there seems to be a perceptual misalignment between what humans expect from AI teammates and what AI teammates can do. Specific training content is needed to set adequate expectations of autonomous teammates (see Research Objective 9-1).
From page 67...
... It would be beneficial for training to focus not only on the human's trust in the AI teammate, but also on the human's trust in other human teammates. More work is needed to develop and test training methods designed to engender trust, with methods such as explanations and transparency as a key focus of trust-related team-training material.

