11 Conclusions
Pages 85-90



From page 85...
... However, decades of research have shown that people often struggle to perform this role adequately, due to both cognitive limitations (e.g., poor vigilance in monitoring, inappropriate levels of trust) and inadequate design of the technology (e.g., inadequate transparency, system designs that create low engagement levels or bias human decision making).
From page 86...
... Training is essential for building accurate mental models of an AI system to support SA and trust, and for forming accurate expectations regarding teamwork behaviors. While training will need to include formal instruction, it will also increasingly need to rely on simulated, structured practice scenarios in which perturbations, edge cases, and novel events can be introduced.
From page 87...
... Far-Term (11–15 years)

Team Effectiveness
  2-1: Human-AI Team Effectiveness Metrics
  2-2: AI Uncertainty Resolution
  2-3: AI Over-Promise Rate
  2-4: Human-AI Team Models

Team Processes
  3-1: Human-AI Teamwork Skills in MDO
  3-2: Support for Human-AI Teaming in MDO

Situation Awareness
  4-1: Team SA in MDO
  4-2: Resilience of SA to Information Attack
  4-3: Human SA of AI Systems
  4-4: Shared SA in Human-AI Teams
  4-5: AI Awareness of Human Teammate
  4-6: AI Self-Awareness
  4-7: AI Situation and Task Models

(continued)
From page 88...
... Explainability
  5-7: Explainability of Learned Information and Change
  5-8: Machine Personae and Explanations
  5-9: Machine Benefits from Human Explanations

Interaction
  6-1: Human-AI Team Task Sharing
  6-2: On-the-Loop Control
  6-3: Multiple LOA Systems
  6-4: Flexible Autonomy Transition Support
  6-5: Support for Flexible Autonomy
  6-6: GOC and AI Transparency
  6-7: Playbook Extensions for Human-AI Teaming
  6-8: Human-AI Team Emergent Behaviors
  6-9: Human-AI Team Interaction Design

Trust
  7-1: Effects of Situations and Goals on Trust
  7-2: Effects of Directability on Trust
  7-3: Cooperation as a Measure of Trust
  7-4: Investigations of Distrust
  7-5: Dynamic Models of Trust Evolution
  7-6: Trust Evolution in Multi-Echelon Networks
From page 89...
... Far-Term (11–15 years), continued

Human and AI Bias
  8-1: Human-AI Partnership in Continuous Learning Environments
  8-2: Adversarial Effects on Human-AI Team Biases
  8-3: Biases from Small Datasets and Sparse Data
  8-4: Inductive and Emerging Human Biases
  8-5: Preventative Detection and Mitigation of Human-AI Team Biases in Learning Systems

Training
  9-1: Developing Human-Centered Human-AI Team-Training Content
  9-2: Testing and Validating Traditional Team Training Methods to Inform New Methods
  9-3: Training to Calibrate Human Expectations of Autonomous Teammates
  9-4: Designing Platforms for Human-AI Team Training
  9-5: Adaptive Training Materials Based on Differing Team Compositions and Sizes
  9-6: Training That Works Toward Trust in Human-AI Teaming

HSI Processes, Measures, and Testing
  10-1: Human-AI Team Design and Testing Methods
  10-2: Human-AI Team Requirements
  10-3: Human-AI Team Development Teams
  10-4: AI System Lifecycle Testing and Auditability
  10-5: AI Cyber Vulnerabilities
  10-6: Testing of Evolving AI Systems
  10-7: Human-AI Team Testbeds
  10-8: Additional Metrics for Human-AI Teaming
  10-9: HSI for Agile Software Development

