

7 Trusting AI Teammates
Pages 49-56

The Chapter Skim interface presents what we've algorithmically identified as the most significant single passage of text on each page of the chapter.


From page 49...
... the strict definition of trust that limits its study to factors affecting reliance or compliance behaviors in the context of risk, rather than as a process that develops across multiple interactions and decision situations and affects broader sociotechnical and societal outcomes, such as cooperation (Coovert, Miller, and Bennett, 2017; Lee and Moray, 1994; Riley, 1994)
From page 50...
... portray in Figure 7-1 that trusting a highly capable automation depends on social decision situations that are embedded in a goal environment. Trust evolves from repeated interactions between a human agent (A_H)
From page 51...
... The challenge emphasized in this report is less about the moral philosophy or ethics behind decisions, and more about how local goals will need to be adapted, negotiated, or aligned to achieve global use of human-AI teams in complex and dynamic task environments. In such environments, AI systems may be distrusted not because they perform poorly, but because they act on a broader information array that conflicts with the narrower information array available to the human, resulting in misaligned goals.
From page 52...
... Formally representing situations as decision matrices gives researchers and developers one way to quickly identify trust-relevant contextual similarities across empirical studies and to evaluate the generalizability of those studies; identifying these common structures can also help explain the variable and seemingly contradictory findings in the trust literature, and can help define the contexts within which consistent results can begin to be found. In addition, field studies that leverage mixed methods, including anthropological studies with rich qualitative datasets, are also necessary to help identify the many situations that might exist within a task environment.
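
To make the decision-matrix representation concrete, here is a minimal sketch (not taken from the report; the DecisionMatrix structure, action labels, and payoff values are illustrative assumptions). It encodes a single decision situation as outcomes indexed by the joint actions of a human agent and an AI agent, and checks whether the two agents' goals are aligned in that situation.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DecisionMatrix:
    """One decision situation: joint (human, AI) actions mapped to outcomes.

    Outcomes are illustrative (human payoff, AI payoff) pairs; a real study
    would substitute task-specific measures such as mission success or workload.
    """
    human_actions: tuple
    ai_actions: tuple
    outcomes: dict  # (human_action, ai_action) -> (human_payoff, ai_payoff)

    def is_goal_aligned(self) -> bool:
        """Goals are aligned when the same joint action is best for both agents."""
        best_for_human = max(self.outcomes, key=lambda joint: self.outcomes[joint][0])
        best_for_ai = max(self.outcomes, key=lambda joint: self.outcomes[joint][1])
        return best_for_human == best_for_ai


# Hypothetical delegation situation: the human can rely on the AI or act manually,
# while the AI can recommend an action or defer to the human.
situation = DecisionMatrix(
    human_actions=("rely", "act_manually"),
    ai_actions=("recommend", "defer"),
    outcomes={
        ("rely", "recommend"): (8, 8),
        ("rely", "defer"): (2, 4),
        ("act_manually", "recommend"): (5, 3),
        ("act_manually", "defer"): (4, 4),
    },
)
print(situation.is_goal_aligned())  # True: reliance on a recommendation is best for both agents
```

Comparing such matrices across studies is one way the shared structure of otherwise different task environments could be made explicit.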
From page 53...
... Team situations require that people align their goals and cooperate. Although this requirement is often assumed during the formation of a team, in complex work environments with fast-paced, changing conditions, goal alignment and cooperation may additionally require setting aside individual goals (or initial goals)
From page 54...
... Dynamic models of trust evolution within specified goal environments are needed that go beyond eliciting and categorizing the factors that could affect trust in an AI teammate. For example, research has shown that trust can be lost after a system failure and may take time to recover, and that automation failures have a greater effect on trust than automation successes (de Visser, Pak, and Shaw, 2018; Lee and Moray, 1992; Lewicki and Brinsfield, 2017; Reichenbach, Onnasch, and Manzey, 2010; Yang, Schemanske, and Searle, 2021)
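
As a worked illustration of such dynamics (this is not the report's model; the update rule, gain parameters, and interaction history below are assumptions), the sketch updates a scalar trust level after each interaction, weighting failures more heavily than successes so that trust drops sharply after a failure and recovers only gradually over subsequent successes.

```python
def update_trust(trust: float, success: bool,
                 gain_success: float = 0.05, gain_failure: float = 0.30) -> float:
    """Move trust toward 1.0 on success and toward 0.0 on failure.

    The asymmetric gains (failure > success) encode the finding that automation
    failures affect trust more strongly than successes; the parameter values are
    illustrative, not estimates from the cited studies.
    """
    if success:
        return trust + gain_success * (1.0 - trust)
    return trust - gain_failure * trust


trust = 0.8
history = [True, True, False, True, True, True]  # one failure amid successes
for outcome in history:
    trust = update_trust(trust, outcome)
    print(f"{'success' if outcome else 'failure'}: trust = {trust:.2f}")
# Trust falls from about 0.82 to about 0.57 at the failure and climbs back only slowly.
```

A more realistic dynamic model would condition these updates on the decision situation and goal environment rather than on outcomes alone.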
From page 55...
... The proposed research objectives outline a path forward for understanding how organizational and social factors surrounding AI systems inform the interdependent process of trust in teams. These objectives go beyond the pervasive focus on calibrating trust solely for appropriate reliance and compliance.

