Human-AI Teaming: State-of-the-Art and Research Needs (2022)

5

AI Transparency and Explainability

The need for AI systems that are sufficiently transparent in their operations to support effective human interaction and oversight is widely recognized (Chen and Barnes, 2015; Endsley, 2017; Shively et al., 2017; USAF, 2015). Considerable attention has been paid to the idea of transparency of AI systems. Meanings associated with the term transparency include issues of organizational transparency, process transparency, data transparency, algorithmic (logic) transparency, and decision transparency (Ananny and Crawford, 2016; Felzmann et al., 2020). These issues are relevant to traditional forms of automation and will continue to be important with future AI systems as well. Here, the focus is on the system transparency required by the human charged with overseeing and interacting with an AI system to achieve operational objectives. This transparency is defined as “the understandability and predictability of the system” (Endsley, Bolte, and Jones, 2003, p. 146), including the AI system’s “abilities to afford an operator’s comprehension about an intelligent agent’s intent, performance, future plans, and reasoning process” (Chen et al., 2014a, p. 2). It will be increasingly difficult to train people to maintain accurate mental models of how AI systems work, due to the ability of these systems to learn and change their functioning and capabilities over time (USAF, 2015). Further, since AI systems may be applied in new contexts and situations for which they were not initially trained (i.e., concept drift; Widmer and Kubat, 1996), it will be extremely important for AI systems to be transparent. AI system transparency involves two interrelated components (Figure 5-1):

  • Display transparency: Provides a real-time understanding of the actions of the AI system as a part of situation awareness (SA).
  • Explainability: Provides information in a backward-looking manner on the logic, process, factors, or reasoning upon which the system’s actions or recommendations are based.

In the committee’s opinion, in the dynamic, time-constrained situations common to many military and operational environments, explanations will primarily contribute to the development of improved mental models that can improve SA in the future, and decision making will rely primarily on real-time display transparency. In other situations that allow sufficient time for reviewing and processing explanations, both display transparency and explainability may directly inform decision making. Because AI systems rely on machine learning, the human’s ability to maintain an accurate and up-to-date mental model will be considerably strained as the AI system learns and its capabilities, decisions, and actions in any given situation change. In addition, training time may be limited. Thus, the committee believes there will be an increased need for both transparent AI and explainable AI, which make clear the logic or rationale being used as the AI system changes over time, to compensate for inevitable mental model deficiencies.

FIGURE 5-1 Effect of AI transparency and explainability on situation awareness and mental models.

System functions that are important for system transparency are shown in Table 5-1. This table was generated through a literature review, in which the committee selected the key points from each reference, sorted by level of SA and type of information. Most transparency taxonomies include an understanding of the current state of the system, in terms of what it is doing and its mode (if applicable). Further, there is general agreement in the research on the need for transparency in an AI system’s purpose or goals, plans (if applicable), and its progress or performance in achieving those goals. Endsley (2019, 2020a) and Wickens (Trapsilawati et al., 2017; Wickens et al., 2022) also highlight the value of conveying the aspects of the situation (i.e., raw data) that the system is including in its assessments, to allow human teammates to better understand system limits or biases. Lyons (2013) further discusses information about environmental constraints that may affect system performance.

TABLE 5-1 Information Needed for System Transparency

SOURCE: Committee generated. Data compiled from the sources at the top of each column.

In addition to system status information, the behavior and recommendations of the system need to be understandable to the human teammate, to the degree that the system has an impact on human decision making. This understandability generally includes the availability of information about the system’s reasons, logic, or factors driving its behavior, as well as an understanding of the system’s capabilities and limitations, its ability to handle the current situation, and how it might err. Further, the amount of confidence or uncertainty underlying system assessments is relevant. Confidence in the AI system outputs (or its inverse, uncertainty) is a significant part of SA. Endsley and Jones (2012) provide a model showing that this occurs at several levels relevant to human decision making: (1) level 1 SA—data uncertainty based on the presence of missing data, reliability or credibility of the sensors or sources of data, incongruent or conflicting data, the timeliness of data, and ambiguous or noisy data; (2) level 2 SA—comprehension uncertainty based on system algorithms for integrating and classifying or categorizing the data; (3) level 3 SA—projection uncertainty, which includes projections of future events based on the current situation and models of system dynamics and future likelihoods; and (4) decision uncertainty, which is the likelihood that a selected course of action will result in desired outcomes. The amount of confidence a person has in an AI system’s outputs has both direct and independent links to the likelihood of acting on that information (Endsley, 2020b) and is an important SA need that should be supported by system transparency.
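To make these levels concrete, the following minimal sketch shows one way a transparency display payload could bundle uncertainty information at each SA level alongside a recommendation. It is an illustrative data structure only; every class name, field, and value is a hypothetical example rather than a description of any existing system.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TransparencyReport:
    """Hypothetical payload grouping uncertainty information by SA level."""
    # Level 1 SA: data uncertainty (missing, conflicting, stale, or noisy inputs)
    data_sources: List[str] = field(default_factory=list)
    missing_data: List[str] = field(default_factory=list)
    sensor_reliability: Optional[float] = None           # 0.0 to 1.0
    # Level 2 SA: comprehension uncertainty in the system's integration/classification
    assessment: Optional[str] = None
    assessment_confidence: Optional[float] = None         # 0.0 to 1.0
    # Level 3 SA: projection uncertainty about future events
    projected_outcome: Optional[str] = None
    projection_confidence: Optional[float] = None         # 0.0 to 1.0
    # Decision uncertainty: likelihood a course of action achieves the desired outcome
    recommended_action: Optional[str] = None
    decision_confidence: Optional[float] = None           # 0.0 to 1.0

report = TransparencyReport(
    data_sources=["radar_03", "eo_camera_12"],
    missing_data=["datalink_feed"],
    sensor_reliability=0.82,
    assessment="track classified as hostile fast mover",
    assessment_confidence=0.74,
    projected_outcome="intercept of friendly convoy within 6 minutes",
    projection_confidence=0.61,
    recommended_action="reroute convoy to checkpoint B",
    decision_confidence=0.68,
)
```

Grouping the fields by SA level mirrors the Endsley and Jones (2012) framing and makes it straightforward for an interface to surface level-specific confidence cues.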

The predictability of an AI system is also important for transparency. Predictability includes planned actions or behaviors, predicted outcomes or consequences associated with planned actions, the ability of the system to perform in upcoming situations, and uncertainty associated with future projections. Some research has indicated that knowledge of an AI system’s history (Chen et al., 2014a; Lee and See, 2004) or general task reliability (Endsley, 2020b) should also be transparent. Finally, in moving toward consideration of a human-AI team, there will be an increased need for transparency related to team tasks (e.g., current goals, distribution of functions, plans, and the tasks of each teammate or shared tasks, which can change dynamically over time), as well as transparency regarding the relative states of the human and AI system for performing tasks, and the impact of ongoing tasks on the states of other team members (Chen et al., 2018; Lyons, 2013; USAF, 2015).

DISPLAY TRANSPARENCY

The goal of display transparency is “enabling the operator to maintain proper SA of the system in its tasking environment without becoming overloaded” (Mercado et al., 2016, p. 402). Display transparency has been shown to be valuable for improving SA, performance, and appropriately calibrated trust, as the studies reviewed below illustrate.

In reviewing 15 studies of automation transparency, Wickens et al. (2022) found significant support for the benefits of system transparency for addressing the negative effects of out-of-the-loop performance. Mercado et al. (2016) showed1 that performance increased along with increasing levels of transparency (i.e., from SA transparency level 1 alone, to levels 1 and 2, to all three levels), as did subjective levels of trust. Misuse and disuse of automation also decreased at higher levels of transparency. Selkowitz and colleagues (2016, 2017) similarly found improved SA, performance, and trust with the addition of prediction (SA transparency level 3) information. The committee found that the types of information included in various transparency studies vary widely, however, and more knowledge is needed regarding which information is the most valuable to provide in real time. Further, more research may be needed to define additional system display characteristics important for human-AI teaming. For example, Panganiban, Matthews, and Long (2020) showed that displaying autonomous system intent (benevolence) improved trust and team collaboration.

___________________

1 “The performance data indicated that participants’ correct rejection accuracy increased in relation to transparency level, whereas correct Intelligent Agent (IA) usage increased only from Level 1 to Level 1+2. The addition of reasoning information in Level 1+2 increased correct IA use by 11% and correct rejection rate by 12%. The addition of uncertainty information (Level 1+2+3 compared with Level 1+2) improved correct IA use rate by a small amount (2%) and correct rejection rate by 14%” (p. 411).

A review of research on trust showed that providing system reliability information helps to calibrate reliance on automation (Schaefer et al., 2016). Stowers et al. (2017), for example, found that adding information on system uncertainty to the other levels of transparency improved performance; Kunze et al. (2019) showed that adding this information improved trust and the performance of human take-over from the system. However, not all research has found a corresponding improvement in trust with the provision of uncertainty information (Chen and Barnes, 2015; Selkowitz, Lakhmani, and Chen, 2017; Stowers et al., 2017). Selcon (1990) showed that, for AI systems presenting the uncertainty or confidence associated with various recommendations, decision time increased when confidence levels were high. Endsley and Kiris (1994) found that decision time was significantly affected by a variety of different methods of conveying AI system confidence levels. Further research is needed on how to best determine and present AI system reliability or confidence information.
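As a small, concrete illustration of what “different methods of conveying confidence” can look like in an interface, the sketch below maps a single numeric confidence value onto numeric, verbal, and coarse graphic formats. The thresholds, category labels, and bar rendering are arbitrary assumptions for illustration, not formats validated by the studies cited above.

```python
def render_confidence(confidence: float) -> dict:
    """Map a model confidence in [0, 1] to several illustrative display formats."""
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be in [0, 1]")
    # Verbal category: thresholds are placeholders, not validated cut points.
    if confidence >= 0.85:
        category = "high"
    elif confidence >= 0.6:
        category = "moderate"
    else:
        category = "low"
    # Coarse graphic: a five-segment bar rendered as text.
    filled = round(confidence * 5)
    bar = "#" * filled + "-" * (5 - filled)
    return {
        "numeric": f"{confidence:.0%}",      # e.g., "74%"
        "verbal": f"{category} confidence",  # e.g., "moderate confidence"
        "graphic": f"[{bar}]",               # e.g., "[####-]"
    }

print(render_confidence(0.74))
```

Which of these formats best supports calibrated reliance, and at what cost in decision time, is exactly the open question raised by the Selcon (1990) and Endsley and Kiris (1994) results.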

Although some research reports an increase in workload associated with increased transparency of uncertainty information (Kunze et al., 2019), other research reports that perceived workload does not increase with increased transparency (Chen et al., 2014a; Mercado et al., 2016; Selkowitz, Lakhmani, and Chen, 2017; Selkowitz et al., 2016; Stowers et al., 2017). Additional research showed that the type of transparency information provided could interact with certain operator personality types to affect the benefit of improved transparency, with too much information sometimes having a negative effect (Chen et al., 2018; Wright et al., 2016).

Key Challenges and Research Gaps

Although the benefits of transparency are apparent, the committee finds that, to discover how to best support transparency for AI systems in multi-domain operations (MDO), more research is needed in the following four areas:

  • The value of various types of transparency information across task types, contexts, temporal demands, and user types;
  • Best methods for providing system transparency to system operators for different types of transparency information;
  • Appropriate times for providing AI system transparency information for different classes of operations and temporal demands; and
  • Additional transparency requirements and methodologies for AI systems used in military MDO.

Research Needs

The committee recommends that four major research needs be addressed, to develop the levels of display transparency required for effective human-AI teams.

Research Objective 5-1: Transparency Information Requirements.

Further research is needed to determine the value of specific types of transparency information for supporting situation awareness, trust, and performance in the context of human-AI interactions. Current research demonstrates the value of improved system transparency; however, there is significant variability in the types of information considered. It would be helpful for research to focus on determining which aspects of AI knowledge and performance need to be made transparent for various types of tasks and human-AI teaming arrangements. Factors such as situation types, context, temporal demands, and user types would benefit from consideration.

Research Objective 5-2: Transparency Display Methods.

Research is needed to determine the best methods for providing system transparency to humans for the types of transparency information identified in Table 5-1, to improve performance, SA, and trust calibration without creating overload. Although integrated, simple, graphical displays are generally recommended, more research is needed to determine how best to present transparency information for AI systems performing realistic military tasks in multi-domain operations. Methods for supporting real-time understandability and predictability of AI systems, and for effective communication of confidence or uncertainty, need particular emphasis. Methods for supporting human understanding of when the AI system is brittle (i.e., at or near the limits of its performance envelope) and unable to perform effectively deserve special attention, particularly in cases when the AI system does not have sufficient self-awareness to recognize these limits. It would be advantageous to develop design guidelines for supporting transparent interfaces for AI systems.
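One low-fidelity way to prototype such a brittleness cue, offered here only as a sketch, is to flag cases where the model is unsure about its prediction or the input looks unlike the data it was trained on. The entropy and novelty thresholds below are arbitrary placeholders and would need empirical tuning.

```python
import numpy as np

def brittleness_flag(probs: np.ndarray,
                     x: np.ndarray,
                     train_mean: np.ndarray,
                     train_std: np.ndarray,
                     entropy_thresh: float = 0.9,
                     novelty_thresh: float = 3.0) -> dict:
    """Flag predictions that may lie near the edge of the model's competence.

    probs      : predicted class probabilities for one input
    x          : the input feature vector
    train_mean : per-feature mean of the training data
    train_std  : per-feature standard deviation of the training data
    """
    # Predictive entropy, normalized to [0, 1]; high values mean the model is unsure.
    p = np.clip(probs, 1e-12, 1.0)
    entropy = -np.sum(p * np.log(p)) / np.log(len(p))
    # Novelty: largest per-feature z-score relative to the training distribution.
    novelty = np.max(np.abs((x - train_mean) / (train_std + 1e-12)))
    return {
        "entropy": float(entropy),
        "novelty": float(novelty),
        "outside_envelope": bool(entropy > entropy_thresh or novelty > novelty_thresh),
    }
```

A display could render the `outside_envelope` flag as a simple caution cue rather than exposing the raw numbers, which is itself a design question for the research proposed here.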

Research Objective 5-3: Transparency Temporality.

Some have argued that the introduction of AI systems that learn will create the need for increased emphasis on real-time display transparency. These arguments postulate that, when these systems are used, it is much more likely for mental models to be outdated or insufficient, and operators will be increasingly unable to accurately understand and project future AI actions and capabilities (Endsley, 2020a; USAF, 2015). Others believe that, in time-constrained and demanding military environments, human attention will be too overloaded to review and evaluate the performance of an AI system, and thus transparency requirements will need to be met either a priori (e.g., during training, planning, pre-mission briefings) or a posteriori (e.g., during debriefing, after-action reviews) (Miller, 2021). The degree to which the presentation of various aspects of AI transparency is best supported in real time, in post-hoc reviews, or in prior planning and practice activities needs to be determined for specific classes of operations and temporal demands. Further, research is needed on whether transparency information of various types would best be provided continuously, sequentially, or on-demand (Sanders et al., 2014; Vered et al., 2020).

Research Objective 5-4: Transparency of Machine Learning-Based AI in Multi-Domain Operations.

Given that machine learning-based AI can change its capabilities, logic, and strategies in dynamic and perhaps unpredictable ways, and that learning systems can be opaque both in their reasoning processes and in the effects of training inputs, research is needed to determine additional transparency requirements and methodologies for AI systems. The value of the transparency of the human teammate to the AI system for facilitating joint human-AI performance also needs to be determined. In addition, the effect of AI system transparency on trust and performance in distributed military operations, which include the potential for the military hierarchy and changing rules of engagement to affect decision making, needs to be explored. The effects of group dynamics, distributed responsibility, and locus of decision making in the context of human-AI interaction remain largely unexplored and would benefit from further research.

AI EXPLAINABILITY

In keeping with the definitions above, explanations are information about the rationale underlying an explanation-giver’s actions or decisions, generally provided after2 a decision or action is taken and intended to improve the questioner’s understanding (or mental model) of the reasoning processes of the explanation-giver.3 As such, extensive explanations generally cannot be provided or absorbed in moments of high workload characteristic of much human-automation collaboration but must usually be relegated to periods during which more capacity is available (e.g., provided for a recommended course of action, before a decision is to be made) or after the action is taken (e.g., in an after-action review). AI explainability is, then, the ability to provide satisfactory, accurate, and efficient explanations of the results (e.g., recommendations, decisions, and/or actions) of an AI system.

___________________

2 An exception occurs when explanations are provided in anticipation of a receiver’s questions—that is, when the explanation-giver anticipates the receiver’s interest in and lack of understanding of the explanation-giver’s rationale. Providing explanations, especially anticipatory explanations, is also a politeness strategy that can be used to signal power differentials, social distance, and imposition (Brown and Levinson, 1987). These explanations are still “after” the making of a decision but may be provided concurrently with, or even before, the presentation of that decision or action.

3 An exception occurs when explanations are requested to interrogate or check up on reasoning processes of the explanation-giver—as a teacher will do to a student.


Explanations provided by automated systems, while varying widely in style, presentation, content, and context, have been shown to improve trust (Wang, Pynadath, and Hill, 2016), including in emergency situations (Nayyar et al., 2020). The embodiment of the explanation-giver and various social strategies (e.g., promises to repair errors) interact with such explanations to affect the resulting trust (Wang et al., 2018). Explanations perform this trust-related function at the risk of human over-reliance on the automation (Bussone, Stumpf, and O’Sullivan, 2015), even when those explanations do not provide meaningful new information to the receiver (Eiband et al., 2019; Nourani et al., 2019; see the discussion of trust in Chapter 7).

There are multiple mechanisms by which explanations affect trust and SA. Lee and See’s (2004) three-tiered model of trust formation and calibration provides a framework for thinking about these mechanisms. In their model, calibration of affective trust relies on emotional reaction—in essence, things that make a person feel good, safe, and rewarded will tend to be trusted more. This illustrates the importance of social aspects of explanation: explanations can reinforce or undermine factors including power dynamics, friendship, and perceived confidence and expertise, by providing information about the persona of the explanation-giver and his or her relationship to the receiver. Calibration of analogic trust occurs by reference to known patterns of behavior or reasoning—for example, behaving and talking “like a pilot” is a mechanism by which pilot-level trust can be awarded, independently of the content of the explanation itself. An explanation that uses terms, language, formats, and concepts appropriate to the given domain will lend credibility, while unfamiliar (e.g., intensely mathematical) data presentations may decrease analogic trust. Finally, analytic trust calibration stems from understanding the underlying reasoning by which the conclusion is derived. Explanations that reveal aspects of this reasoning will improve the receiver’s understanding of the explanation-giver and his or her mental model, but such explanations are both time-consuming and may place unrealistic demands on human understanding, especially when the AI system’s reasoning is beyond the comprehension of the typical user.

Explanation has been a holy grail for AI systems for almost as long as AI has been a concept. Early AI systems such as MYCIN4 (Buchanan and Shortliffe, 1984) used the rule structures of expert systems to provide explanations as, essentially, a trace of the chain of reasoning. These rule structures proved helpful in improving both trust and human insight into the system’s reasoning, but were unsatisfactory because, as Miller (2018) argued, such explanations were based on a comparatively limited and myopic view of what constitutes a good explanation for humans. At the extreme, AI explanations have tended toward what Chakraborti and colleagues (2017b) call soliloquies—long disquisitions representing the entire thought process by which the AI system arrived at its conclusions. At best, these explanations provide more information than the human receiver is interested in obtaining and, at worst, they present information based on reasoning models that the human does not understand.

___________________

4 One of several well-known programs that embodies some intelligence and provides data on the extent to which intelligent behavior can be programmed.

More challenging still, recent improvements in AI systems (particularly those based on deep learning) have largely stemmed from the use of black-box computational techniques (Guidotti et al., 2018), which are inherently difficult for humans to understand and explain, and similarly difficult for machines to inspect and explain—akin to understanding and explaining how to ride a bike (Kuang, 2017). Improvement in AI performance through black-box systems, combined with the increasingly apparent lack of human trust in (or ability to successfully intervene in) such systems, has generated attempts to provide interpretability to such learning systems (Carvalho, Pereira, and Cardoso, 2019; Molnar, 2020) and/or to understand the trade-off between black-box approaches and more transparent and understandable white-box approaches, which provide interpretable models that include influencing variables and explanations for predictions (Rudin, 2019).
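The black-box versus white-box distinction can be made tangible with a short, self-contained sketch using scikit-learn on synthetic data; it is intended only as an illustration and does not represent any of the cited interpretability methods.

```python
# Contrast a white-box model, whose influencing variables are directly readable
# from its coefficients, with a post-hoc explanation of a black-box model via
# permutation importance. Synthetic data, for illustration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# White-box: the learned coefficients are themselves the explanation.
white_box = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("White-box coefficients:", white_box.coef_.round(2))

# Black-box: explain after the fact by measuring how much shuffling each
# feature degrades held-out accuracy.
black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(black_box, X_test, y_test, n_repeats=10,
                                random_state=0)
print("Permutation importances:", result.importances_mean.round(3))
```

In the first case the model's parameters are the explanation; in the second, the explanation is constructed after the fact and is only an approximation of what the underlying model is doing, which is the crux of the trade-off Rudin (2019) describes.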

Key Challenges and Research Gaps

The committee finds five key challenges that remain in the area of explainability in human-AI teams.

  • There is a need for multi-factor models that predict the trust-related impact of explanations, and the resulting reliance decisions, across differing contexts.

  • Effective mechanisms to adapt explanations to receivers’ needs, prior knowledge and assumptions, and cognitive and emotional states are needed.
  • Human-centered approaches for providing improved explainability of AI systems are needed, including an understanding of the factors influencing human comprehension quality and speed when such systems are used.
  • The effects of anthropomorphism and message features of AI explanation on effective, calibrated trust are not well understood.
  • The benefits of the human’s explanations of his or her goals, intentions, or behaviors for informing and guiding an AI teammate’s future behaviors have not yet been established.

Research Needs

In addition to ongoing core improvements in the algorithmic mechanisms required to characterize and present explanatory information about AI reasoning, the committee recommends that five research objectives be addressed to improve AI explainability.

Research Objective 5-5: Explainability and Trust.

The offering of an explanation can have significant impacts on trust in the explanation-giver, either positive or negative, via multiple channels, as described above. Yet substantial work remains around the impact of explanations on trust, across multiple contexts. For example, how does explanation interact with the sociocultural forces within an organization to affect trust (e.g., Ho et al., 2017)? When is the offering of an explanation worthwhile in terms of enhanced trust or comprehension versus the time and attention needed to understand the explanation? How does temporality (i.e., when an explanation is offered and how long before, after, or during a decision or action the explanation is offered) contribute to the effect of an explanation on trust? It is apparent from the research cited above that explanations can sometimes affect trust in undesirable ways (e.g., by enhancing trust when it is not deserved or earned), so how can it be ensured that explanations are employed effectively? New research would be useful for the provision of improved, multi-factor models to describe the effects of various dimensions of explanation on trust and reliance decisions.

Research Objective 5-6: Adaptive (and Adaptable) Explainability.

Writers from Aristotle (in his Rhetoric5) to Stephen Toulmin (1958) to Chakraborti (2017b) have pointed out that effective explanations must be adapted to the needs, beliefs, and interests of the receiver. The uses of explanation in the formation of mental models and trust reviewed above suggest some ways that explanations could be adapted. Mechanisms to adapt explanations to receivers’ needs, prior knowledge and assumptions, and cognitive and emotional states need to be developed and evaluated, and their implications understood. A core question is whether (or more likely, when) automated, adaptive modification of an explanation to a receiver’s perceived needs is more effective than user-initiated, adaptable modification. Since human-human explanations are frequently interactive—with both parties navigating toward a mutually satisfactory explanation—AI systems likely need to use similar techniques if they are to prove satisfactory and efficient for human receivers. This may require that AI systems maintain a model of the human receiver, in which case efficient techniques for incorporating such a model will need to be refined as well. Concurrently, techniques to allow a receiver to rapidly home in on the portion of the AI system’s reasoning that is most salient or relevant to that receiver need to be developed and validated.
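The sketch below illustrates one way receiver-adaptive selection of explanations could be prototyped. The receiver attributes, explanation variants, and selection rules are invented for illustration; they are assumptions, not mechanisms proposed by the works cited above.

```python
from dataclasses import dataclass

@dataclass
class ReceiverModel:
    """Illustrative attributes an AI teammate might track about the receiver."""
    expertise: str            # "novice" or "expert" in the system's domain
    seconds_available: float  # rough estimate of time the receiver can spend
    prefers_visual: bool

def select_explanation(receiver: ReceiverModel, explanations: dict) -> str:
    """Pick an explanation variant matched to the receiver's current state.

    `explanations` maps variant names (e.g., "headline", "feature_summary",
    "full_trace") to pre-generated explanation content.
    """
    if receiver.seconds_available < 10:
        # Under time pressure, give only the conclusion and top factor.
        return explanations["headline"]
    if receiver.expertise == "expert":
        # Experts may want the reasoning trace; others get a factor summary.
        return explanations["full_trace"]
    key = "feature_summary_chart" if receiver.prefers_visual else "feature_summary"
    return explanations.get(key, explanations["feature_summary"])

variants = {
    "headline": "Recommend reroute: road segment flagged as high threat.",
    "feature_summary": "Top factors: recent threat reports (0.42), convoy size (0.21).",
    "feature_summary_chart": "[bar chart of top factors]",
    "full_trace": "Threat model v3 combined 14 reports; see attached reasoning log.",
}
print(select_explanation(ReceiverModel("novice", 45.0, True), variants))
```

Whether rules like these should be applied automatically (adaptive) or exposed as settings the receiver controls (adaptable) is precisely the open question posed in this objective.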

Other ways of adapting the presentation of explanations are also important. Explanation content needs to be adapted to the time and the modalities available for presentation. The interactivity necessary for the adaptive or adaptable explanations described above implies a degree of verbal flexibility in information presentation that will require further advances in natural language use and understanding by AI systems. Some users are known to prefer and/or benefit from either visual or verbal presentation of content (Childers, Houston, and Heckler, 1985). Persuasion effects of various forms of presentation and content are worth considering (e.g., the framing effect described by Tversky and Kahneman (1987), whereby positive presentations of material are more likely to be accepted than negative ones), though they may raise ethical considerations. Finally, in military applications, the problem of classified information and need-to-know also needs to be considered when adapting explanations. Sometimes, the explanation for a decision may not be fully sharable with the receiver due to the need to obscure aspects of the rationale. The impacts of such necessary information withholding by an AI system on human users are not currently known and would benefit from more research.

___________________

5 More information at: http://classics.mit.edu/Aristotle/rhetoric.1.i.html.

Research Objective 5-7: Explainability of Learned Information and Change.

Perhaps the biggest of the comparatively new challenges in explainability for AI systems is prompted by the rise of deep-learning approaches that may operate in ways that are neither amenable to explanation nor readily comprehensible by humans. Explaining the functioning of such systems on a deep, causal level may not be possible. The field of explainable AI (Arrieta et al., 2020) is largely focused on pursuing answers to this question. Even though some progress is being made (particularly, as described by Hohman et al. (2019), in the use of visual analytics to convey the significance of features contributing to a learned decision system), this problem may ultimately be one of determining when the use of deep learning and unexplainable black-box AI is warranted and when it is not, while improving the performance of explainable AI approaches as much as possible (Arrieta et al., 2020; Lipton, 2017; Rudin, 2019). Although work in the field of explainable AI has exploded recently across a number of disciplines, including medicine, financial investment, and the military, much of this work is centered in computer science. Human-centered disciplines such as human factors would do well to provide inputs to such work, including improved visualizations and definitions of the parameters (e.g., training, skillsets, and individual cognitive traits) that limit or influence the quality and speed of human comprehension of such systems.

Change awareness is a related topic that pertains to change explanations (Rensink, O’Regan, and Clark, 1997; Smallman and St. John, 2003). Learning systems afford new AI automation remarkable flexibility and the ability to change in response to changing environments, performance, and enemy capabilities and behaviors. Even more traditional AI and automation systems can be updated, often remotely, with little notification to the human operator. But this raises the problem of human awareness and the ability to predict (and trust) what may well be ever-changing machine behavior. Research would benefit from an exploration of ways to rapidly convey how and when AI behavior and underlying reasoning have changed, perhaps using prior understanding as a benchmark. Techniques for reasoning about model drift may be useful here (Sreedharan, Chakraborti, and Kambhampati, 2021).
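As a minimal sketch of the kind of change cue described above, the function below compares a prior and an updated model version on a fixed reference set and reports where their outputs now disagree. The alert threshold and the use of simple label disagreement are illustrative assumptions, not a proposed method from the cited work.

```python
import numpy as np

def summarize_behavior_change(old_predict, new_predict, reference_inputs,
                              disagreement_alert: float = 0.05) -> dict:
    """Compare two model versions on a fixed reference set.

    old_predict / new_predict : callables mapping a batch of inputs to class labels
    reference_inputs          : array of held-out benchmark cases
    disagreement_alert        : fraction of changed predictions that triggers a cue
    """
    old_labels = np.asarray(old_predict(reference_inputs))
    new_labels = np.asarray(new_predict(reference_inputs))
    changed = old_labels != new_labels
    rate = float(changed.mean())
    return {
        "disagreement_rate": rate,
        "changed_case_indices": np.flatnonzero(changed).tolist(),
        "notify_operator": rate > disagreement_alert,
    }
```

Surfacing the changed cases themselves, rather than only an aggregate rate, gives the operator concrete examples against which to update a prior understanding of the system's behavior.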

Research Objective 5-8: Machine Personae and Explanations.

The offering of an explanation, especially by an autonomous and intelligent system, is likely to promote an anthropomorphism response in the receiver (Hayes and Miller, 2010; Moon and Nass, 1996; Wynne and Lyons, 2018), precisely because it accesses human-human social protocols (Brown and Levinson, 1987). This anthropomorphism response can happen regardless of whether it was intended by the designer. Furthermore, the more responsive and reactive an explanation-giver is (particularly if it is embodied in a personified “I”, a voice, or a human-like form), the stronger the anthropomorphism response is likely to be. This response can be positive or negative, depending on context, and can impact trust and reliance decisions (Nourani et al., 2019; Wang et al., 2018). It is also likely that an anthropomorphism response can serve to rapidly convey otherwise-difficult concepts, such as expertise, confidence, and aggressiveness, as well as the source and provenance of actions or recommendations—again, regardless of whether these attributions are specifically intended by the designers. Research is needed to establish the magnitude of such effects and to develop methods to either encourage or discourage such anthropomorphism to support effective, calibrated trust.

Research Objective 5-9: Machine Benefits from Human Explanations.

An understudied approach that may improve human-AI teaming is the ability for humans to offer explanations of their own goals, intentions, or behaviors to inform and guide an AI teammate’s future behaviors. If such explanations could be provided in natural language or a human-AI language (see Chapter 3), they would be comparatively easy and natural for humans to offer, with the acknowledged limitations of human inspectability and willingness to articulate accurate rationales. These explanations could offer another, potentially superior channel for human tasking and interacting with AI teammates6 and would augment and complete the interaction cycle begun in Research Objective 5-6. Such an approach has roots in programming by example (e.g., Lieberman, 2001) but could allow more interactive, language-centered declarations of intent. The theoretical functions of a teammate imply that these approaches will be useful in at least some circumstances, but whether such approaches are feasible or widely useful still remains to be determined.

___________________

6 “Slider bar” input channels for adjusting AI algorithmic weights are currently fairly ubiquitous.
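As a toy sketch of this direction (and of the “slider bar” alternative noted in the footnote), the function below nudges a hypothetical planner’s objective weights in response to a human teammate’s stated intent. The keyword rules, weight names, and values are invented for illustration; a real system would require genuine natural-language understanding.

```python
from typing import Dict, Optional

# Hypothetical objective weights for a route-planning AI teammate.
DEFAULT_WEIGHTS: Dict[str, float] = {"speed": 0.4, "safety": 0.4, "fuel": 0.2}

def apply_human_guidance(statement: str,
                         weights: Optional[Dict[str, float]] = None) -> Dict[str, float]:
    """Nudge planner objective weights based on a human's explanation of intent."""
    weights = dict(weights or DEFAULT_WEIGHTS)
    text = statement.lower()
    if "safe" in text or "casualt" in text:
        weights["safety"] += 0.2      # human emphasized risk avoidance
    if "deadline" in text or "fast" in text or "time" in text:
        weights["speed"] += 0.2       # human emphasized urgency
    if "fuel" in text or "range" in text:
        weights["fuel"] += 0.2
    total = sum(weights.values())
    return {k: round(v / total, 2) for k, v in weights.items()}

print(apply_human_guidance("We must hit the deadline, but keep the convoy safe."))
# -> {'speed': 0.43, 'safety': 0.43, 'fuel': 0.14}
```

Even this crude mapping illustrates the appeal of the approach: the human states intent once, in natural terms, rather than repeatedly adjusting individual parameters.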

SUMMARY

System transparency and explainability are key mechanisms for improving SA, trust, and performance in human-AI teams. Methods for supporting transparency and explainability in future human-AI teams need to consider the appropriate types of information, methods for displaying that information, and timeliness of information presentation, particularly as these factors relate to dynamically changing AI systems. Methods for tailoring and adapting transparency and explainability information would benefit from further exploration, as would the advantages of bi-directional explanation in human-AI teams.

