5 AI Transparency and Explainability
Pages 31-40



From page 31...
... In the committee's opinion, in the dynamic, time-constrained situations common to many military and operational environments, explanations will primarily contribute to the development of improved mental models that can enhance SA in the future, while decision making will rely primarily on real-time display transparency. In other situations that allow sufficient time for reviewing and processing explanations, both display transparency and explainability may directly affect decision making.
From page 32...
... [Flattened table: Level 1 SA transparency factors compared across models, including what the user is taking into account; the state of knowledge and raw data used by the automation; system status (current system state and mode, key system states and modes, state transitions, and what the automation is doing); purpose (goals and intentions); and process (including intentional and social intent).]
From page 33...
... further discusses information about environmental constraints that may affect system performance. In addition to system status information, the behavior and recommendations of the system need to be understandable to the human teammate, to the degree that the system has an impact on human decision making.
From page 34...
... (3) level 3 SA -- projection uncertainty, which includes projections of future events based on the current situation and models of system dynamics and future likelihoods; and (4) decision uncertainty, which is the likelihood that a selected course of action will result in desired outcomes.
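
To make these two uncertainty types concrete, here is a minimal Python sketch (an illustration, not a method from the report) of how a transparency display might estimate them: projection uncertainty as a Monte Carlo sample of future states under an assumed dynamics model, and decision uncertainty as the estimated probability that a selected course of action reaches a desired outcome. The toy_dynamics model and the closing-on-a-target scenario are hypothetical.

```python
import random

def project_outcomes(state, action, dynamics_model, n_samples=1000):
    """Projection uncertainty: sample possible future states under a
    (noisy) model of system dynamics, given the current situation and
    a candidate action."""
    return [dynamics_model(state, action) for _ in range(n_samples)]

def decision_uncertainty(state, action, dynamics_model, is_desired,
                         n_samples=1000):
    """Decision uncertainty: estimated probability that the selected
    course of action results in a desired outcome."""
    outcomes = project_outcomes(state, action, dynamics_model, n_samples)
    return sum(is_desired(s) for s in outcomes) / n_samples

# Hypothetical toy scenario: closing on a target, where sensor noise
# makes the projected end-state distance uncertain.
def toy_dynamics(state, action):
    noise = random.gauss(0.0, 0.5)
    return {"distance": state["distance"] - action["closure_rate"] + noise}

if __name__ == "__main__":
    state = {"distance": 3.0}       # current situation
    action = {"closure_rate": 2.5}  # selected course of action
    p = decision_uncertainty(state, action, toy_dynamics,
                             is_desired=lambda s: s["distance"] <= 1.0)
    # A transparency display might surface this as, e.g.,
    # "Estimated chance of reaching the objective: 84%".
    print(f"Estimated probability of desired outcome: {p:.0%}")
```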
From page 35...
..., more research is needed in the following four areas:

• The value of various types of transparency information across task types, contexts, temporal demands, and user types;
• Best methods for providing system transparency to system operators for different types of transparency information;
• Appropriate times for providing AI system transparency information for different classes of operations and temporal demands; and
• Additional transparency requirements and methodologies for AI systems used in military multi-domain operations (MDO).

Research Needs

The committee recommends that four major research needs be addressed to develop the levels of display transparency required for effective human-AI teams.
From page 36...
... Given that machine learning-based AI can change its capabilities, logic, and strategies in dynamic and perhaps unpredictable ways, and that learning systems can be opaque both in their reasoning processes and in the effects of their training inputs, research is needed to determine additional transparency requirements and methodologies for AI systems. The value of the transparency of the human teammate to the AI system for facilitating joint human-AI performance also needs to be determined.
From page 37...
... These rule structures proved helpful in improving both trust and human insight into the system's reasoning, but were ultimately unsatisfactory because, as Miller (2018) argued, such explanations were based on a comparatively limited and myopic view of what constitutes a good explanation for humans.
From page 38...
... automated, adaptive modification of an explanation to a receiver's perceived needs is more effective than user-initiated, adaptable modification. Since human-human explanations are frequently interactive -- with both parties navigating toward a mutually satisfactory explanation -- AI systems likely need to use similar techniques if they are to prove satisfactory and efficient for human receivers.
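
As an illustration of such an interactive, adaptive exchange, the following Python sketch (a hypothetical design, not an implementation from the literature) escalates explanation granularity until the receiver signals satisfaction; the granularity levels, the render function, and the example decision are assumptions made for the example.

```python
from dataclasses import dataclass

# Hypothetical granularity levels, ordered from terse to detailed.
LEVELS = ["headline", "key factors", "full reasoning trace"]

@dataclass
class Explanation:
    level: int
    text: str

def render(decision, level):
    """Stand-in for a real explanation generator at a given granularity."""
    if level == 0:
        return f"Recommended: {decision['action']}."
    if level == 1:
        factors = ", ".join(decision["factors"])
        return f"Recommended {decision['action']} mainly because of: {factors}."
    return (f"Recommended {decision['action']}. "
            f"Full factor weights: {decision['weights']}.")

def explain_interactively(decision, ask_user):
    """Escalate explanation detail until the receiver signals satisfaction,
    mirroring the back-and-forth of human-human explanation."""
    level = 0
    while True:
        explanation = Explanation(level, render(decision, level))
        feedback = ask_user(explanation)  # e.g., "ok" or "more detail"
        if feedback == "ok" or level == len(LEVELS) - 1:
            return explanation
        level += 1

if __name__ == "__main__":
    decision = {"action": "reroute via corridor B",
                "factors": ["weather", "threat level", "fuel"],
                "weights": {"weather": 0.5, "threat level": 0.3, "fuel": 0.2}}
    # A scripted receiver that asks for more detail once, then accepts.
    responses = iter(["more detail", "ok"])
    print(explain_interactively(decision, lambda e: next(responses)).text)
```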
From page 39...
..., they would be comparatively easy and natural for humans to offer, with the acknowledged limitations of human inspectability and willingness to articulate accurate rationales. These explanations could offer another, potentially superior channel for human tasking and interacting with AI teammates [6] and ...

[6] "Slider bar" input channels for adjusting AI algorithmic weights are currently fairly ubiquitous.
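
A minimal Python sketch of such a slider-bar input channel, assuming a simple weighted-sum ranking over desirability scores; the feature names, weights, and routes are hypothetical illustrations, not taken from the report.

```python
def rank_options(options, weights):
    """Score each option as a weighted sum of its desirability features;
    the weights are the values exposed to the operator as sliders."""
    def score(opt):
        return sum(weights[k] * opt["features"][k] for k in weights)
    return sorted(options, key=score, reverse=True)

options = [
    {"name": "route A", "features": {"speed": 0.9, "safety": 0.3, "fuel_economy": 0.4}},
    {"name": "route B", "features": {"speed": 0.5, "safety": 0.9, "fuel_economy": 0.7}},
]

# Initial slider positions favor speed: route A wins.
weights = {"speed": 0.6, "safety": 0.2, "fuel_economy": 0.2}
print("Top recommendation:", rank_options(options, weights)[0]["name"])

# The operator moves the sliders toward safety, and the
# recommendation re-ranks accordingly: route B now wins.
weights = {"speed": 0.2, "safety": 0.6, "fuel_economy": 0.2}
print("After adjustment:", rank_options(options, weights)[0]["name"])
```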
From page 40...
... SUMMARY

System transparency and explainability are key mechanisms for improving SA, trust, and performance in human-AI teams. Methods for supporting transparency and explainability in future human-AI teams need to consider the appropriate types of information, methods for displaying that information, and timeliness of information presentation, particularly as these factors relate to dynamically changing AI systems.

