Artificial Intelligence and Justified Confidence: Proceedings of a Workshop - in Brief
Pages 1-11

The Chapter Skim interface presents what we've algorithmically identified as the most significant single chunk of text within every page in the chapter.


From page 1...
... convened a planning workshop focused on Artificial Intelligence and Justified Confidence in the Army that was structured to address the three framing questions of the statement of task: ... Jennie Hwang, H-Technologies Group, committee co-chair, commenced the workshop by examining the terminology of "justified confidence" in the context of AI. Although "confidence" carries connotations of an intangible feeling, Dr.
From page 2...
... Dr. Roth characterized conformal prediction as a simple, elegant method to affix prediction sets to black-box models. ... Prediction set multivalidity involves dividing the data into different groups that might intersect -- in which
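The idea of affixing prediction sets to a black-box model can be illustrated with a short sketch. The following is a hypothetical split-conformal regression example on synthetic data; the model, data, and coverage level are illustrative assumptions, not material from the workshop.

```python
import numpy as np

# Hypothetical sketch of split conformal prediction for regression.
# The "black box" here is an ordinary least-squares line; data are synthetic.

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2 * x + rng.normal(0, 1, 200)

# Proper training set vs. held-out calibration set.
x_train, y_train = x[:100], y[:100]
x_cal, y_cal = x[100:], y[100:]

slope, intercept = np.polyfit(x_train, y_train, 1)

def predict(t):
    return slope * t + intercept

# Nonconformity scores: absolute residuals on the calibration set.
scores = np.abs(y_cal - predict(x_cal))

# Conformal quantile for ~90% coverage, with finite-sample correction.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction set for a new input: point prediction plus/minus q.
x_new = 5.0
lo, hi = predict(x_new) - q, predict(x_new) + q
print((lo, hi))
```

Under exchangeability, the interval covers the true response with probability at least 1 - alpha regardless of how good the underlying model is; only the interval's width reflects model quality.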
From page 3...
... Unlike split conformal prediction, which cannot train on the holdout set, this model can train on 100 percent of the data, enabling faster learning. ... enable actionable situational awareness. Robots dealt with dynamic terrain, austere navigation, degraded sensing, severe communications, endurance limits, and
From page 4...
... MACHINE LEARNING: IMPLICATIONS FOR COMMAND AND CONTROL OPERATIONS
Azad Madni, University of Southern California, advocated ... Dr. Madni advocated for exploiting AI/ML in nominal situations, using humans to aid AI/ML in novel situations, and using AI/ML to aid humans in memory recall and computation-intensive tasks.
From page 5...
... Karen Feigh, Georgia Institute of Technology, presented her findings on the impact of world state awareness in joint human–automation decision making. ... possesses a unique mental model of its own capabilities and role on the team. The shared mental model is the overlapping space in which
From page 6...
... integrating a single model required coordinating across ML engineers, data scientists, development operations, front-end developers, application managers, and so on. ... To meet the needs of AI for C2 in particular, Dr. Frase recommended the following steps: identify related internal AI programs; identify similar
From page 7...
... Dr. Harvey touted observational studies as a method to accelerate integration of ML models into user workflows. ... interpreting model predictions, the aim is to understand why the model made a prediction and to be able to
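The goal of understanding why a model made a prediction can be made concrete with a simple feature-attribution probe. A hypothetical permutation-importance sketch follows; the model and data are illustrative, not a method presented at the workshop.

```python
import numpy as np

# Hypothetical sketch: probe which features drive a model's predictions
# by shuffling each feature and measuring the increase in error.

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))
# Only features 0 and 2 influence the target; feature 1 is pure noise.
y = 3 * X[:, 0] - 2 * X[:, 2] + rng.normal(0, 0.1, 500)

# "Model": least-squares weights fit to the data.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(M):
    return M @ w

def permutation_importance(X, y, n_repeats=10):
    base_err = np.mean((predict(X) - y) ** 2)
    importances = []
    for j in range(X.shape[1]):
        errs = []
        for _ in range(n_repeats):
            Xp = X.copy()
            # Break the link between feature j and the target.
            Xp[:, j] = rng.permutation(Xp[:, j])
            errs.append(np.mean((predict(Xp) - y) ** 2))
        importances.append(np.mean(errs) - base_err)
    return np.array(importances)

imp = permutation_importance(X, y)
print(imp)  # features 0 and 2 should dominate; feature 1 near zero
```

The appeal of this kind of probe is that it treats the model as a black box: it needs only a predict function, not access to internals.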
From page 8...
... Continuous monitoring of models is significant, affirmed Dr. Taly, because prediction data may differ significantly from test data and because the distribution of features and labels may drift over the course of production (due ...
BUILDING WELL-CALIBRATED TRUST IN ARTIFICIAL INTELLIGENCE SYSTEMS
Paul Scharre, Center for a New American Security, articulated his vision of well-calibrated trust in AI systems.
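The point about drifting feature and label distributions can be illustrated with a concrete monitoring statistic. A minimal sketch using the population stability index follows; the data, bin count, and thresholds are illustrative assumptions, not material from the workshop.

```python
import numpy as np

# Hypothetical sketch: detect feature drift between training-time data and
# production data with the population stability index (PSI).

def psi(expected, actual, bins=10):
    """Population stability index between two 1-D samples."""
    # Bin edges taken from quantiles of the expected (training) sample.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Clip both samples into the edge range so tails land in outer bins.
    e_frac = np.histogram(np.clip(expected, edges[0], edges[-1]), bins=edges)[0] / len(expected)
    a_frac = np.histogram(np.clip(actual, edges[0], edges[-1]), bins=edges)[0] / len(actual)
    # Floor the fractions to avoid log(0).
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(1)
train_feature = rng.normal(0, 1, 5000)
prod_same = rng.normal(0, 1, 5000)       # same distribution: no drift
prod_shifted = rng.normal(0.5, 1, 5000)  # mean shift: drift

psi_stable = psi(train_feature, prod_same)
psi_drifted = psi(train_feature, prod_shifted)
print(psi_stable, psi_drifted)  # small value vs. clearly larger value
```

In a production monitor, a statistic like this would be computed per feature on a rolling window and compared against a threshold chosen during validation, triggering review or retraining when exceeded.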
From page 9...
... Dr. Scharre asserted that DoD maintains effective procedures for creating well-calibrated trust during the development pipeline, which consists of five phases: ... Scharre recommended that DoD implement the necessary processes and authorities to retrain algorithms continuously through field deployment (real-life data)
From page 10...
... practical applications, distributions do not often come into play, having a distribution can be a useful tool for an operator to use in explaining why they took a ... To bridge this gap, Dr. Hwang suggested that the Army tailor its training and use cases to specific environments, similar to the environmental focus of DARPA's Sub-T and OFFSET programs.
From page 11...
... 2023. Artificial Intelligence and Justified Confidence: Proceedings of a Workshop -- in Brief.


This material may be derived from roughly machine-read images, and so is provided only to facilitate research.