From page 31:

Aside from the impact database, Crisman detailed efforts to create pipelines to support organizations with the evaluation of elements such as data sets, the training of AI models, AI software systems, and deployment operations (including the release, oversight, and response to user feedback regarding AI tools).
From page 32:
In his opening remarks, Rotenberg noted that AI risk management conversations have primarily discussed "high" and "low" risk. Rotenberg highlighted a need to have an additional category for systems that should be prohibited.
From page 33:

PANEL 2: WIDENING PARTICIPATION IN THE DESIGN, DEVELOPMENT, AND DEPLOYMENT OF AI TOOLS

Sheena Erete, Nathanael Fast, and Tamara Kneese, members of the planning committee, made up the second panel. They were joined by Alex Givens, Center for Democracy & Technology, as an external respondent.
From page 34:
Givens stated that publishing the findings from community engagement, along with the changes made as a result, helps ensure that communities can see the outcomes and benefits of their work with organizations implementing AI tools. Kneese cautioned that organizations should consider stakeholder fatigue if they repeatedly bring in the same community experts to elicit feedback.
From page 35:
This finding allowed CFPB to use standing regulations to combat AI-informed credit decisions that were unexplainable.8,9 Kneese asked how CFPB locates and addresses harms that might not be described in technical terms by users who report an issue. Meyer discussed the publicly accessible CFPB Consumer Complaint Database, a detailed collection of user or consumer feedback, as a robust source of information on emerging harms.
From page 36:
Behrend asked the panel to discuss how leaders might determine when to take risks regarding AI integration and when to be cautious. Burley stated that every organization should have preexisting boundaries for acceptable levels of risk that should be applied similarly to AI tools.
From page 37:
Tabassi emphasized that NIST is focused on the science of evaluation, providing the foundation for evidence-based and interoperable AI evaluations. As seen in the NIST AI RMF, this includes considerations such as investing in external independent oversight and using quantitative, qualitative, or mixed methods of evaluation as needed.
From page 38:
The NIST AI RMF does not make recommendations on thresholds for safety, according to Tabassi, owing to the variety of assessment and risk mitigation strategies. The NIST AI RMF aims to maximize the benefits of AI technology while minimizing negative consequences and harms.