
Appendix D

Capability Technology Matrix

Workshop participants identified both near- and long-term enabling technology capabilities to help guide the Intelligence Community’s future technology investments (Table D.1).

TABLE D.1 Short- and Long-term Technology Capabilities Described by Workshop Speakers

Participant Short-Term Capabilities (3-5 years) Long-Term Capabilities (5-10 years)
Matthew Turek, Defense Advanced Research Projects Agency
  • Formalized requirements to test and assess models
  • Verification of whether existing models can run using new requirements
  • Hybridized models that incorporate contextual information in addition to data points
Hany Farid, Dartmouth College
  • Creation of more automated processes and faster work, owing to the sheer volume of available data
  • Improved accuracy
  • Use of secure-imaging pipelines to prevent the manipulation of digital evidence
  • Better cooperation from social media to reduce deepfakes
  • Increased education for citizen awareness to better recognize “fakes” in the digital age
  • Guidelines for responsible deployment of technologies
  • Consideration of the ethical and societal implications of algorithms that are advertised as artificial intelligence (AI) but are actually linear regression models, particularly in the predictive space (e.g., popular algorithms used to make decisions on university admissions and employment)
  • The weaponization, use, and ethical implications of black-box AI algorithms
Josyula Rao, IBM Corporation
  • AI techniques used for defense
  • AI-powered attacks
  • Approaches and methodologies for developing provable security
Anish Athalye, Massachusetts Institute of Technology
  • Increased use of AI/machine learning to attack real systems
  • More realistic attacks by malicious actors
  • More principled evaluations of defenses
  • Provable/certifiable defenses
  • Increased security of machine learning systems with answers to the following questions (an illustrative adversarial-example sketch appears after the table):
    • What is an adversarial example?
    • What is the specification of a machine learning system?
    • How should a machine behave given a particular input?
Terry Boult, University of Colorado, Colorado Springs
  • Open-set recognition algorithms for well-behaved, low-to-moderate dimensional feature spaces (an illustrative open-set baseline sketch appears after the table)
  • Realistic large open-set data sets/protocols
  • Better understanding of image–feature relationships
  • Ability of iterative Layer-wise Origin-Target Synthesis (LOTS) to attack all kinds of systems
  • LOTS attacks are reasonably portable
  • Use of LOTS to build physical attacks/camouflage
  • Open problems in finding good representations that relate images to features
  • Better network models for open-set deep recognition
  • High-dimensional open-set algorithms
Rama Chellappa, University of Maryland, College Park
  • Explore the robustness of deeper networks
  • Work with multimodal inputs
  • Increase theoretical analysis
  • Investigate how humans and machines can work together to thwart adversarial attacks
  • Demonstrate on more difficult computer vision problems (e.g., face verification/identification, action detection, detection of doctored media)
  • Keep changing the network configuration and parameters in a probabilistic manner with guaranteed performance (i.e., adaptive networks)
  • Humans and machines work together
  • Design networks that incorporate common sense reasoning
Aram Galstyan, Information Sciences Institute, University of Southern California
  • Hybrid sensemaking systems
Judy Hoffman, Georgia Institute of Technology
  • Develop effective uncertainty measures and confidence intervals
  • Improve auto-calibration and the ability to recognize when the underlying data have changed
  • Rethink the notion of domains in adaptation literature
  • Address the idea of overall robustness in the adaptation literature—find a way to improve control over the initial model to reduce susceptibility to natural or artificial changes
Anthony Hoogs, Kitware, Inc.
  • Generative adversarial networks effectively applied to video
  • Cataloguing of large, common objects and events
  • Large graphics processing unit (GPU) farms required to keep up with video generation
  • Structure learning of deep networks on a large scale
  • Increased model transfer between video domains
  • Human-level accuracy for action recognition in single-action, temporally clipped videos
  • Free-form, text-based queries with limited and open syntax and vocabulary
  • Adversarial attacks on video recognition problems (e.g., action recognition)
  • Cataloguing of all objects, scene elements, and events
  • Human-level accuracy for action and complex activity recognition in surveillance video
  • Similar accuracy to humans but much faster for video search and retrieval for complex activities and abstractions in Internet videos
Nathalie Baracaldo, IBM Corporation
  • Extraction attacks on learning models
Yunyao Li, IBM Corporation
  • Declarative systems that enable the building of more complex and interpretable models at scale
  • SystemT: a declarative text understanding system for the enterprise
  • SystemER: a declarative entity understanding and resolution system for the enterprise
  • Human-in-the-loop technologies that allow humans and machines to co-create artificial intelligence algorithms/models
  • Emerging technologies to enable cross-lingual universal semantic understanding of natural language
  • Domain-specific knowledge base construction
  • Data collection procedures that institutions, including commercial ones, can follow to develop the technology without issue (i.e., Institutional Review Board-type approval for data collection)
  • Automatically build robust, explainable machine learning models that are easily adaptable across languages and domains while requiring minimal labeled data and human input
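
To give Anish Athalye's first question above ("What is an adversarial example?") a concrete reading, the following is a minimal illustrative sketch, not material from the workshop: for a toy linear classifier, an adversarial example is a small, bounded change to an input that flips the model's prediction. The weights, input, and perturbation budget below are arbitrary assumptions used only for illustration.

```python
# Illustrative only: a toy linear classifier and the smallest L-infinity
# perturbation that flips its prediction. All values are arbitrary
# assumptions, not workshop material.
import numpy as np

rng = np.random.default_rng(0)

w = rng.normal(size=16)   # toy weight vector
b = 0.1                   # toy bias
x = rng.normal(size=16)   # a "clean" input

def predict(features):
    """Binary decision of the toy linear model."""
    return int(w @ features + b > 0)

score = w @ x + b
# An L-infinity perturbation of radius eps can shift a linear model's score
# by at most eps * sum(|w|), so the smallest radius that can flip the sign
# of the score is |score| / sum(|w|). Step just past it in the worst-case
# direction, -sign(score) * sign(w).
eps = 1.01 * abs(score) / np.sum(np.abs(w))
x_adv = x - np.sign(score) * eps * np.sign(w)

print("clean prediction:      ", predict(x))
print("adversarial prediction:", predict(x_adv))
print("max per-feature change:", float(np.max(np.abs(x_adv - x))))
```

For this toy model the smallest prediction-flipping perturbation can be written down exactly; part of the difficulty Athalye raised is that a comparably precise specification is much harder to state for deep networks.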
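As a companion to Terry Boult's open-set recognition items, here is the simplest possible baseline, again an illustrative assumption rather than any method presented at the workshop: a closed-set classifier is made crudely "open set" by rejecting inputs whose top softmax probability falls below a threshold. The open-set deep recognition and high-dimensional algorithms listed in the table aim well beyond this kind of thresholding.

```python
# Illustrative baseline only: reject-by-confidence open-set prediction.
# The logits and threshold below are arbitrary assumptions.
import numpy as np

def softmax(logits):
    z = np.asarray(logits, dtype=float)
    z = z - z.max()                 # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def open_set_predict(logits, threshold=0.7):
    """Return the index of the predicted known class, or -1 for 'unknown'."""
    probs = softmax(logits)
    top = int(np.argmax(probs))
    return top if probs[top] >= threshold else -1

print(open_set_predict([4.0, 0.5, 0.2]))   # confident -> known class 0
print(open_set_predict([1.1, 1.0, 0.9]))   # near-uniform -> -1 (unknown)
```

Softmax confidence is known to be a weak rejection signal, especially in high-dimensional feature spaces, which is one reason better network models for open-set deep recognition appear in the table as a longer-term need.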