
4 Detection and Mitigation of Adversarial Attacks and Anomalies
Pages 13-18

The Chapter Skim interface presents what we've algorithmically identified as the most significant single chunk of text within every page in the chapter.


From page 13...
... AI can be used to bolster defenses and proactively disable AI-powered attacks. It is also necessary to counter adversarial AI and protect against adversarial environments, theft of data and models, corruption, and evasion.
From page 14...
... Rao commented that the following series of techniques could be used to defend against AI-powered malware:
- Feature extraction and pattern recognition to improve decision making and detect unknown threats;
- Natural language processing to collect text on past and current breaches, consolidate threat intelligence, and increase security knowledge;
- Use of reasoning to locate evidence of breaches, plan remediation and outcomes, and anticipate new threats and next steps; and
- Automation of tasks to reduce the burden on the human analyst and decrease reaction time.

AI can also be used to address the following types of security needs:
- Improving modeling of behaviors to better identify emerging and past threats and risks.
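As an illustrative sketch of the first technique above, a minimal anomaly detector might extract numeric features from events and flag those that fall far from a learned baseline. The features, values, and threshold below are hypothetical, not drawn from Rao's remarks:

```python
import numpy as np

# Hypothetical features per event: requests/minute and distinct ports
# touched. A baseline of "normal" behavior is summarized by per-feature
# mean and standard deviation; new events are scored by z-distance.

rng = np.random.default_rng(1)
baseline = rng.normal(loc=[100.0, 5.0], scale=[10.0, 1.0], size=(500, 2))

mu, sigma = baseline.mean(axis=0), baseline.std(axis=0)

def is_anomalous(event, threshold=4.0):
    """Flag events whose z-score exceeds the threshold in any feature."""
    z = np.abs((event - mu) / sigma)
    return bool(np.any(z > threshold))

print(is_anomalous(np.array([102.0, 5.3])))   # close to the baseline
print(is_anomalous(np.array([900.0, 60.0])))  # far outside the baseline
```

Real detectors use richer features and models, but the pattern is the same: extract features, fit a notion of "normal," and flag departures from it, including threats never seen before.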
From page 15...
... Rao described IBM's open-source Adversarial Robustness Toolbox, which is continually updated with attacks and countermeasures and can serve as a resource for developers of AI services. A workshop participant asked whether a balance exists between research on adversarial attacks and research on adversarial defenses, and whether the reward system favors one over the other.
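To make the notion of an evasion attack concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one of the classic attacks cataloged by toolkits such as the Adversarial Robustness Toolbox. The linear model, weights, and inputs are hypothetical:

```python
import numpy as np

# Hypothetical linear classifier: p(y=1|x) = sigmoid(w.x + b).
# FGSM perturbs x by eps in the direction of the sign of the loss
# gradient with respect to the input, pushing the model toward error.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step fast gradient sign attack on the logistic loss."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w          # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)

w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([0.5, 0.2]), 1    # correctly classified: w @ x + b > 0
x_adv = fgsm(x, y, w, b, eps=0.5)

print(sigmoid(w @ x + b) > 0.5)      # True: original is classified 1
print(sigmoid(w @ x_adv + b) > 0.5)  # False: perturbed copy flips
```

A small, structured perturbation suffices to flip the prediction; defenses are judged by how much they raise the cost of such perturbations.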
From page 16...
... An audience participant wondered if training models using adversarial examples could help build in a resistance to attacks. Athalye noted that this process has been shown to increase robustness to some degree, but current applications of adversarial training have not fully solved the problem, and scaling is a challenge.
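A minimal sketch of the adversarial training idea can be written as follows, assuming a toy logistic-regression model and single-step sign-gradient perturbations (an illustration of the general recipe, not Athalye's actual setup): each training step attacks the current model and fits it on the perturbed batch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_batch(X, y, w, b, eps):
    """Sign-gradient perturbation of every example in the batch."""
    p = sigmoid(X @ w + b)
    grad_X = (p - y)[:, None] * w        # per-example input gradient
    return X + eps * np.sign(grad_X)

# Toy linearly separable data (hypothetical).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w, b, lr, eps = np.zeros(2), 0.0, 0.1, 0.1
for _ in range(200):
    X_adv = fgsm_batch(X, y, w, b, eps)  # attack the current model
    p = sigmoid(X_adv @ w + b)           # then train on the perturbed copies
    w -= lr * (X_adv.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
```

Training against perturbed inputs buys robustness to perturbations of that size and type, which matches Athalye's caveat: it helps to a degree but does not solve the problem, and generating strong attacks inside every training step is expensive at scale.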
From page 17...
... Even more restricted settings, such as the Google Cloud Vision Application Programming Interface, are vulnerable to adversarial attacks, Athalye continued. He noted that of the hundreds of papers published on defenses against adversarial attacks, many propose defenses that lack mathematical guarantees.
From page 18...
... He added that with knowledge of time series models from the 1970s, the concept of adversarial examples should not come as a surprise to anyone. Another audience participant discussed the black-box inversion that results from making many queries against a model and the characterization that is needed to produce an adversarial example.
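The query-then-perturb pattern the participant described can be sketched as follows, assuming the attacker observes only the model's output score. The hidden model below is a hypothetical stand-in; the point is that its gradient can be estimated from queries alone, via finite differences:

```python
import numpy as np

def query(x):
    """Opaque model: the attacker sees only the returned score."""
    w_hidden, b_hidden = np.array([2.0, -1.0]), 0.0   # unknown to attacker
    return 1.0 / (1.0 + np.exp(-(w_hidden @ x + b_hidden)))

def estimate_grad(x, h=1e-4):
    """Finite-difference gradient estimate: 2*len(x) queries per call."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (query(x + e) - query(x - e)) / (2 * h)
    return g

x = np.array([0.5, 0.2])          # classified positive: query(x) > 0.5
g = estimate_grad(x)              # characterize the model by querying it
x_adv = x - 0.5 * np.sign(g)      # step against the score

print(query(x) > 0.5)             # True
print(query(x_adv) > 0.5)         # False: score pushed below threshold
```

The query cost grows with input dimension, which is why practical black-box attacks focus on reducing the number of queries needed to characterize the model well enough to produce an adversarial example.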

