5 Security Risks of Artificial Intelligence-Enabled Systems
Pages 44-53



From page 44...
... Panelists included Nicolas Papernot, research scientist at Google Brain; Bo Li, assistant professor in the Department of Computer Science at the University of Illinois, Urbana-Champaign; and Zico Kolter, assistant professor in the Department of Computer Science at Carnegie Mellon University and chief scientist of the Bosch Artificial Intelligence Center. As more systems, including military and intelligence systems, become AI-enabled, thinking about how adversaries might exploit them becomes more critical.
From page 45...
... He went on to provide specific examples for three of the eight principles.3
1 B.W. Lampson, 2004, "Computer Security in the Real World," IEEE Computer 37(6).
From page 46...
... Since everyone has their own idea of what privacy means, security researchers have coalesced around a definition known as differential privacy, a mathematical framework for answering database queries as accurately as possible while limiting what the answers reveal about any individual's data.4 Differentially private algorithms have been developed so that an adversary cannot tell whether any particular individual's data was included in a training set, and therefore cannot learn anything about those individuals from the data they contributed. The standard algorithm for training ML models is stochastic gradient descent, which takes a batch of data, computes the error, computes the gradients of the error with respect to the model parameters, and applies the gradients to update the model parameters.
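As a concrete illustration of the training loop described above (not drawn from the workshop itself), the following minimal sketch implements one stochastic gradient descent step for a toy linear model; the model, loss, and data are placeholders. A differentially private variant such as DP-SGD would additionally clip each per-example gradient and add calibrated noise before applying the update.

import numpy as np

def sgd_step(w, X_batch, y_batch, lr=0.01):
    """One SGD update for a linear model y_hat = X @ w with squared-error loss."""
    y_hat = X_batch @ w                        # forward pass on the batch
    error = y_hat - y_batch                    # prediction error
    grad = X_batch.T @ error / len(y_batch)    # gradient of 0.5 * mean squared error w.r.t. w
    return w - lr * grad                       # apply the gradient to update the parameters

# Toy usage: recover a 3-parameter linear model from noisy data.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=256)
w = np.zeros(3)
for _ in range(500):
    idx = rng.choice(256, size=32, replace=False)   # sample a mini-batch
    w = sgd_step(w, X[idx], y[idx])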
From page 47...
... However, for an adversarially manipulated input, at some layer the test point's representation and label diverge from those of the training data whose input representations were similar, and the test point is ultimately mislabeled. Papernot suggested that examining the labels at each stage of a deep neural net, and potentially imposing constraints on the structure of the process across all layers, might help to identify adversarially manipulated inputs or reduce their impact.
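A highly simplified sketch of that layer-wise consistency check is given below. It assumes per-layer feature arrays and integer class labels are already available, and it omits the calibration of the full method Papernot described; the function names and threshold are illustrative.

import numpy as np

def layerwise_label_agreement(train_layer_reps, train_labels, test_layer_reps, k=10):
    """For each layer, return the fraction of the k nearest training points
    (in that layer's representation space) that share the neighborhood's most
    common label; low agreement at deeper layers is the warning sign."""
    agreements = []
    for train_rep, test_rep in zip(train_layer_reps, test_layer_reps):
        dists = np.linalg.norm(train_rep - test_rep, axis=1)    # distance to every training point
        neighbor_labels = train_labels[np.argsort(dists)[:k]]   # labels of the k closest points
        agreements.append(np.bincount(neighbor_labels).max() / k)
    return agreements

def looks_adversarial(agreements, threshold=0.7):
    """Flag the input if neighbor-label agreement drops below the threshold at any layer."""
    return any(a < threshold for a in agreements)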
From page 48...
... He noted that these complementary synergies could be explored through research on the relationship between private learning, robust learning, and generalization, or on the relationship between data poisoning and learning from noisy data or under distribution drift. As a final note, he quoted Goodhart's law: "When a measure becomes a target, it ceases to be a good measure."7

SECURE LEARNING IN ADVERSARIAL PHYSICAL ENVIRONMENTS
Bo Li, University of Illinois, Urbana-Champaign

Li discussed results from her research into physical-world adversarial attacks on ML systems.
From page 49...
... Li showed examples of successful attacks against the sophisticated object detection systems YOLO8 and Fast R-CNN,9 based on physical modification of a stop sign. For the YOLO attacks, she showed two videos, side by side, of a car approaching a stop sign. In one, the stop sign was unmodified.
From page 50...
... WORKING TOWARD FORMALLY ROBUST ML
Zico Kolter, Carnegie Mellon University

Kolter noted that the workshop speakers had provided many examples of how ML systems can be broken by an adversary. Deep neural networks are vulnerable to both physical and digital attacks, and many proposed defenses against adversarial attacks have proved ineffective.
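As one standard example of the kind of digital attack the speakers refer to (illustrative only, not an attack presented at the workshop), the fast gradient sign method perturbs an input in the direction that most increases the model's loss. The tiny untrained model and the perturbation budget epsilon below are placeholders.

import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return an adversarial copy of x within an L-infinity ball of radius epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the sign of the input gradient to increase the loss, then clip to the valid pixel range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Toy usage on an untrained linear classifier over 32x32 RGB images.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)      # a batch of 4 random "images"
y = torch.randint(0, 10, (4,))    # arbitrary labels
x_adv = fgsm_attack(model, x, y)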
From page 51...
... The process requires about 50 times more computation than a normal network, which is not a problem for big companies with significant compute resources. The research is now at a point where such complex networks can be trained; however, on training data from CIFAR,12 the network achieved only about 46 percent provable accuracy, meaning that although the computational problem has been solved, the accuracy gap on complex data sets remains large and needs to be closed, Kolter said.
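To make "provable accuracy" concrete, the sketch below certifies individual inputs using interval bound propagation, one simple certification scheme; it is not necessarily the convex-relaxation approach from Kolter's own work, and the network weights and epsilon are placeholders. A data set's provable accuracy is the fraction of test points for which such a certificate holds.

import numpy as np

def interval_bounds(lower, upper, W, b):
    """Elementwise bounds of W @ x + b when x lies in the box [lower, upper]."""
    center, radius = (upper + lower) / 2, (upper - lower) / 2
    mid = W @ center + b
    spread = np.abs(W) @ radius
    return mid - spread, mid + spread

def certified(x, label, layers, epsilon):
    """True if no perturbation of x within L-infinity distance epsilon can change the
    predicted class of a feed-forward ReLU network given as a list of (W, b) pairs."""
    lower, upper = x - epsilon, x + epsilon
    for i, (W, b) in enumerate(layers):
        lower, upper = interval_bounds(lower, upper, W, b)
        if i < len(layers) - 1:                      # ReLU on every hidden layer
            lower, upper = np.maximum(lower, 0), np.maximum(upper, 0)
    worst_true = lower[label]                        # worst-case logit for the true class
    best_other = np.max(np.delete(upper, label))     # best case for any other class
    return worst_true > best_other                   # provably correct on the whole ball

# Toy usage: a random two-layer network on a 4-dimensional input.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 4)), np.zeros(8)), (rng.normal(size=(3, 8)), np.zeros(3))]
print(certified(rng.normal(size=4), label=0, layers=layers, epsilon=0.01))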
From page 52...
... Physical-World Attacks

Vorobeychik asked the panelists to elaborate further on how defenders would deal with attacks that modify physical objects, and how to create an appropriate abstraction of the constraints on such attacks. Kolter posited that researchers might consider creating generative models of the physical world.
From page 53...
... Papernot said ML is different from cryptography or formal methods because it has many different application domains; instances of ML are domain specific and may not transfer from one domain to another. For example, a robust vision model might not prove robust for malware detection, and the differential privacy formalism he discussed does have limitations.

