

6 Recent Trends in Machine Learning, Parts 1 and 2
Pages 23-34



From page 23...
... Classic machine learning presumes that all classes are known and classifies the entire feature space; that assumption does not hold in open-set problems. To address this problem, Boult's team developed an extreme value machine (EVM)
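The extreme-value idea behind the EVM can be sketched in a few lines: fit a Weibull distribution to the tail of distances separating a class's positive instances from their nearest negatives, then treat one minus its CDF as an inclusion probability that decays for inputs far from the class. The synthetic distances and the crude grid-search fit below are hypothetical stand-ins, not Boult's implementation:

```python
import math
import numpy as np

# Synthetic "margin distances" standing in for distances from a class's
# positive deep features to their nearest negatives (hypothetical data).
rng = np.random.default_rng(0)
tail = rng.weibull(2.0, size=200) * 3.0

def weibull_cdf(x, shape, scale):
    return 1.0 - np.exp(-(np.asarray(x) / scale) ** shape)

def fit_weibull(samples):
    """Crude fit: grid-search the shape, set the scale by moment matching."""
    xs = np.sort(samples)
    ecdf = np.arange(1, len(xs) + 1) / (len(xs) + 1)
    best = (np.inf, None, None)
    for k in np.linspace(0.5, 5.0, 46):
        lam = xs.mean() / math.gamma(1.0 + 1.0 / k)
        err = np.mean((weibull_cdf(xs, k, lam) - ecdf) ** 2)
        if err < best[0]:
            best = (err, k, lam)
    return best[1], best[2]

shape, scale = fit_weibull(tail)

def inclusion_probability(distance):
    """High for inputs close to the class, decaying toward zero for unknowns."""
    return 1.0 - weibull_cdf(distance, shape, scale)

print(inclusion_probability(0.1), inclusion_probability(10.0))
```

Thresholding this probability gives the EVM-style behavior the excerpt describes: nearby inputs are claimed by the class, distant inputs are rejected as unknown rather than forced into a known class.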
From page 24...
... OpenMax takes one of the deep feature layers and uses it to represent a particular class from positive instances of that class, building an extreme value distribution to provably solve open-set problems. OpenMax examines distances in the representational space and provides probability-based estimates for image classes; an adversarial image is not consistent with the other representations, so OpenMax labels it as unknown (see Figure 6.1)
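A minimal sketch of that recalibration step, with toy class means standing in for the deep feature layer: each class keeps only the fraction of its score consistent with its feature region, and the removed mass becomes an explicit "unknown" probability. The real OpenMax derives these weights from fitted extreme value distributions; the linear cutoff `tau` here is a simplification:

```python
import numpy as np

# Hypothetical per-class mean feature vectors (standing in for the deep
# feature layer that OpenMax operates on).
class_means = np.array([[4.0, 0.0], [0.0, 4.0], [-4.0, -4.0]])

def openmax_like(feature, means, tau=3.0):
    """OpenMax-flavored sketch: shrink each class score by how far the
    feature sits from that class's region; route the removed mass to an
    extra 'unknown' class appended at the end."""
    d = np.linalg.norm(means - feature, axis=1)  # distance to each class
    w = np.clip(1.0 - d / tau, 0.0, 1.0)         # inclusion weight per class
    s = np.exp(-d)                                # toy similarity score
    scores = np.append(w * s, np.sum((1.0 - w) * s))  # last entry = unknown
    return scores / scores.sum()

p_known = openmax_like(np.array([4.0, 0.3]), class_means)   # near class 0
p_weird = openmax_like(np.array([10.0, 10.0]), class_means) # far from all
print(p_known.round(2), p_weird.round(2))
```

A feature consistent with a class keeps most of its probability there, while a feature far from every class (as an adversarial image's features often are) concentrates its mass on the unknown slot.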
From page 25...
... Adversarial examples are image perturbations that are invisible to humans but can easily fool deep networks. An open set is naturally thought of in "image space," but the methods work in "feature space." Adversarial examples show that for current deep neural networks, the two spaces are often not well related.
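To make the image-space/feature-space mismatch concrete, here is a hedged sketch of the standard fast gradient sign method on a toy linear logistic model (the weights are hypothetical, not from any network in the talk); a small signed step per input coordinate flips the predicted label:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained linear classifier (hand-picked weights).
w = np.array([1.0, -2.0, 0.5])

def fgsm(x, y, eps):
    """One signed gradient step on the input; for a logistic model the
    input gradient of the loss is (p - y) * w."""
    p = sigmoid(w @ x)
    return x + eps * np.sign((p - y) * w)

x = np.array([2.0, 0.5, 1.0])    # w @ x = 1.5, confidently class 1
x_adv = fgsm(x, y=1.0, eps=0.5)
print(sigmoid(w @ x), sigmoid(w @ x_adv))  # confidence drops below 0.5
```

The perturbation is bounded per coordinate (small in "image space"), yet it moves the input across the decision boundary in the model's feature space, which is exactly the disconnect the excerpt describes.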
From page 26...
... The field now uses multiple networks based on ResNet and InceptionNet, and researchers have achieved a true acceptance rate of around 90 percent at a false acceptance rate of 1 in 10 million for the unconstrained face verification problem on a challenging face data set, according to Chellappa. Deep learning techniques are being used for the IARPA Deep Intermodal Video Analytics (DIVA)
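The metric Chellappa quotes, true acceptance rate at a fixed false acceptance rate, can be computed directly from verification scores. The Gaussian scores below are toy data, and measuring a FAR of 1 in 10 million for real would need on the order of 10^8 impostor pairs, so this sketch uses a looser FAR:

```python
import numpy as np

def tar_at_far(genuine, impostor, far):
    """Set the acceptance threshold so only `far` of impostor scores pass,
    then measure the fraction of genuine scores above that threshold."""
    thr = np.quantile(impostor, 1.0 - far)
    return float(np.mean(genuine >= thr))

rng = np.random.default_rng(1)
genuine = rng.normal(3.0, 1.0, 10_000)    # toy same-identity match scores
impostor = rng.normal(0.0, 1.0, 100_000)  # toy different-identity scores
print(tar_at_far(genuine, impostor, far=1e-3))
```

Tightening the FAR raises the threshold and can only lower the TAR, which is why the 90-percent-at-1-in-10-million operating point is a strong result.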
From page 27...
... Typically, learning algorithms perform poorly on samples drawn from a distribution other than that of the training data, and deep models trained on synthetic data do not generalize well to real data. However, labeling real data is difficult and time consuming.
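A minimal illustration of that generalization gap, using two toy Gaussian classes and a nearest-class-mean classifier in place of a deep model (all numbers are synthetic): accuracy is high on the training distribution and drops sharply when the test distribution shifts:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(shift, n=2000):
    """Two Gaussian classes; `shift` moves the whole distribution,
    a toy stand-in for the synthetic-to-real gap."""
    y = rng.integers(0, 2, n)
    x = rng.normal(0.0, 1.0, (n, 2)) + np.where(y[:, None] == 1, 2.0, -2.0) + shift
    return x, y

# "Train" a nearest-class-mean classifier on unshifted ("synthetic") data.
x_tr, y_tr = make_data(shift=0.0)
mu0, mu1 = x_tr[y_tr == 0].mean(axis=0), x_tr[y_tr == 1].mean(axis=0)

def accuracy(x, y):
    pred = np.linalg.norm(x - mu1, axis=1) < np.linalg.norm(x - mu0, axis=1)
    return float(np.mean(pred.astype(int) == y))

acc_in = accuracy(*make_data(shift=0.0))   # same distribution as training
acc_out = accuracy(*make_data(shift=2.5))  # shifted "real" distribution
print(acc_in, acc_out)
```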
From page 28...
... He explained that, given our robust statistics literature, it should not be a surprise that even small perturbations can break deep learning networks, which are hierarchical nonlinear regression models. He next discussed two current defense approaches: Defense-GAN (i.e., an algorithm to detect outliers; see Figure 6.3)
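The core Defense-GAN step is to project an input onto the generator's range, solving min_z ||G(z) - x||^2, and classify the projection instead of the raw input. The sketch below uses a linear stand-in generator so the projection can be checked exactly; a real Defense-GAN generator is a trained deep network:

```python
import numpy as np

# Toy linear "generator": its 2-D range inside R^4 plays the role of the
# manifold of clean images.  Everything here is a hypothetical stand-in.
G = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])

def project(x, steps=200, lr=0.05):
    """Defense-GAN core step: find z minimizing ||G z - x||^2 by gradient
    descent, then hand G z (not x) to the classifier."""
    z = np.zeros(G.shape[1])
    for _ in range(steps):
        z -= lr * (G.T @ (G @ z - x))  # gradient of the reconstruction loss
    return G @ z

x_clean = G @ np.array([1.0, 2.0])                 # on the "image manifold"
x_adv = x_clean + np.array([-1.0, 0.0, 0.5, 0.5])  # off-manifold perturbation
x_def = project(x_adv)
print(np.linalg.norm(x_def - x_clean))  # projection removes the perturbation
```

Because adversarial perturbations tend to push inputs off the generator's manifold, projecting back onto it strips the perturbation while preserving the clean content, which also makes the reconstruction error usable as an outlier score.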
From page 29...
... He described this as evidence that a revolution is impending and noted that regulatory and acceptance issues central to the medical field might also be of interest to the IC.

RECENT ADVANCES IN OPTIMIZATION FOR MACHINE LEARNING
Tom Goldstein, University of Maryland

Tom Goldstein, University of Maryland, explained that his talk would focus on theoretical perspectives on adversarial examples.
From page 30...
... In this instance, it only took one poison image to change the behavior of the classifier at test time. Terry Boult, University of Colorado, Colorado Springs, asked if recomputing happens at every stage, since the feature representations are evolving, and Goldstein responded that since one is only training the deeper layers of
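The single-poison effect can be illustrated with a toy feature-collision setup (all numbers hypothetical): place one training point at the target's feature location but with the wrong label, retrain only the simple classifier that sits on top of the features, and the target's prediction flips:

```python
import numpy as np

# Toy 1-D deep-feature space: class A features near 0, class B near 4.
# A nearest-class-mean rule stands in for retraining only the final layer.
feats_a = np.zeros(5)
feats_b = np.full(5, 4.0)
target = 2.1  # a test point whose true class is B

def predict(x, a, b):
    return 'A' if abs(x - a.mean()) < abs(x - b.mean()) else 'B'

before = predict(target, feats_a, feats_b)

# Feature-collision poisoning: one point at the target's feature location
# but *labeled* A, added to A's training set, drags A's mean toward the target.
poisoned_a = np.append(feats_a, target)
after = predict(target, poisoned_a, feats_b)
print(before, after)  # the single poison flips the target's label
```

The poison is effective precisely because only the top of the network is retrained: one point near the decision boundary in feature space can move that boundary past the target.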
From page 31...
... This random sampling process produces adversarial behavior, and this is the most robust classifier one could construct. Lévy and Pellegrino's 1951 isoperimetric inequality theorem states that the ε-expansion of any set that occupies at least half of the sphere is at least as large as the ε-expansion of a hemisphere.
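In symbols (my paraphrase, not a verbatim statement from the talk; normalization and constants vary across references), with μ the normalized measure on the sphere and A_ε the ε-expansion of A in geodesic distance:

```latex
% Spherical isoperimetric inequality, plus the concentration bound
% usually derived from it (C, c are universal constants).
\mu(A) \ge \tfrac{1}{2}
\quad\Longrightarrow\quad
\mu(A_\varepsilon) \ge \mu(H_\varepsilon),
\qquad H \text{ a hemisphere},

\mu(A_\varepsilon) \ge 1 - C\, e^{-c\, n\, \varepsilon^{2}} .
```

One common reading in the adversarial-examples literature: once a classifier's error set occupies a constant fraction of the sphere, in high dimension almost every input lies within geodesic distance ε of a misclassified point.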
From page 32...
... Goldstein noted that Fawzi et al.'s work uses a different set of assumptions, so it is difficult to compare the two.

FORECASTING USING MACHINE LEARNING
Aram Galstyan, Information Sciences Institute, University of Southern California

Aram Galstyan, University of Southern California, opened his presentation with a discussion of a 1.5-year-long project on hybrid forecasting of geopolitical events before moving to a brief discussion about machine learning.
From page 33...
... For the sake of simplicity for users, Phase 1 of the project has only one model, the AutoRegressive Integrated Moving Average (ARIMA) model for time series forecasts, which is not a machine learning model.
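As a hedged illustration of the "AR" core of ARIMA (a full ARIMA(p, d, q) adds differencing and a moving-average term, and Galstyan's system presumably uses a library implementation, not this), here is an AR(1) fit and forecast on a synthetic series:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic AR(1) series standing in for a geopolitical time series.
n, phi = 400, 0.8
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.normal()

# Least-squares fit of the AR(1) coefficient: the "AR" part of ARIMA.
phi_hat = float(np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1]))

def forecast(last, steps):
    """h-step-ahead forecasts decay geometrically toward the series mean (0)."""
    return [last * phi_hat ** h for h in range(1, steps + 1)]

print(round(phi_hat, 2), [round(v, 2) for v in forecast(y[-1], 3)])
```

The fitted coefficient recovers the generating value to within sampling error, and the forecast shrinks toward the mean as the horizon grows, the characteristic behavior of a stationary AR model.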
From page 34...
... Past events contain useful information for predictions of future events, so the past history of events can be leveraged to achieve better-than-random accuracy. Carefully constructing the training data set allows performance better than the baseline, according to Galstyan.
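A toy version of that better-than-random claim: estimate an event's base rate from past history and compare its Brier score (the mean squared error of a probability forecast, lower is better) against an uninformed 0.5 forecast. The event stream below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic event stream: an event occurs each day with probability 0.3.
history = rng.random(5000) < 0.3
train, test = history[:4000], history[4000:]

base_rate = train.mean()  # forecast learned from past events
random_forecast = 0.5     # uninformed baseline

def brier(p, outcomes):
    """Mean squared error of a probability forecast (lower is better)."""
    return float(np.mean((p - outcomes) ** 2))

print(brier(base_rate, test), brier(random_forecast, test))
```

Even this crudest use of history beats the uninformed baseline; the point in the excerpt is that careful construction of the training set widens that gap further.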

