6 Deep Fakes
Pages 54-60

From page 54...
... He also contrasted deep fakes with adversarial examples: adversarial examples are inputs designed to fool the perceptual system of a machine, whereas deep fakes are designed to fool humans. He pointed out that deep fakes can take the form of machine-generated images or text, and suggested the potential for what he called deep head fakes, in which, over time, machine-generated content could override one's entire mental model of what reality is.
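
To make that contrast concrete, the following is a minimal sketch of the fast gradient sign method, one well-known way to craft an adversarial example; the PyTorch classifier, batched image tensor, label, and epsilon value are illustrative placeholders, not anything presented at the workshop.

import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.03):
    # Perturb `image` so the classifier misreads it while it still looks
    # essentially unchanged to a human; `epsilon` bounds the per-pixel change.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel in the direction that increases the classifier's loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()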
From page 55...
... Deep fakes have rapidly emerged as a cruel and destructive tool for character assassination and cyberbullying, Stokes said. Among other examples, he described an early instance in which the Facebook photos of a private Australian citizen were used to create lewd deep fakes that spread across the Internet along with her personally identifiable information. With no real legal recourse, she experienced severe online and in-person harassment, which was only exacerbated when some Internet services removed the content at her request.
From page 56...
... Manual investigation relies on humans to detect fakes. For example, several news outlets have created deep fake task forces that train editors and reporters to recognize deep fakes and that develop newsroom guidelines.
From page 57...
... Generative adversarial networks (GANs) have been able to create increasingly realistic synthetic images, but these images often contain artifacts that aid detection.
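
One illustrative family of artifact-based detectors works in the frequency domain, since GAN upsampling layers often leave periodic traces in an image's spectrum. The sketch below is a toy version of that idea, assuming a grayscale image supplied as a NumPy array; the cutoff radius and decision threshold are arbitrary assumptions, not a published detector.

import numpy as np

def high_frequency_energy(gray_image: np.ndarray) -> float:
    # Fraction of spectral energy outside a central low-frequency disk;
    # GAN upsampling artifacts tend to inflate this value.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_image))) ** 2
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    radius = min(h, w) // 8  # "low frequency" cutoff, chosen arbitrarily
    low_mask = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 <= radius ** 2
    return float(spectrum[~low_mask].sum() / spectrum.sum())

def looks_synthetic(gray_image: np.ndarray, threshold: float = 0.5) -> bool:
    # A practical detector would learn this decision from labeled data
    # rather than use a fixed threshold.
    return high_frequency_energy(gray_image) > threshold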
From page 58...
... Wrapping up, Rao said that the quality of synthetic media will continue to improve, and that as synthetic media become increasingly successful at fooling humans, algorithmic detection methods will become more and more indispensable. The potential scale of distribution is a large part of the problem, and the growing adoption of private and ephemeral messaging products poses further challenges: while public data can be monitored, fact-checked, and used to create training data sets, private or ephemeral data cannot.
From page 59...
... He pointed to a 2018 paper by his team which found that humans are affected by adversarial examples even after brief exposure, and that some inputs misclassified by advanced machines might also be misclassified by humans; a malicious actor could extract this information to trick humans under certain conditions. Another participant noted that, beyond deep fakes designed to deceive the public, a deep fake could be used in a military scenario to influence short-term decision-making, with catastrophic consequences.
From page 60...
... Might it be better to stop trying to detect fakes and look instead to random biometric authentication challenges, which make it harder for an attacker to convincingly pose as a human? While several speakers noted the challenges involved in addressing deep fakes via legislation, Tyler Moore, University of Tulsa, argued that intent matters and can be legislated against, in the way that impersonating a police officer is illegal in certain circumstances.
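
As a rough illustration of the challenge-response idea behind that question, the sketch below shows a verifier issuing a randomized liveness challenge that a pre-rendered fake could not anticipate; the challenge list, timeout, and the abstracted biometric comparison are hypothetical assumptions, not a scheme proposed at the workshop.

import secrets
import time

CHALLENGES = ["turn your head to the left", "blink twice", "read aloud: {nonce}"]

def issue_challenge():
    # An unpredictable nonce and a randomly chosen action mean the expected
    # response cannot be synthesized in advance.
    nonce = secrets.token_hex(4)
    action = secrets.choice(CHALLENGES).format(nonce=nonce)
    return action, nonce, time.monotonic()

def verify(response_matches: bool, issued_at: float, timeout_s: float = 10.0) -> bool:
    # Accept only a correct response produced within the time window, which
    # limits an attacker's opportunity to synthesize a matching fake on the
    # fly. The actual biometric comparison is abstracted into `response_matches`.
    return response_matches and (time.monotonic() - issued_at) <= timeout_s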

