
The Social Impact of Deepfakes

In many ways, this special issue was inspired by a visit to the University of Washington in 2018. Seitz and his colleagues had just published the algorithms1 that enabled their now-famous Obama video, in which a few hours of simple audio clips could drive high-quality video lip syncing. At the end of the video, a young Obama audio clip is parroted perfectly by a video version of Obama who is twice his age. This is likely the most canonical, if not the original, "deepfake" video. It is enabled by machine learning, which uses multiple videos as a training set to categorize speech into "mouth shapes," which are then integrated into an existing target video. The outcome is a stunningly real video that few would give a second glance: it simply looks like President Obama talking.

Aside from the realism of the videos, two things about Seitz's presentation were striking. First, the algorithms that build deepfakes are easier to build than to detect, by the very nature of the Generative Adversarial Networks they employ. According to Goodfellow,2 these models are constructed by pitting "counterfeiters" against "police," and a successful model has, by definition, already shown that its fakes can beat detection methods. Indeed, as deepfakes have migrated from top computer science laboratories to cheap software platforms all over the world, researchers have also focused on defensive algorithms that could detect the deception (see Tolosana et al.3 for a recent review). But Seitz was not confident in this strategy, and likened the spiral of deception and detection to an arms race, with the algorithms that deceive holding the early advantage over those that detect.

The second eye-opener was the many social and psychological questions that these deepfakes raise: Does exposure to deepfakes undermine trust in the media? How might deepfakes be used during social interactions? Are there strategies for debunking or countering deepfakes?
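The "counterfeiters versus police" dynamic can be caricatured in a few lines of runnable code. This is a sketch for intuition only, not the architecture Goodfellow describes: the "generator" here is a single mean parameter and the "discriminator" a single threshold, both hypothetical stand-ins for the neural networks a real GAN would use.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data come from N(REAL_MEAN, 1). The counterfeiter (generator) forges
# samples from N(mu, 1) and starts far away, at mu = 0.
REAL_MEAN = 4.0
mu = 0.0          # generator parameter: mean of the forged distribution
boundary = 2.0    # discriminator parameter: decision threshold ("police")
lr = 0.05

for step in range(1000):
    real = rng.normal(REAL_MEAN, 1.0, 256)
    fake = rng.normal(mu, 1.0, 256)

    # Police step: re-fit the threshold to sit between the real and fake
    # batches -- the best a one-parameter classifier can do here.
    boundary += lr * ((real.mean() + fake.mean()) / 2.0 - boundary)

    # Counterfeiter step: nudge the forged distribution toward the side the
    # discriminator currently labels "real" (in a true GAN this signal comes
    # from backpropagating through the discriminator).
    mu += lr * (boundary - mu)

# Share of fakes the trained discriminator now misclassifies as real.
fooled = (rng.normal(mu, 1.0, 10_000) > boundary).mean()
print(f"mu={mu:.2f}, boundary={boundary:.2f}, fooled={fooled:.2f}")
```

At equilibrium the forged distribution sits on top of the real one and the discriminator is reduced to roughly chance (about half the fakes slip past it), which is the sense in which a successfully trained generative model has, by construction, already beaten its own detector.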
There has been ample work in computer science on the automatic generation and detection of deepfakes, but to date only a handful of social scientists have examined the social impact of the technology. It is time to understand the possible effects deepfakes might have on people, and how psychological and media theories apply.

Author(s)
Jeffrey T. Hancock
Jeremy Bailenson
Publisher
Cyberpsychology, Behavior, and Social Networking
Publication Date
March 17, 2021