Shallow Fakes

By: Albertina Antognini & Andrew Keane Woods*

Abstract 

Scholars and policymakers are rightly concerned with online deception, especially intentional efforts to spread fake news. But the problem of deception on social media is both subtler and more endemic than a series of malicious actors disseminating deepfakes. While platforms are indeed awash in fakery, many of these fakes are shallow—superficial tweaks to one’s self-presentation—and they are of the platforms’ own making.

Every day on social media, users place filters on their selfies, post photos out of context, and otherwise present a fake version of their lives. This is no accident. The ability to curate a better-than-real image is the sine qua non of social media platforms, whose business model relies on blurring the distinction between true and false, authentic and inauthentic, and, ultimately, content and advertising.

We argue that this widespread superficial fakery leads to a host of underappreciated costs. At a bare minimum, the sheer scale of deception warrants greater scrutiny, which would require more information sharing from the platforms. What little we do know is troubling. The platforms’ internal research shows that users, especially younger users, feel enormous pressure to adhere to a specific ideal of beauty. The pressure to conform is intense and manifests itself in traditionally gendered and racialized ways, with harms often falling on already-marginalized groups. Then there are epistemic and democratic concerns. The erosion of public trust and political polarization are often pinned on digital echo chambers, foreign influence campaigns, or both. But what share of the blame belongs to the fact that so much of everyday life takes place in a space that is marked by constant, casual deception?

This Article defines shallow fakes and explains their centrality to the social media ecosystem. It then turns normative, assessing the costs of shallow fakes, which often slip through the hard and soft law governing other kinds of public information sharing, such as advertising and journalism. We end with prescriptions, chief among them a need for more transparency around how the platforms operate.

* James E. Rogers Professor of Law and Milton O. Riepe Professor of Law, respectively, University of Arizona James E. Rogers College of Law. The authors thank workshop participants at the Technologies of Deception conference organized by Yale Law School’s Information Society Project, Stanford Law School’s Grey Fellow’s Forum, the Family Law Scholars and Teachers Conference, and the University of Arizona. We are especially grateful for feedback from Michael Boucai, Andy Coan, Derek Bambauer, Jane Bambauer, Ellie Bublick, Joanna Grossman, Aníbal Rosario-Lebrón, George Fisher, Jill Hasday, Lynne Henderson, Xiaoqian Hu, Mugambi Jouet, Shalev Roisman, Charisa Kiyô Smith, Tyler Valeska, Deepa Varadarajan, and Tammi Walker. 
