Abstract

Deepfakes are manipulated media, often synthesized with machine learning, that create realistic digital impersonations, avatars, or derivative images based on pre-existing source material. Deepfakes are a source of technological innovation that can positively change our culture. However, “malicious” deepfakes pose serious threats to individuals and society at large, given the immediacy of their harm, their rapid dissemination, and their constant evolution, which defies easy definition. Among privacy, technology, and legal experts, crafting policy to address malicious deepfakes has become a contentious issue. This Article outlines the current gap in effective policy addressing malicious deepfakes. First, existing legal remedies are ineffective at addressing deepfake harms. Second, proposed deepfake legislation fares little better: it is often too broad, too narrow, or reliant on impractical requirements. Third, some scholars have rightly suggested that online platforms be held responsible for deepfake regulation. However, many of these proposals, including a “reasonable steps” standard, would strip online platforms of important protections under Section 230 of the Communications Decency Act. Ultimately, this Article sets forth a novel tripartite proposal to better address malicious deepfake harm while protecting technological innovation and expression. First, online platforms should provide extensive transparency disclosures to inform their users and the public about their practices regarding deepfakes and manipulated media. Second, the government should collaborate with the private sector to address deepfakes. Third, both online platforms and the government should invest in public education resources about deepfakes and media literacy. This proposal best addresses the unique characteristics of malicious deepfakes, preserves technological innovation, and balances the competing values underlying powerful deepfake technology.
