Abstract

People have used deep fake technology to generate nonconsensual pornography (NCP) since at least 2017. With technological advances, deep fakes are increasingly easy to create and difficult to identify. This article explores the dearth of both technological and legal recourse for victims of deep fake NCP. It first reviews existing technical solutions for detecting deep fakes, finding that deep fake classifiers typically succeed only for a short time: as soon as computer scientists publish their work, others build on their discoveries to defeat the classification mechanism. Deep fake technology is now so advanced that classifiers cannot reliably detect deep fakes. Because technology cannot help victims of deep fake NCP trace and take down deep fake content, this article next explores potential avenues for legal redress. Current interpretations of § 230 of the Communications Decency Act immunize websites that host user-contributed content against state civil claims, obstructing victims’ attempts to effectuate takedown. While existing scholarship regularly notes that § 230 does not preclude copyright infringement claims, the process of filing a copyright claim is not well-suited to NCP victims and, even if it were, fair use doctrine likely protects websites that host deep fake NCP because of deep fakes’ transformative nature and because NCP impacts a market in which most victims do not participate. This article finds that copyright, designed to protect intellectual property rather than to address sexual violence, is an inappropriate solution to deep fake NCP. It ultimately concludes that Congress should revise § 230 of the Communications Decency Act to provide legal recourse to victims of deep fake NCP.
