Abstract
In the short story “Runaround,” science fiction author Isaac Asimov first introduced the world to an ethical framework for artificial intelligence known as the Three Laws of Robotics. These laws state: (1) “a robot may not injure a human being or, through inaction, allow a human being to come to harm”; (2) “a robot must obey the orders given it by human beings except where such orders would conflict with the First Law”; and (3) “a robot must protect its own existence as long as such protection does not conflict with the First and Second Laws.” On the surface, the Three Laws appear to provide a tidy regulatory framework for alleviating society’s concerns about how and when machines may adversely interact with humans, preventing harm or even death. The Three Laws are particularly appealing today, when robots and artificial intelligence are no longer the stuff of science fiction but increasingly part of our everyday lives. Yet society cannot rely on Asimov’s Three Laws of Robotics to provide a much-needed regulatory framework for artificial intelligence. These Laws are not only fictional but also practically flawed, because they place the legal, as well as the ethical, duties on the artificial intelligence rather than on the actual intelligence—the human—behind the machine.
Recommended Citation
William Goodrum & Jacqueline Goodrum, Beyond the Three Laws: An Argument for Regulating Data Scientists as Fiduciaries, 27 Rich. J.L. & Tech. 1 (2024), https://scholarship.richmond.edu/jolt/vol27/iss3/1.