Abstract

In a little over a year, the possibility of a complete ban on autonomous weapon systems—known colloquially as “killer robots”—has evolved from a proposal in an NGO report to the subject of an international meeting with representatives from over eighty states. However, no one has yet put forward a coherent definition of autonomy in weapon systems from a law of armed conflict perspective, which often results in the conflation of legal, ethical, policy, and political arguments. This Article therefore proposes that an “autonomous weapon system” be defined as “a weapon system that, based on conclusions derived from gathered information and preprogrammed constraints, is capable of independently selecting and engaging targets.”

Applying this definition, and contrary to the nearly universal consensus, it quickly becomes apparent that autonomous weapon systems are not weapons of the future: they exist and have already been integrated into states’ armed forces. The fact that such weaponry is currently being used with little critique has a number of profound implications. First, it undermines pro-ban arguments based on the premise that autonomous weapon systems are inherently unlawful. Second, it significantly reduces the likelihood that a complete ban would be successful, as states will be unwilling to voluntarily relinquish otherwise lawful and uniquely effective weaponry.

But law is not doomed to follow technology: if used proactively, law can channel the development and use of autonomous weapon systems. This Article concludes that intentional international regulation is needed, now, and suggests how such regulation may be designed to incorporate beneficial legal limitations and humanitarian protections.

Document Type: Article

Publication Date: 2015
