Lethal Autonomous Weapons Systems (LAWS): The Humanitarian Consequences of Automating Harm  

Rapid advances in artificial intelligence, combined with growing global military investment in lethal autonomous weapons systems, make us deeply concerned that the complete automation of lethal harm may soon become a reality. Stated otherwise: removing humans from the decision-making process and ceding control over who lives and who dies to software and hardware would have dire humanitarian consequences.

We believe that artificial intelligence (AI) possesses the greatest potential of any technology in human history to revolutionize society for the better: to cure diseases, lift people from poverty, and improve the wellbeing of all people on our planet. However, we are deeply concerned that this positive future with AI may be curtailed by the use of AI to harm people, specifically to fully automate harm in the form of lethal autonomous weapons systems (LAWS).

Combining AI software with robotics hardware to automate lethal harm to humans would represent a third revolution in warfare, following gunpowder and nuclear weapons, and is on the brink of triggering a global arms race. Hence, there is a deeply urgent need for the international community to ensure this class of weapons never comes into existence: LAWS technology cannot assess the moral and legal legitimacy of harm, a decision that only humans have the authority to make. Furthermore, we believe that to ensure a positive future with AI, we must draw a moral line over its use cases, and autonomous killing falls clearly on the side of the morally repugnant.

The rapid development and increasing sophistication of artificial intelligence have challenged the public and policymakers to think about how it should be governed and used in society. The decisions we make as an international community on lethal autonomous weapons will set the precedent for how to apply the principles of human rights, ethics, and preexisting legal frameworks to artificial intelligence. The collective agreement that humans must retain meaningful control over the use of lethal force against other humans is a grounding principle in the broader discussion of AI governance. If global consensus cannot be reached on that principle, there is little guidance or moral standing for more nuanced conversations about the ethical use of artificial intelligence in civilian settings.

Recommended Reading:

Campaign to Stop Killer Robots: https://www.stopkillerrobots.org/learn/

The Future of Life Institute: https://futureoflife.org/lethal-autonomous-weapons-pledge/

PAX: https://www.paxforpeace.nl/publications/all-publications/where-to-draw-the-line

Human Rights Watch: https://www.hrw.org/report/2016/12/09/making-case/dangers-killer-robots-and-need-preemptive-ban#