Reasons to Punish Autonomous Robots
Zac Cogley considers the design of robots that would be responsive to human criticism and blame.
Deploying autonomous robots in military contexts strikes many people as terrifying and morally odious. What lies behind those reactions? One thought is that if a sophisticated artificial intelligence were causally responsible for some harm, there would be no one to punish for it, because no one—not programmers, not commanders, and not the machines themselves—would be morally responsible. Call this the no appropriate subject of punishment objection to deploying autonomous robots for military purposes. The objection has been discussed by several authors (Matthias 2004; Lucas 2013; Danaher 2016), but is most fully developed in Robert Sparrow’s paper “Killer Robots” (2007).