Questions of Morality
Q005 | July 27, 2015
..
Should we continue to develop autonomous military weapons?
In 2012, the U.S. Department of Defense (DoD) issued its first public policy on autonomous weapons systems (AWS). DoD Directive 3000.09 defines an AWS as “a weapon system that, once activated, can select and engage targets without further intervention by a human operator,” including “human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation.”
The same document includes “guided munitions that can independently select and discriminate targets” among AWS. Now, imagine a bomb that has the capacity to “select and discriminate targets” without human intervention, guided by its software alone. (John Markoff, “Fearing Bombs That Can Pick Whom to Kill,” The New York Times, November 11, 2014.)
For the general public, and in particular for researchers, we are far from the robots imagined by 1950s and 1960s American cartoonists and revived or reinvented by Hollywood over and over. We are far from RoboCop and S.H.I.E.L.D.’s Helicarrier, but autonomous weapons are already part of military technology in use. The United States, Israel, and Norway lead the technological development, and they aim high.
A few weeks ago, the Future of Life Institute, a non-profit organization based in the U.S., launched an international petition against AWS. They warn that, though “AI has great potential to benefit humanity in many ways,” “starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.” A similar call for a ban on “killer robots,” signed by experts from 37 countries, was released in mid-October 2013. It argued that, “given the limitations and unknown future risks of autonomous robot weapons technology,” the development and deployment of autonomous weapons should be prohibited. Moreover, the experts wrote that “decisions about the application of violent force must not be delegated to machines.”
One major argument put before the general public by supporters of AWS is that autonomous weapons will limit the number of soldiers dying on foreign soil. A second argument is that these weapons will also reduce friendly-fire incidents and civilian casualties, because they will be able to discriminate targets better than a human being can (given the limitations of human senses and the promise of technological enhancement).
Though there seem to be some benefits in developing autonomous weapons, is there sufficient moral ground to stop this process and avoid an AI arms race?
..