University of Sheffield – Autonomous Weapons Debate
What should we do about killer robots?
This isn’t a debate to be shelved until we reach some distant sci-fi future. As we delegate more and more tasks to robots, it’s time to ask ourselves: Are we comfortable with giving them the power to decide when to kill?
In April 2015, a multilateral meeting is taking place at the United Nations in Geneva. Representatives from the Campaign to Stop Killer Robots hope it will be another step on the path towards a ban on “lethal autonomous weapons systems”.
One of the leading campaigners pushing for this ban is Noel Sharkey, a Professor of Artificial Intelligence and Robotics at the University of Sheffield.
An experienced robotics researcher, Professor Sharkey is now focused on the ethical application of robots in fields ranging from health and education to the military. His expertise and campaigning have helped to shape the debate over the issue, and he has met with several military figures and governments to persuade them to take a stand against autonomous machines with the power to kill.
I have a real problem with autonomous weapons systems. I’m most concerned about the decision to kill being delegated to a machine, giving it the power to pull a trigger without human intervention.
Professor Sharkey has spent more than 15 years researching robot learning and behaviour. During this time, he has worked with colleagues on technical solutions for problems such as how to locate robots quickly and reliably, and how to help them avoid obstacles. He started researching the ethics behind the use of robots in 2005, and began speaking out against Autonomous Weapons Systems in 2007.
Professor Philip Alston – the former UN Special Rapporteur on Extrajudicial Executions – says that “because of the depth of his technical and scientific expertise, his work is an indispensable reference point for those working on these issues”.
Professor Sharkey’s concern lies with the development of systems that can identify a target and act on it themselves, without a human hand in the process. As part of his fight against this development, he has appeared in print, on radio and television, and online more than 1,000 times, in more than 50 countries.
He believes that these robots do not meet the standards for war set out in the Geneva and Hague conventions, as they would be unable to reliably distinguish combatants from non-combatants, and the loss of life or damage they cause would outweigh the military advantage gained.
He is also concerned that autonomous robots would be more vulnerable to malfunctions and cyber attacks, and that they would be more likely to strike illegitimate targets.
In a 2007 article in the Guardian newspaper, he said: “In reality, a robot could not pinpoint a weapon without pinpointing the person using it or even discriminate between weapons and non-weapons.
“I can imagine a little girl being zapped because she points her ice cream at a robot to share. Or a robot could be tricked into killing innocent civilians.”
Professor Sharkey has published several pieces in science and engineering journals, analysing whether robots have the cognitive capability to make lethal decisions. He has used his technical knowledge to explore questions of autonomy, and has argued that the failures that currently affect drones are likely to be even more pronounced with autonomous weapons systems. This work has been cited in academic publications more than 185 times.
His research led him to co-found the International Committee for Robot Arms Control in 2009. The ICRAC gathers experts in robot technology, ethics, international relations, arms control and human rights law to push for limits on the use of robot technology. Work with non-governmental organisations such as Human Rights Watch culminated in the launch of the Campaign to Stop Killer Robots in 2013. Human Rights Watch describes him as “one of the most effective spokespersons” in an international coalition of more than 40 NGOs, including Amnesty International, the International Peace Initiative and the Nobel Women’s Initiative.
Opponents of his line of argument have suggested that safeguards are preferable to a ban, and that the use of such machines could be restricted to non-human targets. However, Professor Sharkey counters that this approach is a “foot in the door” that can be opened further all too quickly.
Professor Sharkey has already expressed concerns over Taranis, the British supersonic unmanned combat drone, which is pencilled in for introduction alongside manned combat aircraft after 2030.
“It might start off slowly, but if you look at aerial bombardment, people like Roosevelt initially tried to get treaties to stop that. Submarines were once completely against the rules of war as you were meant to show your colours, but they were so useful that eventually everyone adopted them.
“I’m convinced that if these things are in development, they will be used.”
As a result, he has been active in trying to influence policy-makers internationally. He has addressed senior military figures in 26 countries, and his research is referenced in officer training materials in the UK, US, France, and the Netherlands.
He has briefed UK parliamentarians on several occasions, including a cross-party briefing of around 20 MPs and peers in April 2013. He has also addressed groups in Germany and France, and was cited extensively in the 2013 European Parliament policy document on the use of unmanned robots in warfare.
Partly as a result of such efforts, states parties to the UN’s Convention on Conventional Weapons agreed to meet last year to discuss such lethal weapons. Their second meeting this month is a continuation of that discussion.
“A lot of people in the military support us”, says Professor Sharkey. “They don’t want to see these weapons, as there’s an accountability gap. Who is responsible if something goes wrong? Is it the commander, or the creator of the software?
“In fact, the UK is the only country that has taken a definitive stand on not banning them. The argument from our Ministry of Defence is that it has no intention of having autonomous weapons, but they will not support a moratorium or a ban.”
If the April 2015 talks progress well, the next step in the process would be the establishment of a group of governmental experts to discuss the best course of action.
“The process could last another four or five years. Maybe it could take longer, but we hope not. We’ll have to wait and see.
“As an academic it’s been fascinating. I wrote a lot of papers and gave a lot of talks and people were very interested, but I didn’t know what the best course of action was. As soon as Human Rights Watch and other major non-governmental organisations got involved, they knew exactly what to do and put me in touch with ambassadors and into diplomatic circles.
“We’re now very much part of a large international civil society movement.”