What are “Killer Robots” and why are Governments and Campaigners Worried?

Published August 22nd, 2018 - 10:06 GMT
“Killer robots” are, in fact, exactly what the name suggests /AFP

The words “Killer Robots” might sound like they belong in Hollywood studios, rather than in high-level debates on international law. But a new report from Human Rights Watch argues that “killer robots” could be a very real and dangerous presence on battlefields in the next few years if some developers get their way.

“Killer robots” are, in fact, exactly what the name suggests. They are fully autonomous weapons systems, designed to select and attack targets in battle completely independently of human instruction. The aim is to create systems that can decide who lives and dies without any human input.

The fact that military technology is becoming increasingly automated is no secret. Many think of Unmanned Aerial Vehicles (UAVs), or drones, as a very modern phenomenon, and perhaps the pinnacle of automated warfare. But the history of trying to make machines kill instead of people is a long one. One of the first weapons to mechanise killing was the American Civil War-era Gatling Gun, the hand-cranked forerunner of the modern machine gun.

The same logic that guided the creation of the Gatling Gun continues to guide the development of ever-more autonomous weapons. If a machine can do soldiers’ jobs, so the thinking goes, then fewer soldiers die on the battlefield, and perhaps wars can be finished sooner. The problem, though, is always the same, as the makers of the Gatling Gun came to see: when one side acquires destructive technology, others are compelled to produce and use it too.

This problem has not gone away. The use of drone strikes was justified in much the same way. “Precision” strikes from UAVs were meant to take out individuals who presented serious threats without risking the lives of soldiers. They could, perhaps, allow more destructive forms of conventional warfare to be avoided. And for as long as drone technology remained the high-tech, expensive purview of well-equipped armies, that utilitarian argument might have held.

But now drones look set to be the next AK-47: cheap, easy to procure, and easy to adapt. Rebel groups across the Middle East and beyond have made use of them. So, apparently, did dissidents attempting to kill Venezuelan President Nicolas Maduro. Drone technology has gone from being the victor’s secret weapon to a serious security challenge in itself.

Yet none of this has deterred attempts to automate weapons further. Today’s “automated” warfare still requires a human operator to control it remotely. Unmanned tanks and drones (though there are plenty of ethical concerns around them) do at least have a person to make decisions, and to hold to account. Now there are developers of military technology aiming to take that human out of the loop. And they will soon succeed.

Professor Stuart Russell, a renowned expert in Artificial Intelligence at the University of California, Berkeley, told Al Bawaba:

“All of the component technologies of autonomy—flight control, swarming, navigation, indoor and outdoor exploration and mapping, obstacle avoidance, detecting and tracking humans, tactical planning, and coordinated attack—have been demonstrated and most of these are already in commercial products. Building a lethal autonomous weapon, perhaps in the form of a multi-rotor micro-UAV, is easier than building a self-driving car, since the latter is held to a far higher performance standard and must operate without error in a very wide range of complex situations. This is not “science fiction”; autonomous weapons don’t have to be humanoid, conscious, and evil; and the capabilities are not “decades away” as claimed by some countries.”

 

Fully autonomous weapons obviously present moral and legal concerns. But international law simply cannot keep up with technological development. The Geneva Conventions are still the gold standard for determining what is and is not legal in conflict, and are meant to prevent the worst of war’s crimes. But they were written in 1949. They are not ready for the day when robots will be choosing who lives and who dies in battle.

This is why myriad campaigners across different fields are attempting to ban “killer robots” before they make it onto the battlefield. Human Rights Watch’s new report argues for a pre-emptive ban on the development, production and use of killer robots before they become a bigger problem.

The report argues that killer robots represent a violation of the laws of war by design. One of the cornerstones of the Geneva Conventions is the principle of “humanity”, which requires that decision makers in war protect human dignity and human life where possible. Doing so requires the ability to empathise with other humans. No algorithm or software in a killer robot can replace this, argues Human Rights Watch, and therefore such weapons cannot be legal.

 

Stopping defence contractors from developing dangerous new technology sounds like a daunting challenge. But the report argues not only that there are grounds to do it, but that it has been done before. A provision of International Humanitarian Law known as the Martens Clause states that “…in the absence of an international agreement, established custom, the principles of humanity, and the dictates of public conscience should provide protection for civilians and combatants”. It was invoked in the 1990s when “blinding lasers” were being developed for use in battle. Neither public opinion nor the principle of humanity supported the practice of deliberately blinding one’s enemy, and blinding laser weapons were pre-emptively banned in 1995, before they ever reached the battlefield.

According to opinion polls, the general public are uneasy about the idea of killer robots too. A 2013 survey found that 55% of Americans (and, strikingly, 73% of serving military personnel) were against the development of fully autonomous weapons. Similar results were noted in other countries, such as Belgium. Several leading technology developers have sided with this view, including DeepMind, Tesla and Clearpath Robotics. Google also came under pressure from its own employees this year over its involvement in a US military program to automatically process drone footage. The tech giant eventually agreed not to renew the contract after its expiry.

But if the moral consensus of both the experts and the public is stacked against the development of killer robots, why do some governments continue to support it? Bonnie Docherty, Senior Researcher in the Arms Division of Human Rights Watch and the principal author of the report, told Al Bawaba:

“While 26 countries have called for a ban on fully autonomous weapons, some have expressed opposition to a preemptive prohibition. These countries support the development of fully autonomous weapons because the systems would be able to process information faster than humans. They argue that we should wait and see what technology brings before banning it. But the many concerns surrounding fully autonomous weapons – legal, moral, accountability, security and technological – would far outweigh any potential military advantage. Therefore, countries should adopt a preemptive prohibition on their development, production and use as soon as possible.”

The opportunity to take action against killer robots is around the corner. Member states that are party to the Convention on Conventional Weapons are meeting in Geneva next week, where they will have the chance to set in motion a new treaty outlawing fully autonomous weapons. But while a large number of nations have supported a preemptive ban, some states have suggested that the existing legal frameworks for protecting civilians are sufficient.

Professor Stuart Russell continued:

“Russia, the US, the UK, and Israel are all opposed to bans, but for different reasons. The US and UK state that existing law is sufficient to prevent (in a legal sense) indiscriminate killing of civilians. Russia says, despite all the evidence, that we are decades away from having such weapons. None of these countries have addressed the major point. This is that because they do not require individual human supervision, autonomous weapons are potentially scalable weapons of mass destruction (WMDs); essentially unlimited numbers can be launched by a small number of people. This is an inescapable logical consequence of autonomy.

 

I estimate, for example, that roughly one million lethal weapons can be carried in a single container truck or cargo aircraft, perhaps with only 2 or 3 human operators rather than 2 or 3 million. Such weapons would be able to hunt for and eliminate humans in towns and cities, even inside buildings. They would be cheap, effective, unattributable, and easily proliferated once the major powers initiate mass production and the weapons become available on the international arms market. Attacks could escalate smoothly from 100 casualties to 1,000 to 10,000 to 100,000.”
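To give a rough sense of the scale Russell describes, here is a back-of-envelope sketch of the “one million weapons in a single container truck” figure. The container volume, packing efficiency and weapon count used below are illustrative assumptions made for this article, not numbers taken from the report or from Professor Russell:

```python
# Back-of-envelope check of the "one million weapons per container truck" estimate.
# All figures are illustrative assumptions, not from the HRW report or Prof. Russell.

CONTAINER_VOLUME_M3 = 67.0   # assumed internal volume of a standard 40-ft shipping container
PACKING_EFFICIENCY = 0.5     # assume half the space is lost to racking and packaging
WEAPON_COUNT = 1_000_000     # the figure quoted above

usable_volume_cm3 = CONTAINER_VOLUME_M3 * PACKING_EFFICIENCY * 1_000_000  # m^3 -> cm^3
volume_per_weapon_cm3 = usable_volume_cm3 / WEAPON_COUNT

print(f"Volume available per weapon: {volume_per_weapon_cm3:.0f} cm^3")
# Roughly 34 cm^3 each, about the size of a matchbox: only plausible for very
# small micro-UAVs, which is exactly the class of weapon Russell describes.
```

Under those assumptions each weapon would get roughly a matchbox-sized volume, so the scalability claim turns on how small such weapons can be made rather than on any exotic new technology.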

If the use of drones offers a further cautionary tale, it is that the technology gives its developer an advantage only for as long as it remains exclusive to them, or at least limited in use. Once the technology proliferates, it becomes a threat in its own right. If this were to happen with killer robots, it would be a very grave threat indeed.
