Earlier this month, a UN meeting took place in Geneva to discuss the adoption of international laws that would seek to regulate or ban the use of killer robots. It was the first time the subject was ever discussed in a diplomatic setting, with representatives trying to define the limits and responsibilities of so-called “lethal autonomous weapons systems” that could go beyond the human-directed drones that are already being used by some armies today.
On the one hand, the meeting could be seen as an attempt to create a legal precedent that would likely come in handy someday. On the other, it could be regarded as a recognition of a growing trend that is in danger of becoming a full-blown reality, thanks to developments being made in unmanned aerial systems, remote-controlled and autonomous robotics systems, and computing and artificial neural nets. The conjunction of these technologies is clearly something to be concerned about.
As Michael Moeller, the acting head of the U.N.’s European headquarters in Geneva, told diplomats at the start of the four-day gathering:
All too often international law only responds to atrocities and suffering once it has happened. You have the opportunity to take pre-emptive action and ensure that the ultimate decision to end life remains firmly under human control.
He noted that the U.N. treaty they were meeting to discuss – the Convention on Certain Conventional Weapons, adopted by 117 nations including the world’s major powers – was used before to prohibit the use of blinding laser weapons in the 1990s, before they were ever deployed on the battlefield. In addition to diplomatic representatives from many nations, representatives from civil society were also in attendance and made their voices heard.
These included representatives from the International Committee for the Red Cross (ICRC), Human Rights Watch (HRW), the International Committee for Robot Arms Control (ICRAC), Article 36, the Campaign to Stop Killer Robots, Mines Action Canada, PAX, the Women’s International League for Peace and Freedom, and many others. As the guardians of the Geneva Conventions on warfare, the Red Cross’ presence was expected and certainly noted.
As Kathleen Lawand, head of the Red Cross’s arms unit, said with regards to the conference and killer robots in general:
There is a sense of deep discomfort with the idea of allowing machines to make life-and-death decisions on the battlefield with little or no human involvement.
And after four days of expert meetings, concomitant “side events” organized by the Campaign to Stop Killer Robots, and informal discussions in the halls of the UN, the conclusions reached were clear: lethal autonomous weapons systems deserve further international attention and continued action toward prohibition, and without regulation they may prove a “game changer” for the future waging of war.
While some may think this meeting on future weapons systems is a product of science fiction or scaremongering, the brute fact that the first multilateral meeting on this matter was held under the banner of the UN, and the CCW in particular, shows the importance, relevance and danger of these weapons systems in reality. Given the controversy over the existing uses of drone technology and the growth in autonomous systems, the fact that an international conference was held to discuss it came as no surprise.
Even more telling was the consensus that states are opposed to “fully autonomous weapons.” German Ambassador Michael Biontino claimed that human control was the bedrock of international law, and should be at the core of future planning:
It is indispensable to maintain human control over the decision to kill another human being. This principle of human control is the foundation of the entire international humanitarian law.
The meetings also surprised and pleased many by showing that the issue of ethics was even on the table. Serious questions about the possibility of accountability, liability and responsibility arise from autonomous weapons systems, and such questions must be addressed before their creation or deployment. Paying homage to these moral complexities, states embraced the language of “meaningful human control” as an initial attempt to address these very issues.
Basically, they agreed that any and all systems must be under human control, and that the level of control – and the likelihood for abuse or perverse outcomes – must be addressed now and not after the systems are deployed. Thus in the coming months and years, states, lawyers, civil society and academics will have their hands full trying to elucidate precisely what “meaningful human control” entails, and how once agreed upon, it can be verified when states undertake to use such systems.
Of course, this will require that this first meeting be followed by several more before the legalities can be ironed out and possible contingencies and side-issues resolved. Moreover, as Nobel Peace laureate Jody Williams – who received the award in 1997 for her work to ban landmines – noted in her side event speech, the seeming consensus may be a strategic stalling tactic to assuage the worries of civil society and drag out or undermine the process.
When pushed on the matter of lethal autonomous systems, there were sharp divides between proponents and detractors. These divisions, not surprisingly, fell along the lines of state power. Those who supported the creation, development and deployment of autonomous weapons systems came from a powerful and select few – such as China, the US, the UK, Israel, Russia, etc – and many of those experts citing their benefits also were affiliated in some way or another with those states.
However, the prospect of collective power and action through the combination of smaller and medium states, as well as through the collective voice of civil society, does raise hope. In addition, legal precedents were cited that showed how those states that insist on developing the technology could be brought to heel, or might even be willing to find common ground to limit the development of this technology.
These include the Martens Clause, which is part of the preamble to the 1899 Hague (II) Convention on Laws and Customs of War on Land. Many states and civil society delegates raised this potential avenue, thereby challenging some of the experts’ opinions that the Martens Clause would be insufficient as a source of law for a ban. The clause states that:
Until a more complete code of the laws of war is issued, the High Contracting Parties think it right to declare that in cases not included in the Regulations adopted by them, populations and belligerents remain under the protection and empire of the principles of international law, as they result from the usages established between civilized nations, from the laws of humanity and the requirements of the public conscience.
Another is the fact that the Convention on Certain Conventional Weapons – adopted by 117 nations including the world’s major powers – was used before to prohibit the use of blinding laser weapons in the 1990s, before they were ever deployed on the battlefield. It was Moeller himself who pointed this out at the beginning of the conference, when he said that this Convention “serves as an example to be followed again.”
Personally, I think it is encouraging that the various nations of the world are coming together to address this problem, and are doing so now before the technology flourishes. I also believe wholeheartedly that we have a long way to go before any significant or meaningful measures are taken, and the issue itself is explored to the point that an informed decision can be made.
I can only hope that once the issue becomes a full-blown reality, some sort of framework is in place to address it. Otherwise, we could be looking at a lot more of these guys in our future! 😉
Sources: huffingtonpost.com, (2), phys.org