Drone warfare is one of the most controversial issues facing the world today. Beyond ongoing concerns about the lack of transparency and about who is making life-and-death decisions, there have also been serious concerns about the cost in civilian lives, and about the efforts of both the Pentagon and the US government to keep this information from the public.
This past October, the testimony of a Pakistani family before Congress helped to put a human face on the issue. Rafiq ur Rehman, a Pakistani primary school teacher, described how his mother, Momina Bibi, had been killed by a drone strike. His two children – Zubair and Nabila, aged 13 and 9 – were also injured in the attack, which took place on October 24th of last year.
This testimony occurred shortly after the publication of an Amnesty International report, which listed Bibi among 900 other civilians it says have been killed by drone strikes since 2001. Not only is this number far higher than previously reported, but the report also claims that the US may have committed war crimes and that those responsible should stand trial.
Already, efforts have been mounted to put limitations on drone use and development within the US. Last year, Human Rights Watch and Harvard University released a joint report calling for the preemptive ban of “killer robots”. Shortly thereafter, Deputy Defense Secretary Ashton Carter signed a series of instructions to “minimize the probability and consequences of failures that could lead to unintended engagements.”
However, these efforts officially became international in scope when, on Monday, October 21st, a growing number of human rights activists, ethicists, and technologists converged on the United Nations Headquarters in New York City to call for an international agreement banning the development and use of fully autonomous weapons technology.
Known as the “Campaign to Stop Killer Robots,” this international coalition, formed this past April, has demanded that autonomous killing machines be treated like other tactics and tools of war that have been banned under international treaties – such as chemical weapons or anti-personnel landmines.
If the development of these weapons moves forward, it will transform the face of war forever. In time, today’s drones may come to be seen as the ‘Model T’ of autonomous weaponry.
According to Noel Sharkey, an Irish computer scientist who is chair of the International Committee for Robot Arms Control, the list of challenges in developing autonomous robots is enormous. They range from the purely technological, such as the ability to properly identify a target using grainy computer vision, to ones that involve fundamental ethical, legal, and humanitarian questions.
As the current drone campaign has shown repeatedly, a teenage insurgent is often hard to distinguish from a child playing with a toy. What’s more, in all engagements in war, there is what is called the “proportionality test” – whether the civilian risks outweigh the military advantage of an attack. At present, no machine exists that would be capable of making these distinctions and judgement calls.
Despite these challenges, militaries around the world – including those of China, Israel, Russia, and especially the U.S. – are enthusiastic about developing and adopting technologies that will take humans entirely out of the equation, often citing the potential to save soldiers’ lives as a justification. According to Nobel Peace Prize laureate Jody Williams, one of the campaign’s leaders, without preventative action, the writing is on the wall.
Consider the U.S. military’s X-47B aircraft, which can take off, land, and refuel on its own and has weapons bays, as evidence of the trend towards greater levels of autonomy in weapons systems. Similarly, the U.K. military is collaborating with BAE Systems to develop a drone called the Taranis, or “God of Thunder,” which can fly faster than the speed of sound and select its own targets.
The Campaign to Stop Killer Robots, a coalition of international and national NGOs, may have only launched recently, but individual groups have been working to raise awareness for the last few years. Earlier this month, 272 engineers, computer scientists, and roboticists signed the coalition’s letter calling for a ban. In addition, the U.N. has already expressed concern about the issue.
For example, the U.N. Special Rapporteur issued a report to the General Assembly back in April recommending that states establish national moratoriums on the development of such weapons. The coalition hopes to follow up on this by asking other nations to join those already seeking to start early talks on the issue at the U.N. General Assembly First Committee on Disarmament and International Security meeting in New York later this month.
On the plus side, there is a precedent for a “preventative ban”: blinding lasers were never used in war, because they were preemptively banned by treaty. On the downside, autonomous weapons technology is not an easily defined system, which makes it more difficult to legislate. If a ban is to be applied, knowing where it begins and ends, and what loopholes exist, is something that will have to be ironed out in advance.
What’s more, there are alternatives to a ban, such as regulation and limitations. Allowing states to develop machines capable of handling themselves in non-combat situations, but which require a human operator to green-light the use of weapons, is an approach the US military has already claimed it is committed to. As far as international law is concerned, this represents a viable alternative to putting a stop to all research.
Overall, it is estimated that we are at least a decade away from a truly autonomous machine of war, so there is time for the law to evolve and prepare a proper response. There is also plenty of time to address the current use of drones and all of its consequences. I’m sure I speak for more than myself when I say that I hope it gets better before it gets worse.
And in the meantime, be sure to enjoy this video produced by Human Rights Watch: