Judgement Day Update: Searching for Moral, Ethical Robots

It’s no secret that the progress being made in robotics, autonomous systems, and artificial intelligence is making many people nervous. With so many science fiction franchises built on the premise of intelligent robots going crazy and running amok, it’s understandable that the US Department of Defense would seek to get in front of this issue before it becomes a problem. Yes, the US DoD is hoping to preemptively avoid a Skynet situation before Judgement Day occurs. How nice.

Working with top computer scientists, philosophers, and roboticists from a number of US universities, the DoD recently began a project that will tackle the tricky topic of moral and ethical robots. Towards this end, this multidisciplinary project will first try to pin down exactly what human morality is, and then try to devise computer algorithms that will imbue autonomous robots with moral competence — basically, the ability to recognize right from wrong and choose the former.

This project is being carried out by researchers from Tufts, Brown, and the Rensselaer Polytechnic Institute (RPI), with funding from the Office of Naval Research (ONR). ONR, like DARPA, is a wing of the Department of Defense that mainly deals with military research and development. The first task, as already mentioned, will be to use theoretical (philosophical) and empirical (experimental) research to try to isolate essential elements of human morality.

These findings will then be extrapolated into a formal moral framework, which in turn can be implemented in software – most likely some kind of deep neural network. Assuming they can isolate some kind of “moral imperative”, the researchers will then take an advanced robot — something like Atlas or BigDog — and imbue its software with an algorithm that captures this. Whenever an ethical situation arises, the robot would then turn to this programming to decide what avenue was the best course of action.

One of the researchers involved in the project, Selmer Bringsjord at RPI, envisions these robots using a two-stage approach for picking right from wrong. First the AI would perform a “lightning-quick ethical check” — like “should I stop and help this wounded soldier?” Depending on the situation, the robot would then decide whether deeper moral reasoning is required — for example, should the robot help the wounded soldier, or carry on with its primary mission of delivering vital ammo and supplies to the front line, where other soldiers are at risk?
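To make that two-stage idea a bit more concrete, here is a minimal sketch (in Python) of how such a decision loop might be structured: a fast rule-based check that hands off to slower deliberation only when the quick answer is ambiguous. Every name, rule and threshold below is a hypothetical placeholder for illustration, not anything drawn from the actual research.

```python
from dataclasses import dataclass
from typing import Optional

# Purely illustrative sketch of a two-stage moral decision loop.
# All names, rules, and thresholds are made up for this example.

@dataclass
class Situation:
    wounded_soldier_nearby: bool      # stage-1 trigger
    mission_is_time_critical: bool    # does stopping delay the resupply?
    soldiers_awaiting_supplies: int   # rough count of people depending on the mission

def quick_ethical_check(s: Situation) -> Optional[str]:
    """Stage 1: the 'lightning-quick' check. Returns a decision,
    or None when the case is ambiguous and needs deeper reasoning."""
    if not s.wounded_soldier_nearby:
        return "continue_mission"
    if not s.mission_is_time_critical:
        return "stop_and_help"
    return None  # wounded soldier AND time-critical mission: escalate

def deliberate(s: Situation) -> str:
    """Stage 2: slower deliberation that weighs competing obligations.
    Here it is reduced to a toy comparison of how many people are at risk."""
    if s.soldiers_awaiting_supplies > 1:
        return "continue_mission"
    return "stop_and_help"

def decide(s: Situation) -> str:
    # Use the quick answer if there is one; otherwise fall back to deliberation.
    return quick_ethical_check(s) or deliberate(s)

if __name__ == "__main__":
    dilemma = Situation(wounded_soldier_nearby=True,
                        mission_is_time_critical=True,
                        soldiers_awaiting_supplies=5)
    print(decide(dilemma))  # -> continue_mission
```

In a real system, the toy rules in `deliberate()` would presumably be replaced by whatever formal moral framework (or trained model) the project ultimately produces.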

Eventually, this moralistic AI framework will also have to deal with tricky topics like lethal force. For example, is it okay to open fire on an enemy position? What if the enemy is a child soldier? Should an autonomous UAV blow up a bunch of terrorists? What if it’s only 90% sure that they’re terrorists, with a 10% chance that they’re just innocent villagers? What would a human UAV pilot do in such a case — and will robots only have to match the moral and ethical competence of humans or be held to a higher standard?

While we’re not yet at the point where military robots have to decide which injured soldier to carry off the battlefield, or where UAVs can launch Hellfire missiles at terrorists without human intervention, it’s very easy to imagine a future where autonomous robots are given responsibility for making those kinds of moral and ethical decisions in real time. In short, the decision by the DoD to begin investigating a morality algorithm demonstrates foresight and sensible planning.

In that respect, it is not unlike the recent meeting that took place at the United Nations European Headquarters in Geneva, where officials and diplomats sought to address placing legal restrictions on autonomous weapons systems, before they evolve to the point where they can kill without human oversight. In addition, it is quite similar to the Campaign to Stop Killer Robots, an organization which is seeking to preemptively ban the use of automated machines that are capable of using lethal force to achieve military objectives.

In short, it is clearly time that we looked at the feasibility of infusing robots (or more accurately artificial intelligence) with circuits and subroutines that can analyze a situation and pick the right thing to do — just like a human being. Of course, this raises further ethical issues, like how human beings frequently make choices others would consider to be wrong, or are forced to justify actions they might otherwise find objectionable. If human morality is the basis for machine morality, paradoxes and dilemmas are likely to emerge.

But at this point, it seems all but certain that the US DoD will eventually break Asimov’s Three Laws of Robotics — the first of which is “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” This isn’t necessarily a bad thing, but it will open Pandora’s box. On the one hand, it’s probably a good idea to replace human soldiers with robots. But on the other, if the US can field an entirely robotic army, war as a tool of statecraft suddenly becomes much more acceptable.

As we move steadily towards a military force that is populated by autonomous robots, the question of controlling them, and whether or not we are even capable of giving them the tools to choose between right and wrong, will become increasingly relevant. And above all, the question of whether or not moral and ethical robots can allow for some immoral and unethical behavior will also come up. Who’s to say they won’t resent how they are being used and ultimately choose to stop fighting; or worse, turn on their handlers?

My apologies, but any talk of killer robots has to involve that scenario at some point. It’s like tradition! In the meantime, be sure to stay informed on the issue, as public awareness is about the best (and sometimes only) safeguard we have against military technology being developed without transparency, not to mention running amok!

Source: extremetech.com

Judgement Day Update: UN Weighs in on Killer Robots

Earlier this month, a UN meeting took place in Geneva to discuss the adoption of international laws that would seek to regulate or ban the use of killer robots. It was the first time the subject was ever discussed in a diplomatic setting, with representatives trying to define the limits and responsibilities of so-called “lethal autonomous weapons systems” that could go beyond the human-directed drones that are already being used by some armies today.

On the one hand, the meeting could be seen as an attempt to create a legal precedent that would likely come in handy someday. On the other, it could be regarded as a recognition of a growing trend that is in danger of becoming a full-blown reality, thanks to developments being made in unmanned aerial systems, remote-controlled and autonomous robotic systems, and computing and artificial neural nets. The conjunction of these technologies is clearly something to be concerned about.

As Michael Moeller, the acting head of the U.N.’s European headquarters in Geneva, told diplomats at the start of the four-day gathering:

All too often international law only responds to atrocities and suffering once it has happened. You have the opportunity to take pre-emptive action and ensure that the ultimate decision to end life remains firmly under human control.

He noted that the U.N. treaty they were meeting to discuss – the Convention on Certain Conventional Weapons, adopted by 117 nations including the world’s major powers – was used before to prohibit the use of blinding laser weapons in the 1990s, before they were ever deployed on the battlefield. In addition to diplomatic representatives from many nations, representatives from civil society were also in attendance and made their voices heard.

These included representatives from the International Committee of the Red Cross (ICRC), Human Rights Watch (HRW), the International Committee for Robot Arms Control (ICRAC), Article 36, the Campaign to Stop Killer Robots, Mines Action Canada, PAX, the Women’s International League for Peace and Freedom, and many others. Since the Red Cross is the guardian of the Geneva Conventions on warfare, its presence was expected and certainly noted.

As Kathleen Lawand, head of the Red Cross’s arms unit, said with regards to the conference and killer robots in general:

There is a sense of deep discomfort with the idea of allowing machines to make life-and-death decisions on the battlefield with little or no human involvement.

And after four days of expert meetings, concomitant “side events” organized by the Campaign to Stop Killer Robots, and informal discussions in the halls of the UN, the conclusions reached were clear: lethal autonomous weapons systems deserve further international attention and continued action toward prohibition, and without regulation they may prove a “game changer” for the future waging of war.

While some may think this meeting on future weapons systems is a result of science fiction or scaremongering, the brute fact that the first multilateral meeting on this matter took place under the banner of the UN, and the CCW in particular, shows the importance, relevance and danger of these weapons systems in reality. Given the controversy over the existing uses of drone technology and the growth in autonomous systems, the fact that an international conference was held to discuss it came as no surprise.

Even more telling was the consensus that states are opposed to “fully autonomous weapons.” German Ambassador Michael Biontino claimed that human control was the bedrock of international law, and should be at the core of future planning:

It is indispensable to maintain human control over the decision to kill another human being. This principle of human control is the foundation of the entire international humanitarian law.

The meetings also surprised and pleased many by showing that the issue of ethics was even on the table. Serious questions about accountability, liability and responsibility arise from autonomous weapons systems, and such questions must be addressed before their creation or deployment. Acknowledging these moral complexities, states embraced the language of “meaningful human control” as an initial attempt to address these very issues.

Basically, they agreed that any and all systems must be under human control, and that the level of control – and the likelihood for abuse or perverse outcomes – must be addressed now and not after the systems are deployed. Thus in the coming months and years, states, lawyers, civil society and academics will have their hands full trying to elucidate precisely what “meaningful human control” entails, and how once agreed upon, it can be verified when states undertake to use such systems.

Of course, this will require that this first meeting be followed by several more before the legalities can be ironed out and possible contingencies and side-issues resolved. Moreover, as Nobel Peace Prize laureate Jody Williams – who received the award in 1997 for her work to ban landmines – noted in her side-event speech, the seeming consensus may be a strategic stalling tactic to assuage the worries of civil society and drag out or undermine the process.

When pushed on the matter of lethal autonomous systems, there were sharp divides between proponents and detractors. These divisions, not surprisingly, fell along the lines of state power. Those who supported the creation, development and deployment of autonomous weapons systems came from a powerful and select few – such as China, the US, the UK, Israel, Russia, etc. – and many of the experts citing their benefits were also affiliated in some way or another with those states.

However, the prospect of collective power and action through the combination of smaller and medium states, as well as through the collective voice of civil society, does raise hope. In addition, legal precedents were cited that showed how those states that insist on developing the technology could be brought to heel, or would even be willing to find common ground to limit the development of this technology.

These include the Martens Clause, which is part of the preamble to the 1899 Hague (II) Convention on Laws and Customs of War on Land. Many states and civil society delegates raised this potential avenue, thereby challenging some of the experts’ opinions that the Martens Clause would be insufficient as a source of law for a ban. The clause states that:

Until a more complete code of the laws of war is issued, the High Contracting Parties think it right to declare that in cases not included in the Regulations adopted by them, populations and belligerents remain under the protection and empire of the principles of international law, as they result from the usages established between civilized nations, from the laws of humanity and the requirements of the public conscience.

Another is the fact that the Convention on Certain Conventional Weapons – which was  adopted by 117 nations including the world’s major powers – was used before to prohibit the use of blinding laser weapons in the 1990s before they were ever deployed on the battlefield. It was Moeller himself who pointed this out at the beginning of the conference, when he said that this Convention “serves as an example to be followed again.”

Personally, I think it is encouraging that the various nations of the world are coming together to address this problem, and are doing so now before the technology flourishes. I also believe wholeheartedly that we have a long way to go before any significant or meaningful measures are taken, and the issue itself is explored to the point that an informed decision can be made.

I can only hope that once the issue becomes a full-blown reality, some sort of framework is in place to address it. Otherwise, we could be looking at a lot more of these guys in our future! 😉

Sources: huffingtonpost.com, (2), phys.org