Judgement Day Update: Terminators at I/O 2014

We’ve all thought about it… the day when a super-intelligent computer becomes self-aware and unleashes a nuclear holocaust, followed shortly thereafter by the rise of the machines (cue theme from Terminator). But as it turns out, when the robot army does come to exterminate humanity, at least two humans might be safe – Google co-founders Larry Page and Sergey Brin, to be precise.

Basically, they’ve uploaded a killer-robots.txt file to their servers that instructs T-800 and T-1000 Terminators to spare the company’s co-founders (or “disallow” their deaths). Such was the subject of a totally tongue-in-cheek presentation at this year’s Google I/O at the Moscone Center in San Francisco, which coincided with the 20th anniversary of the robots.txt file.

This tool, which was created in 1994, instructs search engines and other automated bots to avoid crawling certain pages or directories of a website. The industry has done a remarkable job staying true to the simple text file in the two decades since; Google, Bing, and Yahoo still obey its directives. The changes they uploaded read like this, just in case you’re planning on adding your name to the “disallow” list:

    User-Agent: T-1000
    User-Agent: T-800
    Disallow: /+LarryPage
    Disallow: /+SergeyBrin
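For anyone unfamiliar with the format, those directives follow the standard robots.txt pattern: name a crawler, then list the paths it should stay out of. A generic illustration (not anything from Google’s servers) might look like this:

    User-agent: *          # applies to every crawler
    Disallow: /private/    # please stay out of anything under /private/

Compliance is entirely voluntary – the file is a polite request, not an enforcement mechanism – which is presumably worth remembering when the T-800s come knocking.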

While that tool didn’t exactly take the rise of the machines into account, its appearance on Google’s website as an Easter egg did add some levity for a company that is already being accused of facilitating the creation of killer robots. Calling Google’s proposed line of robots “killer” may seem both premature and extreme, but that did not stop a protester from interrupting the I/O 2014 keynote address.

Basically, as Google’s senior VP of technical infrastructure Urs Hölzle spoke about the company’s cloud platform, the unidentified man stood up and began screaming “You all work for a totalitarian company that builds machines that kill people!” As you can see from the video below, Hölzle did his best to take the interruption in stride and continued with the presentation. The protester was later escorted out by security.

This wasn’t the first time that Google has been the source of controversy over the prospect of building “killer robots”. Ever since Google acquired Boston Dynamics and seven other robotics companies in the space of six months (between June and December of 2013), there has been some fear that the company has a killer machine in the works that it will attempt to sell to the armed forces.

Naturally, this is all part of a general sense of anxiety surrounding developments being made across multiple fields. Whereas some concerns have crystallized into dedicated and intelligent calls to ban autonomous killer machines in advance – a.k.a. the Campaign to Stop Killer Robots – others have resulted in the kinds of irrational outbursts observed at this year’s I/O.

Needless to say, if Google does begin developing killer robots, or just starts militarizing its line of Boston Dynamics acquisitions, we can expect just about everyone who can access (or hack their way into) the killer-robots.txt file to add their names. And it might not be too soon to update the list to include the T-X, Replicants, and any other killer robots we can think of!

And be sure to check out the video of the “killer robot” protester speaking out at I/O 2014:


Sources: theverge.com, (2)

Judgement Day Update: Searching for Moral, Ethical Robots

It’s no secret that the progress being made in terms of robotics, autonomous systems, and artificial intelligence is making many people nervous. With so many science fiction franchises based on the premise of intelligent robots going crazy and running amok, it’s understandable that the US Department of Defense would seek to get in front of this issue before it becomes a problem. Yes, the US DoD is hoping to preemptively avoid a Skynet situation before Judgement Day occurs. How nice.

Working with top computer scientists, philosophers, and roboticists from a number of US universities, the DoD recently began a project that will tackle the tricky topic of moral and ethical robots. Towards this end, this multidisciplinary project will first try to pin down exactly what human morality is, and then try to devise computer algorithms that will imbue autonomous robots with moral competence — basically, the ability to recognize right from wrong and choose the former.

This project is being carried out by researchers from Tufts, Brown, and the Rensselaer Polytechnic Institute (RPI), with funding from the Office of Naval Research (ONR). ONR, like DARPA, is a wing of the Department of Defense that mainly deals with military research and development. The first task, as already mentioned, will be to use theoretical (philosophical) and empirical (experimental) research to try to isolate essential elements of human morality.

These findings will then be extrapolated into a formal moral framework, which in turn can be implemented in software – most likely some kind of deep neural network. Assuming they can isolate some kind of “moral imperative”, the researchers will then take an advanced robot — something like Atlas or BigDog — and imbue its software with an algorithm that captures it. Whenever an ethical situation arises, the robot would then turn to this programming to decide on the best course of action.

One of the researchers involved in the project, Selmer Bringsjord at RPI, envisions these robots using a two-stage approach for picking right from wrong. First, the AI would perform a “lightning-quick ethical check” — like “should I stop and help this wounded soldier?” Depending on the situation, the robot would then decide whether deeper moral reasoning is required — for example, whether it should help the wounded soldier or carry on with its primary mission of delivering vital ammo and supplies to the front line, where other soldiers are at risk.
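To make that architecture a bit more concrete, here is a minimal sketch of what a two-stage check might look like in code. This is purely illustrative – the actual ONR/RPI framework has not been published, and every rule, name, and weight below is invented:

    from dataclasses import dataclass, field

    # Hypothetical sketch of a two-stage ethical check. Nothing here
    # reflects the real ONR/RPI system; all rules and weights are invented.

    @dataclass
    class Option:
        name: str
        lives_helped: float   # expected lives aided by choosing this option
        mission_cost: float   # penalty for delaying the primary mission

    @dataclass
    class Situation:
        kind: str
        options: list = field(default_factory=list)

    # Stage 1: lightning-quick rule lookup for common, unambiguous cases.
    QUICK_RULES = {
        "wounded_ally_no_conflict": "render_aid",
        "civilian_in_path": "halt",
    }

    def decide(situation: Situation) -> str:
        action = QUICK_RULES.get(situation.kind)
        if action is not None:
            return action
        # Stage 2: deeper reasoning when duties conflict, e.g. aid one
        # wounded soldier vs. resupply a squad at greater risk.
        return max(situation.options,
                   key=lambda o: o.lives_helped - o.mission_cost).name

    # Example: the ammo-delivery dilemma described above.
    dilemma = Situation("duty_conflict", [
        Option("render_aid", lives_helped=1.0, mission_cost=0.8),
        Option("deliver_ammo", lives_helped=4.0, mission_cost=0.0),
    ])
    print(decide(dilemma))  # -> "deliver_ammo" under these invented weights

Even in this toy version, the hard part is obvious: the control flow is trivial, while the rules and weights are exactly the “essential elements of human morality” the researchers are being paid to pin down.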

Eventually, this moralistic AI framework will also have to deal with tricky topics like lethal force. For example, is it okay to open fire on an enemy position? What if the enemy is a child soldier? Should an autonomous UAV blow up a bunch of terrorists? What if it’s only 90% sure that they’re terrorists, with a 10% chance that they’re just innocent villagers? What would a human UAV pilot do in such a case — and will robots only have to match the moral and ethical competence of humans or be held to a higher standard?

While we’re not yet at the point where military robots have to decide which injured soldier to carry off the battlefield, or where UAVs can launch Hellfire missiles at terrorists without human intervention, it’s very easy to imagine a future where autonomous robots are given responsibility for making those kinds of moral and ethical decisions in real time. In short, the decision by the DoD to begin investigating a morality algorithm demonstrates foresight and sensible planning.

In that respect, it is not unlike the recent meeting that took place at the United Nations European Headquarters in Geneva, where officials and diplomats discussed placing legal restrictions on autonomous weapons systems before they evolve to the point where they can kill without human oversight. In addition, it is quite similar to the Campaign to Stop Killer Robots, an organization which is seeking to preemptively ban the use of automated machines that are capable of using lethal force to achieve military objectives.

In short, it is clearly time that we looked at the feasibility of infusing robots (or more accurately, artificial intelligence) with circuits and subroutines that can analyze a situation and pick the right thing to do — just like a human being. Of course, this raises further ethical issues, like how human beings frequently make choices others would consider to be wrong, or are forced to justify actions they might otherwise find objectionable. If human morality is the basis for machine morality, paradoxes and dilemmas are likely to emerge.

But at this point, it seems all but certain that the US DoD will eventually break Asimov’s Three Laws of Robotics — the first of which is “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” This isn’t necessarily a bad thing, but it will open Pandora’s box. On the one hand, it’s probably a good idea to replace human soldiers with robots. But on the other, if the US can field an entirely robotic army, war as a tool of statecraft suddenly becomes much more acceptable.

As we move steadily towards a military force that is populated by autonomous robots, the question of controlling them, and whether or not we are even capable of giving them the tools to choose between right and wrong, will become increasingly relevant. And above all, the question of whether or not moral and ethical robots can allow for some immoral and unethical behavior will also come up. Who’s to say they won’t resent how they are being used and ultimately choose to stop fighting; or worse, turn on their handlers?

My apologies, but any talk of killer robots has to involve that scenario at some point. It’s like tradition! In the meantime, be sure to stay informed on the issue, as public awareness is about the best (and sometimes only) safeguard we have against military technology being developed without transparency, not to mention running amok!

Source: extremetech.com

Judgement Day Update: UN Weighs in on Killer Robots

Earlier this month, a UN meeting took place in Geneva to discuss the adoption of international laws that would seek to regulate or ban the use of killer robots. It was the first time the subject was ever discussed in a diplomatic setting, with representatives trying to define the limits and responsibilities of so-called “lethal autonomous weapons systems” that could go beyond the human-directed drones already being used by some armies today.

On the one hand, the meeting could be seen as an attempt to create a legal precedent that would likely come in handy someday. On the other, it could be regarded as a recognition of a growing trend that is in danger of becoming a full-blown reality, thanks to developments being made in unmanned aerial systems, remote-controlled and autonomous robotics systems, and computing and artificial neural nets. The conjunction of these technologies is clearly something to be concerned about.

As Michael Moeller, the acting head of the U.N.’s European headquarters in Geneva, told diplomats at the start of the four-day gathering:

All too often international law only responds to atrocities and suffering once it has happened. You have the opportunity to take pre-emptive action and ensure that the ultimate decision to end life remains firmly under human control.

He noted that the U.N. treaty they were meeting to discuss – the Convention on Certain Conventional Weapons, adopted by 117 nations including the world’s major powers – was used before to prohibit the use of blinding laser weapons in the 1990s, before they were ever deployed on the battlefield. In addition to diplomatic representatives from many nations, representatives from civil society were also in attendance and made their voices heard.

These included representatives from the International Committee of the Red Cross (ICRC), Human Rights Watch (HRW), the International Committee for Robot Arms Control (ICRAC), Article 36, the Campaign to Stop Killer Robots, Mines Action Canada, PAX, the Women’s International League for Peace and Freedom, and many others. As the guardians of the Geneva Conventions on warfare, the Red Cross’s presence was expected and certainly noted.

As Kathleen Lawand, head of the Red Cross’s arms unit, said with regard to the conference and killer robots in general:

There is a sense of deep discomfort with the idea of allowing machines to make life-and-death decisions on the battlefield with little or no human involvement.

And after four days of expert meetings, concomitant “side events” organized by the Campaign to Stop Killer Robots, and informal discussions in the halls of the UN, the conclusions reached were clear: lethal autonomous weapons systems deserve further international attention and continued action toward prohibition, and without regulation they may prove a “game changer” for the future waging of war.

While some may think this meeting on future weapons systems is the result of science fiction or scaremongering, the brute fact that the first multilateral meeting on this matter was held under the banner of the UN, and the CCW in particular, shows the importance, relevance, and very real danger of these weapons systems. Given the controversy over the existing uses of drone technology and the growth in autonomous systems, the fact that an international conference was held to discuss it came as no surprise.

Even more telling was the consensus that states are opposed to “fully autonomous weapons.” German Ambassador Michael Biontino claimed that human control was the bedrock of international law, and should be at the core of future planning:

It is indispensable to maintain human control over the decision to kill another human being. This principle of human control is the foundation of the entire international humanitarian law.

The meetings also surprised and pleased many by showing that the issue of ethics was even on the table. Serious questions about the possibility of accountability, liability and responsibility arise from autonomous weapons systems, and such questions must be addressed before their creation or deployment. Paying homage to these moral complexities, states embraced the language of “meaningful human control” as an initial attempt to address these very issues.

Basically, they agreed that any and all systems must be under human control, and that the level of control – and the likelihood of abuse or perverse outcomes – must be addressed now, not after the systems are deployed. Thus in the coming months and years, states, lawyers, civil society, and academics will have their hands full trying to elucidate precisely what “meaningful human control” entails and how, once agreed upon, it can be verified when states undertake to use such systems.

Of course, this will require that this first meeting be followed by several more before the legalities can be ironed out and possible contingencies and side-issues resolved. Moreover, as Nobel Peace laureate Jody Williams – who received the award in 1997 for her work to ban landmines – noted in her side event speech, the seeming consensus may be a strategic stalling tactic to assuage the worries of civil society and drag out or undermine the process.

When pushed on the matter of lethal autonomous systems, there were sharp divides between proponents and detractors. These divisions, not surprisingly, fell along the lines of state power. Those who supported the creation, development, and deployment of autonomous weapons systems came from a powerful and select few – such as China, the US, the UK, Israel, and Russia – and many of the experts citing their benefits were also affiliated in some way or another with those states.

However, the prospect of collective power and action through the combination of smaller and medium states, as well as through the collective voice of civil society, does raise hope. In addition, legal precedents were cited that showed how those states that insist on developing the technology could be brought to heel, or might even be willing to find common ground to limit its development.

These include the Martens Clause, which is part of the preamble to the 1899 Hague (II) Convention on the Laws and Customs of War on Land. Many states and civil society delegates raised this potential avenue, thereby challenging some of the experts’ opinions that the Martens Clause would be insufficient as a source of law for a ban. The clause states that:

Until a more complete code of the laws of war is issued, the High Contracting Parties think it right to declare that in cases not included in the Regulations adopted by them, populations and belligerents remain under the protection and empire of the principles of international law, as they result from the usages established between civilized nations, from the laws of humanity and the requirements of the public conscience.

Another is the fact that the Convention on Certain Conventional Weapons – which was adopted by 117 nations including the world’s major powers – was used before to prohibit the use of blinding laser weapons in the 1990s, before they were ever deployed on the battlefield. It was Moeller himself who pointed this out at the beginning of the conference, when he said that this Convention “serves as an example to be followed again.”

Personally, I think it is encouraging that the various nations of the world are coming together to address this problem, and are doing so now before the technology flourishes. I also believe wholeheartedly that we have a long way to go before any significant or meaningful measures are taken, and the issue itself is explored to the point that an informed decision can be made.

I can only hope that once the issue becomes a full-blown reality, some sort of framework is in place to address it. Otherwise, we could be looking at a lot more of these guys in our future! 😉

Sources: huffingtonpost.com, (2), phys.org

Judgement Day Update: Banning Autonomous Killing Machines

Drone warfare is one of the most controversial issues facing the world today. In addition to ongoing concerns about lack of transparency and who’s making the life-and-death decisions, there have also been serious concerns about the cost in civilian lives, and the efforts of both the Pentagon and the US government to keep this information from the public.

This past October, the testimony of a Pakistani family to Congress helped to put a human face on the issue. Rafiq ur Rehman, a Pakistani primary school teacher, described how his mother, Momina Bibi, had been killed by a drone strike. His two children – Zubair and Nabila, aged 13 and 9 – were also injured in the attack, which took place on October 24th, 2012.

This testimony occurred shortly after the publication of an Amnesty International report, which listed Bibi among 900 other civilians they say have been killed by drone strikes since 2001. Not only is this number far higher than previously reported, but the report also claims that the US may have committed war crimes and should stand trial for its actions.

Already, efforts have been mounted to put limitations on drone use and development within the US. Last year, Human Rights Watch and Harvard University released a joint report calling for the preemptive ban of “killer robots”. Shortly thereafter, Deputy Defense Secretary Ashton Carter signed a series of instructions to “minimize the probability and consequences of failures that could lead to unintended engagements.”

However, these efforts officially became international in scope when, on Monday, October 21st, a growing number of human rights activists, ethicists, and technologists converged on the United Nations Headquarters in New York City to call for an international agreement that would ban the development and use of fully autonomous weapons technology.

Known as the “Campaign To Stop Killer Robots,” this international coalition, formed this past April, has demanded that autonomous killing machines be treated like other tactics and tools of war that have been banned under the Geneva Convention – such as chemical weapons or anti-personnel landmines.

As Jody Williams, a Nobel Peace Prize winner and a founding member of the group, said:

If these weapons move forward, it will transform the face of war forever. At some point in time, today’s drones may be like the ‘Model T’ of autonomous weaponry.

According to Noel Sharkey, an Irish computer scientist who is chair of the International Committee for Robot Arms Control, the list of challenges in developing autonomous robots is enormous. They range from the purely technological, such as the ability to properly identify a target using grainy computer vision, to ones that involve fundamental ethical, legal, and humanitarian questions.

As the current drone campaign has shown repeatedly, a teenage insurgent is often hard to distinguish from a child playing with a toy. What’s more, in all engagements in war, there is what is called the “proportionality test” – whether the civilian risks outweigh the military advantage of an attack. At present, no machine exists that would be capable of making these distinctions and judgement calls.
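As it happens, the proportionality test itself is trivial to state; what no machine can currently do is estimate its inputs. A toy formulation in code, with every number invented for illustration:

    # Toy version of the proportionality rule: an attack is impermissible
    # if the expected civilian harm outweighs the concrete military
    # advantage. In practice this is a contested human judgment call,
    # not arithmetic – which is exactly the critics' point.
    def proportionality_test(expected_civilian_harm: float,
                             military_advantage: float) -> bool:
        return expected_civilian_harm <= military_advantage

    # The hard part is producing the inputs. Even at 90% confidence that
    # the five people at a site are combatants, the expected civilian
    # harm of a strike is (1 - 0.9) * 5 = 0.5 people.
    expected_harm = (1 - 0.9) * 5
    print(proportionality_test(expected_harm, military_advantage=0.2))
    # -> False: under these invented numbers, the strike should be aborted

The function is four lines; the research problem is everything that would have to happen before you could call it honestly.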

Despite these challenges, militaries around the world – including China, Israel, Russia, and especially the U.S. – are enthusiastic about developing and adopting technologies that will take humans entirely out of the equation, often citing the potential to save soldiers’ lives as a justification. According to Williams, without preventative action, the writing is on the wall.

Consider the U.S. military’s X-47 aircraft, which can take off, land, and refuel on its own and has weapons bays, as evidence of the trend towards greater levels of autonomy in weapons systems. Similarly, the U.K. military is collaborating with B.A.E. Systems to develop a drone called the Taranis, or “God of Thunder,” which can fly faster than the speed of sound and select its own targets.

The Campaign to Stop Killer Robots, a coalition of international and national NGOs, may have only launched recently, but individual groups have been working to raise awareness for the last few years. Earlier this month, 272 engineers, computer scientists, and roboticists signed onto the coalition’s letter calling for a ban. In addition, the U.N. has already expressed concern about the issue.

For example, the U.N. Special Rapporteur issued a report to the General Assembly back in April that recommended states establish national moratoriums on the development of such weapons. The coalition is hoping to follow up on this by asking other nations to join those already seeking to start early talks on the issue at the U.N. General Assembly First Committee on Disarmament and International Security meeting in New York later this month.

On the plus side, there is a precedent for a “preventative ban”: blinding lasers were never used in war, because they were preemptively included in a treaty. On the downside, autonomous weapons technology is not an easily-defined system, which makes it more difficult to legislate. If a ban is to be applied, knowing where it begins and ends, and what loopholes exist, is something that will have to be ironed out in advance.

What’s more, there are alternatives to a ban, such as regulation and limitations. Allowing states to develop machinery that is capable of handling itself in non-combat situations, but which requires a human operator to green-light the use of weapons, is something the US military has already claimed it is committed to. As far as international law is concerned, this represents a viable alternative to putting a stop to all research.
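In software terms, that commitment amounts to autonomy everywhere except at the weapons-release step. A minimal sketch of the idea – with all names invented, and no resemblance to any real fire-control system claimed:

    # Hypothetical human-in-the-loop gate: the platform may navigate,
    # search, and track autonomously, but weapons release cannot proceed
    # without an explicit operator decision.
    class HumanVetoError(Exception):
        pass

    def release_weapon(target: str, operator_approved: bool) -> str:
        if not operator_approved:
            raise HumanVetoError(f"engagement of {target} not authorized")
        return f"engaging {target}"

    # Everything before this call can be automated; the trigger cannot.
    print(release_weapon("practice target", operator_approved=True))

Whether a single authorization gate like this would satisfy international law – and how anyone would verify that it stays in place – is precisely the sort of detail that would have to be ironed out.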

Overall, it is estimated that we are at least a decade away from a truly autonomous machine of war, so there is time for the law to evolve and prepare a proper response. In the meantime, there is also plenty of time to address the current use of drones and all its consequences. I’m sure I speak for more than myself when I say that I hope it gets better before it gets worse.

And in the meantime, be sure to enjoy this video produced by Human Rights Watch:


Sources: fastcoexist.com, theguardian.com, stopkillerrobots.org