Judgement Day Update: Searching for Moral, Ethical Robots

It’s no secret that the progress being made in robotics, autonomous systems, and artificial intelligence is making many people nervous. With so many science fiction franchises based on the premise of intelligent robots going crazy and running amok, it’s understandable that the US Department of Defense would seek to get in front of this issue before it becomes a problem. Yes, the US DoD is hoping to preemptively avoid a Skynet situation before Judgement Day occurs. How nice.

Working with top computer scientists, philosophers, and roboticists from a number of US universities, the DoD recently began a project that will tackle the tricky topic of moral and ethical robots. Towards this end, this multidisciplinary project will first try to pin down exactly what human morality is, and then try to devise computer algorithms that will imbue autonomous robots with moral competence — basically, the ability to recognize right from wrong and choose the former.

This project is being carried out by researchers from Tufts, Brown, and the Rensselaer Polytechnic Institute (RPI), with funding from the Office of Naval Research (ONR). ONR, like DARPA, is a wing of the Department of Defense that mainly deals with military research and development. The first task, as already mentioned, will be to use theoretical (philosophical) and empirical (experimental) research to try to isolate essential elements of human morality.

These findings will then be extrapolated into a formal moral framework, which in turn can be implemented in software – most likely some kind of deep neural network. Assuming they can isolate some kind of “moral imperative”, the researchers will then take an advanced robot — something like Atlas or BigDog — and imbue its software with an algorithm that captures this. Whenever an ethical situation arises, the robot would then turn to this programming to decide on the best course of action.

One of the researchers involved in the project, Selmer Bringsjord at RPI, envisions these robots using a two-stage approach for picking right from wrong. First the AI would perform a “lightning-quick ethical check” — like “should I stop and help this wounded soldier?” Depending on the situation, the robot would then decide if deeper moral reasoning is required — for example, should the robot help the wounded soldier, or carry on with its primary mission of delivering vital ammo and supplies to the front line, where other soldiers are at risk?
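To make the two-stage idea concrete, here is a minimal sketch of how such a decision pipeline might be structured. Everything here is an assumption for illustration — the `Situation` fields, function names, and rules are invented, not Bringsjord's actual design — but it captures the shape of the approach: a fast rule-based screen, followed by slower deliberation only when a conflict is flagged.

```python
from dataclasses import dataclass

# Hypothetical situation record; these fields are invented for illustration.
@dataclass
class Situation:
    wounded_soldier_nearby: bool
    mission_is_time_critical: bool

def quick_ethical_check(s: Situation) -> bool:
    """Stage 1: the 'lightning-quick ethical check' — is there a conflict at all?"""
    return s.wounded_soldier_nearby

def deep_moral_reasoning(s: Situation) -> str:
    """Stage 2: slower deliberation, run only when stage 1 flags a conflict."""
    if s.mission_is_time_critical:
        # Other soldiers depend on the supplies; continuing may save more lives.
        return "continue mission"
    return "help wounded soldier"

def decide(s: Situation) -> str:
    if not quick_ethical_check(s):
        return "continue mission"  # no ethical conflict detected
    return deep_moral_reasoning(s)
```

The design point is that the cheap check runs on every situation, while the expensive reasoning runs only when it matters — mirroring the dual-process picture the researchers describe.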

Eventually, this moralistic AI framework will also have to deal with tricky topics like lethal force. For example, is it okay to open fire on an enemy position? What if the enemy is a child soldier? Should an autonomous UAV blow up a bunch of terrorists? What if it’s only 90% sure that they’re terrorists, with a 10% chance that they’re just innocent villagers? What would a human UAV pilot do in such a case — and will robots only have to match the moral and ethical competence of humans or be held to a higher standard?
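The 90%-certain scenario above can be framed as a confidence-gated rule. The sketch below is purely illustrative — the threshold value and the hard constraint on child soldiers are assumptions of mine, not anything proposed by the project — but it shows why "90% sure" might not be good enough under a precautionary rule:

```python
# Assumed minimum certainty before lethal force is permitted; the article's
# scenario (90% sure) deliberately falls below this illustrative bar.
CONFIDENCE_THRESHOLD = 0.95

def may_engage(p_hostile: float, child_soldiers_present: bool) -> bool:
    """Permit engagement only if identification is near-certain and no
    categorical prohibition applies."""
    if child_soldiers_present:
        return False  # hard constraint, regardless of confidence
    return p_hostile >= CONFIDENCE_THRESHOLD

may_engage(0.90, child_soldiers_present=False)  # the article's case: too uncertain
```

Note that the hard constraint is checked before the probability at all — a categorical rule ("never fire on children") is not something a confidence score should be able to outweigh.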

While we’re not yet at the point where military robots have to decide which injured soldier to carry off the battlefield, or where UAVs can launch Hellfire missiles at terrorists without human intervention, it’s very easy to imagine a future where autonomous robots are given responsibility for making those kinds of moral and ethical decisions in real time. In short, the decision by the DoD to begin investigating a morality algorithm demonstrates foresight and sensible planning.

In that respect, it is not unlike the recent meeting that took place at the United Nations European Headquarters in Geneva, where officials and diplomats sought to address the question of placing legal restrictions on autonomous weapons systems before they evolve to the point where they can kill without human oversight. In addition, it is quite similar to the Campaign to Stop Killer Robots, an organization that is seeking to preemptively ban the use of automated machines capable of using lethal force to achieve military objectives.

In short, it is clearly time that we looked at the feasibility of infusing robots (or, more accurately, artificial intelligence) with circuits and subroutines that can analyze a situation and pick the right thing to do — just like a human being. Of course, this raises further ethical issues, like how human beings frequently make choices others would consider to be wrong, or are forced to justify actions they might otherwise find objectionable. If human morality is the basis for machine morality, paradoxes and dilemmas are likely to emerge.

But at this point, it seems all but certain that the US DoD will eventually break Asimov’s Three Laws of Robotics — the first of which is “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” This isn’t necessarily a bad thing, but it will open Pandora’s box. On the one hand, it’s probably a good idea to replace human soldiers with robots. But on the other, if the US can field an entirely robotic army, war as a tool of statecraft suddenly becomes much more acceptable.

As we move steadily towards a military force that is populated by autonomous robots, the question of controlling them, and whether or not we are even capable of giving them the tools to choose between right and wrong, will become increasingly relevant. And above all, the question of whether or not moral and ethical robots can allow for some immoral and unethical behavior will also come up. Who’s to say they won’t resent how they are being used and ultimately choose to stop fighting; or worse, turn on their handlers?

My apologies, but any talk of killer robots has to involve that scenario at some point. It’s like tradition! In the meantime, be sure to stay informed on the issue, as public awareness is about the best (and sometimes only) safeguard we have against military technology being developed without transparency, not to mention running amok!

Source: extremetech.com

Drone Wars: Bigger, Badder, and Deadlier

In their quest to “unman the front lines”, and to maintain drone superiority over other states, the US armed forces have been working on a series of designs that will one day replace their air fleet of Raptors and Predators. The fact that potential rivals, like Iran and China, are actively imitating aspects of these designs is an added incentive, forcing military planners to think bigger and bolder.

Consider the MQ-4C Triton Unmanned Aerial System (UAS), a jet-powered drone that is the size of a Boeing 757 passenger jet. Developed by Northrop Grumman and measuring some 40 meters (130 feet) from wingtip to wingtip, this “super drone” is intended to replace the US Navy’s fleet of RQ-4 Global Hawks, a series of unmanned aerial vehicles that have been in service since the late ’90s.

Thanks to a sensor suite that supplies a 360-degree view at a radius of over 3,700 km (2,300 miles), the Triton can provide high-altitude, real-time intelligence, surveillance and reconnaissance (ISR) at heights and distances in excess of any of its competitors. In addition, the drone possesses unique de-icing and lightning protection capabilities, allowing it to plunge through the clouds to get a closer view of surface ships.

And although the Triton has a higher degree of autonomy than most drones, operators on the ground are still relied upon to obtain high-resolution imagery, use radar for target detection, and provide information-sharing capabilities to other military units. Thus far, the Triton has completed flights of up to 9.4 hours at altitudes of 15,250 meters (50,000 feet) at the company’s manufacturing facility in Palmdale, California.

Mike Mackey, Northrop Grumman’s Triton UAS program director, had the following to say in a statement:

During surveillance missions using Triton, Navy operators may spot a target of interest and order the aircraft to a lower altitude to make positive identification. The wing’s strength allows the aircraft to safely descend, sometimes through weather patterns, to complete this maneuver.

Under an initial contract of $1.16 billion in 2008, the Navy has ordered 68 of the MQ-4C Triton drones with expected delivery in 2017. Check out the video of the Triton during its most recent test flight below:

But of course, this jetliner-sized customer is just one of many enhancements the US armed forces is planning on making to its drone army. Another is the jet-powered, long-range attack drone that is a planned replacement for the aging MQ-1 Predator system. It’s known as the Avenger (alternately the MQ-1 Predator C), a next-generation unmanned aerial vehicle that has a range of close to 3000 kms (1800 miles).

Built by General Atomics, the Avenger was designed with Afghanistan in mind; or rather, the planned US withdrawal by the end of 2014. Given that the ongoing CIA anti-terrorism operations in neighboring Pakistan are expected to continue, and that airstrips in Afghanistan will no longer be available, the drones they use will need to have significant range.


The Avenger prototype made its first test flight in 2009, and after a new round of tests completed last month, is now operationally ready. Based on the company’s better-known MQ-9 Reaper drone, the Avenger is designed to perform high-speed, long-endurance surveillance or strike missions, flying at up to 800 km/h (500 mph) at a maximum altitude of 15,250 meters (50,000 feet) for as long as 18 hours.

Compared to its earlier prototype, the Avenger’s fuselage has been lengthened by four feet to accommodate larger payloads and more fuel, allowing for extended missions. It can carry up to 1,588 kilograms (3,500 pounds) internally, and its wings can carry weapons as large as a 2,000-pound Joint Direct Attack Munition (JDAM) and a full complement of Hellfire missiles.

Switching from propeller-driven drones to jets will allow the CIA to continue its Pakistan strikes from a more distant base if the U.S. is forced to withdraw entirely from neighboring Afghanistan. And according to a recent Los Angeles Times report, the Obama administration is actively making contingency plans to maintain surveillance and attacks in northwest Pakistan as part of its security agreement with Afghanistan.

The opportunity to use technology to close the gap between the need to act quickly and the necessity of operating from a greater distance isn’t lost on the US military, or on the company behind the Avenger. Frank Pace, president of the Aircraft Systems Group at General Atomics, said in a recent statement:

Avenger provides the right capabilities for the right cost at the right time and is operationally ready today. This aircraft offers unique advantages in terms of performance, cost, timescale, and adaptability that are unmatched by any other UAS in its class.

What’s more, one can tell by simply looking at the streamlined fuselage and softer contours that stealth is part of the package. By reducing the drone’s radar cross-section (RCS) and applying radar-absorbing materials, next-generation drone fleets will also be mimicking fifth-generation fighter craft. Perhaps we can expect aerial duels between remotely-controlled fighters to follow not long after…

And of course, there’s the General Atomics’ Avenger concept video to enjoy:

Sources: wired.com (2)