Judgement Day Update: Searching for Moral, Ethical Robots

It’s no secret that the progress being made in robotics, autonomous systems, and artificial intelligence is making many people nervous. With so many science fiction franchises based on the premise of intelligent robots going crazy and running amok, it’s understandable that the US Department of Defense would seek to get in front of this issue before it becomes a problem. Yes, the US DoD is hoping to preemptively avoid a Skynet situation before Judgement Day occurs. How nice.

Working with top computer scientists, philosophers, and roboticists from a number of US universities, the DoD recently began a project that will tackle the tricky topic of moral and ethical robots. Towards this end, the multidisciplinary team will first try to pin down exactly what human morality is, and then try to devise computer algorithms that will imbue autonomous robots with moral competence — basically, the ability to recognize right from wrong and choose the former.

This project is being carried out by researchers from Tufts, Brown, and the Rensselaer Polytechnic Institute (RPI), with funding from the Office of Naval Research (ONR). ONR, like DARPA, is a wing of the Department of Defense that mainly deals with military research and development. The first task, as already mentioned, will be to use theoretical (philosophical) and empirical (experimental) research to try to isolate essential elements of human morality.

These findings will then be extrapolated into a formal moral framework, which in turn can be implemented in software – most likely some kind of deep neural network. Assuming they can isolate some kind of “moral imperative”, the researchers will then take an advanced robot — something like Atlas or BigDog — and imbue its software with an algorithm that captures this. Whenever an ethical situation arises, the robot would then turn to this programming to decide on the best course of action.
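
The researchers haven’t published any code, so what follows is purely speculative: a minimal sketch of what it might look like for a robot to consult a learned moral model when choosing between actions. The feature names and weights are invented for illustration; in the envisioned system, the scoring function would be a trained deep neural network.

```python
# Purely illustrative sketch, not the researchers' actual system.
# Stands in for a robot consulting a learned moral model at decision time.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    features: list  # e.g. [risk_to_humans, mission_value, urgency]

def moral_score(features, weights=(-5.0, 1.0, 0.5)):
    """Hypothetical stand-in for a trained moral-competence model.

    In the envisioned system this would be a deep neural network trained
    against the formal moral framework; a weighted sum keeps the example
    runnable.
    """
    return sum(w * f for w, f in zip(weights, features))

def choose_action(candidates):
    """Pick the candidate the model scores as most morally acceptable."""
    return max(candidates, key=lambda a: moral_score(a.features))

options = [
    Action("proceed_with_mission", [0.3, 0.9, 0.8]),
    Action("stop_and_help_soldier", [0.1, 0.4, 0.9]),
]
print(choose_action(options).name)  # -> stop_and_help_soldier
```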

One of the researchers involved in the project, Selmer Bringsjord at RPI, envisions these robots using a two-stage approach for telling right from wrong. First the AI would perform a “lightning-quick ethical check” — like “should I stop and help this wounded soldier?” Depending on the situation, the robot would then decide whether deeper moral reasoning is required — for example, whether it should help the wounded soldier or carry on with its primary mission of delivering vital ammo and supplies to the front line, where other soldiers are at risk.
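
Bringsjord hasn’t published an implementation, but the control flow he describes is easy to sketch. Everything below (the function names, the situation fields) is hypothetical, meant only to show the split between a fast rule check and slower deliberation:

```python
# Hypothetical sketch of a two-stage moral-decision flow; none of these
# names come from the actual project.

def quick_ethical_check(situation):
    """Stage 1: a lightning-quick rule lookup. Returns a decision if a
    simple rule settles the case, or None if deeper reasoning is needed."""
    if situation.get("civilians_in_danger") and not situation.get("mission_critical"):
        return "render_aid"
    if not situation.get("ethically_loaded"):
        return "proceed"
    return None  # ambiguous case: escalate to stage 2

def deep_moral_reasoning(situation):
    """Stage 2: slower deliberation, e.g. weighing the wounded soldier
    against the soldiers at the front line waiting on supplies."""
    aid_value = situation.get("lives_saved_by_aid", 0)
    mission_value = situation.get("lives_saved_by_mission", 0)
    return "render_aid" if aid_value >= mission_value else "proceed"

def decide(situation):
    verdict = quick_ethical_check(situation)
    return verdict if verdict is not None else deep_moral_reasoning(situation)

# Example: a wounded soldier found while carrying vital supplies forward.
print(decide({
    "ethically_loaded": True,
    "mission_critical": True,
    "civilians_in_danger": False,
    "lives_saved_by_aid": 1,
    "lives_saved_by_mission": 4,
}))  # -> "proceed"
```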

Eventually, this moralistic AI framework will also have to deal with tricky topics like lethal force. For example, is it okay to open fire on an enemy position? What if the enemy is a child soldier? Should an autonomous UAV blow up a bunch of terrorists? What if it’s only 90% sure that they’re terrorists, with a 10% chance that they’re just innocent villagers? What would a human UAV pilot do in such a case — and will robots only have to match the moral and ethical competence of humans or be held to a higher standard?
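
That 90/10 question is, at bottom, a decision under uncertainty. As a toy illustration (the threshold value is invented, not anything from actual rules of engagement), a confidence gate on lethal force might look like this:

```python
# Toy illustration of a confidence gate on lethal force. The 0.99
# threshold is invented for the example; any real rule of engagement
# would be far more involved than a single number.

def lethal_force_permitted(p_hostile: float, threshold: float = 0.99) -> bool:
    """Refuse to engage unless the hostile-target estimate clears the bar."""
    return p_hostile >= threshold

print(lethal_force_permitted(0.90))   # False: 10% chance they're villagers
print(lethal_force_permitted(0.995))  # True under this (invented) threshold
```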

While we’re not yet at the point where military robots have to decide which injured soldier to carry off the battlefield, or where UAVs can launch Hellfire missiles at terrorists without human intervention, it’s very easy to imagine a future where autonomous robots are given responsibility for making those kinds of moral and ethical decisions in real time. In short, the decision by the DoD to begin investigating a morality algorithm demonstrates foresight and sensible planning.

In that respect, it is not unlike the recent meeting at the United Nations European Headquarters in Geneva, where officials and diplomats sought to place legal restrictions on autonomous weapons systems before they evolve to the point where they can kill without human oversight. It is also quite similar to the Campaign to Stop Killer Robots, an organization that seeks to preemptively ban the use of automated machines capable of using lethal force to achieve military objectives.

In short, it is clearly time we looked at the feasibility of infusing robots (or, more accurately, artificial intelligence) with circuits and subroutines that can analyze a situation and pick the right thing to do — just like a human being. Of course, this raises further ethical issues, like how human beings frequently make choices others would consider wrong, or are forced to justify actions they might otherwise find objectionable. If human morality is the basis for machine morality, paradoxes and dilemmas are likely to emerge.

But at this point, it seems all but certain that the US DoD will eventually break Asimov’s Three Laws of Robotics — the first of which is “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” This isn’t necessarily a bad thing, but it will open Pandora’s box. On the one hand, it’s probably a good idea to replace human soldiers with robots. But on the other, if the US can field an entirely robotic army, war as a tool of statecraft suddenly becomes much more acceptable.
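
For what it’s worth, the First Law is trivial to state as a hard veto in code; the genuinely hard part, which is exactly what the DoD project is wrestling with, is estimating harm in the first place. A toy encoding, with the harm estimator assumed away:

```python
# Toy encoding of Asimov's First Law as a hard veto. The impossible part
# in practice is the harm-estimation function, which is assumed here.

def estimated_harm_to_humans(action) -> float:
    """Stand-in for the genuinely hard problem: predicting harm."""
    return action.get("expected_human_harm", 0.0)

def first_law_permits(action) -> bool:
    """Veto any action that injures a human, or any inaction that
    allows a human to come to harm."""
    if estimated_harm_to_humans(action) > 0:
        return False
    if action.get("is_inaction") and action.get("harm_if_no_intervention", 0.0) > 0:
        return False
    return True

print(first_law_permits({"expected_human_harm": 0.0}))  # True
print(first_law_permits({"expected_human_harm": 0.7}))  # False
print(first_law_permits({"is_inaction": True,
                         "harm_if_no_intervention": 1.0}))  # False
```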

As we move steadily towards a military force populated by autonomous robots, the question of controlling them, and whether we are even capable of giving them the tools to choose between right and wrong, will become increasingly relevant. And above all, the question of whether moral and ethical robots can allow for some immoral and unethical behavior will also come up. Who’s to say they won’t resent how they are being used and ultimately choose to stop fighting, or worse, turn on their handlers?

My apologies, but any talk of killer robots has to involve that scenario at some point. It’s like tradition! In the meantime, be sure to stay informed on the issue, as public awareness is about the best (and sometimes only) safeguard we have against military technology being developed without transparency, not to mention running amok!

Source: extremetech.com

Judgement Day Update: The Human Brain Project

Biomimetics, one of the fastest-growing areas of technology today, seeks to develop technology that is capable of imitating biology. The purpose of this, in addition to creating machinery that can be merged with our physiology, is to arrive at a computing architecture that is as complex and sophisticated as the human brain.

While this might sound the slightest bit anthropocentric, it is important to remember that, despite their processing power, systems like the D-Wave Two quantum computer, IBM’s Blue Gene/Q Sequoia supercomputer, and MIT’s ConceptNet 4 have all shown themselves to be lacking when it comes to common sense and abstract reasoning. Simply pouring raw computing power into the mix does not make for autonomous intelligence.

As a result, new steps are being taken to create a computer that can mimic the very organ that gives humanity these abilities – the human brain. In what is surely the most ambitious step towards this goal to date, an international group of researchers recently announced the formation of the Human Brain Project. Having secured the $1.6 billion they need to fund their efforts, these researchers will spend the next ten years conducting research that cuts across multiple disciplines.

This will involve mapping out the vast network known as the human brain – a network of close to a hundred billion neurons whose trillions of connections are the source of emotions, abstract thought, and this thing we know as consciousness. And to do so, the researchers will be using a progressively scaled-up, multilayered simulation running on a supercomputer.
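
To give a sense of what simulating the brain means at the smallest scale, here is a minimal sketch of a leaky integrate-and-fire neuron, one of the simplest models that large-scale simulations multiply by the billions. The parameters are generic textbook values, not anything from the HBP:

```python
# Minimal leaky integrate-and-fire neuron: the kind of simplified unit
# that large-scale brain simulations scale up by the billions.
# Parameters are generic textbook values, not values from the HBP.

V_REST, V_THRESH, V_RESET = -65.0, -50.0, -65.0  # membrane potentials (mV)
TAU_M = 20.0   # membrane time constant (ms)
DT = 0.1       # simulation time step (ms)

def simulate(input_current, steps=2000, r_m=10.0):
    """Integrate the membrane equation dV/dt = (-(V - V_rest) + R*I) / tau,
    recording a spike (and resetting) each time the threshold is crossed."""
    v = V_REST
    spike_times = []
    for step in range(steps):
        dv = (-(v - V_REST) + r_m * input_current) * (DT / TAU_M)
        v += dv
        if v >= V_THRESH:               # threshold crossed: emit a spike
            spike_times.append(step * DT)
            v = V_RESET                 # then reset the membrane potential
    return spike_times

print(f"{len(simulate(2.0))} spikes in 200 ms")  # constant 2.0 nA input
```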

In keeping with this bold plan, the team itself is made up of over 200 scientists from 80 different research institutions around the world. Based in Lausanne, Switzerland, the initiative is being put forth by the European Commission, and has even been compared to the Large Hadron Collider in terms of scope and ambition. In fact, some have taken to calling it the “CERN for the brain.”

According to scientists working on the project, the HBP will attempt to reconstruct the human brain piece by piece and gradually bring these cognitive components into the overarching supercomputer. The expected result of this research will be new platforms for “neuromorphic computing” and “neurorobotics,” allowing for the creation of computing and robotic architectures that mimic the functions of the human brain.

According to a statement released by the HBP, Swedish Nobel Laureate Torsten Wiesel had this to say about the project:

The support of the HBP is a critical step taken by the EC to make possible major advances in our understanding of how the brain works. HBP will be a driving force to develop new and still more powerful computers to handle the massive accumulation of new information about the brain, while the neuroscientists are ready to use these new tools in their laboratories. This cooperation should lead to new concepts and a deeper understanding of the brain, the most complex and intricate creation on earth.

Other distinguished individuals quoted in the release include President Shimon Peres of Israel; Paul G. Allen, founder of the Allen Institute for Brain Science; Patrick Aebischer, President of EPFL in Switzerland; and Harald Kainz, Rector of the Graz University of Technology in Austria, as well as a slew of other politicians and academics.

Combine this with the work of other research institutions producing computer chips and processors modelled on the human brain, and with our growing understanding of the human connectome, and I think it is safe to say that by the time the HBP wraps up, we are likely to see processors capable of demonstrating intelligence, not just in terms of processing speed and memory, but in terms of basic reasoning as well.

At that point, we really ought to consider instituting Asimov’s Three Laws of Robotics! Otherwise, things could get apocalyptic on our asses! 😉


Sources: io9.com, humanbrainproject.eu, documents.epfl.ch