Judgement Day Update: Searching for Moral, Ethical Robots

It’s no secret that the progress being made in robotics, autonomous systems, and artificial intelligence is making many people nervous. With so many science fiction franchises based on the premise of intelligent robots going crazy and running amok, it’s understandable that the US Department of Defense would seek to get in front of this issue before it becomes a problem. Yes, the US DoD is hoping to preemptively avoid a Skynet situation before Judgement Day occurs. How nice.

Working with top computer scientists, philosophers, and roboticists from a number of US universities, the DoD recently began a project that will tackle the tricky topic of moral and ethical robots. Towards this end, this multidisciplinary project will first try to pin down exactly what human morality is, and then try to devise computer algorithms that will imbue autonomous robots with moral competence — basically, the ability to recognize right from wrong and choose the former.

This project is being carried out by researchers from Tufts, Brown, and the Rensselaer Polytechnic Institute (RPI), with funding from the Office of Naval Research (ONR). ONR, like DARPA, is a wing of the Department of Defense that mainly deals with military research and development. The first task, as already mentioned, will be to use theoretical (philosophical) and empirical (experimental) research to try to isolate essential elements of human morality.

These findings will then be extrapolated into a formal moral framework, which in turn can be implemented in software – most likely some kind of deep neural network. Assuming they can isolate some kind of “moral imperative”, the researchers will then take an advanced robot — something like Atlas or BigDog — and imbue its software with an algorithm that captures this. Whenever an ethical situation arises, the robot would then turn to this programming to decide which course of action was best.

One of the researchers involved in the project, Selmer Bringsjord at RPI, envisions these robots using a two-stage approach for picking right from wrong. First the AI would perform a “lightning-quick ethical check” — like “should I stop and help this wounded soldier?” Depending on the situation, the robot would then decide whether deeper moral reasoning is required — for example, should the robot help the wounded soldier, or carry on with its primary mission of delivering vital ammo and supplies to the front line, where other soldiers are at risk?
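
To make the two-stage idea concrete, here is a minimal sketch in Python of how such a decision loop might be structured. Everything in it (the situation fields, the rules, the decision logic) is hypothetical and invented for illustration; it is not the actual algorithm the researchers are building.

# Hypothetical sketch of Bringsjord's two-stage approach: a fast ethical
# screen, followed by deeper deliberation only when a conflict is flagged.
from dataclasses import dataclass

@dataclass
class Situation:
    wounded_soldier_nearby: bool   # stage-1 trigger
    mission_is_critical: bool      # e.g., delivering ammo to the front line
    lives_at_risk_if_delayed: int  # soldiers endangered by a delay

def quick_ethical_check(s: Situation) -> bool:
    """Stage 1: the lightning-quick screen. Does anything here demand
    moral attention at all?"""
    return s.wounded_soldier_nearby

def deep_moral_reasoning(s: Situation) -> str:
    """Stage 2: slower deliberation, invoked only when stage 1 flags a
    conflict between duties (render aid vs. complete the mission)."""
    if s.mission_is_critical and s.lives_at_risk_if_delayed > 1:
        return "continue mission"
    return "render aid"

def decide(s: Situation) -> str:
    if not quick_ethical_check(s):
        return "continue mission"  # no ethical flag raised
    return deep_moral_reasoning(s)

# A robot carrying vital supplies encounters a wounded soldier:
print(decide(Situation(True, True, 5)))  # -> "continue mission"

The interesting (and hard) part, of course, is everything the stub functions above wave away: real moral competence would have to come from the formal framework the project hopes to derive.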

Eventually, this moralistic AI framework will also have to deal with tricky topics like lethal force. For example, is it okay to open fire on an enemy position? What if the enemy is a child soldier? Should an autonomous UAV blow up a bunch of terrorists? What if it’s only 90% sure that they’re terrorists, with a 10% chance that they’re just innocent villagers? What would a human UAV pilot do in such a case — and will robots only have to match the moral and ethical competence of humans or be held to a higher standard?
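
That 90%/10% question can be phrased as a confidence threshold: below some level of certainty, the decision gets kicked back to a human. Here is a toy Python version, with a threshold chosen purely for illustration rather than taken from any real rule of engagement.

# Toy confidence gate for lethal force. The threshold is an arbitrary
# illustrative value, not a real policy figure.
CONFIDENCE_THRESHOLD = 0.95

def authorize_strike(p_hostile: float) -> str:
    """Return 'engage' only when the system's certainty that the target
    is hostile clears the threshold; otherwise defer to a human."""
    if p_hostile >= CONFIDENCE_THRESHOLD:
        return "engage"
    return "defer to human operator"

print(authorize_strike(0.90))  # -> "defer to human operator"

Note that the hard ethical work hides in choosing the threshold, and in deciding whether any threshold short of certainty is acceptable at all.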

While we’re not yet at the point where military robots have to decide which injured soldier to carry off the battlefield, or where UAVs can launch Hellfire missiles at terrorists without human intervention, it’s very easy to imagine a future where autonomous robots are given responsibility for making those kinds of moral and ethical decisions in real time. In short, the decision by the DoD to begin investigating a morality algorithm demonstrates foresight and sensible planning.

In that respect, it is not unlike the recent meeting that took place at the United Nations European Headquarters in Geneva, where officials and diplomats sought to place legal restrictions on autonomous weapons systems before they evolve to the point where they can kill without human oversight. It is also quite similar to the Campaign to Stop Killer Robots, an organization seeking to preemptively ban the use of automated machines that are capable of using lethal force to achieve military objectives.

Clearly, it is time that we looked at the feasibility of infusing robots (or more accurately, artificial intelligence) with circuits and subroutines that can analyze a situation and pick the right thing to do — just like a human being. Of course, this raises further ethical issues, like how human beings frequently make choices others would consider to be wrong, or are forced to justify actions they might otherwise find objectionable. If human morality is the basis for machine morality, paradoxes and dilemmas are likely to emerge.

But at this point, it seems all but certain that the US DoD will eventually break Asimov’s Three Laws of Robotics — the first of which is “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” This isn’t necessarily a bad thing, but it will open Pandora’s box. On the one hand, it’s probably a good idea to replace human soldiers with robots. But on the other, if the US can field an entirely robotic army, war as a tool of statecraft suddenly becomes much more acceptable.

As we move steadily towards a military force that is populated by autonomous robots, the question of controlling them, and whether or not we are even capable of giving them the tools to choose between right and wrong, will become increasingly relevant. And above all, the question of whether or not moral and ethical robots can allow for some immoral and unethical behavior will also come up. Who’s to say they won’t resent how they are being used and ultimately choose to stop fighting, or worse, turn on their handlers?

My apologies, but any talk of killer robots has to involve that scenario at some point. It’s like tradition! In the meantime, be sure to stay informed on the issue, as public awareness is about the best (and sometimes only) safeguard we have against military technology being developed without transparency, not to mention running amok!

Source: extremetech.com

New Drones Art Campaign

Over at deviantART, a constant source of inspired art for me, there’s an interesting new campaign designed to raise awareness and stimulate debate on a rather controversial issue. I am referring, as the topic line would suggest, to the use of drones and UAVs and all that it entails.

Dealing as it does with one of the greatest concerns facing developed nations today, not to mention the developing world, where drones are increasingly used, this campaign is not only timely and relevant, but an intriguing display of artwork motivated by social conscience. In short, it asks: how is this debate reflected in art, and what will future generations think of it?

looking for a hole, by arcas art

Inspired by similar projects taking place around the world, the purpose of the campaign is to draw attention to the fact that we are living in a world increasingly characterized by surveillance and killing machines. Or as techgnotic puts it:

Drones have become the white hot center of debate for a multitude of deeply consequential concerns for the entire Earth Sphere. No matter the digital end point or theatre of conversation, whether it be politics, war, privacy, pop culture, or the rise of machines – Drones or UAV’s (unmanned aerial vehicles) are the current catalyst du jour in any number of flashpoint discussions…

Even more interesting is the tone of inevitability of outcome. Core discussion seems to focus on a coming drone-filled sky and how we might govern our selves accordingly as this fact becomes a reality… Is this the dark side of human creativity and inquisitiveness that will ultimately one day spell our doom or the first signs of a coming technological Utopia.

galaxy saga – white gryphon, by ukitakumuki

In addition, the campaign features the thoughtful essay of the same name by Jason Boog (deviantART handle istickboy), who takes a look at how killing machines and drones have been explored through art and popular culture. Beginning with a short romp through history, identifying the first “drone” to ever be used, he goes on to examine how several generations of artists chose to portray them and their use.

Things come to a head in the modern age, where spending on drone development and mounting humanitarian concerns have combined to make this a truly global and pressing issue. With remote-controlled drones giving way to autonomous models and UAVs being used for domestic surveillance, there’s no telling where things could go.

mysterious journals, by sundragon83

On the one hand, a concerned and mobilized public could place limits and controls on them, or counter with their own form of “sousveillance” (public counter-surveillance). On the other hand, we could be headed for a police state where privacy is non-existent and robots decide who lives and who dies – maybe entirely on their own!

As you can certainly imagine, when I first learned of this campaign I could tell that it was right up my alley. Being such an obsessive geek for all things technological and how innovation and progress affect us, I knew I had to post about it. And as you can certainly tell from the samples posted here, the artwork is pretty damn badass!

I would recommend checking it out for the aesthetic appeal alone. Knowing you’re taking part in a campaign dedicated to public awareness is just a big bonus!

For more information, and to take a gander at some galleries, visit the campaign at techgnotic.deviantart.com.

DARPA’s New Sub-Hunting Robot

When it comes to planning for the next possible conflict, military planners are often forced to take into account emerging trends in technology, and find both uses and countermeasures for them. And when it comes to future wars at sea, possibly fought in the Strait of Hormuz or the Sea of Japan, a number of startling developments are being taken into account, and solutions drawn up!

One such “solution” is the new robot sub-hunter being jointly created by the Science Applications International Corporation (SAIC) and DARPA – the Defense Advanced Research Projects Agency. The unmanned maritime robot, called the Anti-Submarine Warfare Continuous Trail Unmanned Vehicle, or ACTUV, doesn’t exist yet and won’t for years. But SAIC’s plan does have the backing it needs, and presents an idea that is likely to inspire fear in submariners everywhere!

For one, the unmanned vehicle will be capable of operating for periods ranging between 60 and 90 days, significantly longer than any aerial drone is capable of staying airborne. What’s more, SAIC is designing the ACTUV to be way more autonomous than contemporary drone aircraft. Once powered up, all a ship need do is release the drone and allow it to rely on its long-range acquisition sonar and other advanced sensors to scan for submarines, while at the same time steering clear of any nearby surface ships.

And then there is the advanced technology powering the drone’s sonar arrays. Unlike conventional sonar, the ACTUV’s sensors create an acoustic image of its target, letting it confirm it has the right one. Once the ACTUV thinks it’s got something, it pings nearby Navy ships through a satellite link; they can confirm or deny the contact, either giving the ship the green light to hunt or instructions to search elsewhere.
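
In software terms, that amounts to a detect-classify-confirm loop with a human in it. The sketch below is purely illustrative: the match threshold, data structures, and function names are all invented, and nothing here reflects SAIC’s actual design.

# Illustrative sketch of the ACTUV's confirm/deny handshake as described
# above. All names, thresholds, and data structures are hypothetical.

MATCH_THRESHOLD = 0.8  # assumed minimum acoustic-image match score

def acoustic_match(contact: dict) -> float:
    """Compare the contact's acoustic image against the target signature
    and return a match score in [0, 1]. Stubbed for this sketch."""
    return contact.get("match_score", 0.0)

def request_confirmation(contact: dict) -> bool:
    """Ping nearby Navy ships over the satellite link and wait for a
    human confirm/deny. Stubbed to always confirm here."""
    return True

def patrol(contacts: list) -> None:
    for contact in contacts:
        if acoustic_match(contact) < MATCH_THRESHOLD:
            continue  # not a convincing match; keep searching
        if request_confirmation(contact):
            print(f"Green light: trailing contact {contact['id']}")
        else:
            print(f"Denied: searching elsewhere, away from {contact['id']}")

patrol([{"id": "S-1", "match_score": 0.92},
        {"id": "S-2", "match_score": 0.40}])

The key design point is that classification is autonomous but the decision to commit stays with the humans on the other end of the satellite link.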

And last but not least, the ACTUV can operate alongside its surface fleet, remaining in constant communication with a mothership as well as naval aircraft as they deploy sonar charges to help it hunt subs. This is a level of coordination that is rarely seen in aerial drones, which are either sent into action far from the front lines or controlled remotely by infantry in the field to offer fire support.

Ah, but there’s one thing: the drone isn’t armed. Primarily developed to help Navy ships hunt silent subs and/or cheap diesel-electric models, the vessel may be capable of operating autonomously, but it cannot take action to end lives. This feature may be the result of the Pentagon’s recent decision to limit the killing powers of UAVs and autonomous drones, which amounted to ensuring that a human being will always be at the helm wherever the death of human beings is involved.

What’s more, the drone is designed with all kinds of futuristic and present-day scenarios in mind. While silent subs – ones that use advanced drive systems to generate little to no noise (a la The Hunt for Red October) – are one likely scenario, there is also the possibility of the US Navy running into the cheap diesel models which are technologically inferior, but can be much quieter and harder to track than anything nuclear. Russia is known to sell them and Iran claims to have them, so any military analyst worth his salt would advise being prepared to meet them wherever they present themselves.

And of course, the SAIC was sure to create a video showing the ACTUV in action:


Source: Wired.com

Should We Be Afraid? A List for 2013

In a recent study, the John J. Reilly Center at the University of Notre Dame published a list of possible threats that could emerge in the new year. The study, called “Emerging Ethical Dilemmas and Policy Issues in Science and Technology”, sought to address the likely threats people might face as a result of recent developments, particularly in the fields of medical research, autonomous machines, 3D printing, climate change, and human enhancement.

The list contained eleven items, presented in random order so that people could assess which they thought was most important and vote accordingly. And of course, each one was detailed and sourced so as to ensure people understood the nature of the issue and where the information came from. They included:

1. Personalized Medicine:
Within the last ten years, the creation of fast, low-cost genetic sequencing has given the public direct access to genome sequencing and analysis, with little or no guidance from physicians or genetic counselors on how to process the information. Genetic testing may result in prevention and early detection of diseases and conditions, but may also create a new set of moral, legal, ethical, and policy issues surrounding the use of these tests. These include equal access, privacy, terms of use, accuracy, and the possibility of an age of eugenics.

2. Hacking medical devices:
Though no reported incidents have taken place (yet), there is concern that wireless medical devices could prove vulnerable to hacking. The US Government Accountability Office recently released a report warning of this while Barnaby Jack – a hacker and director of embedded device security at IOActive Inc. – demonstrated the vulnerability of a pacemaker by breaching the security of the wireless device from his laptop and reprogramming it to deliver an 830-volt shock. Because many devices are programmed to allow doctors easy access in case reprogramming is necessary in an emergency, the design of many of these devices is not geared toward security.

3. Driverless zipcars:
In three states – Nevada, Florida, and California – it is now legal for Google to operate its driverless cars. A human in the vehicle is still required, but not at the controls. Google also plans to marry this idea to the zipcar: fleets of automobiles shared by a group of users on an as-needed basis, with costs shared as well. These fully automated zipcars will change not only the way people travel but the entire urban/suburban landscape. And once it gets going, ethical questions surrounding access, oversight, legality and safety are naturally likely to emerge.

4. 3-D Printing:
3D printing has astounded many scientists and researchers thanks to the sheer number of possibilities it has created for manufacturing. At the same time, there is concern that some usages might be unethical, illegal, and just plain dangerous. Take, for example, recent efforts by groups such as Defense Distributed, a group intent on using 3D printers to create “Wiki-weapons”, or the possibility that DNA assembling and bioprinting could yield infectious or dangerous agents.

5. Adaptation to Climate Change:
The effects of climate change are likely to be felt differently by different peoples around the world. Geography plays a role in susceptibility, but a nation’s respective level of development is also intrinsic to how its citizens are likely to adapt. What’s more, we need to address how we intend to manage and manipulate wild species and nature in order to preserve biodiversity. This warrants an ethical discussion, not to mention suggestions of how we will address it when it comes.

6. Counterfeit Pharmaceuticals:
In developing nations, where life-saving drugs are most needed, low-quality and counterfeit pharmaceuticals are extremely common. Detecting such drugs requires the use of expensive equipment which is often unavailable, and the expanding trade in pharmaceuticals is giving rise to the need for legal measures to keep foreign markets from being flooded with cheap or ineffective knock-offs.

7. Autonomous Systems:
War machines and other robotic systems are evolving to the point that they can do away with human controllers or oversight. In the coming decades, machines that can perform surgery, carry out airstrikes, defuse bombs and even conduct research and development are likely to be created, giving rise to a myriad of ethical, safety and existential issues. Debate needs to be fostered on how this will affect us and what steps should be taken to ensure that the outcome is foreseeable and controllable.

8. Human-animal hybrids:
Is interspecies research the next frontier in understanding humanity and curing disease, or a slippery slope, rife with ethical dilemmas, toward creating new species? So far, scientists have kept experimentation with human-animal hybrids at the cellular level and have received support for their research goals. But to some, even modest experiments involving animal embryos and human stem cells are an ethical violation. An examination of the long-term goals and potential consequences is arguably needed.

9. Wireless technology:
Mobile devices, PDAs and wireless connectivity are having a profound effect in developed nations, with the rate of data usage doubling on an annual basis. As a result, telecommunications and government agencies are under intense pressure to regulate the radio frequency spectrum. The very way government and society does business, communicates, and conducts its most critical missions is changing rapidly. As such, a policy conversation is needed about how to make the most effective use of the precious radio spectrum, and to close the digital access divide for underdeveloped populations.

10. Data collection/privacy:
With all the data that is being transmitted on a daily basis, privacy is a major concern, and one that is growing all the time. Considering the amount of personal information a person gives up simply to participate in a social network, establish an email account, or install software on their computer, it is no surprise that hacking and identity theft are also major concerns. And now that data storage, microprocessors and cloud computing have become inexpensive and widespread, a discussion needs to be had about what kinds of information gathering are acceptable, and how readily a person should be willing to surrender details about their life.

11. Human enhancements:
A tremendous amount of progress has been made in recent decades when it comes to prosthetic, neurological, pharmaceutical and therapeutic devices and methods. Naturally, there is warranted concern that progress in these fields will reach past addressing disabilities and restorative measures and venture into the realm of pure enhancement. With the line between biological and artificial being blurred, many are concerned that we may be entering an era where the two are indistinguishable, and where cybernetic, biotechnological and other enhancements lead to a new form of competition in which people must alter their bodies in order to keep their jobs or avoid being left behind.

Feel scared yet? Well, you shouldn’t. The issue here is about remaining informed about possible threats, likely scenarios, and how we can address and deal with them now and later. If there’s one thing we should always keep in mind, it is that the future is always in the process of formation. What we do at any given time shapes it, and together we are always deciding what kind of world we want to live in. Things only change because all of us, either through action or inaction, allow them to. And if we want things to go a certain way, we need to be prepared to learn all we can about the causes, consequences, and likely outcomes of every scenario.

To view the whole report, follow the link below. And to vote on which issue you think is the most important, click here.

Source: reilly.nd.edu

Planning For Judgement Day…

Some very interesting things have been taking place in the last month, all of them concerning the possibility that humanity may someday face extinction at the hands of killer AIs. The first took place on November 19th, when Human Rights Watch and Harvard University teamed up to release a report calling for a ban on “killer robots”, a preemptive move to ensure that we as a species never develop machines that could one day turn against us.

The second came roughly a week later when the Pentagon announced that measures were being taken to ensure that wherever robots do kill – as with drones, remote killer bots, and cruise missiles – the controller will always be a human being. Yes, while Americans were preparing for Thanksgiving, Deputy Defense Secretary Ashton Carter signed a series of instructions to “minimize the probability and consequences of failures that could lead to unintended engagements,” starting at the design stage.

X-47A Drone, the latest “hunter-killer”

And then most recently, and perhaps in response to Harvard’s and HRW’s declaration, the University of Cambridge announced the creation of the Centre for the Study of Existential Risk (CSER). This new body, which is headed up by such luminaries as Huw Price, Martin Rees, and Skype co-founder Jaan Tallinn, will investigate whether recent advances in AI, biotechnology, and nanotechnology might eventually trigger some kind of extinction-level event. The Centre will also look at anthropogenic (human-caused) climate change, as it might not be robots that eventually kill us, but a swelteringly hot climate instead.

All of these measures stem from the same thing: ongoing developments in the fields of computer science, remote systems, and AI. Thanks in part to the creation of the Google Neural Net, increasingly sophisticated killing machines, and predictions that it is only a matter of time before they are capable of making decisions on their own, there is some worry that machines programmed to kill will be able to do so without human oversight. By creating bodies that can make recommendations on the application of these technologies, it is hoped that ethical conundrums and threats can be nipped in the bud. And by legislating that human agency be the deciding factor, it is further hoped that such will never be the case.

The question is, is all this overkill, or does it make perfect sense given the direction military technology and the development of AI are taking? Or, as a third possibility, might it not go far enough? Given the possibility of a “Judgement Day”-type scenario, might it be best to ban all AIs and autonomous robots altogether? Hard to say. All I know is, it’s exciting to live in a time when such things are being seriously contemplated, and are not merely restricted to the realm of science fiction.

Mercury Robot Survives Hurricane Sandy

Amidst the news of Hurricane Sandy, of the devastation and ongoing efforts at rescuing those in harm’s way, there was a story that might have been overlooked. It seems that a small robot named Mercury, one of Liquid Robotics’ Wave Gliders, survived the storm and managed to keep transmitting information the whole time.

When the storm hit, Mercury was located just 161 km east of Toms River, New Jersey, where winds reached about 115 km/h. Nevertheless, the robot continued to function through the worst of it, transmitting real-time weather data and helping scientists get a better understanding of what made the storm tick.

Naturally, everyone at the parent company was quite pleased with their little automaton, even though it was only doing its job. Technically speaking, Wave Gliders are autonomous monitoring devices that use the ocean’s waves for propulsion. They are composed of two sections: a float that rides on the surface and a submarine compartment that resides under the water. The lower section comes equipped with moving wings that convert wave energy into forward momentum.

Each Glider comes with a GPS, a series of internal processors, navigation software, and an assortment of environmental sensors. Designed for oceanic data-gathering missions, their primary purpose is to help scientists and meteorologists understand and come up with solutions for climate change, resource management, and weather alerts. Given this mission profile, Mercury’s ability to keep working through a Category 1 hurricane was quite encouraging. According to Joanne Masters of Liquid Robotics: “Being able to provide real-time weather data from the surface and the first layer of the water column of the ocean will help scientists better measure and predict hurricane intensity. This can help save lives and prevent property devastation.”
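
For a sense of what that data stream might look like, here is a toy Python model of a single telemetry report, based only on the sensors named above (GPS plus environmental sensors). The field names and values are invented for illustration and do not reflect Liquid Robotics’ actual data format.

# Hypothetical telemetry record for a Wave Glider. Fields and values are
# invented; only the sensor categories come from the article above.
from dataclasses import dataclass, asdict
import json

@dataclass
class GliderReport:
    glider_id: str
    lat: float                # from GPS
    lon: float
    wind_speed_kmh: float     # surface weather conditions
    wave_height_m: float
    water_temp_c: float       # first layer of the water column

report = GliderReport("mercury", 39.95, -72.20, 115.0, 9.5, 18.4)
print(json.dumps(asdict(report)))  # relayed to shore in real time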

Source: news.cnet.com