Some very interesting things have been taking place over the last month, all concerning the possibility that humanity may someday face extinction at the hands of killer AIs. The first took place on November 19th, when Human Rights Watch and Harvard University teamed up to release a report calling for a ban on “killer robots”, a preemptive move to ensure that we as a species never develop machines that could one day turn against us.
The second came roughly a week later, when the Pentagon announced that measures were being taken to ensure that wherever robots do the killing – as with drones, remotely operated weapons, and cruise missiles – the controller will always be a human being. Yes, while Americans were preparing for Thanksgiving, Deputy Defense Secretary Ashton Carter signed a series of instructions to “minimize the probability and consequences of failures that could lead to unintended engagements,” starting at the design stage.

And then most recently, and perhaps in response to Harvard’s and HRW’s declaration, the University of Cambridge announced the creation of the Centre for the Study of Existential Risk (CSER). This new body, which is headed up by such luminaries as Huw Price, Martin Rees, and Skype co-founder Jaan Tallinn, will investigate whether recent advances in AI, biotechnology, and nanotechnology might eventually trigger some kind of extinction-level event. The Centre will also look at anthropogenic (human-caused) climate change, since it might not be robots that eventually kill us, but a swelteringly hot climate instead.
All of these developments stem from the same source: ongoing progress in the fields of computer science, remotely operated systems, and AI. Thanks in part to the creation of the Google Neural Net, increasingly sophisticated killing machines, and predictions that it is only a matter of time before such machines are capable of making decisions on their own, there is some worry that machines programmed to kill will one day be able to do so without human oversight. By creating bodies that can make recommendations on the application of these technologies, it is hoped that ethical conundrums and threats can be nipped in the bud. And by legislating that human agency always be the deciding factor, it is further hoped that such a day will never come.
The question is, is all this overkill, or does it make perfect sense given the direction military technology and the development of AI are taking? Or, as a third possibility, might it not go far enough? Given the possibility of a “Judgement Day”-type scenario, might it be best to ban all AIs and autonomous robots altogether? Hard to say. All I know is, it’s exciting to live in a time when such things are being seriously contemplated, and are not merely restricted to the realm of science fiction.
You just had to write a post like this 17 days before the whole Mayan calendar thing is supposed to wind down. Now the conspiracy nuts out there are going to point to the things in this article and say it’s proof of the end.
What? I thought we passed that business already! Okay, time to start looting and hoarding!
The next two dates are December 21 and 31. After New Year’s, we know we’re okay.
End of the World Party!
I love that there’s a Centre for the Study of Existential Risk. I love living in the Horrible Future!
This is why I am a doomsday prepper. Not actually, but it is fun to pretend. I am more worried about the INEVITABLE Zombie apocalypse.
Yep, we need to legislate for that!
Agreed! That’s the real reason we have the right to bear arms in this country: the founding fathers had to deal with a pretty nasty zombie outbreak right after the revolution.
Have you been watching Abe Lincoln: Vampire Hunter?
Loved the idea, hated the movie. Slate.com ran a pretty great article about other presidents who battled the supernatural, though (I can’t remember which ones did what, unfortunately). I assume that someone involved in the writing of the Constitution had to fight zombies, and that our Second Amendment rights weren’t given to us to encourage us to kill each other.