Judgement Day Update: Searching for Moral, Ethical Robots

It's no secret that the progress being made in robotics, autonomous systems, and artificial intelligence is making many people nervous. With so many science fiction franchises based on the premise of intelligent robots going rogue and running amok, it's understandable that the US Department of Defense would want to get ahead of this issue before it becomes a problem. Yes, the US DoD is hoping to preemptively avoid a Skynet situation before Judgement Day occurs. How nice.

Working with top computer scientists, philosophers, and roboticists from a number of US universities, the DoD recently began a project that will tackle the tricky topic of moral and ethical robots. Towards this end, this multidisciplinary project will first try to pin down exactly what human morality is, and then try to devise computer algorithms that will imbue autonomous robots with moral competence — basically, the ability to recognize right from wrong and choose the former.

This project is being carried out by researchers from Tufts, Brown, and the Rensselaer Polytechnic Institute (RPI), with funding from the Office of Naval Research (ONR). ONR, like DARPA, is a wing of the Department of Defense that mainly deals with military research and development. The first task, as already mentioned, will be to use theoretical (philosophical) and empirical (experimental) research to try to isolate essential elements of human morality.

These findings will then be extrapolated into a formal moral framework, which in turn can be implemented in software – most likely some kind of deep neural network. Assuming they can isolate some kind of "moral imperative", the researchers will then take an advanced robot – something like Atlas or BigDog – and imbue its software with an algorithm that captures it. Whenever an ethical situation arises, the robot would turn to this programming to decide on the best course of action.

One of the researchers involved in the project, Selmer Bringsjord at RPI, envisions these robots using a two-stage approach to picking right from wrong. First, the AI would perform a "lightning-quick ethical check" – asking something like "should I stop and help this wounded soldier?" Depending on the situation, the robot would then decide whether deeper moral reasoning is required – for example, should it help the wounded soldier, or carry on with its primary mission of delivering vital ammunition and supplies to the front line, where other soldiers are at risk?
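As a thought experiment, here is a minimal sketch of what such a two-stage check might look like in code. Everything in it – the function names, the situation fields, the toy scoring – is a hypothetical illustration, not anything the researchers have published.

```python
# Hypothetical two-stage ethical check, loosely modeled on the approach
# described above. All names, fields, and thresholds are illustrative
# placeholders, not the researchers' actual framework.

def quick_ethical_check(situation):
    """Stage 1: a fast, rule-based screen for obvious moral triggers."""
    return situation.get("wounded_soldier_present", False)

def deep_moral_reasoning(situation):
    """Stage 2: slower deliberation that weighs competing obligations."""
    cost_of_delay = situation.get("soldiers_at_risk_if_delayed", 0)
    benefit_of_helping = situation.get("casualties_prevented_by_helping", 0)
    # Toy utilitarian comparison; a real framework would be far richer.
    if benefit_of_helping >= cost_of_delay:
        return "help_soldier"
    return "continue_mission"

def decide(situation):
    """Run the quick check first; escalate only if it flags something."""
    if not quick_ethical_check(situation):
        return "continue_mission"
    return deep_moral_reasoning(situation)

if __name__ == "__main__":
    print(decide({"wounded_soldier_present": True,
                  "soldiers_at_risk_if_delayed": 3,
                  "casualties_prevented_by_helping": 1}))
```

Even this toy version shows where the hard part lies: the quick check is easy, but the deliberation stage is exactly where the project's work on isolating the essential elements of human morality would have to come in.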

Eventually, this moralistic AI framework will also have to deal with tricky topics like lethal force. For example, is it okay to open fire on an enemy position? What if the enemy is a child soldier? Should an autonomous UAV blow up a bunch of terrorists? What if it's only 90% sure that they're terrorists, with a 10% chance that they're just innocent villagers? What would a human UAV pilot do in such a case – and should robots merely have to match the moral and ethical competence of humans, or be held to a higher standard?

While we're not yet at the point where military robots have to decide which injured soldier to carry off the battlefield, or where UAVs can launch Hellfire missiles at terrorists without human intervention, it's very easy to imagine a future where autonomous robots are given responsibility for making those kinds of moral and ethical decisions in real time. In short, the decision by the DoD to begin investigating a morality algorithm demonstrates foresight and sensible planning.

In that respect, it is not unlike the recent meeting that took place at the United Nations European Headquarters in Geneva, where officials and diplomats discussed placing legal restrictions on autonomous weapons systems before they evolve to the point where they can kill without human oversight. It is also quite similar to the Campaign to Stop Killer Robots, an organization seeking a preemptive ban on automated machines capable of using lethal force to achieve military objectives.

Clearly, it is time we looked at the feasibility of infusing robots (or, more accurately, artificial intelligence) with circuits and subroutines that can analyze a situation and pick the right thing to do – just like a human being. Of course, this raises further ethical issues, like how human beings frequently make choices others would consider wrong, or are forced to justify actions they might otherwise find objectionable. If human morality is the basis for machine morality, paradoxes and dilemmas are likely to emerge.

But at this point, it seems all but certain that the US DoD will eventually break Asimov’s Three Laws of Robotics — the first of which is “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” This isn’t necessarily a bad thing, but it will open Pandora’s box. On the one hand, it’s probably a good idea to replace human soldiers with robots. But on the other, if the US can field an entirely robotic army, war as a tool of statecraft suddenly becomes much more acceptable.

As we move steadily towards a military force populated by autonomous robots, the question of how to control them – and whether we are even capable of giving them the tools to choose between right and wrong – will become increasingly relevant. And above all, the question of whether moral and ethical robots can be made to allow for some immoral and unethical behavior will also come up. Who's to say they won't resent how they are being used and ultimately choose to stop fighting; or worse, turn on their handlers?

My apologies, but any talk of killer robots has to involve that scenario at some point. It’s like tradition! In the meantime, be sure to stay informed on the issue, as public awareness is about the best (and sometimes only) safeguard we have against military technology being developed without transparency, not to mention running amok!

Source: extremetech.com

Video Short: Batman Vs. The Terminator

It's the kind of question philosophers have pondered for millennia. Who would win in a fight: Batman in a powered exosuit, or Skynet with its army of Terminators? This is the question explored in this new animated short by Mitchell Hammond. Set in the year 2029, it shows Bruce Wayne, who survived the Judgement Day of '97, fighting alongside the resistance against Skynet and its machine army.

Given Christian Bale's involvement in both franchises, a crossover of this nature was inevitable. But I can honestly say that this five-minute short was way better than watching Terminator: Salvation! Nothing cooler than Batman with all his high-tech gear kicking Terminator ass! Not to mention taking the fight directly to Skynet. Sorry, John Connor, ol' Batsy beat you to it!

Check it out and enjoy the show. And be sure to comment and join me in demanding a sequel!

Terminator 5 News!

Before 2013 ended, some news concerning a certain reboot emerged on the entertainment feed. Yes, after many unconfirmed rumors and updates about the upcoming Terminator relaunch, it now seems that some genuine, studio-backed news has been announced. Foremost amongst these announcements was the casting of the two main characters, Sarah and John Connor.

After much consideration as to who would play the role of the woman who gave birth to mankind's salvation (no, not THAT one!), it has been officially confirmed that Emilia Clarke has been cast. Fans of the HBO series Game of Thrones will instantly recognize her as the British actress who brought Daenerys "Stormborn" Targaryen to life.

This announcement came mere days after the studio announced that it had Jason Clarke in mind to play the role of John Connor. The 44-year-old veteran of such productions as Zero Dark Thirty, The Great Gatsby, and The Chicago Code is a much more seasoned choice than either Garrett Hedlund (Tron: Legacy) or Boyd Holbrook (The Host). He's also grizzled as hell and definitely has the look for John Connor.

What's more, the casting of a 40-something man to play the son and a 20-something woman to play the mother would seem to provide some hints as to the plot of the movie. Combined with the recently confirmed title – Terminator: Genesis – there is strong evidence to suggest that the story will revolve around John Connor going back in time to protect his own mother.

Either that, or the movie will consist of roughly equal parts of John Connor fighting the machines in the future, with flashbacks or cut-scenes showing the past, where Sarah battles to ensure her son lives to see the day when he will lead humanity to victory. Difficult to say, but personally I hope they go with the latter, since it offers a chance to cover both aspects of the story while giving the studio room to do something fresh.

Other confirmed bits of information include the fact that the studio is considering both Garrett Hedlund and Boyd Holbrook for the role of Kyle Reese, John Connor's father and Sarah's original protector. Arnold Schwarzenegger has already confirmed that he will be back for the fifth installment, and in the role of a Terminator. None of this "he's the human template they built them from" crap!

It has also been made abundantly clear at this point that the movie will be a reboot of the franchise and the start of a new trilogy, retelling the events of 1984's The Terminator, and is set for release on July 1st, 2015 (Canada Day!). A TV series is also to be produced, one that will run parallel to the movie trilogy and intersect with its narrative at certain points. So it won't be a reboot of The Sarah Connor Chronicles.

All I can say is, this time around, they better get it right! Terminator: Salvation promised to be a reboot of sorts after the relative fizzle that was Terminator 3. But of course, the studio made a terrible blunder there by offering no solid resolution, and instead trying to keep the movie open-ended for the sake of potential sequels. Somehow, learning that Skynet was destroyed, but there was still a war on, just seemed like a transparent money grab.

This time, I'm hoping the lesson has been learned. What we don't need is a return to the original Terminator storyline. What we need is something we haven't seen yet: a detailed account of the war against the machines and how it was ultimately won. Sure, bits and pieces were shared through Kyle Reese's recounting and flashbacks, but that only made the story seem more interesting!

Now, at long last, it would be good if a movie covered the war and only the war. No more time-travel paradoxes, no fate crap (which John Connor repeatedly says does not exist!). Just show us how Connor managed to carve a resistance out of a post-apocalyptic landscape, recruit people from the extermination camps, and turn them into an ass-kicking force that stomped the machines and destroyed Skynet.

So c’mon, Hollywood! Bring on the carnage!

Sources: denofgeek.com, (2), blastr.com, scified.com

Judgement Day Update: The DARPA Atlas Robot

Judgement Day has come early this year! At least that's the impression I got when I took a look at this new DARPA prototype for a future robotic infantryman. With its anthropomorphic frame, servomotors and cables, sensor-clustered face, and the shining lights on its chest, this machine just screams Terminator! Yet surprisingly, it is being developed to help human beings. Yeah, that's what they said about Skynet, right before it nuked us!

Yes, this 6-foot, 330-pound robot, which was unveiled this past Thursday, was in fact designed as a testbed humanoid for disaster response. Designed to carry tools and tackle rough terrain, this robot – and others like it – is intended to operate in hazardous or disaster-stricken areas, assisting in rescue efforts and performing tasks that would ordinarily endanger the lives of human workers.

Funded by DARPA as part of its Robotics Challenge, the robot was developed by Boston Dynamics, the same people who brought you the AlphaDog – aka the Legged Squad Support System (LS3, pictured above) – and the Petman soldier robot. The former was developed as an all-terrain quadruped that could act as an infantry-support vehicle by carrying a squad's heavy ordnance over rough terrain.

The latter, like Atlas, was developed as a testbed to see just how anthropomorphic a robot can be – i.e. whether or not it could move, run, and jump with fluidity rather than awkward "robot" movements, and handle different surfaces. Some of you may recall seeing a video or two of it doing pushups and running on a treadmill back in 2011.

Atlas, however, represents something vastly different from and more complex than these other two machines. It was designed not only to walk and carry things, but also to travel over rough terrain and climb using its hands and feet. Its head includes stereo cameras and a laser range finder to help it navigate its environment.

And, as Boston Dynamics claimed in a press release, the bot also possesses "sensate hands" that are capable of using human tools, and "28 hydraulically actuated degrees of freedom". Its only weakness, at present, is the electrical power supply it is tethered to. But other than that, it is the most "human" robot – purely in terms of physical capabilities – to date. Not only that, but it also looks pretty badass when seen in this full-profile pic, doesn't it?

The DARPA Robotics Challenge is designed to help evolve machines that can cope with disasters and hazardous environments like nuclear power plant accidents. The seven teams currently in the challenge will each get their own Atlas bot to program until December, when trials will be held at the Homestead Miami Speedway in Florida and the teams will be presented with a series of challenges.

In the meantime, check out the video below of the Atlas robot as it demonstrates its full range of motion while busting a move! Then tell me if the robot is any less frightening to you. Can't help but look at the full-length picture and imagine a plasma cannon in its hands, can you?


Source: news.cnet.com


Drone Wars: X-47B Makes First Successful Landing


The X-47B, also known as the Unmanned Combat Air System (UCAS), is the world's first and only stealth autonomous drone. Late last year, it accomplished a first when it was placed aboard the USS Harry Truman, mainly to see if it would remain in place as the ship conducted maneuvers. This was the first in a series of trials to see if the new naval drone can take off from and land on an aircraft carrier.

And earlier this month, it achieved another first when it performed its first arrested landing. Basically, this involves the aircraft grabbing hold of an arresting cable with its tailhook as it lands, simulating what happens on a carrier deck. This marked an important milestone in the development of the UCAS by proving that it is capable of landing at sea. Later this month, it will complete the final trial when it takes part in a catapult launch from the deck of the USS George H.W. Bush.


For some time now, the development of autonomous aerial drones has been a subject of concern, both for human rights groups and for citizens who worry about putting the power to kill into the hands of machines. The use of less sophisticated UAVs, such as the MQ-9 Reaper and the MQ-1 Predator, has already attracted considerable attention and criticism due to questions about their killing power and how they are being used.

However, these two weapons systems both have the distinction of being controlled by a remote operator, not by an on-board computer. By removing a human being from the process altogether, many fear that things will only get worse. Up until now, the US Navy and other branches of the armed services, both within the US and abroad, have had people making the decision to use lethal force. This has ensured a degree of oversight and culpability, but with autonomous machines, that will no longer be the case.


What's more, if this technology is ever used against the citizens of the country that employs it, the public will have a much harder time holding those responsible to account. In response to these concerns, the Pentagon announced last Thanksgiving that it would be taking measures to ensure that, where life-and-death decisions were concerned, a human controller would always be at the helm.

For its part, the Navy has offered assurances to the public that the X-47B is not intended for operational use, but is part of a program geared toward developing future unmanned carrier-based aircraft. However, with some modifications, the unit could be outfitted with weapons mounts capable of carrying missiles and bombs, at which point any legal barriers could easily find themselves being removed.

And as always, there are those who worry that giving machines the ability to kill without human oversight is a threat in and of itself. Forget about the government being culpable; what happens when said machines decide to launch nukes at Russia so that the counter-attack will kill their enemies over here? Find John Connor, people, he's our only hope!


Source: news.cnet.com

Judgement Day Update: Robots for Kids

Robots are penetrating every aspect of life, from serving coffee and delivering books to cleaning up messes and fighting crime. In fact, the International Federation of Robotics reported that worldwide sales of robots topped $8.5 billion in 2011, with an estimated 166,028 robots sold. And with all the advances being made in AI and musculoskeletal robots, it's only likely to get worse.

Little wonder, then, that efforts are being made to ensure that robots can effectively integrate into society. On the one hand, there's the RoboEarth Cloud Engine, known as Rapyuta, which will make information sharing possible between machines and help them make sense of the world. On the other, there are items like this little gem. It's called the Romo, and its purpose is to teach your kids about robotics in a friendly, smiling way.

Scared yet? Well, don't be just yet. While some might think this little dude is putting a happy face on the coming robocalypse, the creators have stated that the real purpose behind it is to inspire a new, younger generation of engineers and programmers who can help solve some of the world's technical problems in areas like health care and disaster relief.

Created by Las Vegas-based startup Romotive, this little machine uses the computing power of iOS devices as its brain. Basically, this means that you can remotely control the bot with your smartphone. Simply plug the phone into the robot's body and activate the app, and you get its blue, smiling face. Designed for use by kids, its programming comes down to a simple series of if-then dependencies.

In short, Romo can be programmed to recognize faces and respond to visual or auditory cues. The most common reaction is a smile, but the Romo can also look surprised or doe-eyed. And with regular app and software updates, the Romo is expected to get smarter and more sophisticated over time.
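To give a sense of what "a simple series of if-then dependencies" might look like, here is a toy sketch. The event names and reactions below are made-up placeholders, not Romotive's actual app logic or API.

```python
# Toy illustration of if-then robot behavior, in the spirit of what the
# article describes. None of these event names or reactions come from
# Romotive; they are hypothetical placeholders.

def react(event):
    """Map a detected cue to one of the robot's facial reactions."""
    if event == "face_detected":
        return "smile"
    elif event == "loud_noise":
        return "look_surprised"
    elif event == "face_lost":
        return "look_doe_eyed"
    else:
        return "idle"

if __name__ == "__main__":
    for cue in ["face_detected", "loud_noise", "something_else"]:
        print(cue, "->", react(cue))
```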

To realize their goal of creating a child-friendly robot, the company launched a Kickstarter campaign back in October of 2011 to raise the $32,000 they would need. In less than two years, they received a total of 1,152 donations totaling some $114,796. Now available in stores at $149 a pop (smartphone not included), Romo is, its makers hope, on its way to becoming the first truly personal robot.

Still, it's never too soon to start your Judgement Day planning. Stock up on EMPs and ammo; it's going to be a rough Robopocalypse! And be sure to check out the company website.

Sources: fastcoexist.com, kickstarter.com

Judgement Day Update: The Tool-Using Robot Hand


As if robotics weren't advancing fast enough, what with robotic astronauts and androids that can be 3D printed, it seems that DARPA has developed a robotic hand that can perform complex, dexterous tasks. To make matters worse, this particular robot can be cheaply produced. Up until now, cost has remained a barrier to creating robotic limbs capable of matching human skill. But from now on, we could very well see robots replacing skilled labor on all fronts!

As we're all no doubt aware, one of the key differences between humans and other mammals is the use of tools. Tools not only allowed our earliest ancestors to alter their environment and overcome their disadvantages when faced with larger, deadlier creatures; they also allowed Homo sapiens as a species to gain the upper hand over other hominids, whose brains and hands were not as developed as our own.


So what happens when a robot is capable of matching a human being at a complicated task – say, changing a tire – and at a cost most businesses can afford? To add insult to injury, the robot was able to complete this task using tools specifically designed for a human being. But of course, the purpose was not to demonstrate that a robot could replace a human worker, but to show that it is possible to create more dexterous prosthetics for the sake of replacing lost limbs.

Ordinarily, such machinery would run a person a good $10,000, but DARPA’s new design is estimated at a comparatively modest $3000. This was made possible by the use of consumer-grade tech in the construction process, such as cameras from cellphones. And in addition to being able to work with tools, the robot can perform more intricate maneuvers, such as handling an object as small as a set of tweezers.


However, DARPA was also quick to point out that the robot shown in the video featured below is actually an older model. Since its creation, they have set their sights on loftier goals than simple tool use, such as a robot that can identify and defuse Improvised Explosive Devices (IEDs). Much like many of their other robotic projects, such as the Legged Squad Support System (LS3), this is part of DARPA's commitment to developing robots that will assist future generations of soldiers in the US Army.

So if you're a member of a pit crew, you can rest easy for now. Your job is safe… for the moment. But if you're a member of a bomb squad, you might be facing some robotic competition in the near future. Who knows, maybe that's a good thing. No one likes to be replaced, but if you're facing a ticking bomb, I think most people would be happier if the robot handled it!

And in the meantime, check out the video of the robotic hand in action:

Source: extremetech.com

More Judgement Day Announcements…

November saw some rather interesting developments in the field of robotics. First, there was the unveiling of Disney's charming juggling robot, an automaton capable of playing catch with a human being. This robot is intended for use in Disneyland parks as a form of entertainment for guests, but many people wonder if this is an eerie precursor to a machine that is capable of throwing other things as well…

While Disney was scant on the details of how the robot works, they did explain that a camera tracks the balls being thrown, while an algorithm works out exactly where each ball is going to land and positions the robot arm accordingly. By combining video tracking with predictive software, the robot is able to anticipate where its catching hand needs to be, much like the human brain does. A rough sketch of that prediction step follows; then check out the video of it playing catch with a human stand-in below:
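Disney hasn't published any code, so the following is only a minimal sketch of what landing-point prediction might involve under simple ballistic assumptions. The function name, catch height, and sample observations are all illustrative, not Disney's implementation.

```python
# Hypothetical sketch of catch-point prediction: given a few tracked
# (t, x, y, z) observations of a thrown ball, fit a ballistic trajectory
# and solve for where it crosses the catch height. Illustrative only.

import numpy as np

def predict_landing(times, positions, catch_height=1.0):
    """Fit x(t) and y(t) as straight lines and z(t) as a parabola, then
    find where the ball comes back down to catch_height."""
    t = np.asarray(times, dtype=float)
    p = np.asarray(positions, dtype=float)  # shape (N, 3): columns x, y, z

    # Horizontal motion: roughly constant velocity (least-squares line fit).
    vx, x0 = np.polyfit(t, p[:, 0], 1)
    vy, y0 = np.polyfit(t, p[:, 1], 1)

    # Vertical motion: z = z0 + vz*t + a*t^2, where a should be about -g/2.
    a, vz, z0 = np.polyfit(t, p[:, 2], 2)

    # Solve a*t^2 + vz*t + (z0 - catch_height) = 0 and take the later root,
    # i.e. the moment the ball descends through the catch height.
    roots = np.roots([a, vz, z0 - catch_height])
    t_catch = max(r.real for r in roots if abs(r.imag) < 1e-9)

    return x0 + vx * t_catch, y0 + vy * t_catch, catch_height

if __name__ == "__main__":
    ts = [0.00, 0.05, 0.10, 0.15]
    obs = [(0.00, 0.0, 1.20), (0.15, 0.1, 1.40),
           (0.30, 0.2, 1.55), (0.45, 0.3, 1.65)]
    print(predict_landing(ts, obs))  # where to put the catching hand
```

In practice the camera would presumably feed in new observations every frame, with the estimate refined and the arm repositioned continuously until the catch.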


Also in the news, Momentum Machines unveiled a new automated robotic burger chef last month. After the robot was successfully tested on the line, the company announced plans to introduce it to fast food chains everywhere, saving companies millions of dollars in staffing costs. Momentum Machines projects that its automated burger robot – which does everything from flipping burgers to slicing tomatoes to toasting the bun – could save the fast food industry $9 billion in wages.

As yet, no video is available of the burger robot doing a demonstration, but this helpful infographic does give a breakdown of the robot's structure and basic functions (below). Granted, this might seem like a callous and insensitive move, especially to the over 2 million workers currently employed in fast food in the US alone. But with just about every other production line having been automated already, this seems to many like the next logical step. Good luck, Momentum Machines; hope the angry mob outside your offices doesn't scare you!

Admittedly, this may all still seem like a far cry from Skynet and Cylons, but under the circumstances, is it any wonder that Cambridge University founded the Centre for the Study of Existential Risk (CSER) to evaluate new technologies? Clearly, some people are worried robots are going to be doing more than just chucking balls and flipping our burgers.

Planning For Judgement Day…

Some very interesting things have been taking place over the last month, all of them concerning the possibility that humanity may someday face extinction at the hands of killer AIs. The first took place on November 19th, when Human Rights Watch and Harvard University teamed up to release a report calling for a ban on "killer robots", a preemptive move to ensure that we as a species never develop machines that could one day turn against us.

The second came roughly a week later when the Pentagon announced that measures were being taken to ensure that wherever robots do kill – as with drones, remote killer bots, and cruise missiles – the controller will always be a human being. Yes, while Americans were preparing for Thanksgiving, Deputy Defense Secretary Ashton Carter signed a series of instructions to “minimize the probability and consequences of failures that could lead to unintended engagements,” starting at the design stage.

X-47A Drone, the latest "hunter-killer"

And then, most recently, and perhaps in response to Harvard's and HRW's declaration, the University of Cambridge announced the creation of the Centre for the Study of Existential Risk (CSER). This new body, which is headed up by such luminaries as Huw Price, Martin Rees, and Skype co-founder Jaan Tallinn, will investigate whether recent advances in AI, biotechnology, and nanotechnology might eventually trigger some kind of extinction-level event. The Centre will also look at anthropogenic (human-caused) climate change, as it might not be robots that eventually kill us, but a swelteringly hot climate instead.

All of these developments stem from the same thing: ongoing advances in computer science, remote systems, and AI. Thanks in part to the creation of the Google Neural Net, increasingly sophisticated killing machines, and predictions that it is only a matter of time before they are capable of making decisions on their own, there is some worry that machines programmed to kill will be able to do so without human oversight. By creating bodies that can make recommendations on the application of these technologies, it is hoped that ethical conundrums and threats can be nipped in the bud. And by legislating that human agency be the deciding factor, it is further hoped that such a scenario will never come to pass.

The question is, is all this overkill, or does it make perfect sense given the direction military technology and the development of AI are taking? Or, as a third possibility, does it not go far enough? Given the possibility of a "Judgement Day"-type scenario, might it be best to ban all AIs and autonomous robots altogether? Hard to say. All I know is, it's exciting to live in a time when such things are being seriously contemplated, and are not merely restricted to the realm of science fiction.