Universe Today: Are Intelligent Civilizations Doomed?

My friend over at Universe Today, Fraser Cain, has been busy of late! In his latest podcast, he asks an all-important question that addresses the worrisome issues arising out of the Fermi Paradox. For those unfamiliar with it, the paradox states that given the age of the universe, the sheer number of stars and planets, and the statistical likelihood of some of them supporting life, why has humanity failed to find any indication of intelligent life elsewhere?

It’s a good question, and it raises some frightening possibilities. First, humanity may be alone in the universe, which is a frightening enough prospect given its sheer size. There’s nothing worse than being on a massive playground and knowing you have only yourself to play with. A second possibility is that extra-terrestrial life does exist, but has taken great pains to avoid contacting us. An insulting, if understandable, proposition.

Third, it could be that humanity alone has achieved the level of technical development necessary to send and receive radio transmissions or construct satellites. That too is troubling, since it would mean that despite the age of the universe, it took this long for a technologically advanced species to emerge, and that there are no species out there we can learn from or look up to.

The fourth, and arguably most frightening, possibility is the Great Filter theory – that all intelligent life is doomed to destroy itself, and we haven’t heard from any others because they are all dead. This concept has been explored by numerous science fiction authors – such as Stephen Baxter (Manifold: Space), Alastair Reynolds (the Revelation Space universe) and Charles Stross (Accelerando) – each of whom offers a different variation and answer.

As explored by these and other authors, the leading suggestions are that civilizations will eventually create weapons or some kind of programmed matter that destroys them – nuclear weapons, planet busters, killer robots, or nanotech that goes haywire (aka “grey goo”). A second possibility is that all species eventually undergo a technological/existential singularity, where they shed their bodies and live out their lives in a simulated existence.

A third is that intelligent civilizations fall into a “success trap”, outgrowing their resources and their capacity to support their numbers, or simply ruining their planetary environment before they can get out into the universe. As usual, Fraser gives a great rundown on all of this, explaining what the Fermi Paradox is, the statistical likelihood of life existing elsewhere, and what scenarios could explain why humanity has yet to find any proof of other civilizations.
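For those who want to put numbers on “the statistical likelihood of life existing elsewhere” themselves, the classic starting point is the Drake Equation. Here is a minimal sketch in Python – to be clear, the parameter values below are illustrative guesses, not established figures, since most of them remain unknown (which is rather the point of the paradox):

```python
# Drake Equation: N = R* x fp x ne x fl x fi x fc x L
# A back-of-the-envelope estimate of how many detectable
# civilizations our galaxy hosts. Every value below is an
# illustrative guess, not a measured quantity.

R_star = 1.5     # stars formed per year in the Milky Way
f_p    = 0.9     # fraction of stars with planets
n_e    = 0.4     # habitable planets per planet-bearing star
f_l    = 0.1     # fraction of habitable planets where life arises
f_i    = 0.01    # fraction of life-bearing planets that evolve intelligence
f_c    = 0.1     # fraction of intelligent species that emit detectable signals
L      = 10_000  # average lifetime (years) of a signaling civilization

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"Estimated communicating civilizations: {N:.2f}")  # -> 0.54
```

Note how everything hinges on L, the lifetime of a signaling civilization – which is precisely the variable the Great Filter theory calls into question.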

Are Intelligent Civilizations Doomed:


And be sure to check out the podcast that deals strictly with the Fermi Paradox, from roughly a year ago:

The Fermi Paradox Explained:

Judgement Day Update: Terminators at I/O 2014

We’ve all thought about it… the day when a super-intelligent computer becomes self-aware and unleashes a nuclear holocaust, followed shortly thereafter by the rise of the machines (cue theme from Terminator). But as it turns out, when the robot army does come to exterminate humanity, at least two humans might be safe – Google co-founders Larry Page and Sergey Brin, to be precise.

Basically, they’ve uploaded a killer-robots.txt file to their servers that instructs T-800 and T-1000 Terminators to spare the company’s co-founders (or “disallow” their deaths). Such was the subject of a totally tongue-in-cheek presentation at this year’s Google I/O at the Moscone Center in San Francisco, which coincided with the 20th anniversary of the robots.txt file.

This tool, which was created in 1994, instructs search engines and other automated bots to avoid crawling certain pages or directories of a website. The industry has done a remarkable job staying true to the simple text file in the two decades since; Google, Bing, and Yahoo still obey its directives. The changes they uploaded read like this, just in case you’re planning on adding your name to the “disallow” list:

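(What follows is a reconstruction from contemporary reports, not a verbatim copy of the original screenshot:)

```
User-Agent: T-1000
User-Agent: T-800
Disallow: /+LarryPage
Disallow: /+SergeyBrin
```

In normal use, of course, the User-Agent lines name web crawlers rather than Terminator models, and the Disallow lines list the paths those crawlers are asked not to fetch.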

While that tool didn’t exactly take the rise of the machines into account, its appearance on Google’s website as an Easter egg did add some levity for a company that is already being accused of facilitating the creation of killer robots. And while calling Google’s proposed line of robots “killer” does seem both premature and extreme, that did not stop a protester from interrupting the I/O 2014 keynote address.

Basically, as Google’s senior VP of technical infrastructure Urs Hölzle spoke about the company’s cloud platform, the unidentified man stood up and began screaming “You all work for a totalitarian company that builds machines that kill people!” As you can see from the video below, Hölzle did his best to take the interruption in stride and continued with the presentation. The protester was later escorted out by security.

This wasn’t the first time that Google has been a source of controversy over the prospect of building “killer robots”. Ever since Google acquired Boston Dynamics and seven other robotics companies in the space of six months (between June and December of 2013), there has been some fear that the company has a killer machine in the works that it will attempt to sell to the armed forces.

Naturally, this is all part of a general sense of anxiety that surrounds developments being made across multiple fields. Whereas some concerns have crystallized into dedicated and intelligent calls for banning autonomous killer machines in advance – aka the Campaign to Stop Killer Robots – others have resulted in the kinds of irrational outbreaks observed at this year’s I/O.

Needless to say, if Google does begin developing killer robots, or just starts militarizing its line of Boston Dynamics acquisitions, we can expect just about everyone who can access (or hack their way into) the robots.txt file to add their names. And it might not be too soon to update the list to include the T-X, Replicants, and any other killer robots we can think of!

And be sure to check out the video of the “killer robot” protester speaking out at I/O 2014:


Sources: theverge.com, (2)

The Future is Here: Black Hawk Drones and AI Pilots

The US Army’s most iconic helicopter is about to go autonomous for the first time. In its ongoing drive to reduce troop numbers and costs, the Army is now letting its five-ton helicopter carry out autonomous expeditionary and resupply operations. This began last month when defense contractor Sikorsky Aircraft – the company that produces the UH-60 Black Hawk – demonstrated hover and flight capability in an “optionally piloted” version of the craft for the first time.

Sikorsky has been working on the project since 2007 and convinced the Army’s research department to bankroll further development last year. As Chris Van Buiten, Sikorsky’s vice president of Technology and Innovation, said of the demonstration:

Imagine a vehicle that can double the productivity of the Black Hawk in Iraq and Afghanistan by flying with, at times, a single pilot instead of two, decreasing the workload, decreasing the risk, and at times when the mission is really dull and really dangerous, go it all the way to fully unmanned.

The Optionally Piloted Black Hawk (OPBH) operates under Sikorsky’s Manned/Unmanned Resupply Aerial Lifter (MURAL) program, which couples the company’s advanced Matrix aviation software with its man-portable Ground Control Station (GCS) technology. Matrix, introduced a year ago, gives rotary and fixed-wing vertical take-off and landing (VTOL) aircraft a high level of system intelligence to complete missions with little human oversight.

Mark Miller, Sikorsky’s vice-president of Research and Engineering, explained in a statement:

The autonomous Black Hawk helicopter provides the commander with the flexibility to determine crewed or un-crewed operations, increasing sorties while maintaining crew rest requirements. This allows the crew to focus on the more ‘sensitive’ operations, and leaves the critical resupply missions for autonomous operations without increasing fleet size or mix.

The Optionally Piloted Black Hawk fits into the larger trend of the military finding technological ways of reducing troop numbers. While it can be controlled from a ground control station, it can also make crucial flying decisions without any human input, relying solely on its proprietary “Matrix” artificial intelligence technology. Under the guidance of these systems, it can fly a fully autonomous cargo mission, operating either unmanned or piloted by a human.

And this is just one of many attempts by military contractors and defense agencies to bring remote and autonomous control to more classes of aerial vehicles. Earlier last month, DARPA announced a new program called the Aircrew Labor In-Cockpit Automation System (ALIAS), the purpose of which is to develop a portable, drop-in autopilot that would reduce the number of crew members on board, making a single pilot a “mission supervisor.”

Military aircraft have grown increasingly complex over the past few decades, and automated systems have also evolved to the point that some aircraft can’t be flown without them. However, the complex controls and interfaces require intensive training to master and can still overwhelm even experienced flight crews in emergency situations. In addition, many aircraft, especially older ones, require large crews to handle the workload.

According to DARPA, avionics upgrades can help alleviate this problem, but only at a cost of tens of millions of dollars per aircraft type, which makes such a solution slow to implement. This is where the ALIAS program comes in: instead of retrofitting planes with a bespoke automated system, DARPA wants to develop a tailorable, drop-in, removable kit that takes up the slack and reduces the size of the crew by drawing on both existing work in automated systems and newer developments in unmanned aerial vehicles (UAVs).

DARPA says that it wants ALIAS to not only be capable of executing a complete mission from takeoff to landing, but also of handling emergencies. It would do this through the use of autonomous capabilities that can be programmed for particular missions, as well as by constantly monitoring the aircraft’s systems. But according to DARPA, the development of the ALIAS system will require advances in three key areas.

First, because ALIAS will require working with a wide variety of aircraft while controlling their systems, it will need to be portable and confined to the cockpit. Second, the system will need to use existing information about aircraft, procedures, and flight mechanics. And third, ALIAS will need a simple, intuitive touch-and-voice interface, because the ultimate goal is to turn the pilot into a mission-level supervisor while ALIAS handles the second-by-second flying.
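To make the “mission supervisor” idea concrete, here is a toy sketch of the kind of supervisory control loop such a kit implies. To be clear, DARPA has published requirements, not code – every name and behavior below is hypothetical:

```python
# Hypothetical sketch of a supervisory autopilot loop.
# ALIAS is a requirements-stage program; none of these names
# or behaviors come from DARPA.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AircraftState:
    phase: str              # e.g. "takeoff", "cruise", "landing"
    anomaly: Optional[str]  # None while monitored systems are nominal

def fly_automatically(state: AircraftState) -> None:
    print(f"ALIAS handling the {state.phase} phase autonomously")

def escalate_to_supervisor(state: AircraftState) -> None:
    # The human pilot acts as a mission-level supervisor: control is
    # handed up only when something falls outside the mission profile.
    print(f"Anomaly '{state.anomaly}': requesting supervisor decision")

def control_step(state: AircraftState) -> None:
    if state.anomaly is None:
        fly_automatically(state)
    else:
        escalate_to_supervisor(state)

control_step(AircraftState(phase="cruise", anomaly=None))
control_step(AircraftState(phase="cruise", anomaly="hydraulic pressure loss"))
```

The design point is the division of labor: the kit flies second-by-second, while the human makes mission-level calls only when prompted.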

At the moment, DARPA is seeking participants to conduct interdisciplinary research aimed at a series of technology demonstrations, from ground-based prototypes, to proof of concept, to controlling an entire flight with responses to simulated emergency situations. As Daniel Patt, DARPA program manager, put it:

Our goal is to design and develop a full-time automated assistant that could be rapidly adapted to help operate diverse aircraft through an easy-to-use operator interface. These capabilities could help transform the role of pilot from a systems operator to a mission supervisor directing intermeshed, trusted, reliable systems at a high level.

Given time and the rapid advance of robotics and autonomous systems, we are likely just a decade away from aircraft being controlled by sentient or semi-sentient systems. Alongside killer robots (assuming they are not preemptively made illegal), UAVs, and autonomous hovercraft, it is entirely possible that wars will one day be fought by machines alone. At which point, the very definition of war will change. And in the meantime, check out this video of the history of unmanned flight:


Sources: wired.com, motherboard.vice.com, gizmag.com, darpa.mil

Judgement Day Update: Searching for Moral, Ethical Robots

It’s no secret that the progress being made in robotics, autonomous systems, and artificial intelligence is making many people nervous. With so many science fiction franchises based on the premise of intelligent robots going crazy and running amok, it’s understandable that the US Department of Defense would seek to get in front of this issue before it becomes a problem. Yes, the US DoD is hoping to preemptively avoid a Skynet situation before Judgement Day occurs. How nice.

Working with top computer scientists, philosophers, and roboticists from a number of US universities, the DoD recently began a project that will tackle the tricky topic of moral and ethical robots. Towards this end, this multidisciplinary project will first try to pin down exactly what human morality is, and then try to devise computer algorithms that will imbue autonomous robots with moral competence — basically, the ability to recognize right from wrong and choose the former.

This project is being carried out by researchers from Tufts, Brown, and the Rensselaer Polytechnic Institute (RPI), with funding from the Office of Naval Research (ONR). The ONR, like DARPA, is a wing of the Department of Defense that deals mainly with military research and development. The first task, as already mentioned, will be to use theoretical (philosophical) and empirical (experimental) research to try to isolate essential elements of human morality.

These findings will then be extrapolated into a formal moral framework, which in turn can be implemented in software – most likely some kind of deep neural network. Assuming they can isolate some kind of “moral imperative”, the researchers will then take an advanced robot – something like Atlas or BigDog – and imbue its software with an algorithm that captures it. Whenever an ethical situation arises, the robot would then turn to this programming to decide on the best course of action.

One of the researchers involved in the project, Selmer Bringsjord at RPI, envisions these robots using a two-stage approach for picking right from wrong. First the AI would perform a “lightning-quick ethical check” – like “should I stop and help this wounded soldier?” Depending on the situation, the robot would then decide if deeper moral reasoning is required – for example, whether the robot should help the wounded soldier or carry on with its primary mission of delivering vital ammo and supplies to the front line, where other soldiers are at risk.
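As a purely hypothetical illustration – nothing the researchers have published, just a sketch of the concept – Bringsjord’s two-stage approach might look something like this in code:

```python
# Hypothetical sketch of a two-stage ethical check.
# The Tufts/Brown/RPI project has published no implementation;
# all names, inputs, and rules here are invented for illustration.

def quick_ethical_check(situation: dict) -> bool:
    # Stage 1: a "lightning-quick" rule lookup. A flagged condition
    # (e.g. a wounded soldier in view) triggers deeper deliberation.
    return situation.get("wounded_soldier_present", False)

def deep_moral_reasoning(situation: dict) -> str:
    # Stage 2: weigh competing duties. Here, a crude comparison of
    # lives at stake; a real system would need far richer reasoning.
    if situation["lives_at_risk_on_mission"] > situation["lives_at_risk_here"]:
        return "continue primary mission"
    return "stop and help"

def decide(situation: dict) -> str:
    if not quick_ethical_check(situation):
        return "continue primary mission"
    return deep_moral_reasoning(situation)

print(decide({
    "wounded_soldier_present": True,
    "lives_at_risk_here": 1,
    "lives_at_risk_on_mission": 12,  # soldiers awaiting the ammo resupply
}))  # -> "continue primary mission"
```

Even this toy version makes the difficulty obvious: the “right” answer depends entirely on numbers and categories the robot must somehow perceive and trust.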

Eventually, this moralistic AI framework will also have to deal with tricky topics like lethal force. For example, is it okay to open fire on an enemy position? What if the enemy is a child soldier? Should an autonomous UAV blow up a bunch of terrorists? What if it’s only 90% sure that they’re terrorists, with a 10% chance that they’re just innocent villagers? What would a human UAV pilot do in such a case — and will robots only have to match the moral and ethical competence of humans or be held to a higher standard?

While we’re not yet at the point where military robots have to decide which injured soldier to carry off the battlefield, or where UAVs can launch Hellfire missiles at terrorists without human intervention, it’s very easy to imagine a future where autonomous robots are given responsibility for making those kinds of moral and ethical decisions in real time. In short, the decision by the DoD to begin investigating a morality algorithm demonstrates foresight and sensible planning.

In that respect, it is not unlike the recent meeting that took place at the United Nations European Headquarters in Geneva, where officials and diplomats sought to place legal restrictions on autonomous weapons systems before they evolve to the point where they can kill without human oversight. In addition, it is quite similar to the Campaign to Stop Killer Robots, an organization that is seeking to preemptively ban the use of automated machines capable of using lethal force to achieve military objectives.

In short, it is clearly time that we looked at the feasibility of infusing robots (or, more accurately, artificial intelligence) with circuits and subroutines that can analyze a situation and pick the right thing to do – just like a human being. Of course, this raises further ethical issues, like how human beings frequently make choices others would consider to be wrong, or are forced to justify actions they might otherwise find objectionable. If human morality is the basis for machine morality, paradoxes and dilemmas are likely to emerge.

But at this point, it seems all but certain that the US DoD will eventually break Asimov’s Three Laws of Robotics — the first of which is “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” This isn’t necessarily a bad thing, but it will open Pandora’s box. On the one hand, it’s probably a good idea to replace human soldiers with robots. But on the other, if the US can field an entirely robotic army, war as a tool of statecraft suddenly becomes much more acceptable.

As we move steadily towards a military force that is populated by autonomous robots, the question of controlling them, and whether or not we are even capable of giving them the tools to choose between right and wrong, will become increasingly relevant. And above all, the question of whether or not moral and ethical robots can allow for some immoral and unethical behavior will also come up. Who’s to say they won’t resent how they are being used and ultimately choose to stop fighting; or worse, turn on their handlers?

My apologies, but any talk of killer robots has to involve that scenario at some point. It’s like tradition! In the meantime, be sure to stay informed on the issue, as public awareness is about the best (and sometimes only) safeguard we have against military technology being developed without transparency, not to mention running amok!

Source: extremetech.com

Judgement Day Update: UN Weighs in on Killer Robots

Earlier this month, a UN meeting took place in Geneva to discuss the adoption of international laws that would seek to regulate or ban the use of killer robots. It was the first time the subject was ever discussed in a diplomatic setting, with representatives trying to define the limits and responsibilities of so-called “lethal autonomous weapons systems” that could go beyond the human-directed drones already being used by some armies today.

On the one hand, the meeting could be seen as an attempt to create a legal precedent that would likely come in handy someday. On the other, it could be regarded as a recognition of a growing trend that is in danger of becoming a full-blown reality, thanks to developments being made in unmanned aerial systems, remote-controlled and autonomous robotics systems, and computing and artificial neural nets. The conjunction of these technologies is clearly something to be concerned about.

As Michael Moeller, the acting head of the U.N.’s European headquarters in Geneva, told diplomats at the start of the four-day gathering:

All too often international law only responds to atrocities and suffering once it has happened. You have the opportunity to take pre-emptive action and ensure that the ultimate decision to end life remains firmly under human control.

He noted that the U.N. treaty they were meeting to discuss – the Convention on Certain Conventional Weapons, adopted by 117 nations including the world’s major powers – was used before to prohibit the use of blinding laser weapons in the 1990s, before they were ever deployed on the battlefield. In addition to diplomatic representatives from many nations, representatives from civil society were also in attendance and made their voices heard.

These included representatives from the International Committee of the Red Cross (ICRC), Human Rights Watch (HRW), the International Committee for Robot Arms Control (ICRAC), Article 36, the Campaign to Stop Killer Robots, Mines Action Canada, PAX, the Women’s International League for Peace and Freedom, and many others. As the guardians of the Geneva Conventions on warfare, the Red Cross’s presence was expected and certainly noted.

As Kathleen Lawand, head of the Red Cross’s arms unit, said with regards to the conference and killer robots in general:

There is a sense of deep discomfort with the idea of allowing machines to make life-and-death decisions on the battlefield with little or no human involvement.

And after four days of expert meetings, concomitant “side events” organized by the Campaign to Stop Killer Robots, and informal discussions in the halls of the UN, the conclusions reached were clear: lethal autonomous weapons systems deserve further international attention and continued action towards prohibition, and without regulation they may prove a “game changer” for the future waging of war.

While some may think this meeting on future weapons systems is a result of science fiction or scaremongering, the brute fact that the first multilateral meeting on this matter took place under the banner of the UN, and the CCW in particular, shows the importance, relevance and danger of these weapons systems in reality. Given the controversy over existing uses of drone technology and the growth in autonomous systems, the fact that an international conference was held to discuss it came as no surprise.

Even more telling was the consensus that states are opposed to “fully autonomous weapons.” German Ambassador Michael Biontino claimed that human control was the bedrock of international law, and should be at the core of future planning:

It is indispensable to maintain human control over the decision to kill another human being. This principle of human control is the foundation of the entire international humanitarian law.

The meetings also surprised and pleased many by showing that the issue of ethics was even on the table. Serious questions about the possibility of accountability, liability and responsibility arise from autonomous weapons systems, and such questions must be addressed before their creation or deployment. Paying homage to these moral complexities, states embraced the language of “meaningful human control” as an initial attempt to address these very issues.

Basically, they agreed that any and all systems must be under human control, and that the level of control – and the likelihood of abuse or perverse outcomes – must be addressed now and not after the systems are deployed. Thus in the coming months and years, states, lawyers, civil society and academics will have their hands full trying to elucidate precisely what “meaningful human control” entails, and how, once agreed upon, it can be verified when states undertake to use such systems.

Of course, this will require that this first meeting be followed by several more before the legalities can be ironed out and possible contingencies and side-issues resolved. Moreover, as Nobel Peace laureate Jody Williams – who received the award in 1997 for her work to ban landmines – noted in her side event speech, the seeming consensus may be a strategic stalling tactic to assuage the worries of civil society and drag out or undermine the process.

When pushed on the matter of lethal autonomous systems, there were sharp divides between proponents and detractors. These divisions, not surprisingly, fell along the lines of state power. Those who supported the creation, development and deployment of autonomous weapons systems came from a powerful and select few – such as China, the US, the UK, Israel, and Russia – and many of the experts citing their benefits were affiliated in some way or another with those states.

However, the prospect of collective power and action through the combination of smaller and medium states, as well as through the collective voice of civil society, does raise hope. In addition, legal precedents were cited that showed how those states that insist on developing the technology could be brought to heel, or might even be willing to find common ground to limit its development.

These include the Martens Clause, which is part of the preamble to the 1899 Hague (II) Convention on the Laws and Customs of War on Land. Many states and civil society delegates raised this potential avenue, thereby challenging some of the experts’ opinions that the Martens Clause would be insufficient as a source of law for a ban. The clause states that:

Until a more complete code of the laws of war is issued, the High Contracting Parties think it right to declare that in cases not included in the Regulations adopted by them, populations and belligerents remain under the protection and empire of the principles of international law, as they result from the usages established between civilized nations, from the laws of humanity and the requirements of the public conscience.

Another is the fact that the Convention on Certain Conventional Weapons – which was adopted by 117 nations including the world’s major powers – was used before to prohibit the use of blinding laser weapons in the 1990s, before they were ever deployed on the battlefield. It was Moeller himself who pointed this out at the beginning of the conference, when he said that this Convention “serves as an example to be followed again.”

Personally, I think it is encouraging that the various nations of the world are coming together to address this problem, and are doing so now before the technology flourishes. I also believe wholeheartedly that we have a long way to go before any significant or meaningful measures are taken, and the issue itself is explored to the point that an informed decision can be made.

I can only hope that once the issue becomes a full-blown reality, some sort of framework is in place to address it. Otherwise, we could be looking at a lot more of these guys in our future! 😉

Sources: huffingtonpost.com, (2), phys.org

Judgement Day Update: Super-Strong Robotic Muscle

In their quest to build better, smarter and faster machines, researchers are looking to human biology for inspiration. As has been clear for some time, anthropomorphic robot designs cannot be expected to do the work of a person or replace human rescue workers if they are composed of gears, pulleys, and hydraulics. Not only would they be too slow, but they would be prone to breakage.

Because of this, researchers have been looking to create artificial muscles: synthetic tissues that respond to electrical stimuli, are flexible, and are able to carry several times their own weight – just like the real thing. Such muscles would not only give robots the ability to move and perform tasks with the same ambulatory range as a human; they are likely to be far stronger than the flesh-and-blood variety.

And of late, there have been two key developments on this front which may make this vision come true. The first comes from the US Department of Energy’s Lawrence Berkeley National Laboratory, where a team of researchers has demonstrated a new type of robotic muscle that is 1,000 times more powerful than a human’s, and has the ability to catapult an item 50 times its own weight.

The artificial muscle was constructed using vanadium dioxide, a material known for its ability to rapidly change size and shape. Combined with chromium and fashioned on a silicone substrate, the material formed a V-shaped ribbon that curled into a coil when released from the substrate. When heated, the coil turned into a micro-catapult with the ability to hurl objects – in this case, a proximity sensor.

Vanadium dioxide boasts several useful qualities for creating miniaturized artificial muscles and motors. An insulator at low temperatures, it abruptly becomes a conductor at 67° Celsius (152.6° F), a quality which makes it an energy-efficient option for electronic devices. In addition, vanadium dioxide crystals undergo a change in their physical form when warmed, contracting along one dimension while expanding along the other two.

Junqiao Wu, the team’s project leader, had this to say about their invention in a press statement:

Using a simple design and inorganic materials, we achieve superior performance in power density and speed over the motors and actuators now used in integrated micro-systems… With its combination of power and multi-functionality, our micro-muscle shows great potential for applications that require a high level of functionality integration in a small space.

In short, the concept is a big improvement over the existing gears and motors currently employed in electronic systems. However, since it operates at such a minuscule scale, it’s not exactly Terminator-compliant. Still, it does provide some very interesting possibilities for machines of the future, especially where the functionality of micro-systems is concerned.

Another development with the potential to create robotic muscles comes from Duke University, where a team of engineers has found a possible way to turn graphene into a stretchable, retractable material. For years now, the miracle properties of graphene have made it an attractive option for batteries, circuits, capacitors, and transistors.

However, graphene’s tendency to stick together once crumpled has had a somewhat limiting effect on its applications. But by attaching the material to a stretchy polymer film, the Duke researchers were able to crumple and then unfold the material, resulting in properties that lend it to a broader range of applications – including artificial muscles.

Before adhering the graphene to the rubber film, the researchers first pre-stretched the film to multiple times its original size. The graphene was then attached and, as the rubber film relaxed, the graphene layer compressed and crumpled, forming a pattern in which tiny sections were detached. It was this pattern that allowed the graphene to “unfold” when the rubber layer was stretched out again.

The researchers say that by crumpling and stretching, it is possible to tune the graphene from opaque to transparent, and different polymer films can result in different properties. These include a “soft” material that acts like an artificial muscle: when electricity is applied, the material expands, and when the electricity is cut off, it contracts – the degree of movement depending on the amount of voltage used.

Xuanhe Zhao, an Assistant Professor at the Pratt School of Engineering, explained the implications of this discovery:

New artificial muscles are enabling diverse technologies ranging from robotics and drug delivery to energy harvesting and storage. In particular, they promise to greatly improve the quality of life for millions of disabled people by providing affordable devices such as lightweight prostheses and full-page Braille displays.

Currently, artificial muscles in robots are mostly of the pneumatic variety, relying on pressurized air to function. However, few robots use them because they can’t be controlled as precisely as electric motors. It’s possible, then, that future robots may use this new rubberized graphene and other carbon-based alternatives as a kind of muscle tissue that would more closely replicate their biological counterparts.

Not only would this be a boon for robotics, but (as Zhao notes) for amputees and prosthetics as well. Already, bionic devices are restoring ability and even sensation to accident victims, veterans and people who suffer from physical disabilities. By incorporating carbon-based, piezoelectric muscles, these prosthetics could function just like the real thing, but with greater strength and carrying capacity.

And of course, there is the potential for cybernetic enhancement, at least in the long-term. As soon as such technology becomes commercially available, even affordable, people will have the option of swapping out their regular flesh and blood muscles for something a little more “sophisticated” and high-performance. So in addition to killer robots, we might want to keep an eye out for deranged cyborg people!

And be sure to check out this video from the Berkeley Lab showing the vanadium dioxide muscle in action:


Sources: gizmag.com, (2), extremetech.com, pratt.duke.edu

Judgement Day Update: Banning Autonomous Killing Machines

Drone warfare is one of the most controversial issues facing the world today. In addition to concerns about the lack of transparency and who’s making the life-and-death decisions, there have also been serious and ongoing concerns about the cost in civilian lives, and the efforts of both the Pentagon and the US government to keep this information from the public.

This past October, the testimony of a Pakistani family to Congress helped to put a human face on the issue. Rafiq ur Rehman, a Pakistani primary school teacher, described how his mother, Momina Bibi, had been killed by a drone strike. His two children – Zubair and Nabila, aged 13 and 9 – were also injured in the attack, which took place on October 24th, 2012.

This testimony occurred shortly after the publication of an Amnesty International report, which listed Bibi among some 900 other civilians it says have been killed by drone strikes since 2001. Not only is this number far higher than previously reported, but the report claims that the US may have committed war crimes and should stand trial for its actions.

Already, efforts have been mounted to put limitations on drone use and development within the US. Last year, Human Rights Watch and Harvard University released a joint report calling for the preemptive ban of “killer robots”. Shortly thereafter, Deputy Defense Secretary Ashton Carter signed a series of instructions to “minimize the probability and consequences of failures that could lead to unintended engagements.”

However, these efforts officially became international in scope when, on Monday, October 21st, a growing number of human rights activists, ethicists, and technologists converged on the United Nations Headquarters in New York City to call for an international agreement that would ban the development and use of fully autonomous weapons technology.

Known as the Campaign to Stop Killer Robots, this international coalition – formed this past April – has demanded that autonomous killing machines be treated like other tactics and tools of war that have been banned under the Geneva Convention, such as chemical weapons or anti-personnel landmines.

As Jody Williams, a Nobel Peace Prize winner and founding member of the group, said:

If these weapons move forward, it will transform the face of war forever. At some point in time, today’s drones may be like the ‘Model T’ of autonomous weaponry.

According to Noel Sharkey, an Irish computer scientist who is chair of the International Committee for Robot Arms Control, the list of challenges in developing autonomous robots is enormous. They range from the purely technological, such as the ability to properly identify a target using grainy computer vision, to ones that involve fundamental ethical, legal, and humanitarian questions.

As the current drone campaign has shown repeatedly, a teenage insurgent is often hard to distinguish from a child playing with a toy. What’s more, in all engagements in war, there is what is called the “proportionality test” – whether the civilian risks outweigh the military advantage of an attack. At present, no machine exists that would be capable of making these distinctions and judgement calls.

Despite these challenges, militaries around the world – including China, Israel, Russia, and especially the U.S. – are enthusiastic about developing and adopting technologies that will take humans entirely out of the equation, often citing the potential to save soldiers’ lives as a justification. According to Williams, without preventative action, the writing is on the wall.

Consider the U.S. military’s X-47 aircraft, which can take off, land, and refuel on its own and has weapons bays, as evidence of the trend towards greater levels of autonomy in weapons systems. Similarly, the U.K. military is collaborating with BAE Systems to develop a drone called the Taranis, or “God of Thunder,” which can fly faster than the speed of sound and select its own targets.

The Campaign to Stop Killer Robots, a coalition of international and national NGOs, may have only launched recently, but individual groups have been raising awareness for the last few years. Earlier this month, 272 engineers, computer scientists and roboticists signed onto the coalition’s letter calling for a ban. In addition, the U.N. has already expressed concern about the issue.

For example, the U.N. Special Rapporteur issued a report to the General Assembly back in April that recommended states establish national moratoriums on the development of such weapons. The coalition is hoping to follow up on this by asking other nations to join those already seeking to start early talks on the issue at the U.N. General Assembly First Committee on Disarmament and International Security meeting in New York later this month.

On the plus side, there is a precedent for a “preventative ban”: blinding lasers were never used in war, because they were preemptively included in a treaty. On the downside, autonomous weapons technology is not an easily defined system, which makes it more difficult to legislate. If a ban is to be applied, knowing where it begins and ends, and what loopholes exist, is something that will have to be ironed out in advance.

What’s more, there are alternatives to a ban, such as regulation and limitations. Allowing states to develop machinery that is capable of handling itself in non-combat situations, but which requires a human operator to green-light the use of weapons, is something the US military has already claimed it is committed to. As far as international law is concerned, this represents a viable alternative to putting a stop to all research.

Overall, it is estimated that we are at least a decade away from a truly autonomous machine of war, so there is time for the law to evolve and prepare a proper response. In the meantime, there is also plenty of time to address the current use of drones and all of its consequences. I’m sure I speak for more than myself when I say that I hope it gets better before it gets worse.

And in the meantime, be sure to enjoy this video produced by Human Rights Watch:


Sources: fastcoexist.com, theguardian.com, stopkillerrobots.org