Reciprocity – The Deets

Hey again, all. I find myself with some spare time for the first time in a while. So I thought I might take a moment to share an idea I've been working on in a bit more detail. In my last post, I talked about the bare bones of a story I am working on known as Reciprocity, the successor to the story known as Apocrypha. But as it turns out, there are a lot of details to that story idea that I still want to share and get people's opinions on.

You might say this is a story that I am particularly serious about. Should it work out, it would be my break from both space-opera sci-fi and zombie fiction. A foray into the world of hard-hitting social commentary and speculative science fiction.

The Story:
So the year is 2030. The world is reeling from the effects of widespread drought, wildfires, coastal storms, flooding, and population displacement. At the same time, a revolution is taking place in terms of computing, robotics, biomachinery, and artificial intelligence. As a result, the world’s population finds itself being pulled in two different directions – between a future of scarcity and the promise of plenty.

Space exploration continues as private aerospace firms and space agencies all race to put boots on Mars, build a settlement on the Moon, and lay claim to the resources of the Solar System. India, China, the US, the EU, Russia, Argentina, Brazil, and Iran are all taking part now – using robotic probes and rovers to telexplore the System and prospect asteroids. Humanity's future as an interplanetary species seems all but guaranteed at this point.

Meanwhile, a new global balance of power is shaping up. While the US and the EU struggle with food and fuel shortages, Russia remains firmly in the grips of quasi-fascist interests, having spurned the idea of globalization and amicable relations with NATO and the EU in favor of its Collective Security Treaty, which in recent years has expanded to include Iran, Afghanistan and Pakistan.

China, meanwhile, is going through a period of transition. After the fall of Communism in 2023, the Chinese state is lurching between the forces of reform and ultra-nationalism, and no one is sure which side it will fall on. The economy has largely recovered, but the divide between rich and poor is all too apparent. And given the sense of listless frustration and angst, there is fear that a skilled politician could exploit it all too well.

It’s an era of uncertainty, high hopes and renewed Cold War.

The MacGuffin:
The central item of the story is a cybervirus known as Baoying, a quantum-decryption algorithm that was designed by Unit 61398 in the early 2020s to take down America's quantum networks in the event of open war. When the Party fell from power, the Unit was dissolved and the virus itself was destroyed. However, rumors persisted that one or more copies still exist…

Notable Characters:
For this ensemble to work, it had to offer a good cross-section of the world to come, with all its national, social and economic divides represented. And so I came up with the following people, individuals who find themselves on different sides of what's right, and who are all their own mix of good, bad, and ambiguous.

William Harding: A privileged high school senior with a bit of a drug problem who lives in Port Coquitlam, just outside the Pacific Northwest megalopolis of Cascadia. Like many people his age, he carries all his personal computing in the form of implants. However, a kidnapping and a close brush with death suddenly expand his worldview. Being at the mercy of others and deprived of his hardware, he realizes that his lifestyle has shielded him from the real world.

Amy Dixon: A young refugee who has moved to Cascadia from the American South. Her socioeconomic status places her and her family at the fringes of society, and she is determined to change their fortunes by plying her talents and being the first in her family to get a comprehensive education.

Fernie Dixon: Amy's brother, a twenty-something man who lives away from her and claims to be a software developer. In reality, he is a member of the local Aryan Brotherhood, one of many gangs that run rampant in the outlying districts of the city. Not a true believer like his "brothers", he seeks money and power so he can give his sister the opportunities he knows she deserves.

Shen Zhou: A former Lieutenant in the People's Liberation Army and member of Unit 61398 during the Cyberwars of the late teens. After the fall of Communism, he did not ingratiate himself with the new government and was accused of spying for foreign interests. As a result, he left the country to pursue his own agenda, which places him in the crosshairs of both the new regime and western governments.

Arthur Banks: A major industrialist and part-owner of Harding Enterprises, a high-tech multinational that specializes in quantum computing and the development of artificial intelligence. For years, Banks and his associates have been working on a project known as QuaSI – a Quantum-based Sentient Intelligence that would revolutionize the world and usher in the Technological Singularity.

Rhianna Sanchez: Commander of Joint Task Force 2, an elite unit attached to the National Security Agency's Cyberwarfare Division. For years, she and her task force have been charged with locating terror cells engaged in cyberwarfare against the US and its allies. And Shen Zhou, a suspected terrorist with many troubling connections, gets on their radar after a mysterious kidnapping and a high-profile cyberintrusion coincide.

And that about covers the particulars. Naturally, there are a lot of other details, but I haven't got all day and neither do you fine folks 😉 In any case, the idea is in the queue and it's getting updated regularly. But I don't plan to have it finished until I've polished off Oscar Mike, Arrivals, and a bunch of other projects first!

The Future is Here: Black Hawk Drones and AI Pilots

The US Army's most iconic helicopter is about to go autonomous for the first time. In its ongoing drive to reduce troop numbers and costs, the Army is now letting the five-ton helicopter carry out autonomous expeditionary and resupply operations. This began last month when the defense contractor Sikorsky Aircraft – the company that produces the UH-60 Black Hawk – demonstrated hover and flight capability in an "optionally piloted" version of the craft for the first time.

Sikorsky has been working on the project since 2007 and convinced the Army’s research department to bankroll further development last year. As Chris Van Buiten, Sikorsky’s vice president of Technology and Innovation, said of the demonstration:

Imagine a vehicle that can double the productivity of the Black Hawk in Iraq and Afghanistan by flying with, at times, a single pilot instead of two, decreasing the workload, decreasing the risk, and at times when the mission is really dull and really dangerous, go it all the way to fully unmanned.

The Optionally Piloted Black Hawk (OPBH) operates under Sikorsky's Manned/Unmanned Resupply Aerial Lifter (MURAL) program, which couples the company's advanced Matrix aviation software with its man-portable Ground Control Station (GCS) technology. Matrix, introduced a year ago, gives rotary and fixed-wing vertical take-off and landing (VTOL) aircraft a high level of system intelligence to complete missions with little human oversight.

Mark Miller, Sikorsky’s vice-president of Research and Engineering, explained in a statement:

The autonomous Black Hawk helicopter provides the commander with the flexibility to determine crewed or un-crewed operations, increasing sorties while maintaining crew rest requirements. This allows the crew to focus on the more ‘sensitive’ operations, and leaves the critical resupply missions for autonomous operations without increasing fleet size or mix.

The Optionally Piloted Black Hawk fits into the larger trend of the military finding technological ways of reducing troop numbers. While it can be controlled from a ground control station, it can also make crucial flying decisions without any human input, relying solely on Sikorsky's proprietary Matrix artificial intelligence technology. Under the guidance of these systems, it can fly a fully autonomous cargo mission, or it can be flown by a human pilot as needed.

And this is just one of many attempts by military contractors and defense agencies to bring remote and autonomous control to more classes of aerial vehicles. Last month, DARPA announced a new program called Aircrew Labor In-Cockpit Automation System (ALIAS), the purpose of which is to develop a portable, drop-in autopilot that reduces the number of crew members on board, making a single pilot a "mission supervisor."

Military aircraft have grown increasingly complex over the past few decades, and automated systems have also evolved to the point that some aircraft can't be flown without them. However, the complex controls and interfaces require intensive training to master and can still overwhelm even experienced flight crews in emergency situations. In addition, many aircraft, especially older ones, require large crews to handle the workload.

According to DARPA, avionics upgrades can help alleviate this problem, but only at a cost of tens of millions of dollars per aircraft type, which makes such a solution slow to implement. This is where the ALIAS program comes in: instead of retrofitting planes with a bespoke automated system, DARPA wants to develop a tailorable, drop‐in, removable kit that takes up the slack and reduces the size of the crew by drawing on both existing work in automated systems and newer developments in unmanned aerial vehicles (UAVs).

DARPA says that it wants ALIAS not only to be capable of executing a complete mission from takeoff to landing, but also to handle emergencies. It would do this through autonomous capabilities that can be programmed for particular missions, as well as by constantly monitoring the aircraft's systems. But according to DARPA, the development of the ALIAS system will require advances in three key areas.

First, because ALIAS will require working with a wide variety of aircraft while controlling their systems, it will need to be portable and confined to the cockpit. Second, the system will need to use existing information about aircraft, procedures, and flight mechanics. And third, ALIAS will need a simple, intuitive touch-and-voice interface, because the ultimate goal is to turn the pilot into a mission-level supervisor while ALIAS handles the second-by-second flying.

At the moment, DARPA is seeking participants to conduct interdisciplinary research aimed at a series of technology demonstrations, from ground-based prototypes to a proof of concept to controlling an entire flight, including responses to simulated emergency situations. As Daniel Patt, DARPA program manager, put it:

Our goal is to design and develop a full-time automated assistant that could be rapidly adapted to help operate diverse aircraft through an easy-to-use operator interface. These capabilities could help transform the role of pilot from a systems operator to a mission supervisor directing intermeshed, trusted, reliable systems at a high level.

Given time and the rapid advance of robotics and autonomous systems, we are likely just a decade away from aircraft being controlled by sentient or semi-sentient systems. Alongside killer robots (assuming they are not preemptively made illegal), UAVs, and autonomous hovercraft, it is entirely possible that wars will one day be fought solely by machines. At that point, the very definition of war will change. And in the meantime, check out this video of the history of unmanned flight:


Sources: wired.com, motherboard.vice.com, gizmag.com, darpa.mil

Tech News: Google Seeking “Conscious Homes”

In Google's drive for world supremacy, a good number of start-ups and developers have been bought up. Between their acquisition of eight robotics companies in the space of six months back in 2013 and their ongoing buyout of anyone in the business of aerospace, voice and facial recognition, and artificial intelligence, Google seems determined to have a controlling interest in all fields of innovation.

And in what is their second-largest acquisition to date, Google announced earlier this month that they intend to get in on the business of smart homes. The company in question is Nest Labs, a home automation company that was founded by former Apple engineers Tony Fadell and Matt Rogers in 2010 and is behind the creation of the Learning Thermostat and the Protect smoke and carbon monoxide detector.

The Learning Thermostat, the company's flagship product, works by learning a home's heating and cooling preferences over time, removing the need for manual adjustments or programming. Wi-Fi networking and a series of apps also let users control and monitor the unit from afar, consistent with one of the biggest tenets of smart home technology: connectivity.
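
To make the idea concrete, here is a minimal sketch (in Python) of what "learning a home's heating and cooling preferences" could look like in principle. This is purely illustrative and not Nest's actual algorithm; the class and method names are invented.

```python
from collections import defaultdict

class LearningThermostat:
    """Toy illustration: infer a weekly schedule from manual adjustments.
    Not Nest's real algorithm, just the general 'learn by observing
    setpoint changes' idea described above."""

    def __init__(self):
        # (weekday, hour) -> temperatures the user has chosen at that time
        self.observations = defaultdict(list)

    def record_adjustment(self, weekday, hour, temp_c):
        """Called whenever the user manually changes the setpoint."""
        self.observations[(weekday, hour)].append(temp_c)

    def setpoint(self, weekday, hour, default=20.0):
        """Predicted setpoint: the average of past choices for this slot."""
        temps = self.observations.get((weekday, hour))
        return sum(temps) / len(temps) if temps else default

# After a week or two of use, manual programming is no longer needed:
t = LearningThermostat()
t.record_adjustment("Mon", 7, 21.5)   # user warms the house before work
t.record_adjustment("Tue", 7, 21.0)
print(t.setpoint("Wed", 7))           # -> 21.25, applied automatically
```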

Similarly, the Nest Protect, a combination smoke and carbon monoxide detector, works by differentiating between burnt toast and real fires. Whenever it detects smoke, one alarm goes off, which can be quieted by simply waving your hand in front of it. But in a real fire, or where deadly carbon monoxide is detected, a much louder alarm sounds to alert its owners.

In addition, the device sends a daily battery status report to the Nest mobile app – the same one that controls the thermostats – and is capable of connecting with other units in the home. And, since Nest is building a platform for all its devices, if a Nest thermostat is installed in the same home, the Protect can automatically shut it down in the event that carbon monoxide is detected.
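
A rough sketch of the escalation logic described above might look like the following. The thresholds and function names are invented for illustration; the real Protect firmware is obviously more sophisticated.

```python
def protect_response(smoke_level, co_ppm, hand_wave=False):
    """Illustrative decision logic: a mild 'heads-up' for light smoke
    (silenceable with a wave), a full alarm for heavy smoke or carbon
    monoxide, and a shutdown signal to a linked Nest thermostat when
    CO is detected. All thresholds are made up."""
    actions = []
    if co_ppm > 70:                       # hypothetical CO danger level
        actions += ["full_alarm", "shut_down_linked_thermostat"]
    elif smoke_level > 0.8:               # heavy smoke: treat as a real fire
        actions.append("full_alarm")
    elif smoke_level > 0.2:               # light smoke: probably burnt toast
        actions.append("silenced" if hand_wave else "heads_up_alert")
    actions.append("report_battery_status_to_app")   # daily report
    return actions

print(protect_response(smoke_level=0.3, co_ppm=0, hand_wave=True))
# -> ['silenced', 'report_battery_status_to_app']
```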

According to a statement released by co-founder Tony Fadell, Nest will continue to be run in-house, but will be partnered with Google in their drive to create a conscious home. On his blog, Fadell explained his company's decision to join forces with the tech giant:

Google will help us fully realize our vision of the conscious home and allow us to change the world faster than we ever could if we continued to go it alone. We’ve had great momentum, but this is a rocket ship. Google has the business resources, global scale, and platform reach to accelerate Nest growth across hardware, software, and services for the home globally.

smarthomeYes, and I’m guessing that the $3.2 billion price tag added a little push as well! Needless to say, some wondered why Apple didn’t try to snatch up this burgeoning company, seeing as how its being run by two of its former employees. But according to Fadell, Google founder Sergey Brin “instantly got what we were doing and so did the rest of the Google team” when they got a Nest demo at the 2011 TED conference.

In a press release, Google CEO Larry Page had this to say about bringing Nest into their fold:

They’re already delivering amazing products you can buy right now – thermostats that save energy and smoke/[carbon monoxide] alarms that can help keep your family safe. We are excited to bring great experiences to more homes in more countries and fulfill their dreams!

But according to some, this latest acquisition by Google goes well beyond wanting to develop devices. Sara Watson at Harvard University's Berkman Center for Internet and Society is one such observer; she believes Google is now a company obsessed with viewing everyday activities as "information problems" to be solved by machine learning and algorithms.

Consider Google’s fleet of self-driving vehicles as an example, not to mention their many forays into smartphone and deep learning technology. The home is no different, and a Google-enabled smart home of the future, using a platform such as the Google Now app – which already gathers data on users’ travel habits – could adapt energy usage to your life in even more sophisticated ways.

Seen in these terms, Google's long-term plan to be at the forefront of the new technological paradigm – where smart technology knows and anticipates, and everything is at our fingertips – certainly becomes clearer. I imagine that their next goal will be to facilitate the creation of household AIs, machine minds that monitor everything within our household, provide maintenance, and ensure energy efficiency.

However, another theory has it that this is in keeping with Google's push into robotics, led by the former head of Android, Andy Rubin. According to Alexis C. Madrigal of the Atlantic, Nest always thought of itself as a robotics company, as evidenced by the fact that its VP of technology is none other than Yoky Matsuoka – a roboticist and artificial intelligence expert from the University of Washington.

During an interview with Madrigal back in 2012, Matsuoka explained why. She saw Nest as being positioned right where it could help machine and human intelligence work together:

The intersection of neuroscience and robotics is about how the human brain learns to do things and how machine learning comes in to augment that.

In short, Nest is a cryptorobotics company that deals in sensing, automation, and control. It may not make a personable, humanoid robot, but it is producing machine intelligences that can do things in the physical world. Seen in this respect, the acquisition was not so much part of Google’s drive to possess all our personal information, but a mere step along the way towards the creation of a working artificial intelligence.

It’s a Brave New World, and people like Musk, Page, and a slew of other futurists determined to make it happen are at the center of it.

Sources: news.cnet.com, (2), newscientist.com, nest.com, theatlantic.com

The Future is… Worms: Life Extension and Computer-Simulations

Post-mortality is considered by most to be an intrinsic part of the so-called Technological Singularity. For centuries, improvements in medicine, nutrition and health have led to improved life expectancy. And in an age where so much more is possible – thanks to cybernetic, bio, nano, and medical advances – it stands to reason that people will alter their physiques in order to slow the onset of aging and extend their lives even more.

And as research continues, new and exciting finds are being made that seem to indicate this future may be just around the corner. At the heart of it may be a series of experiments involving worms. At the Buck Institute for Research on Aging in California, researchers have been tweaking longevity-related genes in nematode worms in order to amplify their lifespans.

And the latest results caught even the researchers by surprise. By triggering mutations in two pathways known for lifespan extension – mutations that inhibit key molecules involved in insulin signaling (IIS) and the nutrient signaling pathway Target of Rapamycin (TOR) – they created an unexpected feedback effect that amplified the lifespan of the worms by a factor of five.

Ordinarily, a tweak to the TOR pathway results in a 30% lifespan extension in C. elegans worms, while mutations in IIS (daf-2) result in a doubling of lifespan. By combining the mutations, the researchers were expecting something around a 130% extension to lifespan. Instead, the worms lived the equivalent of about 400 to 500 human years.
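
The arithmetic behind that "around 130%" expectation is worth spelling out, since it shows just how far outside the predicted range the result fell. A quick back-of-envelope version:

```python
# Back-of-envelope check of the expectation quoted above.
tor_gain = 0.30    # TOR mutation alone: ~30% longer lifespan
iis_gain = 1.00    # IIS/daf-2 mutation alone: lifespan roughly doubles (+100%)

additive_expectation = tor_gain + iis_gain     # 1.30 -> a ~130% extension
observed_factor = 5.0                          # what the combined mutants showed

print(f"Expected: ~{1 + additive_expectation:.1f}x normal lifespan")  # ~2.3x
print(f"Observed: ~{observed_factor:.1f}x normal lifespan")           # ~5.0x
# The gap between ~2.3x and 5x is the synergistic feedback loop
# the Buck Institute team describes.
```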

As Doctor Pankaj Kapahi said in an official statement:

Instead, what we have here is a synergistic five-fold increase in lifespan. The two mutations set off a positive feedback loop in specific tissues that amplified lifespan. These results now show that combining mutants can lead to radical lifespan extension — at least in simple organisms like the nematode worm.

The positive feedback loop, say the researchers, originates in the germline tissue of the worms – the reproductive cells that may be passed on to successive generations. This may be where the interactions between the two mutations are integrated; if correct, the finding might apply to the pathways of more complex organisms. Towards that end, Kapahi and his team are looking to perform similar experiments in mice.

But long-term, Kapahi says that a similar technique could be used to produce therapies for aging in humans. It’s unlikely that it would result in the dramatic increase to lifespan seen in worms, but it could be significant nonetheless. For example, the research could help explain why scientists are having a difficult time identifying single genes responsible for the long lives experienced by human centenarians:

In the early years, cancer researchers focused on mutations in single genes, but then it became apparent that different mutations in a class of genes were driving the disease process. The same thing is likely happening in aging. It’s quite probable that interactions between genes are critical in those fortunate enough to live very long, healthy lives.

A second worm-related story comes from the OpenWorm project, an international open-source effort dedicated to the creation of a bottom-up computer model of a millimeter-sized nematode. As one of the simplest known multicellular life forms on Earth, it is considered a natural starting point for creating computer-simulated models of organic beings.

In an important step forward, OpenWorm researchers have completed the simulation of the nematode's 959 cells, 302 neurons, and 95 muscle cells, and their worm is wriggling around in fine form. However, despite this basic simplicity, the nematode is not without its share of complex behaviors, such as feeding, reproducing, and avoiding being eaten.

To model the complex behavior of this organism, the OpenWorm collaboration (which began in May 2013) is developing a bottom-up description. This involves making models of the individual worm cells and their interactions, based on their observed functionality in the real-world nematodes. Their hope is that realistic behavior will emerge if the individual cells act on each other as they do in the real organism.
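
To illustrate what "bottom-up" means here, the toy sketch below models an organism purely as cells that update their own state from local inputs, with any global behavior left to emerge on its own. It is not OpenWorm code (the real project models actual C. elegans physiology and its connectome); the update rule and numbers are invented.

```python
import random

class Cell:
    """One unit in a bottom-up model: it knows only its own state and the
    neighbours it is wired to (in OpenWorm's case, via the connectome)."""
    def __init__(self, name):
        self.name = name
        self.activation = 0.0
        self.neighbours = []

    def step(self, stimulus=0.0):
        # New state depends only on local inputs: neighbours plus stimulus.
        incoming = sum(n.activation for n in self.neighbours)
        self.activation = max(0.0, 0.5 * incoming + stimulus - 0.1)

# Wire a tiny ring of cells and let behaviour emerge from local rules alone.
cells = [Cell(f"c{i}") for i in range(5)]
for a, b in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]:
    cells[a].neighbours.append(cells[b])

for _ in range(10):
    for c in cells:
        c.step(stimulus=random.uniform(0.0, 0.2))

print([round(c.activation, 2) for c in cells])
```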

Fortunately, we know a lot about these nematodes. The complete cellular structure is known, as is rather comprehensive information concerning the organism's behavior in reaction to its environment. Included in our knowledge is the complete connectome, a comprehensive map of the neural connections (synapses) in the worm's nervous system.

The big question is, assuming that the behavior of the simulated worms continues to agree with the real thing, at what stage might it be reasonable to call it a living organism? The usual definition of living organisms is behavioral: they extract usable energy from their environment, maintain homeostasis, possess a capacity to grow, respond to stimuli, reproduce, and adapt to their environment in successive generations.

If the simulation exhibits these behaviors, combined with realistic responses to its external environment, should we consider it to be alive? And just as importantly, what tests could be used to evaluate such a claim? One possibility is an altered version of the Turing test – Alan Turing's proposed method for testing whether or not a computer could be called sentient.

In the Turing test, a computer is considered sentient and sapient if it can simulate the responses of a conscious sentient being so that an auditor can’t tell the difference. A modified Turing test might say that a simulated organism is alive if a skeptical biologist cannot, after thorough study of the simulation, identify a behavior that argues against the organism being alive.

And of course, this raises even larger questions. For one, is humanity on the verge of creating "artificial life"? And what, if anything, does that really look like? Could it just as easily take the form of computer simulations as of anthropomorphic robots and biomachinery? And if the answer to any of these questions is yes, then what exactly does that say about our preconceived notions of what life is?

If humanity is indeed moving into an age of “artificial life”, and from several different directions, it is probably time that we figure out what differentiates the living from the nonliving. Structure? Behavior? DNA? Local reduction of entropy? The good news is that we don’t have to answer that question right away. Chances are, we wouldn’t be able to at any rate.

And though it might not seem apparent, there is a connection between the two stories here. In addition to being able to prolong life through genetic engineering, the ability to simulate consciousness through computer-generated constructs might just prove a way to cheat death in the future. If complex life forms and connectomes (like the one in the human brain) can be simulated, then people may be able to transfer their neural patterns before death and live on in simulated form indefinitely.

So… anti-aging, artificial life forms, and the potential for living indefinitely. And to think that it all begins with the simplest multicellular life form on Earth – the nematode worm. But then again, all life – nay, all of existence – depends upon the simplest of interactions, which in turn give rise to more complex behaviors and organisms. Where else would we expect the next leap in biotechnological evolution to come from?

And in the meantime, be sure to enjoy this video of the OpenWorm project's simulated nematode in action:


Sources: io9.com, cell.com, gizmag.com, openworm.org

Ted Talks: The Age of the Industrial Internet

I came across another fascinating TED Talk recently. In this lecture, famed economist Marco Annunziata spoke about a rather popular subject – "The Internet of Things" – and how it is shaping our society. The term is thrown around a lot lately, and it refers to a growing phenomenon in which uniquely identifiable objects are connected to virtual representations in an Internet-like structure.

Basically, the concept postulates that if all objects and people in daily life were equipped with minuscule, machine-readable identifiers, they could be managed and inventoried by computers, and daily life could be transformed. How this is likely to look is the subject of Annunziata's talk, which begins with the past two hundred years and the two major waves of innovation humanity went through.
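
In software terms, the core of the concept is nothing more exotic than a registry that maps a machine-readable identifier to a virtual record of a physical object. The sketch below is a deliberately simple illustration of that idea; the class and field names are invented.

```python
import uuid

class DeviceRegistry:
    """Toy 'Internet of Things' inventory: every physical object gets a
    unique identifier and a virtual record that can be queried remotely."""
    def __init__(self):
        self.devices = {}

    def register(self, kind, location):
        device_id = str(uuid.uuid4())   # the machine-readable identifier
        self.devices[device_id] = {"kind": kind,
                                   "location": location,
                                   "status": "ok"}
        return device_id

    def query(self, device_id):
        return self.devices.get(device_id)

registry = DeviceRegistry()
keys_id = registry.register("car_keys", "hall table")
print(registry.query(keys_id))   # check on an everyday object from anywhere
```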

The first came with the Industrial Revolution (ca. 1760-1903), which permanently altered our lives with factories, machinery, railways, electricity, air travel, etc. The second wave came with the Internet Revolution (ca. 1980-2000), which has once again changed our lives permanently with computing power, data networks, and unprecedented access to information and communication.

Now, in the modern era, we are entering into a new phase of innovation, one which he refers to as the “Industrial Internet”. Judging by current research and marketing trends, this wave is characterized by intelligent machines, advanced analytics, and the creativity of people at work. It is a marriage of minds and machines, and once again, our lives will be permanently altered by it.

In the course of the twelve-minute lecture, Annunziata explains how the emergence of machines that can see, feel, sense and react will lead to an age where the technology we depend upon operates with far greater efficiency. Naturally, there are many who would suspect that this all boils down to AIs doing the thinking for us, but in fact, it's much more complicated than that.

Think of a world where we could network and communicate with all of our devices – not just our smartphones or computers, but everything from our car keys to our cars and home appliances. With everything tagged and represented in a virtual, Internet-like environment, we could communicate with or remotely check on things halfway across the world.

Think of the implications! As someone who is currently very fascinated with how the world will look in the not-too-distant future, and how people will interact with it, I can tell you this stuff is science fiction gold! Check it out and be sure to follow the link at the bottom of the page to comment.


Source: ted.com

Judgement Day Update: Google Robot Army Expanding

Last week, Google announced that it will be expanding its menagerie of robots, thanks to a recent acquisition. The announcement came on Dec. 13th, when the tech giant confirmed that it had bought out the engineering company known as Boston Dynamics. This company, which has had several lucrative contracts with DARPA and the Pentagon, has been making headlines in the past few years thanks to its advanced robot designs.

Based in Waltham, Massachusetts, Boston Dynamics has gained an international reputation for machines that walk with an uncanny sense of balance, can navigate tough terrain on four feet, and even run faster than the fastest humans. The names BigDog, Cheetah, WildCat, Atlas and the Legged Squad Support System (LS3), have all become synonymous with the next generation of robotics, an era when machines can handle tasks too dangerous or too dirty for most humans to do.

More impressive is the fact that this is the eighth robotics company that Google has acquired in the past six months. Thus far, the company has been tight-lipped about what it intends to do with this expanding robot-making arsenal. But Boston Dynamics and its machines bring significant cachet to Google's robotic efforts, which are being led by Andy Rubin, the Google executive who spearheaded the development of Android.

The deal is also the clearest indication yet that Google is intent on building a new class of autonomous systems that might do anything from warehouse work to package delivery and even elder care. And considering the many areas of scientific and technological advancement Google is involved in – everything from AI and IT to smartphones and space travel – it is not surprising to see them branching out in this way.

Boston Dynamics was founded in 1992 by Marc Raibert, a former professor at the Massachusetts Institute of Technology. And while it has not sold robots commercially, it has pushed the limits of mobile and off-road robotics technology thanks to its ongoing relationship with and funding from DARPA. Early on, the company also did consulting work for Sony on consumer robots like the Aibo robotic dog.

Speaking on the subject of the recent acquisition, Raibert had nothing but nice things to say about Google and the man leading the charge:

I am excited by Andy and Google’s ability to think very, very big, with the resources to make it happen.

Videos featuring the robots of Boston Dynamics have been extremely popular on YouTube in recent years. For example, the video of their four-legged, gas-powered BigDog walker has been viewed 15 million times since it was posted in 2008. In the comments, many people expressed dismay over how such robots could eventually become autonomous killing machines with the potential to murder us.

In response, Dr. Raibert has emphasized repeatedly that he does not consider his company to be a military contractor – it is merely trying to advance robotics technology. Google executives said the company would honor existing military contracts, but that it did not plan to move toward becoming a military contractor on its own. In many respects, this acquisition is likely just an attempt to acquire more talent and resources as part of a larger push.

Google’s other robotics acquisitions include companies in the United States and Japan that have pioneered a range of technologies, including software for advanced robot arms, grasping technology, and computer vision. Mr. Rubin has also said that he is interested in advancing sensor technology, and he has called his robotics effort a “moonshot,” though he has declined to describe specific products that might come from the project.

He has, however, also said that he does not expect initial product development to be completed for some time, indicating that commercial Google robots of some nature will not be available for several more years. Google declined to say how much it paid for its newest robotics acquisition and said that it did not plan to release financial information on any of the other companies it has recently bought.

Considering the growing power and influence Google is having over technological research – be it in computing, robotics, neural nets or space exploration – it might not be too soon to assume that they are destined to one day create the supercomputer that will try to kill us all. In short, Google will play Cyberdyne to Skynet and unleash the Terminators. Consider yourself warned, people! 😉

Source: nytimes.com

Judgement Day Update: Bionic Computing!

IBM has always been at the forefront of cutting-edge technology. Whether it was with the development of computers that could guide ICBMs and rockets into space during the Cold War, or its contributions to the growth of the Internet during the early 90s, the company has managed to stay on the vanguard by constantly looking ahead. So it comes as no surprise that they had plenty to say last month on the subject of the next big leap.

During a media tour of their Zurich lab in late October, IBM presented some of the company's latest concepts. According to the company, the key to creating supermachines that are 10,000 times faster and more efficient is to build bionic computers cooled and powered by electronic blood. The end result of this plan is what is known as "Big Blue": a proposed biocomputer that they anticipate will take 10 years to build.

Intrinsic to the design is the merger of computing and biological forms, specifically the human brain. In terms of computing, IBM is relying on the human brain as its template. Through this, they hope to enable processing power that's densely packed into 3D volumes rather than spread out across flat 2D circuit boards with slow communication links.

On the biological side of things, IBM is supplying computing equipment to the Human Brain Project (HBP) – a $1.3 billion European effort that uses computers to simulate the actual workings of an entire brain. Beginning with mice, but then working their way up to human beings, their simulations examine the inner workings of the mind all the way down to the biochemical level of the neuron.

It’s all part of what IBM calls “the cognitive systems era”, a future where computers aren’t just programmed, but also perceive what’s going on, make judgments, communicate with natural language, and learn from experience. As the description would suggest, it is closely related to artificial intelligence, and may very well prove to be the curtain raiser of the AI era.

One of the key challenges behind this work is matching the brain’s power consumption. The ability to process the subtleties of human language helped IBM’s Watson supercomputer win at “Jeopardy.” That was a high-profile step on the road to cognitive computing, but from a practical perspective, it also showed how much farther computing has to go. Whereas Watson uses 85 kilowatts of power, the human brain uses only 20 watts.

Already, a shift has been occurring in computing, which is evident in the way engineers and technicians are now measuring computer progress. For the past few decades, the method of choice for gauging performance was operations per second, or the rate at which a machine could perform mathematical calculations.

But as computers began to require prohibitive amounts of power and generated far too much waste heat, a new measurement was called for. The measurement that emerged as a result was expressed in operations per joule of energy consumed. In short, progress has come to be measured in terms of a computer's energy efficiency.

But now, IBM is contemplating another method for measuring progress that is known as “operations per liter”. In accordance with this new paradigm, the success of a computer will be judged by how much data-processing can be squeezed into a given volume of space. This is where the brain really serves as a source of inspiration, being the most efficient computer in terms of performance per cubic centimeter.
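
The difference between the three yardsticks is easy to see with a little arithmetic. The power figures below come from the article (Watson at roughly 85 kilowatts, the brain at about 20 watts); the operation counts and volumes are rough assumptions thrown in purely to make the comparison concrete.

```python
# Comparing operations per second, per joule, and per litre.
def efficiency(ops_per_second, power_watts, volume_litres):
    return {
        "ops/s":     ops_per_second,
        "ops/joule": ops_per_second / power_watts,
        "ops/litre": ops_per_second / volume_litres,
    }

# Throughput and volume figures are assumed, for illustration only.
watson = efficiency(ops_per_second=80e12, power_watts=85_000, volume_litres=10_000)
brain  = efficiency(ops_per_second=1e15,  power_watts=20,     volume_litres=1.5)

print(watson)
print(brain)   # the brain wins by orders of magnitude on the last two metrics
```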

As it stands, today’s computers consist of transistors and circuits laid out on flat boards that ensure plenty of contact with air that cools the chips. But as Bruno Michel – a biophysics professor and researcher in advanced thermal packaging for IBM Research – explains, this is a terribly inefficient use of space:

In a computer, processors occupy one-millionth of the volume. In a brain, it’s 40 percent. Our brain is a volumetric, dense, object.

In short, communication links between processing elements can’t keep up with data-transfer demands, and they consume too much power as well. The proposed solution is to stack and link chips into dense 3D configurations, a process which is impossible today because stacking even two chips means crippling overheating problems. That’s where the “liquid blood” comes in, at least as far as cooling is concerned.

This process is demonstrated with the company’s prototype system called Aquasar. By branching chips into a network of liquid cooling channels that funnel fluid into ever-smaller tubes, the chips can be stacked together in large configurations without overheating. The liquid passes not next to the chip, but through it, drawing away heat in the thousandth of a second it takes to make the trip.

In addition, IBM is also developing a system called a redox flow battery that uses liquid to distribute power instead of wires. Two types of electrolyte fluid, each with oppositely charged electrical ions, circulate through the system to distribute power, much in the same way that the human body provides oxygen, nutrients and cooling to the brain through the blood.

The electrolytes travel through ever-smaller tubes that are about 100 microns wide at their smallest – the width of a human hair – before handing off their power to conventional electrical wires. Flow batteries can produce between 0.5 and 3 volts, and that in turn means IBM can use the technology today to supply 1 watt of power for every square centimeter of a computer’s circuit board.

Already, the IBM Blue Gene supercomputer has been used for brain research by the Blue Brain Project at the Ecole Polytechnique Federale de Lausanne (EPFL) in Switzerland. Working with the HBP, their next step will be to augment a Blue Gene/Q with additional flash memory at the Swiss National Supercomputing Center.

After that, they will begin simulating the inner workings of the mouse brain, which consists of 70 million neurons. By the time they are conducting human brain simulations, they plan to be using an "exascale" machine – one that performs 1 exaflops, or a quintillion floating-point operations per second. This will take place at the Juelich Supercomputing Center in western Germany.

This is no easy challenge, mainly because the brain is so complex. In addition to 100 billion neurons and 100 trillion synapses, there are 55 different varieties of neuron and 3,000 ways they can interconnect. That complexity is multiplied by differences that appear with 600 different diseases, genetic variation from one person to the next, and changes that go along with the age and sex of humans.
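
A crude scale estimate shows why simulating this calls for an exascale machine. The neuron and synapse counts below are the ones quoted above; the per-synapse memory and compute costs are assumptions made only to get an order of magnitude.

```python
# Order-of-magnitude estimate for a whole-brain simulation.
neurons  = 100e9       # ~100 billion neurons (from the article)
synapses = 100e12      # ~100 trillion synapses (from the article)

bytes_per_synapse = 8             # assumption: a few state variables each
flops_per_synapse_per_step = 10   # assumption: a simple update rule
steps_per_second = 1000           # assumption: 1 ms time resolution

memory_needed  = synapses * bytes_per_synapse
compute_needed = synapses * flops_per_synapse_per_step * steps_per_second

print(f"Memory:  ~{memory_needed / 1e15:.1f} petabytes")     # ~0.8 PB
print(f"Compute: ~{compute_needed / 1e18:.1f} exaflops")     # ~1 exaflops
```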

As Henry Markram of the EPFL, who has worked on the Blue Brain project for years, put it:

If you can’t experimentally map the brain, you have to predict it — the numbers of neurons, the types, where the proteins are located, how they’ll interact. We have to develop an entirely new science where we predict most of the stuff that cannot be measured.

With the Human Brain Project, researchers will use supercomputers to reproduce how brains form in a virtual vat. Then, they will see how these virtual brains respond to input signals from simulated senses and a simulated nervous system. If it works, actual brain behavior should emerge from the fundamental framework inside the computer, and where it doesn't work, scientists will know where their knowledge falls short.

The end result of all this will also be computers that are “neuromorphic” – capable of imitating human brains, thereby ushering in an age when machines will be able to truly think, reason, and make autonomous decisions. No more supercomputers that are tall on knowledge but short on understanding. The age of artificial intelligence will be upon us. And I think we all know what will follow, don’t we?

Yep, that’s what! And may God help us all!

Sources: news.cnet.com, extremetech.com

Immortality Inc: Google’s “Calico”

Google has always been famous for investing in speculative ventures and future trends. Between their robot cars, Google Glass, the development of AI (the Google Brain), and their investments in alternative energy, there seems to be no limit to what Page and Brin's company will take on. And now, with Calico, Google has made the burgeoning industry of life extension its business.

The newly formed company has set itself to “focus on health and well-being, in particular the challenge of aging and associated diseases.” Those were the words of Google co-founder Larry Page, who issued a two-part press release back in September. From this, it is known that Calico will focus on life extension and improvement. But in what way and with what business model, the company has yet to explain.

What does seem clear at this point is that Art Levinson, the chairman of Apple and former CEO of Genentech (a pioneer in biotech), will be the one to head up this new venture. His history of working his way up from research scientist to CEO of Genentech makes him the natural choice, since he will bring medical connections and credibility to a company that's currently low on both.

Google Health, the company's last foray into the health industry, was a failure. The site, which launched in 2008 and shut down in 2011, was a personal health information centralization service that allowed Google users to volunteer their health records. Once entered, the site would provide them with a merged health record, information on conditions, and possible interactions between drugs, conditions, and allergies.

In addition, the reasons for the company’s venture into the realm of health and aging may have something to do with Larry Page’s own recent health concerns. For years, Page has struggled with vocal nerve strain, which led him to make a significant donation to research into the problem. But clearly, Calico aims to go beyond simple health problems and cures for known diseases.

In a comment to Time Magazine, Page stated that a cure for cancer would only extend the average human lifespan by 3 years. They want to think bigger than that, which could mean addressing the actual causes of aging – the molecular processes that break down cells. Given that Google Ventures included life-extension technology as part of their recent bid to attract engineering students, Google's top brass might have a slightly different idea.

And while this might all sound a bit farfetched, the concept of life-extension and even clinical immortality have been serious pursuits for some time. We tend to think of aging as a fact of life, something that is as inevitable as it is irreversible. However, a number of plausible scenarios have already been discussed that could slow or even end this process, ranging from genetic manipulation, nanotechnology, implant technology, and cellular therapy.

Whether or not Calico will get into any of these fields remains to be seen. But keep in mind that this is the company that has proposed setting aside land for no-holds-barred experimentation and has even talked about building a space elevator with a straight face. I wouldn't be surprised if they started building cryogenic tanks and jars for preserving disembodied brains before long!

Sources: extremetech.com, (2), content.time.com

The Future is Here: Self-Healing Polymer

I’ve heard of biomimetics – machinery and synthetics that can imitate organic materials – but this really takes the cake! In an effort to pioneer components and devices that would possess the regenerative powers of skin, Spanish researcher Ibon Odriozola – who works at the CIDETEC Centre for Electrochemical Technologies – has created a polymer that could lead to a future where repairing machinery is as easy as suturing an open wound.

Composed of a poly(urea-urethane) elastomeric matrix, the material is basically a network of complex molecular interactions that will spontaneously cross-link to “heal” almost any break. In this context, the word “spontaneous” means that the material needs no outside intervention to begin its healing process – no catalyst or extra reactant.

To experiment with the material, Odriozola cut a sample in half with a razor blade at room temperature. In just two hours, the cut healed itself with 97% efficiency. The reaction, called a metathesis reaction, has led Odriozola to dub the material his “Terminator” polymer, in reference to you-know-who. Though the healing process takes a little longer, and involves polymers instead of metal, the basic principle is the same.

Unlike other self-healing materials, this one requires no catalyst and no layering. In addition to being very impressive to behold, this technology could extend the lifespans of plastics that are under regular stress. The group's main goal now is to make a harder version, perhaps one that could be formed into mechanical parts itself. As it exists today, the polymer is squishy and somewhat soft.

In addition, a good self-healing material like this is a boon for ongoing efforts to find a viable material for artificial skin. Self-healing technology could also open the door to growth materials, as new units of the matrix could be incorporated as the material stretches and tears on the microscopic level. This would be especially useful when it comes to artificial skin, since it could grow over time and remove the need for replacement.

And if the healing mechanism proves strong enough, it could even be used as an adhesive or a sealant in other materials and even electronics. Just think of it! Everything from windows, to personal devices, to joints that are in need of padding. A simple injection of this type of material, and the breaks and aches go away. And given the progress being made with androids and life-like robots, its use as a source for artificial skin could go a long way to making them anthropomorphic.

And as usual, there’s a cool demonstration video. Enjoy!


Source: extremetech.com

Judgement Day Update: Using AI to Predict Flu Outbreaks

It’s a rare angle for those who’ve been raised on a heady diet of movies where the robot goes mad and tries to kill all humans: an artificial intelligence using its abilities to help humankind! But that’s the idea being explored by researchers like Raul Rabadan, a theoretical physicist working in biology at Columbia University. Using a new form of machine learning, they are seeking to unlock the mysteries of flu strains.

Basically, they are hoping to find out why flu strains like H1N1, which ordinarily infect pigs and cows, are managing to make the jump to human hosts. Key to understanding this is finding the specific mutations that transform them into human pathogens. Traditionally, answering this question would require painstaking comparisons of the DNA and protein sequences of different viruses.

But thanks to rapidly growing databases of virus sequences and advances in computing, scientists are now using sophisticated machine learning techniques – a branch of artificial intelligence in which computers develop algorithms based on the data they have been given – to identify key properties in viruses like bird flu and swine flu and to see how they go about transmitting from animals to humans.
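
In broad strokes, that approach means turning each viral protein sequence into a set of numerical features and then training a classifier to separate strains that infect humans from those that do not. The sketch below shows that general flavour with toy data and invented labels; it is not the researchers' published pipeline, and it assumes scikit-learn is available.

```python
# Illustrative only: k-mer counts as features, random forest as the learner.
from itertools import product
from sklearn.ensemble import RandomForestClassifier

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
KMERS = ["".join(p) for p in product(AMINO_ACIDS, repeat=2)]   # 400 features

def kmer_counts(sequence, k=2):
    counts = dict.fromkeys(KMERS, 0)
    for i in range(len(sequence) - k + 1):
        kmer = sequence[i:i + k]
        if kmer in counts:
            counts[kmer] += 1
    return [counts[km] for km in KMERS]

# Toy training data: (protein fragment, 1 = human-adapted, 0 = animal-only).
training = [("MKTIIALSYIFCLV", 1), ("MNPNQKIITIGSVC", 0),
            ("MKAILVVLLYTFAT", 1), ("MERIKELRDLMSQS", 0)]

X = [kmer_counts(seq) for seq, _ in training]
y = [label for _, label in training]

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(model.predict([kmer_counts("MKAILVVLLYTFAT")]))   # -> [1]
```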

This is especially important since every few decades, a pandemic flu virus emerges that not only infects humans but also passes rapidly from person to person. The H7N9 avian flu that infected more than 130 people in China is just the latest example. While it has not been as infectious as others, the fact that humans lack the antibodies to combat it led to a high lethality rate, with 44 of the infected dying. What's more, it is expected to emerge again this fall or winter.

Knowing the key properties of this and other viruses will help researchers identify the most dangerous new flu strains and could lead to more effective vaccines. Most importantly, scientists can now look at hundreds or thousands of flu strains simultaneously, which could reveal common mechanisms across different viruses or a broad diversity of transformations that enable human transmission.

Researchers are also using these approaches to investigate other viral mysteries, including what makes some viruses more harmful than others and factors influencing a virus’s ability to trigger an immune response. The latter could ultimately aid the development of flu vaccines. Machine learning techniques might even accelerate future efforts to identify the animal source of mystery viruses.

This technique was first employed in 2011 by Nir Ben-Tal – a computational biologist at Tel Aviv University in Israel – and Richard Webby – a virologist at St. Jude Children’s Research Hospital in Memphis, Tennessee. Together, Ben-Tal and Webby used machine learning to compare protein sequences of the 2009 H1N1 pandemic swine flu with hundreds of other swine viruses.

Machine learning algorithms have been used to study DNA and protein sequences for more than 20 years, but only in the past few years have scientists applied them to viruses. Inspired by the growing amount of viral sequence data available for analysis, the machine learning approach is likely to expand as even more genomic information becomes available.

As Webby has said, “Databases will get much richer, and computational approaches will get much more powerful.” That in turn will help scientists better monitor emerging flu strains and predict their impact, ideally forecasting when a virus is likely to jump to people and how dangerous it is likely to become.

Perhaps Asimov had the right of it. Perhaps humanity will actually derive many benefits from turning our world increasingly over to machines. Either that, or Cameron will be right, and we’ll invent a supercomputer that’ll kill us all!

Source: wired.com