Flash Forward Is Done!

After many months on the back burner, I finally took a big step while house-sitting for my family this weekend and completed Flash Forward. For those who don't know, this book is an anthology of short sci-fi stories I did back in April of 2013, with a few additions from both before and after. All told, it works out to 19 short stories, 140 pages, and just over 51,000 words.

For some time, I had been wanting to do some fiction that explored the world of emerging technologies, artificial intelligence, autonomous machines, space exploration and the coming Technological Singularity. And a project involving a short story a day for 26 days was just the excuse I needed. After collecting the resulting stories together, I grouped them into three parts based on common time period and theme.

Part I: Transitions deals with the near future, where climate change, militarized borders, and explosive growth in portables, social media, and synthetic foods will have a major effect on life. Part II: Convergence deals with the ensuing decades, where space exploration, artificial intelligence, digital sentience, and extropianism will become the norm and fundamentally alter what it is to live, work, and be human.

And Part III: Infinitum finishes things off, looking to the distant future where the seed of humanity is planted amongst the distant stars and our species passes the existential singularity. It was fun to write, but what I've been looking forward to for quite some time is the chance to hold a physical copy. Somehow, that's always the best moment of the whole creative process for me: seeing the book in print, as a real, physical thing you can touch and leaf through.

And now if you'll excuse me, I have a book to edit, a million and one ideas for critical revision to consider, and a whole heap of what Aldous Huxley referred to as "Chronic Remorse" to deal with. Writing, huh? There's a reason not everybody does it!

Stephen Hawking: AI Could Be a “Real Danger”

In a hilarious appearance on "Last Week Tonight" – John Oliver's HBO show – guest Stephen Hawking spoke about some rather interesting concepts. Among these were the concepts of "imaginary time" and, more interestingly, artificial intelligence. And much to the surprise of Oliver, and perhaps more than a few viewers, Hawking was not too keen on the idea of the latter. In fact, his predictions were just a tad dire.

Of course, this is not the first time Oliver has had a scientific authority on his show, as demonstrated by his recent episode on climate change, which featured Bill Nye "The Science Guy" as a guest. When asked about the concept of imaginary time, Hawking explained it as follows:

Imaginary time is like another direction in space. It’s the one bit of my work science fiction writers haven’t used.

In sum, imaginary time has something to do with time that runs in a different direction to the time that guides the universe and ravages us on a daily basis. And according to Hawking, the reason sci-fi writers haven't built stories around imaginary time is simple: "They don't understand it". As for artificial intelligence, Hawking replied without any sugar-coating:

Artificial intelligence could be a real danger in the not too distant future. [For your average robot could simply] design improvements to itself and outsmart us all.

Oliver, channeling his inner 9-year-old, asked: "But why should I not be excited about fighting a robot?" Hawking offered a very scientific response: "You would lose." And in that respect, he was absolutely right. One of the greatest concerns with AI, for better or for worse, is that a superior intelligence, left to its own devices, would find ways to produce better and better machines without human oversight or intervention.

At worst, this could lead to the machines concluding that humanity is no longer necessary. At best, it would lead to an earthly utopia where machines address all our worries. But in all likelihood, it will lead to a future where the pace of technological change is impossible to predict. As history has repeatedly shown, technological change brings with it all kinds of social and political upheaval. If it becomes a runaway effect, humanity will find it impossible to keep up.

Keeping things light, Oliver began to worry that Hawking wasn't talking to him at all, and that this could instead be a computer spouting wisdom. To which Hawking replied: "You're an idiot." Oliver also wondered whether, given that there may be many parallel universes, there might be one where he is smarter than Hawking. "Yes," replied the physicist. "And also a universe where you're funny."

Well at least robots won’t have the jump on us when it comes to being irreverent. At least… not right away! Check out the video of the interview below:


Source: cnet.com

Tech News: Google Seeking “Conscious Homes”

In Google's drive for world supremacy, a good number of start-ups and developers have been bought up. Between the acquisition of eight robotics companies in the space of six months back in 2013 and the ongoing buyout of anyone in the business of aerospace, voice and facial recognition, and artificial intelligence, Google seems determined to have a controlling interest in all fields of innovation.

And in what is their second-largest acquisition to date, Google announced earlier this month that they intend to get in on the business of smart homes. The company in question is Nest Labs, a home automation company founded by former Apple engineers Tony Fadell and Matt Rogers in 2010, and the creator of the Learning Thermostat and the Protect smoke and carbon monoxide detector.

The Learning Thermostat, the company's flagship product, works by learning a home's heating and cooling preferences over time, removing the need for manual adjustments or programming. Wi-Fi networking and a series of apps also let users control and monitor the unit from afar, consistent with one of the biggest tenets of smart home technology: connectivity.
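
Nest has never published its algorithm, but the basic idea of "learning" a schedule is easy to sketch. Here's a minimal, purely illustrative toy in Python – the class, the learning rate, and the blending rule are my own assumptions, not Nest's actual method:

```python
from collections import defaultdict

class LearningThermostat:
    """Toy sketch of schedule learning -- not Nest's real algorithm."""
    def __init__(self, default_temp=20.0, learning_rate=0.3):
        self.learning_rate = learning_rate
        # Learned setpoint for each (day-of-week, hour) slot.
        self.schedule = defaultdict(lambda: default_temp)

    def record_manual_adjustment(self, day, hour, temp):
        """Blend each manual adjustment into the learned schedule."""
        current = self.schedule[(day, hour)]
        self.schedule[(day, hour)] = current + self.learning_rate * (temp - current)

    def target_for(self, day, hour):
        """After enough adjustments, no manual programming is needed."""
        return self.schedule[(day, hour)]

# Example: the owner turns the heat up to 22 C on weekday mornings a few times...
t = LearningThermostat()
for _ in range(5):
    t.record_manual_adjustment("Mon", 7, 22.0)
print(round(t.target_for("Mon", 7), 1))  # drifts from 20.0 toward 22.0
```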

Similarly, the Nest Protect, a combination smoke and carbon monoxide detector, works by differentiating between burnt toast and real fires. When it detects light smoke, a first alarm goes off, which can be quieted by simply waving your hand in front of the unit. But in a real fire, or where deadly carbon monoxide is detected, a much louder alarm sounds to alert its owners.

In addition, the device sends a daily battery status report to the Nest mobile app – the same one that controls the thermostats – and is capable of connecting with other units in the home. And since Nest is building a platform for all its devices, if a Nest thermostat is installed in the same home, the Protect can automatically shut it down in the event that carbon monoxide is detected.
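
Put in code terms, the behavior described in the last two paragraphs amounts to a simple decision rule plus an interlock. Here's a minimal, hypothetical sketch – the event format, thresholds, and function name are invented for illustration and have nothing to do with Nest's real API:

```python
def handle_protect_event(event, thermostat=None):
    """Hypothetical sketch of the Protect behavior described above.

    event: dict with 'smoke' and 'co' levels (arbitrary units) and an
    optional 'hand_wave' flag. All thresholds are illustrative assumptions.
    """
    SMOKE_WARN, SMOKE_EMERGENCY, CO_EMERGENCY = 0.2, 0.7, 0.5

    if event.get("co", 0) >= CO_EMERGENCY:
        # CO is deadly: sound the loud alarm and, if a thermostat is on the
        # same platform, shut down the furnace (a possible CO source).
        if thermostat is not None:
            thermostat["running"] = False
        return "loud alarm: carbon monoxide detected"

    smoke = event.get("smoke", 0)
    if smoke >= SMOKE_EMERGENCY:
        return "loud alarm: fire detected"
    if smoke >= SMOKE_WARN:
        # Burnt-toast territory: a heads-up that can be waved away.
        return "silenced" if event.get("hand_wave") else "early warning chirp"
    return "all clear"

furnace = {"running": True}
print(handle_protect_event({"smoke": 0.3, "hand_wave": True}))  # silenced
print(handle_protect_event({"co": 0.9}, thermostat=furnace))    # loud alarm
print(furnace["running"])                                       # False
```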

According to a statement released by co-founder Tony Fadell, Nest will continue to be run in-house, but will be partnered with Google in their drive to create a conscious home. On his blog, Fadell explained his company's decision to join forces with the tech giant:

Google will help us fully realize our vision of the conscious home and allow us to change the world faster than we ever could if we continued to go it alone. We’ve had great momentum, but this is a rocket ship. Google has the business resources, global scale, and platform reach to accelerate Nest growth across hardware, software, and services for the home globally.

Yes, and I'm guessing that the $3.2 billion price tag added a little push as well! Needless to say, some wondered why Apple didn't try to snatch up this burgeoning company, seeing as how it's being run by two of its former employees. But according to Fadell, Google founder Sergey Brin "instantly got what we were doing and so did the rest of the Google team" when they got a Nest demo at the 2011 TED conference.

In a press release, Google CEO Larry Page had this to say about bringing Nest into their fold:

They’re already delivering amazing products you can buy right now – thermostats that save energy and smoke/[carbon monoxide] alarms that can help keep your family safe. We are excited to bring great experiences to more homes in more countries and fulfill their dreams!

But according to some, this latest act by Google goes way beyond wanting to develop devices. Sara Watson at Harvard University's Berkman Center for Internet and Society is one such person; she believes Google is now a company obsessed with viewing everyday activities as "information problems" to be solved by machine learning and algorithms.

Consider Google’s fleet of self-driving vehicles as an example, not to mention their many forays into smartphone and deep learning technology. The home is no different, and a Google-enabled smart home of the future, using a platform such as the Google Now app – which already gathers data on users’ travel habits – could adapt energy usage to your life in even more sophisticated ways.

Seen in these terms, Google's long-term plan of being at the forefront of the new technological paradigm – where smart technology knows and anticipates our needs and everything is at our fingertips – certainly becomes clearer. I imagine that their next goal will be to facilitate the creation of household AIs, machine minds that monitor everything within our household, provide maintenance, and ensure energy efficiency.

However, another theory has it that this is in keeping with Google's push into robotics, led by the former head of Android, Andy Rubin. According to Alexis C. Madrigal of the Atlantic, Nest always thought of itself as a robotics company, as evidenced by the fact that its VP of technology is none other than Yoky Matsuoka – a roboticist and artificial intelligence expert from the University of Washington.

During an interview with Madrigal back in 2012, Matsuoka explained why. She saw Nest as being positioned right in a place where it could help machine and human intelligence work together:

The intersection of neuroscience and robotics is about how the human brain learns to do things and how machine learning comes in to augment that.

In short, Nest is a cryptorobotics company that deals in sensing, automation, and control. It may not make a personable, humanoid robot, but it is producing machine intelligences that can do things in the physical world. Seen in this respect, the acquisition was not so much part of Google's drive to possess all our personal information as a step along the way towards the creation of a working artificial intelligence.

It's a Brave New World, and it seems that people like Musk, Page, and a slew of determined futurists are at the center of making it happen.

Sources: cnet.news.com, (2), newscientist.com, nest.com, theatlantic.com

The Future is… Worms: Life Extension and Computer-Simulations

Post-mortality is considered by most to be an intrinsic part of the so-called Technological Singularity. For centuries, improvements in medicine, nutrition and health have led to improved life expectancy. And in an age where so much more is possible – thanks to cybernetics and advances in bio, nano, and medical technology – it stands to reason that people will alter their physiques in order to slow the onset of aging and extend their lives even further.

And as research continues, new and exciting finds are being made that would seem to indicate that this future may be just around the corner. And at the heart of it may be a series of experiments involving worms. At the Buck Institute for Research on Aging in California, researchers have been tweaking longevity-related genes in nematode worms in order to amplify their lifespans.

And the latest results caught even the researchers by surprise. By triggering mutations in two pathways known for lifespan extension – mutations that inhibit key molecules involved in insulin signaling (IIS) and the nutrient signaling pathway Target of Rapamycin (TOR) – they created an unexpected feedback effect that amplified the lifespan of the worms by a factor of five.

Ordinarily, a tweak to the TOR pathway results in a 30% lifespan extension in C. elegans worms, while mutations in IIS (daf-2) result in a doubling of lifespan. By combining the mutations, the researchers were expecting something around a 130% extension to lifespan. Instead, the worms lived the equivalent of about 400 to 500 human years.
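
A quick back-of-the-envelope check of those figures shows why the result was such a surprise, whether you treat the single-mutation gains as additive (roughly what the team expected) or even as compounding:

```python
# Figures quoted above: TOR mutation alone gives ~30% extension,
# IIS (daf-2) alone roughly doubles lifespan (+100%).
tor_gain = 0.30
iis_gain = 1.00

expected_additive = tor_gain + iis_gain                        # +130%
expected_compounded = (1 + tor_gain) * (1 + iis_gain) - 1      # +160%
observed = 4.0                                                 # five-fold lifespan = +400%

print(f"expected (additive):   +{expected_additive:.0%}")
print(f"expected (compounded): +{expected_compounded:.0%}")
print(f"observed:              +{observed:.0%}")
```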

As Dr. Pankaj Kapahi said in an official statement:

Instead, what we have here is a synergistic five-fold increase in lifespan. The two mutations set off a positive feedback loop in specific tissues that amplified lifespan. These results now show that combining mutants can lead to radical lifespan extension — at least in simple organisms like the nematode worm.

The positive feedback loop, say the researchers, originates in the germline tissue of worms – a sequence of reproductive cells that may be passed on to successive generations. This may be where the interactions between the two mutations are integrated; and if correct, it might apply to the pathways of more complex organisms. Towards that end, Kapahi and his team are looking to perform similar experiments in mice.

But long-term, Kapahi says that a similar technique could be used to produce therapies for aging in humans. It's unlikely that it would result in the dramatic increase in lifespan seen in worms, but it could be significant nonetheless. For example, the research could help explain why scientists are having a difficult time identifying single genes responsible for the long lives of human centenarians:

In the early years, cancer researchers focused on mutations in single genes, but then it became apparent that different mutations in a class of genes were driving the disease process. The same thing is likely happening in aging. It’s quite probable that interactions between genes are critical in those fortunate enough to live very long, healthy lives.

A second worm-related story comes from the OpenWorm project, an international open source effort dedicated to the creation of a bottom-up computer model of a millimeter-sized nematode. As one of the simplest known multicellular life forms on Earth, it is considered a natural starting point for creating computer-simulated models of organic beings.

In an important step forward, OpenWorm researchers have completed the simulation of the nematode's 959 cells, 302 neurons, and 95 muscle cells, and their worm is wriggling around in fine form. However, despite this basic simplicity, the nematode is not without its share of complex behaviors, such as feeding, reproducing, and avoiding being eaten.

To model the complex behavior of this organism, the OpenWorm collaboration (which began in May 2013) is developing a bottom-up description. This involves making models of the individual worm cells and their interactions, based on their observed functionality in the real-world nematodes. Their hope is that realistic behavior will emerge if the individual cells act on each other as they do in the real organism.

Fortunately, we know a lot about these nematodes. The complete cellular structure is known, as well as rather comprehensive information concerning the worm's behavior in reaction to its environment. Included in our knowledge is the complete connectome, a comprehensive map of the neural connections (synapses) in the worm's nervous system.
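
In code terms, a bottom-up model like this comes down to stepping every cell forward in time according to its own rules and its connections to its neighbours. The toy sketch below uses a tiny, made-up "connectome" purely to show the shape of the approach – it is nothing like the real OpenWorm codebase:

```python
# Toy bottom-up simulation: each neuron integrates input from the cells it is
# wired to (a miniature "connectome") and fires when it crosses a threshold.
# The wiring, names, and weights here are invented for illustration only.

connectome = {            # postsynaptic neuron -> {presynaptic neuron: weight}
    "inter": {"sensor": 0.9},
    "motorA": {"inter": 0.6},
    "motorB": {"inter": 0.6, "motorA": -0.3},   # simple inhibition
}

state = {name: 0.0 for name in ["sensor", "inter", "motorA", "motorB"]}
THRESHOLD = 0.5

def step(state, external_input):
    new_state = {}
    for neuron in state:
        if neuron == "sensor":
            new_state[neuron] = external_input
            continue
        # Sum weighted input from every presynaptic cell that fired last step.
        drive = sum(w * (state[pre] > THRESHOLD)
                    for pre, w in connectome.get(neuron, {}).items())
        new_state[neuron] = drive
    return new_state

# Poke the worm: drive the sensor for two timesteps and watch activity spread.
for t in range(4):
    state = step(state, external_input=1.0 if t < 2 else 0.0)
    firing = [n for n, v in state.items() if v > THRESHOLD]
    print(t, firing)
```

OpenWorm does the same thing at vastly higher fidelity, with the real cells, the real wiring, and physics for the worm's body and its surroundings.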

The big question is, assuming that the behavior of the simulated worms continues to agree with the real thing, at what stage might it be reasonable to call it a living organism? The usual definition of a living organism is behavioral: it extracts usable energy from its environment, maintains homeostasis, possesses a capacity to grow, responds to stimuli, reproduces, and adapts to its environment in successive generations.

If the simulation exhibits these behaviors, combined with realistic responses to its external environment, should we consider it to be alive? And just as importantly, what tests could be used to evaluate such a hypothesis? One possibility is an altered version of the Turing test – Alan Turing's proposed method for testing whether or not a computer could be called sentient.

In the Turing test, a computer is considered sentient and sapient if it can simulate the responses of a conscious sentient being so that an auditor can’t tell the difference. A modified Turing test might say that a simulated organism is alive if a skeptical biologist cannot, after thorough study of the simulation, identify a behavior that argues against the organism being alive.

And of course, this raises even larger questions. For one, is humanity on the verge of creating "artificial life"? And what, if anything, does that really look like? Could it just as easily be in the form of computer simulations as anthropomorphic robots and biomachinery? And if the answer to any of these questions is yes, then what exactly does that say about our preconceived notions of what life is?

If humanity is indeed moving into an age of “artificial life”, and from several different directions, it is probably time that we figure out what differentiates the living from the nonliving. Structure? Behavior? DNA? Local reduction of entropy? The good news is that we don’t have to answer that question right away. Chances are, we wouldn’t be able to at any rate.

And though it might not seem apparent, there is a connection between the former and the latter story here. In addition to being able to prolong life through genetic engineering, the ability to simulate consciousness through computer-generated constructs might just prove a way to cheat death in the future. If complex life forms and connectomes (like the one underlying the human brain) can be simulated, then people may be able to transfer their neural patterns before death and live on in simulated form indefinitely.

So… anti-aging, artificial life forms, and the potential for living indefinitely. And to think that it all begins with the simplest multicellular life form on Earth – the nematode worm. But then again, all life – nay, all of existence – depends upon the simplest of interactions, which in turn give rise to more complex behaviors and organisms. Where else would we expect the next leap in biotechnological evolution to come from?

And in the meantime, be sure to enjoy this video of OpenWorm's simulated nematode in action:


Sources: IO9, cell.com, gizmag, openworm

Judgement Day Update: Google Robot Army Expanding

Last week, Google announced that it will be expanding its menagerie of robots, thanks to a recent acquisition. The announcement came on Dec. 13th, when the tech giant confirmed that it had bought out the engineering company known as Boston Dynamics. This company, which has had several lucrative contracts with DARPA and the Pentagon, has been making headlines in the past few years thanks to its advanced robot designs.

Based in Waltham, Massachusetts, Boston Dynamics has gained an international reputation for machines that walk with an uncanny sense of balance, can navigate tough terrain on four feet, and even run faster than the fastest humans. The names BigDog, Cheetah, WildCat, Atlas and the Legged Squad Support System (LS3) have all become synonymous with the next generation of robotics, an era when machines can handle tasks too dangerous or too dirty for most humans to do.

More impressive is the fact that this is the eighth robotics company that Google has acquired in the past six months. Thus far, the company has been tight-lipped about what it intends to do with this expanding robot-making arsenal. But Boston Dynamics and its machines bring significant cachet to Google's robotic efforts, which are being led by Andy Rubin, the Google executive who spearheaded the development of Android.

The deal is also the clearest indication yet that Google is intent on building a new class of autonomous systems that might do anything from warehouse work to package delivery and even elder care. And considering the many areas of scientific and technological advancement Google is involved in – everything from AI and IT to smartphones and space travel – it is not surprising to see them branching out in this way.

Boston Dynamics was founded in 1992 by Marc Raibert, a former professor at the Massachusetts Institute of Technology. And while it has not sold robots commercially, it has pushed the limits of mobile and off-road robotics technology, thanks to its ongoing relationship with and funding from DARPA. Early on, the company also did consulting work for Sony on consumer robots like the Aibo robotic dog.

Speaking on the subject of the recent acquisition, Raibert had nothing but nice things to say about Google and the man leading the charge:

I am excited by Andy and Google’s ability to think very, very big, with the resources to make it happen.

Videos featuring the robots of Boston Dynamics have been extremely popular on YouTube in recent years. For example, the video of their four-legged, gas-powered BigDog walker has been viewed 15 million times since it was posted in 2008. In the comments, many people expressed dismay over how such robots could eventually become autonomous killing machines with the potential to murder us.

In response, Dr. Raibert has emphasized repeatedly that he does not consider his company to be a military contractor – it is merely trying to advance robotics technology. Google executives said the company would honor existing military contracts, but that it did not plan to move toward becoming a military contractor on its own. In many respects, this acquisition is likely just an attempt to acquire more talent and resources as part of a larger push.

Google's other robotics acquisitions include companies in the United States and Japan that have pioneered a range of technologies, including software for advanced robot arms, grasping technology and computer vision. Rubin has also said that he is interested in advancing sensor technology, and has called his robotics effort a "moonshot," but has declined to describe specific products that might come from the project.

He has, however, said that he does not expect the first products to appear for some time, indicating that commercial Google robots of some kind will not be available for several more years. Google declined to say how much it paid for its newest robotics acquisition and said that it did not plan to release financial information on any of the other companies it has recently bought.

Considering Google's growing power and influence over technological research – be it in computing, robotics, neural nets or space exploration – it might not be too soon to assume that they are destined to one day create the supercomputer that will try to kill us all. In short, Google will play Cyberdyne to Skynet and unleash the Terminators. Consider yourself warned, people! 😉

Source: nytimes.com

Judgement Day Update: Bionic Computing!

IBM has always been at the forefront of cutting-edge technology. Whether it was the development of computers that could guide ICBMs and rockets into space during the Cold War, or its contributions to the early Internet in the 1990s, the company has managed to stay on the vanguard by constantly looking ahead. So it comes as no surprise that it had plenty to say last month on the subject of the next big leap.

During a media tour of their Zurich lab in late October, IBM presented some of the company's latest concepts. According to the company, the key to creating supermachines that are 10,000 times faster and more efficient is to build bionic computers cooled and powered by electronic blood. The end result of this plan is what is known as "Big Blue", a proposed biocomputer that they anticipate will take 10 years to make.

Intrinsic to the design is the merger of computing and biological forms, specifically the human brain. In terms of computing, IBM is relying on the human brain as its template. Through this, they hope to enable processing power that's densely packed into 3D volumes rather than spread out across flat 2D circuit boards with slow communication links.

On the biological side of things, IBM is supplying computing equipment to the Human Brain Project (HBP) – a $1.3 billion European effort that uses computers to simulate the actual workings of an entire brain. Beginning with mice, but then working their way up to human beings, their simulations examine the inner workings of the mind all the way down to the biochemical level of the neuron.

It's all part of what IBM calls "the cognitive systems era", a future where computers aren't just programmed, but also perceive what's going on, make judgments, communicate with natural language, and learn from experience. As the description would suggest, it is closely related to artificial intelligence, and may very well prove to be the curtain raiser of the AI era.

One of the key challenges behind this work is matching the brain's power consumption. The ability to process the subtleties of human language helped IBM's Watson supercomputer win at "Jeopardy." That was a high-profile step on the road to cognitive computing, but from a practical perspective, it also showed how much farther computing has to go. Whereas Watson uses 85 kilowatts of power, the human brain uses only 20 watts.

Already, a shift has been occurring in computing, which is evident in the way engineers and technicians are now measuring computer progress. For the past few decades, the method of choice for gauging performance was operations per second, or the rate at which a machine could perform mathematical calculations.

But as computers began to require prohibitive amounts of power to perform various functions and generated far too much waste heat, a new measurement was called for. The measurement that emerged as a result is expressed in operations per joule of energy consumed. In short, progress has come to be measured in terms of a computer's energy efficiency.

But now, IBM is contemplating another method for measuring progress, known as "operations per liter". In accordance with this new paradigm, the success of a computer will be judged by how much data-processing can be squeezed into a given volume of space. This is where the brain really serves as a source of inspiration, being the most efficient computer in terms of performance per cubic centimeter.
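
Plugging some numbers into those yardsticks makes the shift in emphasis obvious. The Watson and brain power figures below are the ones quoted in this article; the throughput and volume figures are ballpark assumptions of my own, so treat the comparison as purely illustrative:

```python
# Rough, illustrative comparison of "operations per joule" and
# "operations per liter" for a Watson-class machine versus a brain.
watson_power_w   = 85_000     # quoted above
brain_power_w    = 20         # quoted above
watson_ops_per_s = 80e12      # assumption: order-of-magnitude estimate
brain_ops_per_s  = 1e15       # assumption: a common (and contested) estimate
watson_volume_l  = 10_000     # assumption: roughly a room full of server racks
brain_volume_l   = 1.3        # assumption: typical human brain volume

for name, ops, power, vol in [
    ("Watson", watson_ops_per_s, watson_power_w, watson_volume_l),
    ("Brain",  brain_ops_per_s,  brain_power_w,  brain_volume_l),
]:
    print(f"{name}: {ops / power:.2e} ops/joule, {ops / vol:.2e} ops/liter")
```

However rough the assumptions, the brain wins both contests by several orders of magnitude, which is exactly why IBM keeps pointing to it.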

As it stands, today’s computers consist of transistors and circuits laid out on flat boards that ensure plenty of contact with air that cools the chips. But as Bruno Michel – a biophysics professor and researcher in advanced thermal packaging for IBM Research – explains, this is a terribly inefficient use of space:

In a computer, processors occupy one-millionth of the volume. In a brain, it’s 40 percent. Our brain is a volumetric, dense, object.

In short, communication links between processing elements can't keep up with data-transfer demands, and they consume too much power as well. The proposed solution is to stack and link chips into dense 3D configurations, a process which is impossible today because stacking even two chips means crippling overheating problems. That's where the "liquid blood" comes in, at least as far as cooling is concerned.

This process is demonstrated with the company's prototype system, Aquasar. By lacing chips with a network of liquid cooling channels that funnel fluid into ever-smaller tubes, the chips can be stacked together in large configurations without overheating. The liquid passes not next to the chip, but through it, drawing away heat in the thousandth of a second it takes to make the trip.

In addition, IBM is also developing a system called a redox flow battery, which uses liquid to distribute power instead of wires. Two types of electrolyte fluid, each with oppositely charged electrical ions, circulate through the system to distribute power, much in the same way that the human body provides oxygen, nutrients and cooling to the brain through the blood.

The electrolytes travel through ever-smaller tubes that are about 100 microns wide at their smallest – the width of a human hair – before handing off their power to conventional electrical wires. Flow batteries can produce between 0.5 and 3 volts, and that in turn means IBM can use the technology today to supply 1 watt of power for every square centimeter of a computer’s circuit board.

Already, the IBM Blue Gene supercomputer has been used for brain research by the Blue Brain Project at the Ecole Polytechnique Federale de Lausanne (EPFL) in Lausanne, Switzerland. Working with the HBP, their next step will be to augment a Blue Gene/Q with additional flash memory at the Swiss National Supercomputing Center.

After that, they will begin simulating the inner workings of the mouse brain, which consists of 70 million neurons. By the time they are conducting human brain simulations, they plan to be using an "exascale" machine – one that performs 1 exaflops, or a quintillion floating-point operations per second. This will take place at the Juelich Supercomputing Center in northern Germany.

This is no easy challenge, mainly because the brain is so complex. In addition to 100 billion neurons and 100 trillion synapses, there are 55 different varieties of neuron, and 3,000 ways they can interconnect. That complexity is multiplied by differences that appear with 600 different diseases, genetic variation from one person to the next, and changes that go along with the age and sex of humans.
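
Those figures also put the "exascale" target in perspective. A back-of-the-envelope division using the numbers quoted above shows how thin the per-component computing budget really is:

```python
# All figures are the ones quoted in this article.
exaflops       = 1e18     # target machine: a quintillion operations per second
human_neurons  = 100e9    # 100 billion
human_synapses = 100e12   # 100 trillion
mouse_neurons  = 70e6     # 70 million

print(f"ops per human neuron per second:  {exaflops / human_neurons:.1e}")   # ~1e7
print(f"ops per human synapse per second: {exaflops / human_synapses:.1e}")  # ~1e4
print(f"ops per mouse neuron per second:  {exaflops / mouse_neurons:.1e}")   # ~1.4e10
```

Whether that budget is enough depends entirely on how much biochemical detail each neuron and synapse model has to carry.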

As Henry Markram of EPFL, who has worked on the Blue Brain project for years, put it:

If you can’t experimentally map the brain, you have to predict it — the numbers of neurons, the types, where the proteins are located, how they’ll interact. We have to develop an entirely new science where we predict most of the stuff that cannot be measured.

With the Human Brain Project, researchers will use supercomputers to reproduce how brains form in a virtual vat. Then, they will see how these virtual brains respond to input signals from simulated senses and a simulated nervous system. If it works, actual brain behavior should emerge from the fundamental framework inside the computer, and where it doesn't work, scientists will know where their knowledge falls short.

The end result of all this will also be computers that are "neuromorphic" – capable of imitating human brains, thereby ushering in an age when machines will be able to truly think, reason, and make autonomous decisions. No more supercomputers that are long on knowledge but short on understanding. The age of artificial intelligence will be upon us. And I think we all know what will follow, don't we?

Yep, that's what! And may God help us all!

Sources: news.cnet.com, extremetech.com