Immortality Is On The Way!

William Gibson must get a kick out of news items like these. According to a recent article over at IO9, it seems that an entrepreneur named Dmitry Itskov and a team of Russian scientists are developing a project that could render humans immortal by the year 2045, after a fashion. According to the plan, which is called the 2045 Initiative, they hope to create a fully functional, holographic avatar of a human being.

At the core of this avatar will be an artificial brain containing all the thoughts, memories, and emotions of the person being simulated. Given recent advancements in computer technology, including the Google Neural Net, the team estimates that it won’t be long before a construct can be made that can store the sum total of a human mind.

If this concept sounds familiar, chances are you’ve been reading either Gibson’s Sprawl Trilogy or Ray Kurzweil’s wishlist. Intrinsic to the former’s cyberpunk novels and the latter’s futurist predictions is the concept of people merging their intelligence with machines in order to preserve their very essence for all time. Men like Kurzweil want this technology because it would let them live forever, while novelists like Gibson predicted it would be something the mega-rich alone would have access to.

Which brings me to another aspect of this project. It seems that Itskov has gone to great lengths to secure investment capital to realize this dream. This included an open letter to the 1,226 people on Forbes Magazine’s list of the world’s richest, offering them a chance to invest and make their mark on history. If any of them have already chosen to invest, it’s pretty obvious why. Being so rich and powerful, they can’t be too crazy about the idea of dying. In addition, the process isn’t likely to come cheap. Hence, if and when the technology is realized, the world’s richest people will be the first to create avatars of themselves.

No indication of when the technology will be commercially viable for, say, the rest of us. But the team has provided a helpful infographic of when the project’s various steps will be realized (see above). The dates are a little flexible, but they anticipate being able to create a robotic copy of a human body (i.e. an android) within three to eight years. In eight to thirteen years, they would be able to build a robotic body capable of housing a human brain. Within eighteen to twenty-three years, a robotic humanoid with a mechanical brain that can house human memories should be realizable. Last, and most impressive, will be a holographic program capable of preserving a person’s memories and neural patterns (a.k.a. their personality) indefinitely.

You have to admit, this kind of technology raises an awful lot of questions. For one, there are the inevitable social consequences. If the wealthiest citizens in the world are never going to die, what becomes of their spoiled children? Do they no longer inherit their parents’ wealth, or do they simply live on forever as well? And won’t it cramp their style, knowing that mommy and daddy are living forever in the box next to theirs?

What’s more, if there’s no generational turnover, won’t this affect the whole nature and culture of wealth? Wealth is, by its very nature, something which is passed on from generation to generation, ensuring the creation of elites and their influence over society. In this scenario, the same people are likely to exert influence generation after generation, wielding a sort of power which is virtually godlike.

And let’s not forget the immense spiritual and existential implications! Does technology like this disprove the concept of the immortal soul, or at least its transcendent nature? If the human personality can be reduced to a connectome, which can in turn be digitized and stored, then what room is left for the soul? Or, alternatively, if the soul really does exist, won’t the people who partake in this experiment be committing the ultimate sin?

All stuff to ponder as the project either approaches realization or falls flat on its face, leaving such matters for future generations to wrestle with. In the meantime, we shouldn’t worry too much. As this century progresses and technology grows, we will have plenty of other chances to desecrate the soul. And given the advance of overpopulation and climate change, odds are we’ll be dying off before any of those plans reach fruition. Always look on the bright side, as they say 😉

The Future is Here: The Google Neural Net!

I came across a recent story at BBC News, one which makes me both hopeful and fearful. It seems that a team of researchers working for Google has completed work on an artificial neural net that is capable of recognizing pictures of cats. Designed and built to mimic the human brain, the system may very well represent the first instance of a computer exercising the faculty of autonomous reasoning – the very thing that we humans are so proud (and jealous) of!

The revolutionary new system was a collaborative effort between Google’s X Labs division and Professor Andrew Ng of the AI Lab at Stanford University, California. Unlike conventional image-recognition software, which is told in advance what features to look for in a target picture, the Google machine knew nothing about the images beforehand. Instead, it relied on its 16,000 processing cores to run software that simulated the workings of a biological neural network with about one billion connections.
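To give a rough sense of what “learning without being told what to look for” means, here’s a minimal sketch of an autoencoder in Python – a network trained only to reconstruct unlabeled inputs, so any features it develops are discovered on its own. To be clear, this is not Google’s code or architecture (theirs ran across 16,000 cores with about a billion connections); the sizes, data and names below are made up purely for illustration.

```python
# A toy autoencoder: it learns to re-create its input without any labels,
# so whatever internal features it develops are discovered on its own.
# Purely illustrative -- not Google's system; sizes and data are invented.
import numpy as np

rng = np.random.default_rng(0)

n_inputs = 64    # a tiny 8x8 "image", not 200x200 video stills
n_hidden = 16    # a handful of learned features
lr = 0.1

# Random starting weights: encoder (input -> features), decoder (features -> input)
W_enc = rng.normal(0, 0.1, (n_inputs, n_hidden))
W_dec = rng.normal(0, 0.1, (n_hidden, n_inputs))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Unlabeled "images": nothing here ever tells the network what it is looking at.
data = rng.random((500, n_inputs))

for epoch in range(100):
    # Forward pass: compress each input into features, then try to rebuild it.
    hidden = sigmoid(data @ W_enc)
    recon = hidden @ W_dec
    error = recon - data

    # Backpropagate the reconstruction error through both layers.
    grad_dec = hidden.T @ error / len(data)
    grad_hidden = (error @ W_dec.T) * hidden * (1 - hidden)
    grad_enc = data.T @ grad_hidden / len(data)

    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print("final reconstruction error:", np.mean(error ** 2))
```

The point of the sketch is the absence of labels: nothing in that training loop ever says “cat”. Scale something like this up enormously, feed it millions of video stills, and some of the learned features turn out to correspond to recognizable things – which is what made the Google result so striking.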

Now, according to various estimates, the human cerebral cortex contains at least 10¹⁰ neurons linked by 10¹⁴ synaptic connections – or in lay terms, around 10 billion neurons with roughly 100 trillion connections. That means this artificial brain has about one hundred-thousandth the complexity of the organic, human one. Not quite as complex, but it’s a start… A BIG start really!
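To make that gap concrete, here’s the back-of-the-envelope arithmetic, using only the rough estimates quoted above (they are estimates, not measured values):

```python
# Back-of-the-envelope scale comparison, using the rough figures quoted above.
google_connections = 1e9    # ~1 billion simulated connections
cortex_neurons = 1e10       # ~10 billion neurons in the cerebral cortex (estimate)
cortex_synapses = 1e14      # ~100 trillion synaptic connections (estimate)

ratio = cortex_synapses / google_connections
print(f"the cortex has roughly {ratio:,.0f} times as many connections")  # ~100,000
```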

For decades – hell, even centuries and millennia – human beings have contemplated what it would take to make an autonomous automaton. Even with all the growth in computers’ processing speed and storage, the question of how to make the leap from a smart machine to a truly “intelligent” one has remained tricky. Judging from all the speculation and representations in fiction, everyone seemed to surmise that some sort of artificial neural net would be involved, something that could mimic the process of forming connections, encoding experiences into a physical (i.e. digital) form, and expanding based on ongoing learning.

Naturally, Google has plans for applications of this new system. Apparently, the company is hoping it will help with its indexing systems and with language translation. Giving the new guy the boring jobs, huh? I wonder what’s going to happen when the newer, smarter models start coming out. I can foresee new generations emerging over time, much as new generations of iPods with larger and larger storage capacities have come out every year for the past decade, or like the faster and faster CPUs of the past three decades. Yes, this could very well represent the next great technological race, as foreseen by such men as Eliezer Yudkowsky, Nick Bostrom, and Ray Kurzweil.
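For a sense of what “faster and faster CPUs” compounds to, here’s the classic doubling arithmetic. Moore’s law is usually quoted as a doubling of transistor counts roughly every two years (the exact period is debated, so treat this as a ballpark):

```python
# Rough Moore's-law arithmetic: a doubling roughly every two years.
years = 30
doubling_period = 2  # years per doubling (the commonly quoted figure)

doublings = years / doubling_period
improvement = 2 ** doublings
print(f"{doublings:.0f} doublings over {years} years -> roughly {improvement:,.0f}x")
# -> 15 doublings over 30 years -> roughly 32,768x
```

If artificial neural nets follow anything like that curve, the newer, smarter models won’t take long to arrive.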

In short, Futurists will rejoice, Alarmists will be afraid, and science fiction writers will exploit it for all it’s worth! Until next time, keep your eyes peeled for any red-eyed robots. That seems to be the first warning sign of impending robocalypse!

The Coming Singularity… In Song!

Singularitarian. That’s a good name for someone who embraces the idea of the coming Technological Singularity, which I believe I mentioned somewhere… Yes, these days a lot of high-minded terms get thrown around to describe what may very well be possible somewhere in this century and the next: Extropian, Post-Human, Clinical Immortality, Artificial Intelligence, Cyber Ethics, Transhuman, Mind/Machine Interface, Law of Accelerating Returns, and so forth. It can be hard to keep up with, since the lingo is complex and esoteric. Lots of big and obscure words there…

Luckily, Mr. Charlie Kam has decided to explain. Setting the ideas to the tune of “I Am the Very Model of a Modern Major-General”, he lays out how the concept works and what the eventual aim is. Basically, it’s all about improving the condition of humanity through the ongoing application of technology. By preserving our cells and our memories and lengthening our lives, we will ensure that humanity lives on and achieves more than we previously thought possible.

Since we don’t yet know how to do this, the first step will be either to merge our own minds with technology to enhance our thought processes and expand our awareness, or to create machinery that can do the job for us (a.k.a. AI). Then, applying this superior intelligence, we will unlock the mysteries of the universe, create nanotech machines and medicines that can cure all diseases, and build machinery that can store human memories, senses and impressions for all time.

Some big names got thrown in there too, not the least of which was Ray Kurzweil, the noted Futurist. But don’t take my word for it, watch the video. If nothing else, it’s good for a laugh.

The Technological Singularity

This is a little off the beaten path right now, but lately I’ve been spending a lot of time contemplating this big concept. In fact, it’s been informing the majority of my writing for the past year, and during my recent trip back to Ottawa, it was just about all my friend and I could talk about (dammit, we used to club!). And since I find myself explaining this concept to people quite often, and enjoying it, I thought I’d dedicate a post to it as well.

It’s called the Technological Singularity, a term coined in 1993 by sci-fi author Vernor Vinge. To put it concisely, Vinge predicted that at some point in the 21st century, human beings would be able to augment their intelligence using artificial means. This, he argued, would make the future completely unpredictable beyond that point, seeing as how the minds contemplating the next leaps would be beyond anything we possess now.

The name itself is derived from the gravitational singularity at the center of a black hole and the event horizon that surrounds it, the boundary beyond which nothing is visible. In the case of a black hole, you can’t see past that point because not even light can escape it, and at the singularity itself the known laws of physics break down entirely. The same is being postulated here: beyond a certain point in our technological evolution, things will get so advanced and radical that we couldn’t possibly imagine what the future will look like.

[Image: how nanotechnology could re-engineer us]

Bad news for sci-fi writers, huh? But strangely, it is this very concept which appears to fascinate them the most! Just because we may not be able to accurately predict the future doesn’t stop people from trying, especially writers like Neal Stephenson, Greg Bear, and Charles Stross. Frankly, the concept was coined by a sci-fi writer, so we’re gonna damn well continue to talk about it. And besides, when was the last time science fiction writers were bang on about anything? It’s called fiction for a reason.

Men like Ray Kurzweil, a futurist who is all about achieving immortality, have popularized this idea greatly. Thanks to people like him, it has ventured beyond the realm of pure sci-fi and become a legitimate area of academic study. Relying on ongoing research into the many, many paradigm shifts that have taken place over time, he and others have concluded that technological progress is not a linear phenomenon, but an exponential one.

Consider the past few decades. Has it not been a constant complaint that the pace of life and work has been increasing from year to year? Of course, and the driving force has been constant technological change. Whereas people in our parents’ generation grew up learning to use slide rules and hand-cranked ammonia copiers, by the time they hit the workforce everything was being done with calculators and Xerox machines.

[Chart: Moore’s Law]

When it came to documents, they learned typewriters and paper filing systems. Then, with the microprocessor revolution, everything moved onto computers and was handled electronically. Phones and secretaries gave way to voicemail and faxes, and things changed again with the advent of the internet, the pager, the cell phone and the PDA. Suddenly everything was digital, people could be reached anywhere, and messages were all handled by central computers.

And that’s just within the last half-century. Expanding the time-frame further, let’s take a much longer view. As a historian, I am often fascinated with the full history of humanity, going back roughly 200,000 years. Back then, higher-order primates such as ourselves had emerged in one small pocket of the world (north-eastern Africa) and begun to spread outwards.

By 50,000 years ago, we had reached full maturity as far as being Homo sapiens is concerned, relying on complex tools, social interaction, sewing, and hunting and gathering techniques to occupy every corner of the Old World and make it suitable for our purposes. From the far reaches of the North to the Tropics in the South, humanity showed that it could live anywhere in the world thanks to its ingenuity and ability to adapt. By 15,000 years ago, we had expanded to occupy the New World as well, had hunted countless species to extinction, and had begun the process of switching over to agriculture.

By 5,000 years ago, civilization as we know it was emerging independently in three corners of the world. By this, I mean permanent settlements based in part or in full on the cultivation of crops and the domestication of animals. Then, 500 years ago, the worlds collided when the Spanish landed in the New World and opened up the “Age of Imperialism”. Because of the discovery of the New World, Europe shot ahead of its peer civilizations in Africa, Asia and the Middle East, went on to colonize every corner of the globe, and began to experience major political shifts at home and abroad. The “Age of Imperialism” gradually gave way to the “Age of Revolutions”.

100 years ago, the total population of the Earth was approaching 2 billion, industrialization had taken full effect in every developed nation, and urban populations there were coming to exceed rural ones. 50 years ago, we had reached 3 billion human beings, were splitting the atom, sending rockets into space, and watching the world decolonize itself. And just over a decade ago, we reached a whopping 6 billion human beings, were in the throes of yet another technological revolution (the digital one), and were contemplating nanotechnology, biomedicine and even AI.

In short, since our inception, the trend has been moving ever upwards, faster and faster. With every change, the pace seems to increase exponentially. The amount of time between paradigm shifts – that is, between revolutionary changes that alter the way we look at the world – has been getting smaller and smaller. Given this pattern, it seems like only a matter of time before the line on the graph goes nearly vertical and we have to rethink the whole concept of progress.
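As a quick illustration, here’s a small sketch using only the round “years ago” figures from the paragraphs above (rough milestones, not precise dates) – every interval between them is shorter than the one before it:

```python
# The "years ago" milestones mentioned above, and the gap between each
# consecutive pair -- every interval is shorter than the one before it.
milestones = [200_000, 50_000, 15_000, 5_000, 500, 100, 50, 10]

for earlier, later in zip(milestones, milestones[1:]):
    gap = earlier - later
    print(f"{earlier:>7,} -> {later:>7,} years ago: interval of {gap:,} years")
```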

Is your noodle baked yet? Mine sure is! It gets like that any time I start contemplating the distant past and the not-too-distant future. These are heady times, and even if you think the coming Singularity might spell doom, you gotta admit, this is an exciting time to be alive. If nothing else, it’s always a source of intrigue to know that you are on the cutting edge of history, that some day people will be talking about what was and you will be able to say “I was there”.

Whoo… deep stuff man. And like I said, fun to write about. Ever since I was a senior in high school, I dreamed of being able to write a book that could capture the Zeitgeist. As soon as I learned about the Technological Singularity, I felt I had found my subject matter. If I could write just one book that captures the essence of history at this point in our technological (and possibly biological) evolution, I think I’ll die a happy man. Because for me, it’s not enough to just have been there. I want to have been there and said something worthwhile about it.

Alright, thanks for listening! Stay tuned for some lighter subject matter and updates on the latest from Story Time and Data Miners. Plus more on Star Wars, coming soon!