Ten Day Book Challenge: Day Five

Okay, I admit it. I’ve been completely derelict when it comes to this challenge. But I hope to amend that by finishing things up and acknowledging all the books that have inspired me in the past.

Okay, so as usual, here are the rules of this challenge:

  • Thank whoever nominated you with big, bold print. If they have a blog, link to the post where you got tagged.
  • Explain the rules.
  • Post the cover of a book that was influential on you or that you love dearly.
  • Explain why it was so influential to you.
  • Tag someone else to do the challenge, and let them know they’ve been tagged.

Thanks once again to RAMI UNGAR for the nomination, and you can find him at ramiungarthewriter.com. And for this latest entry, I would like to select the Singularity-themed sci-fi classic Accelerando, by Charles Stross.

Have you ever read a book that felt like it came along at exactly the right time? Or one that spoke directly to your particular interests at the time? Well, this was one such book for me. Rather than being a single story, this book is actually a collection of shorts that Stross wrote during the early 2000s, all connected by a common theme. Essentially, the shorts tell the story of three generations of the Macx family, and take place before, during and after the Technological Singularity.

What I loved about this book is how it takes a look at the near-future and how the accelerated pace of technological innovation will make life very interesting (and complicated). It also speaks about several key innovations that are expected, ranging from AI, additive manufacturing (3-D printing), nanotechnology, neural uploads, and commercial space travel.

Looking at the more distant future, it shows how these trends will lead to a breakneck pace of change that will leave most of humanity struggling to remain human. It also throws in some truly interesting and entertaining bits about extra-terrestrial intelligence, a possible answer to the Fermi Paradox, and humanity’s long-term destiny among the stars.

Basically, this book covered all the bases that I was voraciously trying to learn about at the time for the sake of my own writing. It made predictions, both realistic and fantastical, that just spoke to me. And what especially impressed me was the way that Stross, writing these stories at least a decade prior to my reading them, predicted so many trends that were slowly coming true. As such, I consider this book to be both inspirational and quintessential to my more recent education as a science fiction writer.

Next up, I nominate Joachim Boaz and his blog, Science Fiction and Other Suspect Ruminations!

Universe Today: Are Intelligent Civilizations Doomed?

My friend over at Universe Today, Fraser Cain, has been busy of late! In his latest podcast, he asks an all-important question arising out of the Fermi Paradox. For those unfamiliar with it, the paradox asks: given the age of the universe, the sheer number of stars and planets, and the statistical likelihood of some of them supporting life, how has humanity failed to find any indications of intelligent life elsewhere?

It’s a good question, and it raises some frightening possibilities. First off, humanity may be alone in the universe, a frightening enough prospect given its sheer size. There’s nothing worse than being on a massive playground and having only yourself to play with. A second possibility is that extra-terrestrial life does exist, but has taken great pains to avoid contacting us. An insulting, if understandable, proposition.

Third, it could be that humanity alone has achieved the level of technical development necessary to send out and receive radio transmissions or construct satellites. That too is troubling, since it would mean that despite the age of the universe, it took this long for a technologically advanced species to emerge, and that there are no species out there that we can learn from or look up to.

The fourth, and arguably most frightening possibility, is the Great Filter theory – that all intelligent life is doomed to destroy itself, and we haven’t heard from any others because they are all dead. This concept has been explored by numerous science fiction authors – such as Stephen Baxter (Manifold: Space), Alastair Reynolds (the Revelation Space universe) and Charles Stross (Accelerando) – all of whom employ a different variation and answer.

As explored by these and other authors, the biggest suggestions are that civilizations will eventually create weapons or some kind of programmed matter that will destroy them – such as nuclear weapons, planet busters, killer robots, or nanotech that goes haywire (aka. “grey goo”). A second possibility is that all species eventually undergo a technological/existential singularity, where they shed their bodies and live out their lives in a simulated existence.

A third is that intelligent civilizations fall into a “success trap”, outgrowing their resources and their capacity to support their numbers, or simply ruining their planetary environment before they can get out into the universe. As usual, Fraser gives a great rundown on all of this, explaining what the Fermi Paradox is, the statistical likelihood of life existing elsewhere, and what likely scenarios could explain why humanity has yet to find any proof of other civilizations.

Are Intelligent Civilizations Doomed:


And be sure to check out the podcast that deals strictly with the Fermi Paradox, from roughly a year ago:

The Fermi Paradox Explained:

Accelerando: A Review

It’s been a long while since I did a book review, mainly because I’ve been immersed in my writing. But sooner or later, you have to return to the source, right? As usual, I’ve been reading books that I hope will help me expand my horizons and become a better writer. And with that in mind, I thought I’d finally review a book I finished reading some months ago, one which I read in the hopes of learning my craft.

It’s called Accelerando, one of Charles Stross’ better-known works, which earned him the Hugo, Campbell, Clarke, and British Science Fiction Association Awards. The book contains nine short stories, all of which were originally published as novellas and novelettes in Asimov’s Science Fiction. Each one revolves around the Macx family, looking at three generations that live before, during, and after the technological singularity.

This is the central focus of the story – and Stross’ particular obsession – which he explores in serious depth. The title, which in Italian means “speeding up” and is used as a tempo marking in musical notation, refers to the accelerating rate of technological progress and its impact on humanity. The story begins in the early 21st century with Manfred Macx, a “venture altruist”; moves on to his daughter Amber in the mid-21st century; and culminates with Sirhan al-Khurasani, Amber’s son, in the late 21st century and distant future.

In the course of all that, the story looks at such high-minded concepts as nanotechnology, utility fogs, clinical immortality, Matrioshka Brains, extra-terrestrials, FTL, Dyson Spheres and Dyson Swarms, and the Fermi Paradox. It also takes a long view of emerging technologies and predicts where they will take us down the road.

And to quote Cory Doctorow’s own review of the book, it essentially “makes hallucinogens obsolete.”

Plot Synopsis:
Part I, Slow Takeoff, begins with the short story “Lobsters”, which opens in early-21st century Amsterdam. Here, we see Manfred Macx, a “venture altruist”, going about his business, making business ideas happen for others and promoting development. In the course of things, Manfred receives a call on a courier-delivered phone from entities claiming to be a net-based AI working through a KGB website, seeking his help on how to defect.

Eventually, he discovers the callers are actually uploaded brain-scans of the California spiny lobster looking to escape from humanity’s interference. This leads Macx to team up with his friend, entrepreneur Bob Franklin, who is looking for an AI to crew his nascent spacefaring project—the building of a self-replicating factory complex from cometary material.

In the course of securing them passage aboard Franklin’s ship, a new legal precedent is established that will help define the rights of future AIs and uploaded minds. Meanwhile, Macx’s ex-fiancée Pamela pursues him, seeking to get him to declare his assets, partly as part of her job with the IRS and partly out of disdain for his post-scarcity economic outlook. Eventually, she catches up to him and forces him to impregnate and marry her in an attempt to control him.

The second story, “Troubadour”, takes place three years later, with Manfred in the middle of an acrimonious divorce from Pamela, who is once again seeking to force him to declare his assets. Their daughter, Amber, is frozen as a newly fertilized embryo, and Pamela wants to raise her in a way that would be consistent with her religious beliefs rather than Manfred’s extropian views. Meanwhile, he is working on three new schemes and looking for help to make them a reality.

These include a workable state-centralized planning apparatus that can interface with external market systems, and a way to upload the entirety of the 20th century’s out-of-copyright film and music to the net. He meets up with Annette – a woman working for Arianespace, a French commercial aerospace company – and the two begin a relationship. With her help, his schemes come together perfectly and he is able to thwart his wife and her lawyers. However, their daughter Amber is then defrosted and born, and henceforth is raised by Pamela.

The third and final story in Part I is “Tourist”, which takes place five years later in Edinburgh. During this story, Manfred is mugged and his memories (stored in a series of Turing-compatible cyberware) are stolen. The criminal tries to use Manfred’s memories and glasses to make some money, but is horrified when he learns all of his plans are being made available free of charge. This forces Annette to go out, find the man who did it, and cut a deal to get Manfred’s memories back.

Meanwhile, the Lobsters are thriving in colonies situated at the L5 point and on a comet in the asteroid belt. Along with the Jet Propulsion Laboratory and the ESA, they have picked up encrypted signals from outside the solar system. Bob Franklin, now dead, is personality-reconstructed in the Franklin Collective. Manfred, his memories recovered, moves to further expand the rights of non-human intelligences, while Aineko – Manfred’s robotic cat – begins to study and decode the alien signals.

Part II, Point of Inflection, opens a decade later in the early/mid-21st century and centers on Amber Macx, now a teenager, in the outer Solar System. The first story, entitled “Halo”, centers on Amber’s plot (with Annette and Manfred’s help) to break free from her domineering mother by enslaving herself via a Yemeni shell corporation and enlisting aboard a Franklin Collective-owned spacecraft that is mining materials from Amalthea, one of Jupiter’s inner moons.

To retain control of her daughter, Pamela petitions an imam named Sadeq to travel to Amalthea to issue an Islamic legal judgment against Amber. Amber manages to thwart this by setting up her own empire on a small, privately owned asteroid, thus making herself sovereign over an actual state. In the meantime, the alien signals have been decoded, and a physical journey to an alien “router” beyond the Solar System is planned.

In the second story, “Router”, the uploaded personalities of Amber and 62 of her peers travel to a brown dwarf named Hyundai +4904/-56 to find the alien router. Traveling aboard the Field Circus, a tiny spacecraft made of computronium and propelled by a Jupiter-based laser and a lightsail, the virtualized crew are contacted by aliens.

Known as “The Wunch”, these sentients occupy virtual bodies based on Lobster patterns that were “borrowed” from Manfred’s original transmissions. After opening up negotiations for technology, Amber and her friends realize the Wunch are just a group of thieving, third-rate “barbarians” who have taken over in the wake of another species transcending thanks to a technological singularity. After thwarting The Wunch, Amber and a few others make the decision to travel deep into the router’s wormhole network.

In the third story, “Nightfall”, the router explorers find themselves trapped by yet more malign aliens in a variety of virtual spaces. In time, they realize the virtual realities are being hosted by a Matrioshka brain – a megastructure built around a star (similar to a Dyson Sphere) and composed of computronium. The builders of this brain seem to have disappeared (or been destroyed by their own creations), leaving an anarchy ruled by sentient, viral corporations and scavengers who attempt to use newcomers as currency.

With Aineko’s help, the crew finally escapes by offering passage to a “rogue alien corporation” (a “pyramid scheme crossed with a 419 scam”), represented by a giant virtual slug. This alien personality opens a powered route out, and the crew begins the journey back home after many decades of being away.

In Part III, Singularity, things take place back in the Solar System, from the point of view of Sirhan – the son of the physical Amber and Sadeq, who stayed behind. In “Curator”, the crew of the Field Circus comes home to find that the inner planets of the Solar System have been disassembled to build a Matrioshka brain similar to the one they encountered through the router. They arrive at Saturn, where normal humans now reside, and come to a floating habitat in Saturn’s upper atmosphere run by Sirhan.

The crew upload their virtual states into new bodies, and find that they are all now bankrupt and unable to compete with the new Economics 2.0 model practised by the posthuman intelligences of the inner system. Manfred, Pamela, and Annette are present in various forms and realize Sirhan has summoned them all to this place. Meanwhile, Bailiffs—sentient enforcement constructs—arrive to “repossess” Amber and Aineko, but a scheme is hatched whereby the Slug is introduced to Economics 2.0, which keeps both constructs very busy.

In “Elector”, we see Amber, Annette, Manfred and Gianni (Manfred’s old political colleague) in the increasingly populated Saturnian floating cities, working on a political campaign to finance a scheme to escape the predations of the “Vile Offspring” – the sentient minds that inhabit the inner Solar System’s Matrioshka brain. With Amber in charge of this “Accelerationista” party, they plan to journey once more to the router network. She loses the election to the stay-at-home “conservationista” faction, but once more the Lobsters step in to help, offering passage to uploads on their large ships if the humans agree to act as explorers and mappers.

In the third and final chapter, “Survivor”, things fast-forward to a few centuries after the singularity. The router has once again been reached by the human ship, and humanity now lives in space habitats throughout the galaxy. While some continue the ongoing exploration of space, others (copies of various people) live in habitats around Hyundai and other stars, raising children and keeping all past versions of themselves and others archived.

Meanwhile, Manfred and Annette reconcile their differences and realize they were being manipulated all along. Aineko, who had been growing increasingly intelligent throughout the decades, was apparently pushing Manfred to fulfill his schemes in order to bring humanity to the alien node and help it escape the fate of other civilizations that were consumed by their own technological progress.

Summary:
Needless to say, this book was one big tome of big ideas, and could be mind-bendingly weird and inaccessible at times! I’m thankful I came to it when I did, because no one should attempt to read this until they’ve had sufficient priming by studying all the key concepts involved. For instance, don’t even think about touching this book unless you’re familiar with the notion of the Technological Singularity. Beyond that, be sure to familiarize yourself with things like utility fogs, Dyson Spheres, computronium, nanotechnology, and the basics of space travel.

You know what, let’s just say you shouldn’t be allowed to read this book until you’ve first tackled writers like Ray Kurzweil, William Gibson, Arthur C. Clarke, Alastair Reynolds and Neal Stephenson. Maybe Vernor Vinge too, whose work I’m currently making my way through. But assuming you can wrap your mind around the things presented therein, you will feel like you’ve digested something pretty elephantine, something that is still pretty cutting-edge a decade or more after it was first published!

But to break it all down, the story is essentially a cautionary tale about the dangers of the ever-increasing pace of change and advancement. At several points in the story, the drive toward extropianism and post-humanity is held up as both an inevitability and a fearful prospect. It’s also presented as a possible explanation for the Fermi Paradox – which asks why, if sentient life is statistically likely and plentiful in our universe, humanity has not observed or encountered it.

According to Stross, it is because sentient species – which would all presumably have the capacity for technological advancement – will eventually be consumed by the explosion caused by ever-accelerating progress. This will inevitably lead to a situation where all matter can be converted into computing space, all thought and existence can be uploaded, and species will not want to venture away from their solar system because the bandwidth will be too weak. In a society built on computronium and endless time, instant communication and access will be tantamount to life itself.

All that being said, the inaccessibility can be tricky at times and can make the read feel like a bit of a labor. And the twist at the end did seem a little contrived and out of left field. It certainly made sense in the context of the story, but to think that a robotic cat that was progressively getting smarter was the reason behind so much of the story’s dynamic – both in terms of the characters and the larger plot – seemed sudden and farfetched.

And in reality, the story was more about the technical aspects and deeper philosophical questions than anything about the characters themselves. As such, anyone who enjoys character-driven stories should probably stay away from it. But for people who enjoy plot-driven tales that are very dense and loaded with cool technical stuff (which describes me pretty well!), this is definitely a must-read.

Now if you will excuse me, I’m off to finish Vernor Vinge’s Rainbows End, another dense, sometimes inaccessible read!

The Future of Computing: Brain-Like Computers

It’s no secret that computer scientists and engineers are looking to the human brain as a means of achieving the next great leap in computer evolution. Already, machines are being developed that rely on “electronic blood”, can continue working despite being damaged, and can recognize images and speech. And soon, a computer chip that is capable of learning from its mistakes will also be available.

The new computing approach, already in use by some large technology companies, is based on the biological nervous system – specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.

The first commercial version of the new kind of computer chip is scheduled to be released in 2014, the result of a collaborative effort between I.B.M. and Qualcomm, as well as a Stanford research team. This “neuromorphic processor” can not only automate tasks that once required painstaking programming, but can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.

In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That can hold enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.

For example, computer vision systems only “recognize” objects that can be identified by the statistics-oriented algorithms programmed into them. An algorithm is like a recipe: a set of step-by-step instructions to perform a calculation. But last year, Google researchers were able to get a machine-learning algorithm, known as a “Google Neural Network”, to perform an identification task (involving cats) without supervision.

And this past June, the company said it had used those neural network techniques to develop a new search service to help customers find specific photos more accurately. And this past November, researchers at Stanford University came up with a new algorithm that could give computers the power to more reliably interpret language. It’s known as the Neural Analysis of Sentiment (NaSent).

A similar concept, known as Deep Learning, also aims to endow software with a measure of common sense. Google is using this technique with their voice recognition technology to aid in performing searches. In addition, the social media giant Facebook is looking to use deep learning to help improve Graph Search, an engine that allows users to search activity on their network.

Until now, the design of computers was dictated by ideas originated by the mathematician John von Neumann about 65 years ago. Microprocessors perform operations at lightning speed, following instructions programmed using long strings of binary code (0s and 1s). The information is stored separately in what is known as memory, either in the processor itself, in adjacent storage chips or in higher capacity magnetic disk drives.

By contrast, the new processors consist of electronic components that can be connected by wires that mimic biological synapses. Because they are based on large groups of neuron-like elements, they are known as neuromorphic processors, a term credited to the California Institute of Technology physicist Carver Mead, who pioneered the concept in the late 1980s.

These processors are not “programmed” in the conventional sense. Instead, the connections between the circuits are “weighted” according to correlations in data that the processor has already “learned.” Those weights are then altered as data flows into the chip, causing the elements to change their values and to “spike.” This, in turn, strengthens some connections and weakens others, reacting much the same way the human brain does.
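The weighting-and-spiking behaviour described above can be sketched in a few lines of Python. This is strictly a toy illustration, not the logic of any real neuromorphic chip: the `step` function, the threshold, and the Hebbian-style “strengthen what fired together” update are all my own simplifications.

```python
# Toy spiking-neuron sketch: weighted inputs accumulate in a membrane
# potential; when it crosses a threshold the neuron "spikes", resets,
# and the connections that contributed get strengthened.

def step(weights, inputs, potential, threshold=1.0, learn_rate=0.1):
    """Advance one time step; return (new_weights, new_potential, spiked)."""
    potential += sum(w * x for w, x in zip(weights, inputs))
    if potential < threshold:
        return weights, potential, False
    # Spike: reset the potential and reinforce the active connections
    weights = [w + learn_rate * x for w, x in zip(weights, inputs)]
    return weights, 0.0, True

weights, potential = [0.2, 0.5, 0.3], 0.0
for inputs in [[1, 0, 1], [0, 1, 1], [1, 1, 0]]:
    weights, potential, spiked = step(weights, inputs, potential)
```

The point of the sketch is that nothing here is “programmed” with explicit rules: the weights drift wherever the incoming data pushes them, which is also why such a system can keep adapting around failures.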

In the words of Dharmendra Modha, an I.B.M. computer scientist who leads the company’s cognitive computing research effort:

Instead of bringing data to computation as we do today, we can now bring computation to data. Sensors become the computer, and it opens up a new way to use computer chips that can be everywhere.

One great advantage of the new approach is its ability to tolerate glitches, whereas traditional computers cannot work around the failure of even a single transistor. With the biological designs, the algorithms are ever-changing, allowing the system to continuously adapt and work around failures to complete tasks. Another benefit is energy efficiency, another inspiration drawn from the human brain.

The new computers, which are still based on silicon chips, will not replace today’s computers, but augment them – at least for the foreseeable future. Many computer designers see them as coprocessors, meaning they can work in tandem with other circuits that can be embedded in smartphones and in the centralized computers that run computing clouds.

However, the new approach is still limited, thanks to the fact that scientists still do not fully understand how the human brain functions. As Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, put it:

We have no clue. I’m an engineer, and I build things. There are these highfalutin theories, but give me one that will let me build something.

Luckily, there are efforts underway designed to remedy this, with the specific intention of directing that knowledge towards the creation of better computers and AIs. One such effort is the Center for Brains, Minds and Machines, a new research center financed by the National Science Foundation, based at the Massachusetts Institute of Technology in partnership with Harvard and Cornell.

Another is the California Institute for Telecommunications and Information Technology (aka. Calit2) – a center dedicated to innovation in nanotechnology, life sciences, information technology, and telecommunications. As Larry Smarr, an astrophysicist and director of the Institute, put it:

We’re moving from engineering computing systems to something that has many of the characteristics of biological computing.

And last, but certainly not least, is the Human Brain Project, an international group of 200 scientists from 80 different research institutions, based in Lausanne, Switzerland. Having secured the $1.6 billion they need to fund their efforts, these researchers will spend the next ten years conducting research that cuts across multiple disciplines.

This initiative, which has been compared to the Large Hadron Collider, will attempt to reconstruct the human brain piece-by-piece and gradually bring these cognitive components into an overarching supercomputer. The expected result of this research will be new platforms for “neuromorphic computing” and “neurorobotics,” allowing for the creation of computing and robotic architectures that mimic the functions of the human brain.

When future generations look back on this decade, no doubt they will refer to it as the birth of the neuromorphic computing revolution. Or maybe just the Neuromorphic Revolution for short, but that sort of depends on the outcome. With so many technological revolutions well underway, it is difficult to imagine how the future will look back on and characterize this time.

Perhaps, as Charles Stross suggests, it will simply be known as “the teens” – that time in pre-Singularity history when it was all starting to come together, but had yet to explode and violently change everything we know. I for one am looking forward to being around to witness it all!

Sources: nytimes.com, technologyreview.com, calit2.net, humanbrainproject.eu

The Technological Singularity

This is a little off the beaten path right now, but lately, I’ve been spending a lot of time contemplating this big concept. In fact, it’s been informing the majority of my writing for the past year, and during my recent trip back to Ottawa, it was just about all my friend and I could talk about (dammit, we used to club!) And since I find myself explaining this concept to people quite often, and enjoying it, I thought I’d dedicate a post to it as well.

It’s called the Technological Singularity, a term coined in 1993 by sci-fi author Vernor Vinge. To put it concisely, Vinge predicted that at some point in the 21st century, human beings would be able to augment their intelligence using artificial means. This, he argued, would make the future completely unpredictable beyond that point, seeing as the minds contemplating the next leaps would be beyond anything we possess now.

The name itself is derived from the concept of a gravitational singularity and its event horizon – the region at the center of a black hole beyond which nothing is visible. In the case of a black hole, the reason you can’t see beyond this point is that the very laws of physics break down. The same is being postulated here: beyond a certain point in our technological evolution, things will get so advanced and radical that we couldn’t possibly imagine what the future will look like.

Bad news for sci-fi writers, huh? But strangely, it is this very concept which appears to fascinate them the most! Just because we may not be able to accurately predict the future doesn’t stop people from trying, especially writers like Neal Stephenson, Greg Bear, and Charles Stross. Frankly, the concept was coined by a sci-fi writer, so we’re gonna damn well continue to talk about it. And besides, when was the last time science fiction writers were bang on about anything? It’s called fiction for a reason.

Men like Ray Kurzweil, a futurist who is all about achieving immortality, have popularized this idea greatly. Thanks to people like him, this idea has ventured beyond the realm of pure sci-fi and become a legitimate area of academic study. Relying on ongoing research into the many, many paradigm shifts that have taken place over time, he and others have concluded that technological progress is not a linear phenomenon, but an exponential one.

Consider the past few decades. Has it not been a constant complaint that the pace of life and work has been increasing greatly from year to year? Of course, and the driving force has been constant technological change. Whereas people in our parents’ generation grew up learning to use slide rules and hand-cranked ammonia copiers, by the time they hit the workforce, everything was being done with calculators and Xerox printers.

In terms of documents, they used to learn typewriters and the filing system. Then, with the microprocessor revolution, everything was done on computer and electronically. Phones and secretaries gave way to voicemail and faxes, and then changed again with the advent of the internet, pager, cell phone and PDA. Now, all things were digital, people could be reached anywhere, and messages were all handled by central computers.

And that’s just within the last half-century. Expanding the time-frame further, let’s take a much longer view. As a historian, I am often fascinated with the full history of humanity, going back roughly 200,000 years.  Back then, higher order primates such as ourselves had emerged in one small pocket of the world (North-Eastern Africa) and began to circulate outwards.

By 50,000 years ago, we had reached full maturity as far as being Homo sapiens is concerned, relying on complex tools, social interaction, sewing, and hunting and gathering techniques to occupy every corner of the Old World and make it suitable for our purposes. From the far reaches of the North to the Tropics in the South, humanity showed that it could live anywhere in the world thanks to its ingenuity and ability to adapt. By 15,000 years ago, we had expanded to occupy the New World as well, had hunted countless species to extinction, and had begun the process of switching over to agriculture.

By 5000 years ago, civilization as we know it was emerging independently in three corners of the world. By this, I mean permanent settlements that were based in part or in full on the cultivation of crops and domestication of animals. Then, 500 years ago, the worlds collided when the Spanish landed in the New World and opened up the “Age of Imperialism”. Because of the discovery of the New World, Europe shot ahead of its peer civilizations in Africa, Asia and the Middle East, went on to colonize every corner of the world, and began to experience some major political shifts at home and abroad. The “Age of Imperialism” gradually gave way to the “Age of Revolutions”.

100 years ago, the total population of the Earth was approaching 2 billion, industrialization had taken full effect in every developed nation, and urban populations were beginning to exceed rural ones. 50 years ago, we had reached 3 billion human beings, were splitting the atom, sending rockets into space, and watching the world decolonize itself. And only 10 years ago, we had reached a whopping 6 billion human beings, were in the throes of yet another technological revolution (the digital), and were contemplating nanotechnology, biomedicine and even AI.

In short, since our inception, the trend has been moving ever upwards, faster and faster. With every change, the pace seems to increase exponentially. The amount of time between paradigm shifts – that is, between revolutionary changes that alter the way we look at the world – has been getting smaller and smaller. Given this pattern, it seems like only a matter of time before the line on the graph rises infinitely and we have to rethink the whole concept of progress.
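To make the shrinking intervals concrete, here is a quick back-of-the-envelope sketch in Python using the rounded milestone dates from the paragraphs above (the dates themselves come from this post; nothing here is a rigorous model, just a way to see the pattern):

```python
# Rough milestones from the paragraphs above, in years before the present
milestones = [200_000, 50_000, 15_000, 5_000, 500, 100, 50, 10]

# Gap between each paradigm shift and the next
gaps = [earlier - later for earlier, later in zip(milestones, milestones[1:])]

# Each interval is a fraction of the one before it -- the curve keeps steepening
shrinking = all(later < earlier for earlier, later in zip(gaps, gaps[1:]))
print(gaps)       # [150000, 35000, 10000, 4500, 400, 50, 40]
print(shrinking)  # True
```

Every gap is smaller than the one before it, which is exactly the exponential pattern Kurzweil and company point to.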

Is your noodle baked yet? Mine sure is! It gets like that any time I start contemplating the distant past and the not-too-distant future. These are exciting times, and even if you think that the coming Singularity might spell doom, you gotta admit, this is an exciting time to be alive. If nothing else, it’s always a source of intrigue to know that you are on the cutting edge of history – that some day, people will be talking about what was, and you will be able to say “I was there”.

Whoo… deep stuff man. And like I said, fun to write about. Ever since I was a senior in high school, I dreamed of being able to write a book that could capture the Zeitgeist. As soon as I learned about the Technological Singularity, I felt I had found my subject matter. If I could write just one book that captures the essence of history at this point in our technological (and possibly biological) evolution, I think I’ll die a happy man. Because for me, it’s not enough to just have been there. I want to have been there and said something worthwhile about it.

Alright, thanks for listening! Stay tuned for more lighter subject matter and some updates on the latest from Story Time and Data Miners. Plus more on Star Wars, coming soon!