The Future is Here: Cancer Drug Developed by AI

The development of cancer drugs is a costly, time-consuming process with a high rate of failure. On average, it takes 24 to 48 months to find a suitable candidate and costs upwards of $100 million. And in the end, roughly 95% of all potential drugs fail in clinical trials. Because of this, scientists are understandably looking for ways to speed up the discovery process.

That’s where the anti-cancer drug known as BPM 31510 comes into play. Unlike most pharmaceuticals, it was developed by artificial intelligence instead of a group of researchers toiling away in a lab. It was created by the biotech company Berg (named after real estate billionaire Carl Berg), which seeks to use artificial intelligence to design cancer drugs that are cheaper, have fewer side effects, and can be developed in half the time it normally takes.

Towards this end, they are looking to data-driven methods of drug discovery. Instead of generating cancer drugs based on chemical compounds identified in labs, the company compares tissue, urine, and blood samples from cancer patients and healthy patients, generating tens of trillions of data points that are fed into an artificial intelligence system. That system crunches all the data, looking for the telltale differences between diseased and healthy biology.
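To make the idea concrete, here is a minimal sketch (my own toy example, not Berg's actual pipeline) of what "crunching" such data can look like: given molecular measurements from patient and healthy samples, rank the features that differ most strongly between the two groups.

```python
import numpy as np

# Toy illustration (not Berg's real pipeline): rank molecular features by how
# strongly they differ between cancer-patient samples and healthy samples.
rng = np.random.default_rng(0)

n_features = 1000                      # e.g. metabolite or protein measurements
patients = rng.normal(0.0, 1.0, size=(50, n_features))
healthy  = rng.normal(0.0, 1.0, size=(50, n_features))
patients[:, :5] += 2.0                 # pretend the first 5 features are truly disease-linked

# Simple effect-size score: difference of means scaled by pooled spread.
mean_diff = patients.mean(axis=0) - healthy.mean(axis=0)
pooled_sd = np.sqrt((patients.var(axis=0) + healthy.var(axis=0)) / 2) + 1e-9
scores = np.abs(mean_diff) / pooled_sd

top = np.argsort(scores)[::-1][:5]
print("Most disease-associated features:", top)   # ideally recovers indices 0-4
```

A real pipeline would use far richer measurements and statistics, but the principle of letting the data point to candidate targets is the same.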

BPM 31510, which is the first of Berg’s drugs to get a real-world test, focuses on mitochondria – the structures within cells that are responsible for triggering programmed cell death. Normally, mitochondria signal damaged cells to die. When cancer strikes, this process goes haywire, and the damaged cells spread. Berg’s drug, if successful, will restore normal cell-death processes by changing the metabolic environment within mitochondria.

Speaking on the subject of the drug, which is now in human clinical trials, Berg president and co-founder Niven Narain said:

BPM 31510 works by switching the fuel that cancer likes to operate on. Cancer cells prefer to operate in a less energy-efficient manner. Cancers with a high metabolic function, like triple negative breast cancer, glioblastoma, and colon cancer–that’s the sweet spot for this technology.

IBM is also leveraging artificial intelligence in the race to design better cancer treatments. In their case, this involves their much-heralded supercomputer Watson looking for better treatment options for patients. In a trial conducted with the New York Genome Center, Watson has been scanning mutations found in brain cancer patients, matching them with available treatments.

All of these efforts are still in their early days, and even on its accelerated timeline, BPM 31510 is still years away from winning FDA approval. But, as Narain points out, the current drug discovery system desperately needs rethinking. With a success rate of 1 out of 20, there is definitely room for improvement. And a process that seeks to address cancer in a way that is more targeted and more personalized is certainly in keeping with the most modern approaches to medicine.

Source: fastcoexist.com

Immortality Inc: Google’s Kurzweil Talks Life Extension

Human life expectancy has been steadily getting longer over the past century, keeping pace with advances made in health and medical technologies. And in the next 20 years, as the pace of technological change accelerates significantly, we can expect life expectancy to undergo a similarly accelerated increase. So it’s only natural that one of the world’s biggest tech giants (Google) would decide to become invested in the business of post-mortality.

As part of this initiative, Google has been seeking to build a computer that can think like a human brain. They even hired renowned futurist and AI expert Ray Kurzweil last year to act as the director of engineering on this project. Speaking at Google’s I/O conference late last month, he detailed his prediction that our ability to improve human health is beginning to move up an “exponential” growth curve, similar to the law of accelerating returns that governs the information technology and communications sectors today.

The capacity to sequence DNA, which is rapidly becoming faster and cheaper, is the most obvious example. At one time, it took about seven years to sequence 1% of the first human genome. But now, it can be done in a matter of hours. And thanks to initiatives like the Human Genome Project and ENCODE, we have not only successfully mapped every inch of the human genome, we’ve also identified the function of every gene within.
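As a rough, back-of-the-envelope illustration of the exponential argument (assuming a steady annual doubling, which is my simplification rather than a figure from the article): being 1% done is only about seven doublings away from being finished.

```python
# If the sequenced fraction doubles each year, 1% complete is ~7 doublings from 100%.
fraction, years = 0.01, 0
while fraction < 1.0:
    fraction *= 2
    years += 1
print(years)  # -> 7 more years of doubling, despite the seemingly slow start
```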

But as Kurzweil said in the course of his presentation – entitled “Biologically Inspired Models of Intelligence” – simply reading DNA is only the beginning:

Our ability to reprogram this outdated software is growing exponentially. Somewhere between that 10- and 20-year mark, we’ll see significant differences in life expectancy–not just infant life expectancy, but your remaining life expectancy. The models that are used by life insurance companies sort of continue the linear progress we’ve made before health and medicine was an information technology… This is going to go into high gear.

Kurzweil cited several examples of our increasing ability to “reprogram this outdated software” – technologies like RNA interference that can turn genes on and off, or doctors’ ability to now add a missing gene to patients with a terminal disease called pulmonary hypertension. He cited the case of a girl whose life was threatened by a damaged windpipe, and who had a new one designed and 3-D printed for her using her own stem cells.

In other countries, he notes, heart attack survivors who have lasting heart damage can now get a rejuvenated heart from reprogrammed stem cells. And while this procedure awaits approval from the FDA in the US, it has already been demonstrated to be both safe and effective. Beyond tweaking human biology through DNA/RNA reprogramming, there are also countless initiatives aimed at creating biomonitoring patches that will improve the functionality and longevity of human organs.

And in addition to building computer brains, Google itself is also in the business of extending human life. This project, called Calico, hopes to slow the process of natural aging, a related though different goal than extending life expectancy with treatment for disease. Of course, the term “immortality” is perhaps a bit of a misnomer, which is why it is qualified with the word “clinical”. While the natural effects of aging are something that can be addressed, there will still be countless ways to die.

As Kurzweil himself put it:

Life expectancy is a statistical phenomenon. You could still be hit by the proverbial bus tomorrow. Of course, we’re working on that here at Google also, with self-driving cars.

Good one, Kurzweil! Of course, there are plenty of skeptics who question the validity of these assertions, and challenge the notion of clinical immortality on ethical grounds. After all, our planet currently plays host to some 7 billion people, and another 2 to 3 billion are expected to be added before we reach the halfway mark of this century. And with cures for diseases like HIV and cancer already showing promise, we may already be looking at a severe drop in mortality in the coming decades.

Combined with an extension in life expectancy, who knows how this will affect life and society as we know it? But one thing is for certain: the study of life has become tantamount to a study of information. And much like computational technology, this information can be manipulated, resulting in greater performance and returns. So at this point, regardless of whether or not it should be done, it’s an almost foregone conclusion that it will be done.

After all, while very few people would dare to live forever, there is virtually no one who wouldn’t want to live a little longer. And in the meantime, if you’ve got the time and feel like some “light viewing”, be sure to check out Kurzweil’s full Google I/O 2014 speech, in which he addresses the topics of computing, artificial intelligence, biology and clinical immortality.


Sources: fastcoexist.com, kurzweilai.net

The Birth of AI: Computer Beats the Turing Test!

Alan Turing, the British mathematician and cryptographer, is widely known as the “Father of Theoretical Computer Science and Artificial Intelligence”. Amongst his many accomplishments – such as breaking Germany’s Enigma Code – was the development of the Turing Test. The test was introduced in Turing’s 1950 paper “Computing Machinery and Intelligence,” in which he proposed an “imitation game” to be played between a computer and human players.

The game involves three players: Player C asks the other two a series of written questions and attempts to determine which of them is a human and which is a computer. If Player C cannot distinguish one from the other, then the computer can be said to fit the criteria of an “artificial intelligence”. And this past weekend, a computer program finally beat the test, in what experts are claiming to be the first time AI has legitimately fooled people into believing it’s human.

The event was known as the Turing Test 2014, and was held in partnership with RoboLaw, an organization that examines the regulation of robotic technologies. The machine that won the test is known as Eugene Goostman, a program that was developed in Russia in 2001 and goes under the character of a 13-year-old Ukrainian boy. In a series of chatroom-style conversations at the University of Reading’s School of Systems Engineering, the Goostman program managed to convince 33 percent of a team of judges that he was human.

This may sound modest, but that score placed his performance just over the 30 percent requirement that Alan Turing wrote he expected to see by the year 2000. Kevin Warwick, one of the organisers of the event at the Royal Society in London this weekend, was on hand for the test and monitored it rigorously. As deputy chancellor for research at Coventry University, and considered by some to be the world’s first cyborg, Warwick knows a thing or two about human-computer relations.

In a post-test interview, he explained how the test went down:

We stuck to the Turing test as designed by Alan Turing in his paper; we stuck as rigorously as possible to that… It’s quite a difficult task for the machine because it’s not just trying to show you that it’s human, but it’s trying to show you that it’s more human than the human it’s competing against.

For the sake of conducting the test, thirty judges had conversations with two different partners on a split screen—one human, one machine. After chatting for five minutes, they had to choose which one was the human. Five machines took part, but Eugene was the only one to pass, fooling one third of his interrogators. Warwick put Eugene’s success down to his ability to keep conversation flowing logically, but not with robotic perfection.
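For clarity, here is a minimal sketch of the scoring just described (my own illustration, using the figures reported for the event): a machine clears Turing's 30 percent benchmark if enough of the thirty judges mistake it for the human.

```python
# Each judge chats with one human and one machine for five minutes, then votes.
# A machine passes the 30% benchmark if more than 30% of judges pick it as human.
def passes_turing_benchmark(judges_fooled: int, num_judges: int = 30,
                            threshold: float = 0.30) -> bool:
    return judges_fooled / num_judges > threshold

print(passes_turing_benchmark(10))  # 10 of 30 fooled (~33%) -> True, Eugene's result
print(passes_turing_benchmark(9))   #  9 of 30 fooled (30%)  -> False
```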

Eugene can initiate conversations, but won’t do so totally out of the blue, and answers factual questions more like a human. For example, some factual questions elicited the all-too-human answer “I don’t know”, rather than an encyclopaedic-style answer where he simply stated cold, hard facts and descriptions. Eugene’s successful trickery is also likely helped by the fact he has a realistic persona. From the way he answered questions, it seemed apparent that he was in fact a teenager.

Some of the “hidden humans” competing against the bots were also teenagers, to provide a basis of comparison. As Warwick explained:

In the conversations it can be a bit ‘texty’ if you like, a bit short-form. There can be some colloquialisms, some modern-day nuances with references to pop music that you might not get so much of if you’re talking to a philosophy professor or something like that. It’s hip; it’s with-it.

Warwick conceded the teenage character could be easier for a computer to convincingly emulate, especially if you’re using adult interrogators who aren’t so familiar with youth culture. But this is consistent with what scientists and analysts predict about the development of AI, which is that as computers achieve greater and greater sophistication, they will be able to imitate human beings of greater intellectual and emotional development.

Naturally, there are plenty of people who criticize the Turing test for being an inaccurate way of testing machine intelligence, or of gauging this thing known as intelligence in general. The test is also controversial because of the tendency of interrogators to attribute human characteristics to what is often a very simple algorithm. This is unfortunate because chatbots are easy to trip up if the interrogator is even slightly suspicious.

For instance, chatbots have difficulty answering follow-up questions and are easily thrown by non-sequiturs. In these cases, a human would either give a straight answer, or respond by specifically asking what the heck the person posing the questions is talking about, then replying in context to the answer. There are also several versions of the test, each with its own rules and criteria of what constitutes success. And as Professor Warwick freely admitted:

Some will claim that the Test has already been passed. The words Turing Test have been applied to similar competitions around the world. However this event involved more simultaneous comparison tests than ever before, was independently verified and, crucially, the conversations were unrestricted. A true Turing Test does not set the questions or topics prior to the conversations. We are therefore proud to declare that Alan Turing’s Test was passed for the first time on Saturday.

So what are the implications of this computing milestone? Is it a step in the direction of a massive explosion in learning and research, an age where computing intelligences vastly exceed human ones and are able to assist us in making countless ideas real? Or is it a step in the direction of a confused, sinister age, where the line between human beings and machines is non-existent, and no one can tell who or what the individual addressing them is anymore?

Difficult to say, but such is the nature of groundbreaking achievements. And as Warwick suggested, an AI like Eugene could be very helpful to human beings and address real social issues. For example, imagine an AI that is always hard at work on the other side of the cybercrime battle, locating “black-hat” hackers and cyber predators for law enforcement agencies. And what of assisting in research endeavors, helping human researchers to discover cures for disease, or design cheaper, cleaner energy sources?

As always, what the future holds varies, depending on who you ask. But in the end, it really comes down to who is involved in making it a reality. So a little fear and optimism are perfectly understandable when something like this occurs, not to mention healthy.

Sources: motherboard.vice.com, gizmag.com, reading.ac.uk

Building Future Worlds…

In the course of becoming an indie writer, there is one aspect of the creative process which keeps coming back to me. To put it simply, it is the challenges and delights of world building – i.e. creating the background, context, and location in which a story takes place. For years, I have been reading other people’s thoughts on the subject, be they authors themselves or just big fans of literary fiction.

But my own experience with the process has taught me much that I simply couldn’t appreciate before I picked up my pen and pad (or in this case, opened a word doc and began typing). And lately, the thoughts have been percolating in my mind and I felt the need to write them out. Having done that, I thought I might share them in full.

For starters, being a science fiction writer presents a person with particular opportunities for creative expression. But at the same time, it presents its share of particular challenges. While one is certainly freer to play around with space and place, and to invent, than with most other genres, one is still required to take realism, consistency and continuity into account in all that one does.

Sooner or later, the world a writer builds will be explored, mapped, and assessed, and any and all inconsistencies are sure to stick out like a sore thumb! So in addition to making sure back-stories, timelines and other details accord with the main plot, authors also need to be mindful of things like technology, physical laws, and the nature of space and time.

But above all, the author in question has to ask themselves what kind of universe they want to build. If it is set in the future, they need to ask themselves certain fundamental questions about where human beings will be down the road. Not only that, they also need to decide what parallels (and they always come up!) they want to draw with the world of today.

Through all of this, they will be basically deciding what kind of message they want to be sending with their book. Because of course, anything they manage to dream up about the future will tell their readers lots about the world the author inhabits, both in the real sense and within their own head. And from what I have seen, it all comes down to five basic questions they must ask themselves…

1. Near-Future/Far Future:
When it comes to science-fiction stories, the setting is almost always the future. At times, it will be set in an alternate universe, or an alternate timeline; but more often than not, the story takes place down the road. The only question is, how far down the road? Some authors prefer to go with the world of tomorrow, setting their stories a few decades or somewhere in the vicinity of next century.

By doing this, the author in question is generally trying to show how the world of today will determine the world of tomorrow, commenting on current trends and how they are helping/hurting us. During the latter half of the 20th century, this was a very popular option for writers, as the consensus seemed to be that the 21st century would be a time when some truly amazing things would be possible; be it in terms of science, technology, or space travel.

Other, less technologically-inclined authors liked to use the not-so-distant future as a setting for dystopian, post-apocalyptic scenarios, showing how current trends (atomic diplomacy, arms races, high tech, environmental destruction) would have disastrous consequences for humanity in the near-future. Examples of this include Brave New World, 1984, The Iron Heel, The Chrysalids, and a slew of others.

In all cases, the totalitarian regimes or severe technological and social regression that characterized their worlds were the result of something happening in the very near-future, be it nuclear or biological war, a catastrophic accident, or environmental collapse. Basically, humanity’s current behavior was the basis for a cautionary tale, where an exaggerated example is used to illustrate the logical outcome of all this behavior.

At the other end of the spectrum, many authors have taken the long view with their sci-fi world building. Basically, they set their stories several centuries or even millennia from now. In so doing, they are able to break with linear timelines and the duty of having to explain how humanity got from here to there, and can instead focus on more abstract questions of existence and broader allegories.

Examples of this include Frank Herbert’s Dune and Asimov’s Foundation series, both of which were set tens of thousands of years in the future. In both of these universes, humanity’s origins and how they got to where they were took a backseat to the historical allegories that were being played upon. While some mention is given to the origins of humanity and where they came from, little attempt is made to draw a line from the present into the future.

Instead, the focus is overwhelmingly on the wider nature of human beings and what drives us to do the things we do. Asimov drew from Gibbon’s Decline and Fall of the Roman Empire to make a point about the timeless nature of history, while Herbert drew on the modern age, medieval and ancient history, religion, philosophy, and evolutionary biology and ecology to investigate the timeless nature of humanity and what factors shape it.

For non-purists, Star Wars and Star Trek can also serve as examples of both tendencies in action. For decades, Star Trek used a not-too-distant future setting to endlessly expound on the human race and the issues it faces today. And always, this examination was done in the form of interstellar travel, the crew of the Enterprise going from world to world and seeing themselves in the problems, norms and social structure of other races.

Star Wars, on the other hand, was an entirely different animal. For the people living in this universe, no mention is ever made of Earth, and pre-Republic history is considered a distant and inaccessible thing. And while certain existential and social issues are explored (i.e. racism, freedom and oppression), the connections with Earth’s past are more subtle, relying on indirect clues rather than overt comparisons.

The Republic and the Empire, for example, are clearly inspired by Rome’s own example. The Jedi Code is very much the picture of the Bushido code, its practitioners a sort of futuristic samurai, and the smugglers of Tatooine are every bit the swashbuckling, gun-toting pirates and cowboys of popular fiction. But always, the focus seemed to be more on classically-inspired tales of destiny, and of epic battles of good versus evil.

And of course, whether we are talking near future or far future has a big influence on the physical setting of the story as well. Which brings me to item two…

2. Stellar or Interstellar:
Here is another important question that every science fiction author has faced, and one which seriously influences the nature of the story. When it comes to the world of tomorrow, will it be within the confines of planet Earth, the Solar System, or on many different worlds throughout our galaxy? Or, to go really big, will it encompass the entire Milky Way, or maybe even beyond?

Important questions for a world-builder, and examples certainly abound. In the former case, you have your dystopian, post-apocalyptic, and near future scenarios, where humanity is stuck living on a hellish Earth that has seen better days. Given that humanity would not be significantly more advanced than at the time of writing, or may have even regressed due to the downfall of civilization, Earth would be the only place people can live.

But that need not always be the case. Consider Do Androids Dream of Electric Sheep? by Philip K. Dick. In his dystopian, post-apocalyptic tale, Earth was devastated by nuclear war, forcing the wealthiest and healthiest to live in the Offworld Colonies while everyone who was too poor or too ravaged by their exposure to radiation was confined to Earth. Clearly, dystopia does not rule out space travel, though it might limit it.

And in the latter case, where human beings have left the cradle and begun walking amongst our System’s other planets and even the stars, the nature of the story tends to be a bit more ambiguous. Those who choose such a setting tend to be of the opinion that mankind either needs to reach out in order to survive, or that doing so will allow us to shed some of our problems.

Examples abound here again, but Alastair Reynolds’ Revelation Space universe seems the ideal one. In this series, humanity has access to near-light speed travel, nanotechnology, brain-computer interfacing, neural uploading, AI, smart materials, and has colonized dozens of new worlds. However, the state of humanity has not changed, and on many worlds, civil war and sectarian violence are common.

In either case, the setting also bears a direct relation to the state of technology in the story. For humans still living on Earth (and nowhere else) in the future, chances are they are about as advanced as, or even behind, the times in which the story was written. For those living amongst the stars, technology would have to have advanced sufficiently to make it happen. Which brings me to the next point…

3. High-Tech or Low-Tech:
What would a work of science fiction be without plenty of room for gadgets, gizmos, and speculation about the future state of technology? And once more, I can discern two broad categories that an author can choose from, both of which have their share of potential positives and negatives. And depending on what kind of story you want to write, the choice of that state is often predetermined.

In the former case, there is the belief that technology will continue to advance in the future, leading to things like space travel, FTL, advanced cyborgs, clones, tricorders, replicators, artificial intelligence, laser guns, lightsabers, phasers, photon torpedoes, synthetic humans, and any number of other fun, interesting and potentially dangerous things.

With stories like these, high-tech usually serves as a framing device, providing visual evidence that the story is indeed taking place in the future. In other words, it serves a creative and fun purpose, without much thought being given towards exploring the deeper issues of technological progress and determinism. But this need not be the case, and oftentimes with science fiction, high-tech serves a different purpose altogether.

In many other cases, the advance of technology is directly tied to the plot and the nature of the story. Consider cyberpunk novels like Neuromancer and the other novels of William Gibson’s Sprawl Trilogy. In these and other cyberpunk novels, the state of technology – i.e. cyberspace decks, robotic prosthetics, biotech devices – served to illustrate the gap between rich and poor and to highlight the nature of life in a dark, gritty future.

By contrast, such post-cyberpunk novels as Neal Stephenson’s The Diamond Age took a different approach. While high-tech and its effects on society were explored in great detail, he and other authors of this subgenre chose to break with their predecessors on one key issue. Namely, they did not suppose that the emergence of high-tech would lead to dystopia, but rather to an ambiguous future where both good and harm resulted.

And at the other end of the spectrum, where technology is in a low state, the purpose and intent of this is generally the same. On the one hand, it may serve as a plot framing device, illustrating how the world is in a primitive state due to the collapse of civilization as we know it, or because our unsustainable habits caught up with us and resulted in the world stepping backwards in time.

At the same time, the very fact that people live in a primitive state in any of these stories serves the purpose of commentary. Simply by showing how our lives were unsustainable, or how foolish the actions of the story’s progenitors were, the author is making a statement and asking the reader to acknowledge and ponder the deeper issue, whether they realize it or not.

At this end of things, A Boy and His Dog and Mad Max serve as good examples. In the former case, the story takes place in a post-apocalyptic landscape where a lone boy and his genetically-engineered talking dog rove the landscape in search of food and (in the boy’s case) sexual gratification. Here, the state of technology helps to illustrate the timeless nature of the human condition, namely how we are essentially the products of our environment.

In Mad Max as well, the way roving gangs are constantly looking for gasoline, using improvised weapons, and riding around in vehicles cobbled together from various parts gives us a clear picture of what life is like in this post-collapse environment. In addition, the obvious desperation created by said collapse serves to characterize the cultural landscape, which is made up of gangs, tinpot despots, and quasi-cults seeking deliverance.

But on the other hand, the fact that the world exists in this state due to collapse after the planet’s supply of oil ran dry also provides some social commentary. By saying that the world became a dangerous, anarchistic and brutal place simply because humanity was dependent on a resource that suddenly went dry, the creators of Mad Max’s world were clearly trying to tell us something. Namely, conserve!

4. Aliens or Only Humans:
Another very important question for setting the scene in a science fiction story is whether or not extra-terrestrials are involved. Is humanity still alone in the universe, or have they broken that invisible barrier that lies between them and the discovery of other sentient life forms? Once again, the answer to this question has a profound effect on the nature of the story, and it can take many forms.

For starters, if the picture is devoid of aliens, then the focus of the story will certainly be inward, looking at human nature, issues of identity, and how our environment serves to shape us. But if there are aliens, either a single species or several dozen, then the chances are, humanity is a united species and the aliens serve as the “others”, either as a window into our own nature, or as an exploration into the awe and wonder of First Contact.

As case studies for the former category, let us consider the Dune, Foundation, and Firefly universes. In each of these, humanity has become an interstellar species, but has yet to find other sentiences like itself. And in each of these, human nature and weaknesses appear to be very much a constant, with war, petty rivalries and division ever-present. Basically, in the absence of an “other”, humanity is focused on itself and the things that divide it.

In Dune, for example, a galaxy-spanning human race has settled millions of worlds, and each world has given rise to its own identity – with some appearing very much alien to one another. There are the “navigators”, beings that have mutated their minds and bodies through constant exposure to spice. Then there are the Tleilaxu, a race of genetic manipulators who breed humans from dead tissue and produce eunuch “Face Dancers” that can assume any identity.


Basically, in the absence of aliens, human beings have become amorphous in terms of their sense of self, with some altering themselves to the point that they are no longer even considered human by their brethren. And all the while, humanity’s biggest fight is with itself, with rival houses vying for power, the Emperor guarding his dominance, and the Guild and various orders looking to ensure that the resource upon which all civilization depends continues to flow.

In the Foundation universe, things are slightly less complicated; but again, the focus is entirely inward. Faced with the imminent decline and collapse of galactic civilization, Hari Seldon invents the tool known as “Psychohistory”. This science is dedicated to anticipating the behavior of large groups of people, and becomes a roadmap to recovery for a small group of Foundationists who seek to preserve the light of civilization once the empire is gone.


The series then chronicles their adventures, first in establishing their world and becoming a major power in the periphery – where Imperial power declines first – and then rebuilding the Empire once it finally and fully collapses. Along the way, some unforeseen challenges arise, but Seldon’s Plan prevails and the Empire is restored. In short, it’s all about humans trying to understand the nature of human civilization, so they can control it a little better.

Last, but not least, there is the Firefly universe which – despite the show’s short run – showed itself to be in-depth and interestingly detailed. Basically, the many worlds that make up “The Verse” are divided along quasi-national lines. The core worlds constitute the Alliance, the most advanced and well-off worlds in the system, which is constantly trying to expand to bring the entire system under its rule.

The Independents, we learn early in the story, were a coalition of worlds immediately outside the core worlds that fought these attempts, and lost. The Border Worlds, meanwhile, are those planets farthest from the core where life is backwards and “uncivilized” by comparison. All of this serves to illustrate the power space and place have over human identity, and how hierarchy, power struggles and divisiveness are still very much a part of us.

But in universes where aliens are common, then things are a little bit different. In these science fiction universes, where human beings are merely one of many intelligent species finding their way in the cosmos, extra-terrestrials serve to make us look outward and inward at the same time. In this vein, the cases of Babylon 5, and 2001: A Space Odyssey provide the perfect range of examples.

In B5 – much as with Star Trek, Stargate, or a slew of other franchises – aliens serve as a mirror for the human condition. By presenting humanity with alien cultures, all of whom have their own particular quirks and flaws, we are given a meter stick with which to measure ourselves. And in B5‘s case, this was done rather brilliantly – with younger races learning from older ones, seeking wisdom from species so evolved that often they are not even physical entities.

However, in time the younger races discover that the oldest (i.e. the Shadows, Vorlons, and First Ones) are not above being flawed themselves. They too are subject to fear, arrogance, and going to war over ideology. The only difference is, when they do it the consequences are far graver! In addition, these races themselves come to see that the ongoing war between them and their proxies has become a senseless, self-perpetuating mistake. Echoes of human frailty there!

In 2001: A Space Odyssey, much the same is true of the Firstborn, a race of aliens so ancient that they too are no longer physical beings, but uploaded intelligences that travel through the cosmos using sleek, seamless, impenetrable slabs (the monoliths). As we learn in the course of the story, this race has existed for eons, and has been seeking out life with the intention of helping it to achieve sentience.

This mission brought them to Earth when humanity was still in its primordial, high-order primate phase. After tinkering with our evolution, these aliens stood back and watched us evolve, until the day that we began to reach out into the cosmos ourselves and began to discover some of the tools they left behind. These include the Tycho Magnetic Anomaly-1 (TMA-1) on the Moon, and the even larger one in orbit around Jupiter’s moon of Europa.

After making contact with this monolith twice, first with the American vessel Discovery and then the joint Russian-American Alexei Leonov, the people of Earth realize that the Firstborn are still at work, looking to turn Jupiter into a sun so that life on Europa (confined to the warm oceans beneath its icy shell) will finally be able to flourish. Humanity is both astounded and humbled to learn that it is not alone in the universe, and wary of its new neighbors.

This story, rather than using aliens as a mirror for humanity’s own nature, uses a far more evolved species to provide a contrast to our own. This has the same effect, in that it forces us to take a look at ourselves and assess our flaws. But instead of showing those flaws in another, it showcases the kind of potential we have. If the Firstborn could achieve such heights of evolutionary and technological development, surely we can too!

5. Utopian/Dystopian/Ambiguous:
Finally, there is the big question of the qualitative state of humanity and life in this future universe. Will life be good, bad, ugly, or somewhere in between? And will humanity in this narrative be better, worse, or the same as it is now? It is the question of outlook – whether pessimistic, optimistic, realistic, or something else entirely – which must concern a science fiction writer sooner or later.

Given that the genre evolved as a way of commenting on contemporary trends and offering insight into their effect on us, this should come as no surprise. When looking at where we are going and how things are going to change, one cannot help but delve into what it is that defines this thing we know as “humanity”. And when it comes right down to it, there are a few schools of thought that thousands of years of scholarship and philosophy have provided us with.

Consider the dystopian school, which essentially posits that mankind is a selfish, brutish, and essentially evil creature that only ever seeks to do right by himself, rather than other creatures. Out of this school of thought have come many masterful works of science fiction, which show humanity to be oppressive to its own, anthropocentric to aliens and other life forms, and indifferent to the destruction and waste it leaves in its wake.

And of course, there’s the even older Utopia school, which presents us with a future where mankind’s inherent flaws and bad behavior have been overcome through a combination of technological progress, political reform, social evolution, and good old fashioned reason. In these worlds, the angels of humanity’s nature have won the day, having proven superior to humanity’s devils.

In the literary realm, 1984 is again a perfect example of dystopian sci-fi, where the totalitarian rule of the few is based entirely on selfishness and the desire for dominance over others. According to O’Brien, the Party’s mouthpiece in the story, their philosophy is quite simple:

The Party seeks power entirely for its own sake. We are not interested in the good of others; we are interested solely in power. Power is in inflicting pain and humiliation. Power is in tearing human minds to pieces and putting them together again in new shapes of your own choosing.  If you want a picture of the future, imagine a boot stamping on a human face — forever.

Hard to argue with something so brutal and unapologetic, isn’t it? In Orwell’s case, the future would be shaped by ongoing war, deprivation, propaganda, fear, torture, humiliation, and brutality. In short, man’s endless capacity to inflict pain and suffering on others.

Aldous Huxley took a different approach in his seminal dystopian work, Brave New World, in which he posited that civilization would come to be ruled based on man’s endless appetite for pleasure, indifference and distraction. Personal freedom and individuality would be eliminated, yes, but apparently for man’s own good rather than the twisted designs of a few true-believers:

Universal happiness keeps the wheels steadily turning; truth and beauty can’t. And, of course, whenever the masses seized political power, then it was happiness rather than truth and beauty that mattered… People were ready to have even their appetites controlled then. Anything for a quiet life. We’ve gone on controlling ever since. It hasn’t been very good for truth, of course. But it’s been very good for happiness. One can’t have something for nothing. Happiness has got to be paid for.

But even though the means are entirely different, the basic aim is the same. Deprive humanity of its basic freedom and the potential to do wrong in order to ensure stability and long-term rule. In the end, a darker, more cynical view of humanity and the path that we are on characterized these classic examples of dystopia and all those that would come to be inspired by them.

Imminent Utopia by Kuksi

As for Utopian fiction, H.G. Wells’ Men Like Gods is a very appropriate example. In this novel, a contemporary journalist finds himself hurled 3,000 years into the future, where humanity lives in a global state named Utopia, and where the “Five Principles of Liberty” – privacy, free movement, unlimited knowledge, truthfulness, and free discussion and criticism – are the only law.

After staying with them for a month, the protagonist returns home with renewed vigor and is now committed to the “Great Revolution that is afoot on Earth; that marches and will never desist nor rest again until old Earth is one city and Utopia set up therein.” In short, like most Positivists of his day, Wells believed that the march of progress would lead to a future golden age where humanity would shed its primitive habits and finally live up to its full potential.

This view would prove to have a profound influence on futurist writers like Asimov and Clarke. In the latter case, he would come to express similar sentiments in both the Space Odyssey series and his novel Childhood’s End. In both cases, humanity found itself confronted with alien beings of superior technology and sophistication, and eventually was able to better itself by opening itself up to their influence.

In both series, humanity is shown the way to betterment (often against their will) by cosmic intelligences far more advanced than their own. But despite the obvious questions about conquest, loss of freedom, individuality, and identity, Clarke presents this as a good thing. Humanity, he believed, had great potential, and would embrace it, even if it had to be carried kicking and screaming.

And just like H.G. Wells, Clarke, Asimov, and a great many of their futurist contemporaries believed that the ongoing and expanding application of science and technology would be what led to humanity’s betterment. A commitment to this, they believed, would lead humanity to shed its dependence on religion, superstition, passion and petty emotion; basically, all the things that made us go to war and behave badly in the first place.

Summary:
These are by no means the only considerations one must make before penning a science fiction story, but I think they provide a pretty good picture of the big-ticket items. At least the ones that keep me preoccupied when I’m writing! In the end, knowing where you stand on the questions of location, content, tone and feel, and what your basic conception of the future is, is all part of the creation process.

In other words, you need to figure out what you’re trying to say and how you want to say it before you can go to town. In the meantime, I say to all aspiring and established science fiction writers alike: keep pondering, keep dreaming, and keep reaching for them stars!

The Future of Computing: Brain-Like Computers

It’s no secret that computer scientists and engineers are looking to the human brain as a means of achieving the next great leap in computer evolution. Already, machines are being developed that rely on machine blood, can continue working despite being damaged, and recognize images and speech. And soon, a computer chip that is capable of learning from its mistakes will also be available.

The new computing approach, already in use by some large technology companies, is based on the biological nervous system – specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.

The first commercial version of the new kind of computer chip is scheduled to be released in 2014, and was the result of a collaborative effort between I.B.M. and Qualcomm, as well as a Stanford research team. This “neuromorphic processor” can not only automate tasks that once required painstaking programming, but can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.

In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That can hold enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.

For example, computer vision systems only “recognize” objects that can be identified by the statistics-oriented algorithms programmed into them. An algorithm is like a recipe, a set of step-by-step instructions to perform a calculation. But last year, Google researchers were able to get a machine-learning algorithm, known as a “Google Neural Network”, to perform an identification task (involving cats) without supervision.
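To illustrate what “without supervision” means here (a toy example of my own, not Google's actual system), unsupervised methods group unlabeled data purely by statistical similarity. A minimal k-means clustering sketch:

```python
import numpy as np

# Toy illustration of unsupervised learning: k-means groups unlabeled points
# purely by similarity, with no "cat"/"not cat" labels supplied in advance.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.5, (100, 2)),    # hidden cluster A
                  rng.normal(5, 0.5, (100, 2))])   # hidden cluster B

centers = data[rng.choice(len(data), 2, replace=False)]
for _ in range(10):
    # Assign each point to its nearest center, then move centers to cluster means.
    labels = np.argmin(((data[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
    centers = np.array([data[labels == k].mean(axis=0) for k in range(2)])

print(np.round(centers, 2))   # recovers the two hidden groups, roughly (0,0) and (5,5)
```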

And this past June, the company said it had used those neural network techniques to develop a new search service to help customers find specific photos more accurately. And this past November, researchers at Stanford University came up with a new algorithm that could give computers the power to more reliably interpret language. It’s known as the Neural Analysis of Sentiment (NaSent).

A similar concept known as Deep Learning is also looking to endow software with a measure of common sense. Google is using this technique with their voice recognition technology to aid in performing searches. In addition, the social media giant Facebook is looking to use deep learning to help them improve Graph Search, an engine that allows users to search activity on their network.

Until now, the design of computers was dictated by ideas originated by the mathematician John von Neumann about 65 years ago. Microprocessors perform operations at lightning speed, following instructions programmed using long strings of binary code (0s and 1s). The information is stored separately in what is known as memory, either in the processor itself, in adjacent storage chips or in higher capacity magnetic disk drives.

By contrast, the new processors consist of electronic components that can be connected by wires that mimic biological synapses. Because they are based on large groups of neuron-like elements, they are known as neuromorphic processors, a term credited to the California Institute of Technology physicist Carver Mead, who pioneered the concept in the late 1980s.

These processors are not “programmed” in the conventional sense. Instead, the connections between the circuits are “weighted” according to correlations in data that the processor has already “learned.” Those weights are then altered as data flows into the chip, causing them to change their values and to “spike.” This, in turn, strengthens some connections and weakens others, reacting much the same way the human brain does.
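A minimal sketch of that general idea (my own toy model, not IBM's or Qualcomm's actual design): a neuron-like unit integrates weighted inputs, "spikes" when a threshold is crossed, and the connections that contributed to the spike are strengthened.

```python
import numpy as np

# Toy sketch of the idea described above: leaky integration of weighted inputs,
# a spike when the potential crosses a threshold, and a Hebbian-style update
# that strengthens the synapses which were active when the spike occurred.
rng = np.random.default_rng(0)
weights = rng.uniform(0.1, 0.5, size=8)        # synaptic weights for 8 inputs
potential, threshold, leak, lr = 0.0, 1.0, 0.9, 0.05

for step in range(100):
    inputs = (rng.random(8) < 0.3).astype(float)   # which inputs fired this step
    potential = potential * leak + weights @ inputs  # leaky integration
    if potential >= threshold:
        weights += lr * inputs       # strengthen the synapses that drove the spike
        weights *= 0.99              # mild decay weakens the rest and bounds growth
        potential = 0.0              # reset after the spike

print(np.round(weights, 2))
```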

In the words of Dharmendra Modha, an I.B.M. computer scientist who leads the company’s cognitive computing research effort:

Instead of bringing data to computation as we do today, we can now bring computation to data. Sensors become the computer, and it opens up a new way to use computer chips that can be everywhere.

One great advantage of the new approach is its ability to tolerate glitches, whereas traditional computers cannot work around the failure of even a single transistor. With the biological designs, the algorithms are ever changing, allowing the system to continuously adapt and work around failures to complete tasks. Another benefit is energy efficiency, another inspiration drawn from the human brain.

The new computers, which are still based on silicon chips, will not replace today’s computers, but augment them; at least for the foreseeable future. Many computer designers see them as coprocessors, meaning they can work in tandem with other circuits that can be embedded in smartphones and the centralized computers that run computing clouds.

However, the new approach is still limited, thanks to the fact that scientists still do not fully understand how the human brain functions. As Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, put it:

We have no clue. I’m an engineer, and I build things. There are these highfalutin theories, but give me one that will let me build something.

Luckily, there are efforts underway that are designed to remedy this, with the specific intention of directing that knowledge towards the creation of better computers and AIs. One such effort is the National Science Foundation-financed Center for Brains, Minds and Machines, a new research center based at the Massachusetts Institute of Technology, with Harvard and Cornell.

Another is the California Institute for Telecommunications and Information Technology (aka. Calit2) – a center dedicated to innovation in nanotechnology, life sciences, information technology, and telecommunications. As Larry Smarr, an astrophysicist and director of the Institute, put it:

We’re moving from engineering computing systems to something that has many of the characteristics of biological computing.

And last, but certainly not least, is the Human Brain Project, an international group of 200 scientists from 80 different research institutions, based in Lausanne, Switzerland. Having secured the $1.6 billion they need to fund their efforts, these researchers will spend the next ten years conducting research that cuts across multiple disciplines.

This initiative, which has been compared to the Large Hadron Collider, will attempt to reconstruct the human brain piece-by-piece and gradually bring these cognitive components into an overarching supercomputer. The expected result of this research will be new platforms for “neuromorphic computing” and “neurorobotics,” allowing for the creation of computing and robotic architectures that mimic the functions of the human brain.

When future generations look back on this decade, no doubt they will refer to it as the birth of the neuromorphic computing revolution. Or maybe just the Neuromorphic Revolution for short, but that sort of depends on the outcome. With so many technological revolutions well underway, it is difficult to imagine how the future will look back and characterize this time.

Perhaps, as Charles Stross suggests, it will simply be known as “the teens”, that time in pre-Singularity history where it was all starting to come together, but had yet to explode and violently change everything we know. I for one am looking forward to being around to witness it all!

Sources: nytimes.com, technologyreview.com, calit2.net, humanbrainproject.eu