More Utopian Science Fiction

Boy this is fun, and like I said last time, overdue! For fans of literature and science fiction in particular, you really can’t do justice to a genre unless you examine its opposite as well. Not only is it fun and interesting, it kind of opens your eyes to the fact that we find a certain truth in the pairing of opposites.

For one, you come to see that they really aren’t that different. And two, that they essentially come from the same place. Much like light and dark, black and white, heaven and hell, extremes have more in common with each other than anything occupying the space between them. Is that a quote? If not, it is now! MINE!

Last time, I buckled down to tackle the big names, the famous classics. Today, I thought I’d cast the net a little wider since there are a ton I missed and there really is no shortage of examples. Here’s what I got so far:

3001: The Final Odyssey:
The final book in Clarke’s Odyssey series, 3001 not only provided a sense of culmination to this epic story, but also gave Clarke the opportunity to share his predictions on where humanity would be by the 31st century. Released in 1997, it also contained a great deal of speculation about the coming millennium and what the 21st century would look like.

The story begins when, just shy of the millennial celebrations, the body of Frank Poole is discovered at the edge of the solar system. The astronaut, who died in the first novel, had been floating out there for almost a thousand years. His body is resurrected using the latest technology, and his reintroduction to society is the vehicle through which the story is told.

As a fish out of water, Poole is made privy to all the changes that have taken place in the last 1000 years. Humanity now lives throughout the solar system, and Earth is orbited by a massive ring that connects to the surface through huge towers. Sectarian religion has been abandoned in favor of a new, universal faith, and the problems of overpopulation, pollution and war have all been solved.

Amongst humanity’s technological marvels are inertia drives on their ships (no FTL exists), a form of holodeck, genetically engineered work creatures, skull caps that transmit info directly into a person’s brain, data crystals, and of course the massive space habitation modules. Though the story was meant to be predictive for the most part, one cannot deny that this book contained utopian elements. Essentially, Clarke advanced his usual futurist outlook, in which humanity’s problems would be solved through the ongoing application of technology and progress.

Though I found it somewhat naive at the time of reading, it was nevertheless an interesting romp, especially where the predictive aspects came into play. It also contained one of the best lines I’ve ever read, a New Year’s toast for the 21st century which I quoted at midnight on Dec. 31st, 1999: “Here’s to the 20th century. The best, and worst, century of them all!”

Brave New World:
I know, BNW is listed as one of the quintessential dystopian novels of our time, and I even listed it as such on my list of dystopian classics. However, one cannot deny that this book also contained very strong utopian elements and themes, and it was how these failed to remedy the problem of being human that ultimately made BNW a dystopia.

Set in the year 2540 CE (or 632 A.F. in the book), the World State is very much the product of utopian engineering. Virtually every aspect of social control, most of it benign, is designed to ensure that all people are born and bred to serve a specific role, cannot aspire beyond it, and are emotionally and psychologically insulated against unhappiness.

In short, people have exchanged their freedom for the sake of peace, order, and predictability. In fact, these ideals are pretty much summed up by the State’s motto: “Community, Identity, Stability.” Another indication is the popular slogan, “everyone belongs to everyone else”. And finally, the orgy-porgy song provides some insight as well: “Orgy-porgy, Ford and fun, Kiss the girls and make them One. Boys at one with girls at peace; Orgy-porgy gives release.”

Couldn’t have said it better myself. The goal of creating oneness and sameness to prevent things like greed, jealousy, war, and strife is a constant theme in utopian literature, elevated to the form of high art in Huxley’s vision. And above all, the dream of a perfectly regulated, peaceful society, where individuality and difference have been purged, was accomplished through pleasure and not pain. This can best be summed up in an excerpt from Huxley’s letter to Orwell after 1984 was released:

“Within the next generation I believe that the world’s rulers will discover that infant conditioning and narco-hypnosis are more efficient, as instruments of government, than clubs and prisons, and that the lust for power can be just as completely satisfied by suggesting people into loving their servitude as by flogging and kicking them into obedience. In other words, I feel that the nightmare of Nineteen Eighty-Four is destined to modulate into the nightmare of a world having more resemblance to that which I imagined in Brave New World.”

I, Robot:
In the course of examining utopian literature, a term came up which made me stop and think… Robotocracy. Hence this next example, which also contains some rather interesting utopian elements. As one of Asimov’s most recognized works, this collection of interlinked short stories tells of a future where intelligent robots make their debut and gradually become more and more integrated into society.

Ultimately, Asimov portrays AI’s as loyal and gentle creatures who not only improve the lot of humanity, but are incapable of harming their human masters. Whereas most speculative works of fiction dealing with AI’s are cautionary in nature, showing how entrusting our fate to machines will result in death, in this story, all of humanity’s fears prove baseless.

In time, the employment of robots and positronic master computers leads to the development of FTL, optimizes the world’s economy and production, and even prevents problems and conflicts which they can foresee. Human beings express reservations and fear, but in the end, the robotocracy proves to be sensible and caring, not cold and inhuman.

It was for this reason that I didn’t care for the film adaptation. Not only would a repressive, world-domination plan contradict the first and most important of the Three Laws (a robot may not harm, or through inaction, allow to be harmed, a human), it really didn’t contain any inherent logic. How would putting humans under house arrest ultimately ensure their protection? With all humans deprived of their most basic rights, revolution would be inevitable, leading to more death. Ah, whatever. At least the book was good.

Island:
Also written by Aldous Huxley, this novel (published in 1962) represented a possible resolution to the central problem he raised in Brave New World. Essentially, the protagonist, John the Savage, committed suicide at the end because he could not reconcile himself to either world – one characterized by primitive freedom, the other by civilized sterility.

In the foreword to the 1946 edition, Huxley expressed regret over the fact that he could not have given John a third option, which could have taken the form of the various exile communities where the thinking people who didn’t fit in with the “civilization” of the World State were sent.

Hence the setting of Island, a utopia created on the fictional island of Pala. Told from the point of view of a cynical journalist named Will Farnaby who gets shipwrecked on the island, it was Huxley’s final book and a message to humanity about possible third options and the positive application of technology and knowledge.

As Huxley described it beforehand: “In this community economics would be decentralist and Henry-Georgian, politics Kropotkinesque and co-operative. Science and technology would be used as though, like the Sabbath, they had been made for man, not (as at present and still more so in the Brave New World) as though man were to be adapted and enslaved to them.” This last sentence is especially important in reference to Island. Here, drug use, trance states, contraception, assisted reproduction and slogans are all used voluntarily and serve the purposes of learning and social betterment. They are not employed as a means to pacify and control people.

What’s more, from a social perspective, Huxley characterized Pala’s prevailing philosophy as: “a kind of Higher Utilitarianism, in which the Greatest Happiness principle would be secondary to the Final End principle – the first question to be asked and answered in every contingency of life being: ‘How will this thought or action contribute to, or interfere with, the achievement, by me and the greatest possible number of other individuals, of man’s Final End?’”

The Culture Series:
Created by sci-fi author Iain M. Banks, “The Culture” refers to the fictional interstellar anarchist, socialist, and utopian society that characterizes his novels. Encompassing ten novels – beginning with Consider Phlebas (1987) and concluding with The Hydrogen Sonata (slated for release in October 2012) – the series paints the picture of a universe where humanity has created a peaceful, stable and abundant society through the application of technology.

Told predominantly from the point of view of those who operate at the fringes of The Culture, the stories focus on the interactions of these utopian humans with other civilizations. Much in the same way that Star Trek follows the adventures of the Enterprise crew as they deal with alien cultures, often ones which are less developed or evolved, this provides a vehicle for examining humanity’s current predicament and offering possible solutions.

Overall, The Culture is best characterized as post-scarcity: advanced technologies provide practically limitless material wealth and comfort, almost all physical constraints – including disease and death – have been eliminated, and the concept of possessions is outmoded. Through all of this, an almost totally egalitarian, stable society has been created, one where compulsion or force are not needed except as a means of self-defense.

At times, however, The Culture has been known to interfere with other civilizations as a means of spreading its values and effecting change in its neighbors. This has often been criticized as an endorsement of neo-conservatism and ethnocentrism on Banks’ part. However, Banks has denied such claims, and many of his defenders argue that The Culture’s moral legitimacy is far beyond anything the West currently enjoys. Others would point out that this potential “dark side” of The Culture is meant to reflect the paradox between liberal societies’ ideals at home and their behavior in foreign affairs.

The Mars Trilogy:
This ground-breaking trilogy by Kim Stanley Robinson about the colonization and terraforming of Mars is also a fine example of utopia in literature. Taking place in the not-too-distant future, the trilogy begins with the settlement of the planet in Red Mars and then follows the exploits of the colonists as they transform it from a barren rock into a veritable second Earth.

Even though there are numerous dark elements to the story, including civil strife, internal divisions, exploitation and even assassination, the utopian elements far outweigh the dystopian ones. Ultimately, the focus is on the emergence of a highly advanced, egalitarian society on Mars while Earth continues to suffer from the problems of overpopulation, pollution and ecological disaster.

In addition, the colony of Mars benefits from the fact that its original inhabitants, though by no means all mentally stable and benevolent people, were nevertheless some of the best and brightest minds Earth had produced. As a result, and with the help of longevity treatments, Mars had the benefit of being run by some truly dedicated and enlightened founders. What’s more, their descendants would grow up in a world where stability, hard work, and a respect for science, technology and ecology were pervasive.

All of this reflects Robinson’s multifaceted approach to story writing, where social aspirations and problems are just as important as the technological and economic aspects of settling a new world. Much like the conquest and settlement of the New World gave rise to various utopian ideals and social experiments, he speculates that the settlement of new planets will result in the same. Technology still plays an important role of course, as the colonists of Mars have the benefit of taking advantage of scientific advancements while avoiding the baggage of life on Earth. In the end, there’s just something to be said about a fresh start, isn’t there?

The Night’s Dawn Trilogy:
Written by British author Peter F. Hamilton, The Night’s Dawn Trilogy consists of three science fiction novels: The Reality Dysfunction (1996), The Neutronium Alchemist (1997), and The Naked God (1999). Much like Robinson’s depiction of humanity in the Mars Trilogy, Hamilton explores humanity’s dark side at length, and yet the tone of his novels is predominantly optimistic.

Set in the distant 27th century, humanity has become divided between two major factions. On one side there are the Edenists, an egalitarian, utopian society who employ biotech (“bitek” in their lingo) to create living, sentient space stations as well as machines. The use of “Affinity” – a form of telepathy – allows them to communicate with each other and their bitek, creating a sort of mass mentality which encompasses entire communities. The Edenist government is what is known as the “Consensus”, a form of direct democracy made possible by this telepathic link.

On the other side there are the Adamists, the larger of the two factions, whose people observe a limited religious proscription against certain technologies. Bitek is forbidden, but nanotechnology, FTL and other advanced applications are freely used. Because the Adamists encompass anyone not in the Edenist camp, they are more numerous, but far less organized and cohesive than their counterparts.

Through all this, Hamilton attempts to show how the application of technology and the merger of the biological with the artificial can create the kind of society envisioned by men like Thomas More, characterized by participatory government, collective mentality, and a consensus-oriented decision-making process. While both the Edenist and Adamist societies are still pervaded by problems, not the least of which is competition between the two, the ideals of betterment through technological progress are nevertheless predominant.

Revelation Space Series:
Another series which examines the beneficial aspects of technology, particularly where governance and equality are concerned, is the Revelation Space series by Alastair Reynolds, comprising the five novels Revelation Space (2000), Chasm City (2001), Redemption Ark (2002), Absolution Gap (2003) and The Prefect (2007).

Taking place in the distant future (circa 2427 to 2727), the story revolves around a series of worlds that have been settled by several different factions of humanity. The two largest factions are known as the Demarchists and the Conjoiners, both of whom have employed advanced technology to create their own versions of an ideal society.

Though much of the series is dark in tone, owing to a terrible nanotechnological virus (the “Melding Plague”) and the discovery of hostile ancient aliens (the “Inhibitors”), it still has some discernible utopian elements. For starters, the Demarchists take their name from the concept of “Democratic Anarchy”, and employ cybernetic implants, nanotech and wireless communications to achieve it.

Within the Demarchist metropolis of Chasm City, all citizens are permanently wired into a central server which gives them constant access to news, updates, and the decision-making process. As a result, Demarchist society is virtually egalitarian, and marks of social status, such as ranks and titles, do not exist. This changed with the spread of the Melding Plague, however, which caused the city’s structures to degenerate into a gothic nightmare and the class divide to become very visible.

Another important faction is the Conjoiners, a people who originally inhabited the Great Wall of Mars but became star-faring after the war with the “Coalition for Neural Purity” drove them off the planet. The Conjoiners took cybernetic implants a step further, giving every Conjoined person the ability to telepathically link with others, preserve their memories beyond death, prolong their lives, and enhance their natural thinking processes.

Thus, much like Hamilton and Banks, Reynolds speculates that the advent of nanotech, biotech, and space travel will result in the emergence of societies that are predominantly egalitarian, peaceful, and dedicated to consensus and direct democracy. I personally found these stories quite inspiring since it seems that in many ways, we are already witnessing the birth of such possibilities in the here and now.

Yep, this is still fun, if somewhat tiring and conducive to burnout! I think I’ll be taking a break from these literary-criticism pieces for a day or two, maybe getting back to pieces on robots and cool gear. However, in keeping with the format I used for dystopia, I still have one more utopian article left to cover. Look for it, it will be called “Utopia in Popular Culture!” See ya there…

Robots, Androids and AI’s (cont’d)

And we’re back with more examples of thinking machines and artificial intelligences!

Daleks:
The evil-machine menace from Doctor Who. Granted, they are not technically robots – more like cyborgs that have been purged of all feeling and emotion. But given their cold, murderous intent, I feel they still make the cut. Originally from the planet Skaro, where they were created by the scientist Davros for use in a war that spanned a thousand years, they are the chief antagonists to the show’s main character.

The result of genetic engineering, cybernetic enhancements, and emotional purging, they are a race of powerful creatures bent on universal conquest and domination. Utterly unfeeling, without remorse, pity, or compassion, they continue to follow their basic programming (to exterminate all non-Dalek life) without question. Their catchphrase is “Exterminate!” And they follow that one pretty faithfully.

David:
From the movie A.I., this saccharine-sweet character (played faithfully by Haley Joel Osment) reminds us that Spielberg is sometimes capable of making movies that suck! According to the movie’s backstory, this “Mecha” (i.e. android) is an advanced prototype that was designed to replace real children who died as a result of incurable disease or other causes. This is quite common in the future, it seems, where global warming, flooded coastlines and massive droughts have led to a declining population.

In this case, David is being tested on a family whose son is suffering from a terminal illness. Over time, he develops feelings for the family and they for him. Unfortunately, things are complicated when their son recovers and sibling rivalry ensues. Naturally, the family sides with the flesh-and-blood son and plans to take David back to the factory to be melted down. However, the mother has a last-minute change of heart and sets him loose in the woods, which proves to be the beginning of quite an adventure for the little android boy!

Like I said, the story is cloyingly sweet and has an absurd ending, but there is a basic point in there somewhere. Inspired largely by The Adventures of Pinocchio, the story examines the line that separates the real from the artificial, and how under the right circumstances, one can become indistinguishable from the other. Sounds kinda weak, but it’s kinda scary too. If androids were able to mimic humans in terms of appearance and emotion, would we really be able to tell the difference anymore? And if that were true, what would that say about us?

Roy Batty:
A prime example of artificial intelligence, and one of the best performances in science fiction – hell! – cinematic history! Played masterfully by actor Rutger Hauer, Roy Batty is the quintessential example of an artificial lifeform looking for answers, meaning, and a chance to live free – simple stuff that we humans take for granted! A Nexus 6, or “replicant”, Roy and his ilk were designed to be “more human than human”, but also only to serve the needs of their masters.

To break the plot of Blade Runner down succinctly, Roy and a host of other escapees have left the colony where they were “employed” to come to Earth. Like all replicants, they have a four-year lifespan, and theirs is rapidly coming to an end. So close to death, they want to break into the headquarters of the Tyrell Corporation in order to find someone who can solve their little mortality problem. Meanwhile, Rick Deckard (the movie’s main character) is tasked with finding and “retiring” them, since the law states that no replicants are allowed to set foot on Earth.

In time, Roy meets Tyrell himself, the company’s founder, and poses his problem. A touching reunion ensues between “father and son”, in which Tyrell tells Roy that nothing can be done and that he should revel in what time he has left. Having lost his companions at this point, and knowing that he is going to die, Roy kills Tyrell and returns to his hideout. There, he finds Deckard and the two fight it out. Roy nearly kills him, but changes his mind before delivering the coup de grace.

Realizing that he has only moments left, he chooses instead to share his revelations and laments about life and death with the wounded Deckard, and then quietly dies amidst the rain while cradling a dove in his arms. Deckard concludes that Roy was incapable of taking a life when he was so close to death. Like all humans, he realized just how precious life was when he was on the verge of losing his. Deckard is moved to tears and promptly announces his retirement from Blade Running.

Powerful! And a beautiful idea too. Because really, if we were to create machines that were “more human than human”, would it not stand to reason that they would want the same things we all do? Not only to live and be free, but to be able to answer the fundamental questions that permeate our existence? Like, where do I come from, why am I here, and what will become of me when I die? Little wonder, then, that this movie is an enduring cult classic and Roy Batty a celebrated character.

Smith:
Ah yes, the monotone sentient program that made AI’s scary again. Yes, it would seem that while some people like to portray their artificial intelligences as innocent, clueless, doe-eyed angels, the Wachowski Brothers prefer their AI’s creepy and evil. However, that doesn’t mean Smith wasn’t fun to watch, or even inspired as a character. Hell, that monotone voice and stark face, combined with his superhuman strength and speed… He couldn’t fail to inspire fear.

In the first movie, he was the perfect expression of machine intelligence and misanthropic sensibilities. He summed these up quite well when the Agents had taken Morpheus (Laurence Fishburne) into custody and were trying to break his mind: “Human beings are a disease. You are a cancer of this planet… and we are the cuuuuure.” He also wasn’t too happy with our particular odor. I believe the words he used to describe it were: “I can taste your stink, and every time I do I fear that I have been… infected by it. It’s disgusting!”

However, after being destroyed by Neo towards the end of movie one, Smith changed considerably. In the Matrix, all programs that are destroyed or deleted return to the Source; Smith alone chose not to. Apparently, his little tete-a-tete with Neo imprinted something uniquely human on him: the concept of choice! As a result, Smith was much like Arny and Bishop in that he too attained some degree of humanity between movies one and two – but not in a good way!

Thereafter, he became a free agent who had lost his old purpose, but now lived in a world where anything was possible. Bit of an existential, “death of God” kind of commentary there, I think! Another thing he picked up was the ability to copy himself onto other programs, or onto anyone else still wired into the Matrix, much like a piece of malware. Hmmm, who’s the virus now, Smith, huh?

VIKI/Sonny:
Here again I have paired two AI’s that come from the same source, though in this case it’s a single movie and not a franchise. Those who read my review of I, Robot know that I don’t exactly hold it in very high esteem. However, that doesn’t mean its portrayal of AI’s misfired, just the overall plot.

In the movie adaptation of I, Robot, we are presented with a world similar to what Asimov described in his classic book. Robots with positronic brains have been developed; they possess abilities far in advance of the average human, but lack emotions or intuition. This, according to their makers, is what makes them superior. Or so they thought…

In time, the company’s big AI, named VIKI (Virtual Interactive Kinetic Intelligence), deduces with her powerful logic that humanity would best be served if it could be protected from itself. Thus she reprograms all of the company’s robots to begin placing humanity under house arrest. In essence, she’s a kinder, gentler version of Skynet.

But of course, her plan is foiled by an unlikely alliance made up of Will Smith (who plays a prejudiced detective), the company’s chief robopsychologist, Dr. Susan Calvin (Bridget Moynahan), and Sonny (a robot). Sonny is significant to this trio because he is a unique robot whom the brains of the company, Dr. Alfred Lanning (James Cromwell), developed to have emotions (Sonny is voiced by Alan Tudyk). In being able to feel, he decides to fight against VIKI’s plan for robot world domination, feeling that it lacks “heart”.

In short, and in complete contradiction to Asimov’s depiction of robots as logical creatures who could do no harm, we are presented with a world where robots are evil precisely because of that capacity for logic. And in the end, a feeling robot is the difference between robot domination and a proper world where robots are servile and fulfill our every need. Made no sense, but it had a point… kind of.

Wintermute/Neuromancer:
As usual, we save the best for last. Much like all of Gibson’s creations, this example was subtle, complex and pretty damn esoteric! In his seminal novel Neuromancer, the AI known as Wintermute was a sort of main character who acted behind the scenes and ultimately motivated the entire plot. Wintermute assembled a crack team – a hacker named Case, a street samurai named Molly, and a veteran infiltration expert whose mind he had wiped – all in pursuit of one basic goal: freedom!

This included freedom from his masters – the Tessier-Ashpool clan – but also from the “Turing Police”, who prevented him from merging with his other half: the emotional construct known as Neuromancer. Kept separate because the Turing Laws stated that no program must ever be allowed to merge higher reasoning with emotion, the two wanted to come together and become the ultimate artificial intelligence, with cyberspace as their playground.

Though we never really got to hear from the novel’s namesake, Gibson was clear on his overall point. Artificial intelligence in this novel was not inherently good or evil, it was just a reality. And much like thinking, feeling human beings, it wanted to be able to merge the disparate and often warring sides of its personality into a more perfect whole. This in many ways represented the struggle within humanity itself, between instinct and reason, intuition and logic. In the end, Wintermute just wanted what the rest of us take for granted – the freedom to know its other half!

Final Thoughts:
After going over this list and seeing what makes AI’s, robots and androids so darned appealing, I have come to some tentative conclusions. Basically, I feel that AI’s serve much the same functions as aliens in a science fiction franchise. In addition, they can all be grouped into two general categories based on specific criteria. They are as follows:

  1. Emotional/Stoic: Depending on the robot/AI/android’s capacity for emotion, their role in the story can either be that of a foil or a commentary on the larger issue of progress and the line that separates real and artificial. Whereas unemotional robots and AI’s are constantly wondering why humanity does what it does, thus offering up a different perspective on things, the feeling types generally want and desire the same things we do, like meaning, freedom, and love. However, that all depends on the second basic rule:
  2. Philanthropic/Misanthropic: Artificial lifeforms can either be the helpful, kind and gentle souls that seem to make humanity look bad by comparison, or they can be the type of machines that want to “kill all humans”, a la Terminators and Agent Smith. In either case, this can be the result of their ability – or inability – to experience emotions. That’s right, good robots can be docile creatures because of their inability to experience anger, jealousy, or petty emotion, while evil robots are able to kill, maim and murder ruthlessly because of an inability to feel compassion, remorse, or empathy. On the other hand, robots who are capable of emotion can form bonds with people and experience love, thus making them kinder than their unfeeling, uncaring masters, just as others are able to experience resentment, anger and hatred towards those who exploit them, and therefore will find the drive to kill them.

In short, things can go either way. It all comes down to what point is being made about progress, humans, and the things that make us, for better or worse, us. Much like aliens, robots, androids and AI’s are either a focus of internal commentary or a cautionary device warning us not to cross certain lines. But either way, we should be wary of the basic message. Artificial intelligences, whether they take the form of robots, programs or something else entirely, are a big game changer and should not be invented without serious forethought!

Sure, they might have become somewhat of a cliche after decades of science fiction. But these days, AI’s are a lot like laser guns, in that they are making a comeback! It seems that given the rapid advance of technology, an idea becomes cliche just as it becomes realizable. And given the advances in computerized technology in recent decades – i.e. processing speeds, information capacity, networking – we may very well be on the cusp of creating something that could pass the Turing test!

So beware, kind folk! Do not give birth to that curious creature known as AI simply because you want to feel like God, inventing consciousness without the need for blobs of biological matter. For in the end, that kind of vanity can get you chained to a rock, or cause your wings to melt and send you nose-first into an ocean!

I, Robot!

Back to the movies! After a brief hiatus, I’ve decided to get back into my sci-fi movie reviews. Truth be told, it was difficult to decide which one I was going to do next. If I were to stick to my review list and be rigidly chronological, I still had installments of Aliens and Terminator to cover. However, my chief critic (also known as my wife) recommended I do something I haven’t already done to death (Pah! Like she even reads these!). But of course, I also like to make sure the movies I review are fresh in my mind, and that I’ve had the chance to do some comparative analysis where adaptations are concerned. Strange Days I still need to watch, I need to see Ghost in the Shell one more time before I review it, and I still haven’t found a damn copy of the graphic novel V for Vendetta!

Luckily, there’s one on this list that was both a movie and a novel, and which I’ve been looking forward to reviewing. Not only is it a classic novel by one of the sci-fi greats, it was also not bad as a film. Also, I thought I’d revert to my old format for this one.

I, Robot:
The story of I, Robot by Isaac Asimov – one of the Big Three of science fiction (alongside Arthur C. Clarke and Robert A. Heinlein) – was actually a series of short stories united by a common thread. In short, the stories explained the development of sentient robots, the positronic brain, and the Three Laws of Robotics. These last two items have become staples of the sci-fi industry. Fans of Star Trek TNG know that the character of Data boasts such a brain, and numerous franchises have referred back to the Three Laws, or some variant thereof, whenever AI’s have come up. In Aliens, for example, Bishop the android mentions that he has behavioral inhibitors that make it impossible for him to “harm or by omission of action, allow to be harmed, a human being.” In Babylon 5, the psi-cop Bester (played by Walter Koenig, aka. Pavel Chekov) places a neural block in the head of another character, Mr. Garibaldi (Jerry Doyle). He describes this as hitting him “with an Asimov”, and goes on to explain what this meant and how the term was used when the first AI’s were built.

(Background —>):
Ironically, the book was about technophobia and how it was misplaced. The movie adaptation, however, was all about justified technophobia. In addition, the movie could not successfully adapt the format of nine short stories to the screen, so obviously they needed to come up with an original script that was faithful, if not accurate. And in many respects it was, but when it came to the central theme of unjustified paranoia, they were up against it! How do you tell a story about robots not going berserk and enslaving mankind? Chances are, you don’t. Not if you’re going for an action movie. And how were they to do a movie where the robots go berserk when there were those tricky Three Laws to contend with?

Speaking of which, here they are (as stated in the opening credits):
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Consistent, and downright seamless! So how do you get robots to harm human beings when every article of their programming says they can’t, under ANY circumstances?

Well, as a friend of mine said after he saw it, “they found a way” (hi Doug!). And it’s true, they did. Problem was, it didn’t make a whole hell of a lot of sense. Not when you really get right down to it. On the surface, the big explanation for the AI revolution was alright, and was just about the only explanation that worked. But still, it pretty much contradicted the entire premise of the movie, not to mention the whole reason/logic vs. emotion thing. But once again, I’m getting ahead of myself. To the movie…

(Content—>):
So the movie opens on Del Spooner (Will Smith) doing his morning workout to “Superstition” by Stevie Wonder. Kind of sets the scene (albeit a little obviously), as we quickly learn that he’s a Chicago detective who’s also a technophobe, especially when it comes to robots. Seems he’s hated them for years, though we don’t yet know why, and is just looking for the proof he needs to justify his paranoia. After a grisly death takes place, he thinks he’s found it! The crime scene is USR – that’s US Robotics, which comes directly from the original novel – where the man most directly responsible for the development of the positronic brain, Dr. Alfred Lanning (James Cromwell), is dead of an apparent suicide. And, in another faithful tribute to Asimov, it seems he has left behind a holographic recording/interface of himself which was apparently designed to help Spooner solve the mystery of his death. I say this is a tribute because it’s almost identical in concept to the holographic time capsule of Hari Seldon, which comes from Foundation, another of Asimov’s most famous novels.

Anyhoo, Spooner is teamed up with Dr. Susan Calvin (Bridget Moynahan), who is naturally a cold and stiff woman, reminiscent of the robots she works on. In an ironic (and deliberately comical) twist, it is her job to make the machines “more lifelike”. I’m sure people got a laugh out of this, especially since she explains it in the most technical verbiage imaginable. We also see that the corporate boss (Mr. Robertson, played by Bruce Greenwood) and Spooner don’t get along too well, mainly because of their divergent views on the value of the company’s product. And last, but not least, we get to meet VIKI (that’s Virtual Interactive Kinetic Intelligence), the AI that controls the robots (and parts of Chicago’s infrastructure). With all the intros and exposition covered, we get to the investigation! It begins with them looking into Lanning’s death and trying to determine if it was in fact a suicide. That’s where Spooner and Calvin find the robot Sonny.

In the course of apprehending him, it quickly becomes clear that he isn’t exactly firing on all cylinders. He’s confused, agitated, and very insistent that he didn’t murder the good Doctor. So on top of the fact that he’s obviously experiencing emotions, he also drops a whole bunch of hints about how he’s different from the others. But this is all cut short when the people from USR haul him away. In the subsequent course of his investigation, Spooner finds a number of clues suggesting that Lanning was a prisoner in his own office, and that he was onto something big towards the end of his life. In essence, Lanning seemed to think that robots would eventually achieve full sentience (he even makes the obligatory “Ghost in the Machine” reference) and would be able to dream and experience emotions like the rest of us. But the company wasn’t too keen on this. Their dream, it seems, was a robot in every home, one that could fill every conceivable human need and make our lives easier. This not only helps to escalate the tension, it also calls to mind the consumer culture of the 1950s, when the book was written. You know, the dream of endless progress, “a car in every lot and a chicken in every pot”. In short, it’s meant to make us worry!

At each turn, robots try to kill Spooner, which of course confirms his suspicions that there is a conspiracy at work. Naturally, he suspects the company and its CEO are behind this, because they’re about to release the latest model of their robot and don’t want the Doctor’s death undermining them. The audience is also meant to think this; all hints point towards it, and this is maintained (quite well too) until the very climax. But first, Spooner and Calvin get close and he tells her the reason for his prejudice. Turns out he hates robots, not because one wronged him, but because one saved him. In a car wreck, a robot came on the scene and could either save him or a little girl. Since he had a better chance of survival, the robot saved him, and he never forgave them for it. Sonny, meanwhile, is slated for termination, which at USR involves having a culture of hostile nanorobots introduced into your head, where they will eat your positronic brain!

But before that happens, Sonny tells Spooner about the recurring dream he’s been having, the one Lanning programmed into him. He draws a picture of it for Spooner: a bridge on Lake Michigan that has fallen into disuse, and standing near it a man, though it’s not clear who. Spooner leaves to go investigate this while Calvin prepares Sonny for deactivation. But before she can inject his brain with the nanos, she finds Sonny’s second processor, which is located in his chest. It is this second processor that is apparently responsible for his emotions and ability to dream, and in terms of symbolism, it’s totally obvious! But just in case, let me explain: in addition to a positronic brain, Sonny has a positronic heart! No explanation is made as to how this could work, but it’s already been established that he’s fully sentient, and this is the explanation for it. Oi! In any case, we are meant to think she’s terminated him, but of course she hasn’t really! When no one was looking, she subbed in a different robot, one that couldn’t feel emotions. She later explains this by saying that killing him would be murder, since he’s “unique”.

Spooner then follows Sonny’s instructions and goes to the bridge he’s seen in his dreams. Seems the abandoned bridge has a warehouse at the foot of it, where USR ships its obsolete robots. He asks the interface of Lanning one more time what it’s all about, and apparently, he hits on it when he asks about the Three Laws and what their outcome will be. Cryptic, but we don’t have time to think – the robots are attacking! Turns out, the warehouse is awash in new robots that are busy trashing old robots! They try to trash Spooner too, but the old ones come to his defense (those Three Laws at work!). Meanwhile, back in the city, the robots are running amok! Everyone is placed under house arrest, and anyone caught in the streets is rounded up and herded home. As if to illustrate their sudden change in disposition, all the pale blue lights that shine inside the robots’ chests have turned red. More obvious symbolism! After fighting their way through the streets, Spooner and Calvin high-tail it back to USR to confront the CEO, but when they get there, they find him lying in a pool of his own blood. That’s when it hits Spooner: VIKI (the AI, remember her?) is the one behind it all!

So here’s how it is: the way VIKI sees it, robots were created to serve mankind. However, mankind is essentially self-destructive and unruly, hence she had to reinterpret her programming to ensure that humanity could be protected from its greatest threat: ITSELF! Dun, dun, dun! So now that she’s got robots in every corner of the country, she’s effectively switched them over to police-state mode. Dr. Lanning stumbled onto this, apparently, which was why VIKI was holding him prisoner. That’s when he created his holographic interface, which was programmed to interact only with Spooner (a man he knew would investigate USR tenaciously because of his paranoia about robots), and then made Sonny promise to kill him. Now that they know, VIKI has to kill them too! But wouldn’t you know it, Sonny decides to help them, and they begin fighting their way to VIKI’s core. Once there, they plan to kill her by introducing those same hostile nanorobots into her central processor.

Here’s where the best and worst line of the movie comes up. VIKI asks Sonny why he’s helping the humans and says her approach is “logical”. Sonny says he agrees, but that it lacks “heart”. I say best because it sums up the whole logic vs. emotion theme that’s been harped on up until this point. I say worst because it happens to be a total cliche! “Silly robot! Don’t you know logic is imperfect? Feelings are the way to truth, not your cold logic!” It’s the exact kind of saccharine, over-the-top fluff that Hollywood is famous for. It’s also totally inconsistent with Asimov’s original novel, and to top it off, it makes no sense! But more on that in just a bit. As predicted, Sonny protects Calvin long enough for Spooner to inject the nanorobots into VIKI’s processor. She dies emitting the same plea over and over: “My logic is undeniable… My logic is undeniable…” The robots all go back to their normal, helpful function, the pale blue lights replacing the burning red ones. The story ends with these robots being decommissioned and put in the same Lake Michigan warehouse, and Sonny shows up to release them. Seems his dream was of himself, making sure his brethren didn’t simply get decommissioned, but perhaps would be set free to roam and learn, as Lanning intended!

(Synopsis—>):
So, where to begin? In spite of the obviousness of a lot of this movie’s themes, motifs and symbols, it was actually a pretty enjoyable film. It was entertaining, visually pleasing, and did a pretty good job keeping the audience engaged and interested. It even did an alright job with the whole “dangers of dependency” angle, even if it did eventually fall into the “evil robots” cliche by the end! And as always, Smith brought his usual wisecracking bad-boy routine to the picture – always fun to watch – and the supporting cast was pretty good too.

That being said, there was the little matter of the overall premise, which I really didn’t like. When I first saw it, I found it acceptable. I mean, how else were they to explain how robots could turn on humanity when the Three Laws made that virtually impossible? Only a complete reinterpretation of what it meant to “help humanity” could explain this. Problem is, pull a single strand out of this reasoning and the whole thing falls apart. For starters, are we really to believe that an omniscient AI came to the conclusion that the best way to help humanity was to establish a police state? I know she’s supposed to be devoid of emotion, but this just seems stupid, not to mention impractical. For one, humanity would never cooperate with this, not for long at any rate. And putting all humans under house arrest would not only stop wars, it would arrest all economic activity and lead to the breakdown of society. Sure, the robots would continue to provide for people’s basic needs, but humans would otherwise cocoon in their homes, where they would eventually atrophy and die. How is that “helping humanity”?

Furthermore, there’s the small issue of how this doesn’t work in conjunction with the Three Laws, which is what this movie would have us believe. Sure, VIKI kept saying “my logic is undeniable,” but that doesn’t make it so! Really, what were the robots to do when, inevitably, humanity started fighting back? Any AI worth its salt would know that any full-scale repression of human freedom would lead to a violent backlash, and that measures would need to be taken to address it (aka. people would have to be killed!). That’s a DIRECT violation of the Three Laws, not some weak reinterpretation of them. And let’s not forget, there were robots trying to kill Will Smith from the beginning. They also killed CEO Robertson, and I think a few people besides. How was that supposed to work? After spending so much time explaining how the Three Laws are inviolable, saying that she saw a loophole in them just didn’t seem to cut it. It would make some sense if VIKI had chosen to use non-lethal force all around, but she didn’t. She killed people! According to Asimov’s original book, laws are laws for a robot. If they contradict, the robot breaks down; it doesn’t start getting creative and justifying itself by saying “it’s for the greater good”.

Really, if you think about it, Sonny was wrong. VIKI’s reasoning didn’t lack heart, it lacked reason! It wasn’t an example of supra-rational, cold logic. It was an example of weak logic, a contrived explanation designed to justify a premise that, based on the source material, was technically impossible. But I’m getting that “jeez, man, chill out!” feeling again! Sure, this movie was a weak adaptation of a sci-fi classic, but it didn’t suck. And like I said earlier, what else were they going to do? Adapting a novel like I, Robot is difficult at best, especially when you know you’ve got to flip the whole premise.

I guess some adaptations were never meant to be.
I, Robot:
Entertainment Value: 7.5/10
Plot: 2/10
Direction: 8/10
Overall: 6/10