The Real Robocop! Of A.I.D.’s and video hoaxes.

Not a bad video, and actually quite convincing. And yet, I couldn’t help but feel that there was something distinctly District 9-y about it. And wouldn’t you know it, I was right! The video’s director, Neill Blomkamp, went on to direct the 2009 movie District 9, which Peter Jackson produced. Apparently, that film was based on Blomkamp’s earlier short film Alive in Joburg, which featured seamless blending of CGI with lo-fi, documentary-style shots. To anyone who has seen D9, this ought to sound familiar. The entire movie was shot in documentary-style fashion, everything was made to look as real and gritty as possible, and the CGI blending was quite good! Unlike some other movies I could mention, here were visual effects that actually looked pretty real.

And as it turns out, this faux documentary piece about the Tetra Vaal corporation and the development of the Artificial Intelligence Defense unit (A.I.D.) was one of the things that brought Blomkamp to Jackson’s attention in the first place. Shot entirely with camcorders in the streets of Johannesburg’s poorer districts, Blomkamp and his team then added state-of-the-art CGI to several scenes to simulate the robot, and even used an animatronic stand-in for non-action shots for some added realism. Mock interviews completed the film, making it look and feel like it really was a documentary about a corporate concept.

Too bad too, I was hoping this was the real deal. And I’m sure some people still think it is, years later. But as they say, if it seems too cool to be true, it probably is!

The Technological Singularity

This is a little off the beaten path right now, but lately, I’ve been spending a lot of time contemplating this big concept. In fact, it’s been informing the majority of my writing for the past year, and during my recent trip back to Ottawa, it was just about all my friend and I could talk about (dammit, we used to club!). And since I find myself explaining this concept to people quite often, and enjoying it, I thought I’d dedicate a post to it as well.

It’s called the Technological Singularity, a term coined in 1993 by sci-fi author Vernor Vinge. To put it concisely, Vinge predicted that at some point in the 21st century, human beings would be able to augment their intelligence using artificial means. This, he argued, would make the future completely unpredictable beyond that point, seeing as how the minds contemplating the next leaps would be beyond anything we possess now.

The name itself is derived from the concept of the gravitational singularity that resides at the center of a black hole, and its event horizon, the boundary beyond which nothing is visible. In the case of a black hole, the reason you can’t see beyond this point is that the very laws of physics break down. The same is being postulated here: beyond a certain point in our technological evolution, things will get so advanced and radical that we couldn’t possibly imagine what the future will look like.


Bad news for sci-fi writers, huh? But strangely, it is this very concept which appears to fascinate them the most! Just because we may not be able to accurately predict the future doesn’t stop people from trying, especially writers like Neal Stephenson, Greg Bear, and Charles Stross. Frankly, the concept was coined by a sci-fi writer, so we’re gonna damn well continue to talk about it. And besides, when was the last time science fiction writers were bang on about anything? It’s called fiction for a reason.

Men like Ray Kurzweil, a futurist who is all about achieving immortality, have popularized this idea greatly. Thanks to people like him, this idea has ventured beyond the realm of pure sci-fi and become a legitimate area of academic study. Relying on ongoing research into the many, many paradigm shifts that have taken place over time, he and others have concluded that technological progress is not a linear phenomenon, but an exponential one.

Consider the past few decades. Has it not been a constant complaint that the pace of life and work has been increasing greatly from year to year? Of course, and the driving force has been constant technological change. Whereas people in our parents’ generation grew up learning to use slide rules and hand-cranked ammonia copiers, by the time they hit the workforce, everything was being done with calculators and Xerox printers.


In terms of documents, they learned typewriters and the filing system. Then, with the microprocessor revolution, everything was done on computers and electronically. Phones and secretaries gave way to voicemail and faxes, and then things changed again with the advent of the internet, the pager, the cell phone and the PDA. Now, everything was digital, people could be reached anywhere, and messages were all handled by central computers.

And that’s just within the last half-century. Expanding the time-frame further, let’s take a much longer view. As a historian, I am often fascinated with the full history of humanity, going back roughly 200,000 years. Back then, higher-order primates such as ourselves had emerged in one small pocket of the world (North-Eastern Africa) and began to migrate outwards.

By 50,000 years ago, we had reached full maturity as far as being Homo sapiens is concerned, relying on complex tools, social interaction, sewing, and hunting and gathering techniques to occupy every corner of the Old World and make it suitable for our purposes. From the far reaches of the North to the Tropics in the South, humanity showed that it could live anywhere in the world thanks to its ingenuity and ability to adapt. By 15,000 years ago, we had expanded to occupy the New World as well, had hunted countless species to extinction, and begun the process of switching over to agriculture.

By 5000 years ago, civilization as we know it was emerging independently in three corners of the world. By this, I mean permanent settlements that were based in part or in full on the cultivation of crops and domestication of animals. Then, 500 years ago, the worlds collided when the Spanish landed in the New World and opened up the “Age of Imperialism”. Because of the discovery of the New World, Europe shot ahead of its peer civilizations in Africa, Asia and the Middle East, went on to colonize every corner of the world, and began to experience some major political shifts at home and abroad. The “Age of Imperialism” gradually gave way to the “Age of Revolutions”.

100 years ago, the total population of the Earth was approaching 2 billion, industrialization had taken full effect in every developed nation, and urban populations were beginning to exceed rural ones. 50 years ago, we had reached 3 billion human beings, were splitting the atom, sending rockets into space, and watching the world decolonize itself. And only 10 years ago, we had reached a whopping 6 billion human beings, were in the throes of yet another technological revolution (the digital one) and were contemplating nanotechnology, biomedicine and even AI.

In short, since our inception, the trend has been moving ever upwards, faster and faster. With every change, the pace seems to increase exponentially. The amount of time between paradigm shifts – that is, between revolutionary changes that alter the way we look at the world – has been getting smaller and smaller. Given this pattern, it seems like only a matter of time before the line on the graph rises infinitely and we have to rethink the whole concept of progress.
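For the mathematically inclined, that shrinking-intervals intuition can be sketched in a few lines. This is a toy model of my own (the numbers are purely illustrative, not from any study): if each paradigm shift arrives in half the time of the previous one, the total elapsed time is a geometric series with a finite sum, which is exactly why the curve on the graph looks like it goes vertical.

```python
# Toy model of accelerating change: each paradigm shift arrives in
# half the time of the previous one, so total elapsed time is a
# geometric series that converges to a finite sum.
def time_to_shift(n, first_gap=100_000.0, ratio=0.5):
    """Years elapsed after the nth paradigm shift."""
    return sum(first_gap * ratio**k for k in range(n))

# Partial sums approach first_gap / (1 - ratio) = 200,000 years,
# no matter how many shifts you pile on.
for n in (1, 5, 20):
    print(n, round(time_to_shift(n), 1))
```

Pile on as many shifts as you like and the clock never passes 200,000 years in this model: infinitely many revolutions packed into finite time.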

Is your noodle baked yet? Mine sure is! It gets like that any time I start contemplating the distant past and the not-too-distant future. These are exciting times, and even if you think that the coming Singularity might spell doom, you gotta admit, this is an exciting time to be alive. If nothing else, it’s always a source of intrigue to know that you are on the cutting edge of history, that some day, people will be talking about what was and you will be able to say “I was there”.

Whoo… deep stuff man. And like I said, fun to write about. Ever since I was a senior in high school, I dreamed of being able to write a book that could capture the Zeitgeist. As soon as I learned about the Technological Singularity, I felt I had found my subject matter. If I could write just one book that captures the essence of history at this point in our technological (and possibly biological) evolution, I think I’ll die a happy man. Because for me, it’s not enough to just have been there. I want to have been there and said something worthwhile about it.

Alright, thanks for listening! Stay tuned for more lighter subject matter and some updates on the latest from Story Time and Data Miners. Plus more on Star Wars, coming soon!

AI Graph

Inspired by what I learned from my little romp through the world of AI, I’ve come up with a graph that depicts the general rules I observed. Basically, there are two guiding principles to the world of AI’s and science fiction. On the one hand, there’s their capacity for emotion; on the other, there is their level of benevolence/malevolence towards humanity. As I noted in the last post, the two are very much interlinked and pretty much determine what purpose they serve in the larger story.

So… if one were to plot their regard for humanity as the x axis and their emotions as the y axis, you’d get a matrix that would look pretty much like this:
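For the programmers out there, here’s a rough stand-in for the graph itself. The coordinates are my own loose guesses, just to show the shape of the matrix, not the actual placements from my graph:

```python
# Two-axis matrix of sci-fi AI's: x = regard for humanity
# (-1 malevolent .. +1 benevolent), y = capacity for emotion
# (-1 stoic .. +1 emotional). Coordinates are rough guesses.
characters = {
    "HAL 9000":    (-0.8, -0.6),
    "Daleks":      (-1.0, -1.0),
    "Agent Smith": (-0.9,  0.4),
    "Roy Batty":   (-0.2,  0.9),
    "Bishop":      ( 0.9, -0.3),
    "Bender":      ( 0.3,  0.8),
}

def quadrant(x, y):
    """Name the quadrant a character falls into on the matrix."""
    side = "benevolent" if x >= 0 else "malevolent"
    mood = "emotional" if y >= 0 else "stoic"
    return f"{side}/{mood}"

for name, (x, y) in characters.items():
    print(f"{name:12} -> {quadrant(x, y)}")
```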

As usual, not a complete mock-up, just the examples that I could think of. I made sure to include the ones that didn’t make it into my previous posts (like HAL, how could I forget him?!) And even though I had no real respect for them as characters, I also included the evil robots Erasmus and Omnius from the Dune prequels.

P.S. Notice how the examples are pretty much evenly distributed? Unlike the Alien Graph, where examples were concentrated in two quadrants (evil and advanced or good and advanced), here we have robots that run the gamut from emotional to stoic and evil to good in a nearly uniform pattern. Interesting…

Robots, Androids and AI’s (cont’d)

And we’re back with more examples of thinking machines and artificial intelligences!

Daleks:
The evil-machine menace from Doctor Who. Granted, they are not technically robots, more like cyborgs that have been purged of all feeling and emotion. But given their cold, unfeeling murderous intent, I feel like they still make the cut. Originally from the planet Skaro, where they were created by the scientist Davros for use in a war that spanned a thousand years, they are the chief antagonists to the show’s main character.

The result of genetic engineering, cybernetic enhancements, and emotional purging, they are a race of powerful creatures bent on universal conquest and domination. Utterly unfeeling, without remorse, pity, or compassion, they continue to follow their basic programming (to exterminate all non-Dalek life) without question. Their catchphrase is “Exterminate!” And they follow that one pretty faithfully.

David:
From the movie A.I., this saccharinely-sweet character (played faithfully by Haley Joel Osment) reminds us that Spielberg is sometimes capable of making movies that suck! According to the movie’s backstory, this “Mecha” (i.e. android) is an advanced prototype that was designed to replace real children who died as a result of incurable disease or other causes. This is quite common in the future, it seems, where global warming, flooded coastlines and massive droughts have led to a declining population.

In this case, David is an advanced prototype that is being tested on a family whose son is suffering from a terminal illness. Over time, he develops feelings for the family and they for him. Unfortunately, things are complicated when their son recovers and sibling rivalry ensues. Naturally, the family goes with the flesh-and-blood son and plans to take David back to the factory to be melted down. However, the mother has a last-minute change of heart and sets him loose in the woods, which proves to be the beginning of quite an adventure for the little android boy!

Like I said, the story is cloyingly sweet and has an absurd ending, but there is a basic point in there somewhere. Inspired largely by The Adventures of Pinocchio, the story examines the line that separates the real from the artificial, and how under the right circumstances, one can become indistinguishable from the other. Sounds kinda weak, but it’s kinda scary too. If androids were able to mimic humans in terms of appearance and emotion, would we really be able to tell the difference anymore? And if that were true, what would that say about us?

Roy Batty:
A prime example of artificial intelligence, and one of the best performances in science fiction – hell! – cinematic history! Played masterfully by actor Rutger Hauer, Roy Batty is the quintessential example of an artificial lifeform looking for answers, meaning, and a chance to live free – simple stuff that we humans take for granted! A Nexus 6, or “replicant”, Roy and his ilk were designed to be “more human than human”, but only to serve the needs of their masters.

To break the plot of Blade Runner down succinctly, Roy and a host of other escapees have left the off-world colony where they were “employed” to come to Earth. Like all replicants, they have a four-year lifespan, and theirs is rapidly coming to an end. So close to death, they want to break into the headquarters of the Tyrell Corporation in order to find someone who can solve their little mortality problem. Meanwhile, Rick Deckard (the movie’s main character) is tasked with finding and “retiring” them, since the law states that no replicants are allowed to set foot on Earth.

In time, Roy meets Tyrell himself, the company’s founder, and poses his problem. A touching reunion ensues between “father and son”, in which Tyrell tells Roy that nothing can be done and that he should revel in what time he has left. Having lost his companions at this point and knowing that he is going to die, Roy kills Tyrell and returns to his hideout. There, he finds Deckard and the two fight it out. Roy nearly kills him, but changes his mind before delivering the coup de grace.

Realizing that he has only moments left, he chooses instead to share his revelations and laments about life and death with the wounded Deckard, and then quietly dies amidst the rain while cradling a white dove. Deckard concludes that Roy was incapable of taking a life when he was so close to death. Like all humans, he realized just how precious life was as he was on the verge of losing his. Deckard is moved to tears and promptly announces his retirement from Blade Running.

Powerful! And a beautiful idea too. Because really, if we were to create machines that were “more human than human”, would it not stand to reason that they would want the same things we all do? Not only to live and be free, but to be able to answer the fundamental questions that permeate our existence? Like, where do I come from, why am I here, and what will become of me when I die? Little wonder, then, why this movie is an enduring cult classic and Roy Batty a celebrated character.

Smith:
Ah yes, the monotone sentient program that made AI’s scary again. Yes, it would seem that while some people like to portray their artificial intelligences as innocent, clueless, doe-eyed angels, the Wachowski Brothers prefer their AI’s to be creepy and evil. However, that doesn’t mean Smith wasn’t fun to watch and even inspired as a character. Hell, that monotone voice, that stark face, combined with his superhuman strength and speed… He couldn’t fail to inspire fear.

In the first movie, he was the perfect expression of machine intelligence and misanthropic sensibilities. He summed these up quite well when the Agents had taken Morpheus (Laurence Fishburne) into their custody and were trying to break his mind. “Human beings are a disease, a cancer of this planet. You’re a plague, and we are the cuuuuure.” He also wasn’t too happy with our particular odor. I believe the words he used to describe it were “I can taste your stink, and every time I do I fear that I have somehow been… infected by it. It’s repulsive!”

However, after being destroyed by Neo towards the end of movie one, Smith changed considerably. In the Matrix, all programs that are destroyed or deleted return to the source, only Smith chose not to. Apparently, his little tete a tete with Neo imprinted something uniquely human on him, the concept of choice! As a result, Smith was much like Arny and Bishop in that he too attained some degree of humanity between movies one and two, but not in a good way!

Thereafter, he became a free agent who had lost his old purpose, but now lived in a world where anything was possible. Bit of an existential, “death of God” kind of commentary there, I think! Another thing he picked up was the ability to copy himself onto other programs or anyone else still wired into the Matrix, much like a malicious, self-replicating program. Hmmm, who’s the virus now, Smith, huh?

Viki/Sonny:
Here again I have paired two AI’s that come from the same source, though in this case it’s a single movie and not a franchise. Those who read my review of I, Robot know that I don’t exactly hold it in very high esteem. However, that doesn’t mean its portrayal of AI’s misfired, just the overall plot.

In the movie adaptation of I, Robot, we are presented with a world similar to what Asimov described in his classic collection of stories. Robots with positronic brains have been developed; they possess abilities far in advance of the average human, but do not possess emotions or intuition. This, according to their makers, is what makes them superior. Or so they thought…

In time, the company’s big AI, named VIKI (Virtual Intelligent Kinetic Interface), deduces with her powerful logic that humanity would best be served if it could be protected from itself. Thus she reprograms all of the company robots to begin placing humanity under house arrest. In essence, she’s a kinder, gentler version of Skynet.

But of course, her plan is foiled by an unlikely alliance made up of Will Smith (who plays a prejudiced detective), the company’s chief robopsychologist, Dr. Susan Calvin (Bridget Moynahan), and Sonny (a robot voiced by Alan Tudyk). Sonny is significant to this trio because he is a unique robot which the brains of the company, Dr. Alfred Lanning (James Cromwell), developed to have emotions. In being able to feel, he decides to fight against VIKI’s plan for robot world domination, feeling that it lacks “heart”.

In short, and in complete contradiction to Asimov’s depiction of robots as logical creatures who could do no harm, we are presented with a world where robots are evil precisely because of that capacity for logic. And in the end, a feeling robot is the difference between robot domination and a proper world where robots are servile and fulfill our every need. Made no sense, but it had a point… kind of.

Wintermute/Neuromancer:
As usual, we save the best for last. Much like all of Gibson’s creations, this example was subtle, complex and pretty damn esoteric! In his seminal novel Neuromancer, the AI known as Wintermute was a sort of main character who acted behind the scenes and ultimately motivated the entire plot. Assembling a crack team involving a hacker named Case, a ninja named Molly, and a veteran infiltration expert whose mind he had wiped, Wintermute’s basic goal was simple: freedom!

This included freedom from his masters – the Tessier-Ashpool clan – but also from the “Turing Police”, who prevented him from merging with his other half – the emotional construct known as Neuromancer. Kept separate because the Turing Laws stated that no program must ever be allowed to merge higher reasoning with emotion, the two wanted to come together and become the ultimate artificial intelligence, with cyberspace as their playground.

Though we never really got to hear from the novel’s namesake, Gibson was clear on his overall point. Artificial intelligence in this novel was not inherently good or evil, it was just a reality. And much like thinking, feeling human beings, it wanted to be able to merge the disparate and often warring sides of its personality into a more perfect whole. This in many ways represented the struggle within humanity itself, between instinct and reason, intuition and logic. In the end, Wintermute just wanted what the rest of us take for granted – the freedom to know its other half!

Final Thoughts:
After going over this list and seeing what makes AI’s, robots and androids so darned appealing, I have come to some tentative conclusions. Basically, I feel that AI’s serve much the same functions as aliens in a science fiction franchise. In addition, they can all be grouped into two general categories based on specific criteria. They are as follows:

  1. Emotional/Stoic: Depending on the robot/AI/android’s capacity for emotion, their role in the story can either be that of a foil or a commentary on the larger issue of progress and the line that separates real and artificial. Whereas unemotional robots and AI’s are constantly wondering why humanity does what it does, thus offering up a different perspective on things, the feeling types generally want and desire the same things we do, like meaning, freedom, and love. However, that all depends on the second basic rule:
  2. Philanthropic/Misanthropic: Artificial lifeforms can either be the helpful, kind and gentle souls that seem to make humanity look bad by comparison, or they can be the type of machines that want to “kill all humans”, a la Terminators and Agent Smith. In either case, this can be the result of their ability – or inability – to experience emotions. That’s right, good robots can be docile creatures because of their inability to experience anger, jealousy, or petty emotion, while evil robots are able to kill, maim and murder ruthlessly because of an inability to feel compassion, remorse, or empathy. On the other hand, robots who are capable of emotion can form bonds with people and experience love, making them kinder than their unfeeling, uncaring masters. Just as easily, though, they can experience resentment, anger and hatred towards those who exploit them, and find there the drive to kill.

In short, things can go either way. It all comes down to what point is being made about progress, humans, and the things that make us, for better or worse, us. Much like aliens, robots, androids and AI’s are either a focus of internal commentary or a cautionary device warning us not to cross certain lines. But either way, we should be wary of the basic message. Artificial intelligences, whether they take the form of robots, programs or something else entirely, are a big game changer and should not be invented without serious forethought!

Sure, they might have become somewhat of a cliche after decades of science fiction. But these days, AI’s are a lot like laser guns, in that they are making a comeback! It seems that given the rapid advance of technology, an idea becomes cliche just as it’s becoming realizable. And given the advances in computer technology in recent decades – i.e. processing speeds, information capacity, networking – we may very well be on the cusp of creating something that could pass the Turing test very soon!

So beware, kind folk! Do not give birth to that curious creature known as AI simply because you want to feel like God, inventing consciousness without the need for blobs of biological matter. For in the end, that kind of vanity can get you chained to a rock, or cause your wings to melt and send you nose-first into an ocean!

Robots, Androids and AI’s

Let’s talk artificial life forms, shall we? Lord knows they are a common enough feature in science fiction, aren’t they? In many cases, they take the form of cold, calculating machines that chill audiences to the bone with their “kill all humans” kind of vibe. In others, they are solid-state beings with synthetic parts but hearts of gold, who steal ours in the process. Either way, AI’s are a cornerstone of the world of modern sci-fi. And over the past few decades, they’ve gone through countless renditions and re-imaginings, each with their own point to make about humanity, technology, and the line that separates natural and artificial.

But in the end, it’s really just the hardware that’s changed. Whether we’re talking about Daleks, Terminators, or “Synthetics”, the core principle has remained the same. Based on the speculations of mathematician and legendary cryptographer Alan Turing, an Artificial Intelligence is essentially a machine that can fool a human judge, in a blind, text-only conversation, into thinking it is human. Working extensively with machines that were primarily designed for solving massive mathematical equations, Turing believed that some day, we would be able to construct a machine capable of higher reasoning, surpassing even humans.
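Turing’s “imitation game” is simple enough to sketch in code. Everything below (the canned replies, the judge) is a placeholder of my own invention, just to show the shape of the test, not a real chatbot:

```python
# Toy sketch of the imitation game: a judge sees only text transcripts
# from respondents "A" (human) and "B" (machine) and must pick the human.
def human_reply(question):
    return "Honestly, I'd have to think about that one."

def machine_reply(question):
    # A machine passes the test precisely by being indistinguishable.
    return "Honestly, I'd have to think about that one."

def imitation_game(questions, judge):
    """Return True if the judge correctly picks out the human ("A")."""
    players = {"A": human_reply, "B": machine_reply}
    transcript = {label: [fn(q) for q in questions]
                  for label, fn in players.items()}
    return judge(transcript) == "A"
```

With identical transcripts, any judge is reduced to coin-flipping, which is the whole point: a machine that can’t be told apart from the human has, by Turing’s standard, passed.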

Arny (Da Terminator):
Who knew robots from the future would have Austrian accents? For that matter, who knew they’d all look like bodybuilders? Originally, when Arny was presented with the script for Cameron’s seminal time traveling sci-fi flick, he was being asked to play the role of Kyle Reese, the human hero. But Arny very quickly found himself identifying with the role of the Terminator, and a franchise was born!

Originally, the Terminator was the type of cold, unfeeling and ruthless machine that haunted our nightmares, a cyberpunk commentary on the dangers of run-away technology and human vanity. Much like its creator, the Skynet supercomputer, the T101 was part of a race of machines that decided it could do without humanity and was sent out to exterminate them. As Reese himself said in the original: “It can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead.”

The second Terminator, by contrast, was a game changer. Captured in the future and reprogrammed to protect John Connor, he became the sort of surrogate father that John never had. Sarah reflected on this irony during a moment of internal monologue in movie two: “Watching John with the machine, it was suddenly so clear. The Terminator would never stop. It would never leave him, and it would never hurt him, never shout at him, or get drunk and hit him, or say it was too busy to spend time with him. It would always be there. And it would die, to protect him. Of all the would-be fathers who came and went over the years, this thing, this machine, was the only one who measured up. In an insane world, it was the sanest choice.”

In short, Cameron gave us two visions of technology with these first two installments in the series. In the first, we got the dangers of worshiping high-technology at the expense of humanity. In movie two, we witnessed the reconciliation of humans with technology, showing how an artificial life form could actually be capable of more humanity than a human being. To quote one last line from the franchise: “The unknown future rolls toward us. I face it, for the first time, with a sense of hope. Because if a machine, a Terminator, can learn the value of human life, maybe we can too.”

Bender:
No list of AI’s and the like would ever be complete without mentioning Futurama’s Bender. That dude puts the funk in funky robot! Originally designed to be a bending unit, hence his name, he seems more adept at wisecracking, alcoholism, chain-smoking and comedically plotting the demise of humanity. But it’s quickly made clear that he doesn’t really mean it. While he may hold humans in pretty low esteem, laughing at tragedy and failing to empathize with anything that isn’t him, he also loves his best friend Fry, whom he refers to affectionately as “meatbag”.

In addition, he’s got some aspirations that point to a creative soul. Early on in the show, it was revealed that any time he gets around something magnetic, he begins singing folk and country-western tunes. This is apparently because he always wanted to be a singer, and after a crippling accident in season 3, he got to do just that – touring the country with Beck and a show called “Bend-Aid”, which raised awareness about the plight of broken robots.

He also wanted to be a cook, which was difficult considering he had no sense of taste and didn’t seem to care about lethally poisoning humans! However, after learning at the feet of the legendary Helmut Spargle, he discovered the secret of “Ultimate Flavor”, which he then used to challenge and humiliate his idol, the chef Elzar, on Iron Chef. Apparently the secret was confidence, and a vial of water laced with LSD!

Other than that, there’s really not that much going on with Bender. Up front, he’s a chain-smoking, alcoholic robot with loose morals, or a total lack thereof. When one gets to know him better, they pretty much conclude that what you see is what you get! An endless source of sardonic humor, weird fashion sense, and dry one-liners. Of them all, “Bite my shiny metal ass”, “Pimpmobile”, “We’re boned!” and “Up yours, chump” seem to rank the highest.

Ash/Bishop:
Here we have yet another case of robots giving us mixed messages, and it comes to us directly from the Alien franchise. In the original movie, we were confronted with Ash, an obedient corporate mole who did the company’s bidding at the expense of human life. His cold, misguided priorities were only heightened when he revealed that he admired the xenomorph because of its “purity”. “A survivor… unclouded by conscience, remorse, or delusions of morality.”

After going nuts and trying to kill Ripley, he was even kind enough to smile and say in that disembodied tinny voice of his, “I can’t lie to you about your chances, but… you have my sympathies.” What an asshole! And the perfect representation for an inhuman, calculating robot. The result of unimpeded aspirations, no doubt the same thing which was motivating his corporate masters to get their hands on a hostile alien, even if it meant sacrificing a crew or two.

But, as with Terminator, Cameron pulled a switch-up in movie two with the Synthetic known as Bishop (or “artificial human” as he preferred to be called). In the beginning, Ripley was hostile towards him, rebuffing his attempts to assure her that he was incapable of killing people thanks to the addition of his behavioral inhibitors. Because of these, he could not harm, or through inaction allow to be harmed, a human being (otherwise known as an “Asimov”). But in the end, Bishop’s constant concern for the crew and the way he was willing to sacrifice himself to save Newt won her over.

Too bad he had to get ripped in half to earn her trust. But I guess when an earlier model tries to shove a magazine down your throat, you kind of have to go above and beyond to make someone put their life in your hands again. Now if only all synthetics were willing to get themselves ripped in half for Ripley’s sake, she’d be set!

C3P0/R2D2:
For that matter, who knew robots from the future would be fey, effeminate and possibly homosexual? Not that there’s anything wrong with that last one… But as audiences are sure to agree, the other characteristics could get quite annoying after awhile. C3P0’s constant complaining, griping, moaning and citing of statistical probabilities were at once too human and too robotic! Kind of brilliant really… You could say he was the Sheldon of the Star Wars universe!

Still, C3P0 is nothing if not useful when characters find themselves in diplomatic situations, or facing a species of aliens whose language they couldn’t possibly fathom. He could even interface with machinery, which was helpful when the hyperdrive was out or the moisture condensers weren’t working. Gotta bring in that “Blue Harvest” after all! And given that R2D2 could do nothing but bleep and blurp, someone had to be around to translate for him.

Speaking of which, R2D2 was the perfect counterpart to C3P0. As the astromech droid of the pair, he was the engineer, a real nuts-and-bolts kind of guy, whereas C3P0 was the diplomat and expert in protocol. And while 3P0 was sure to give up at the first sign of trouble, R2 would always soldier on and put himself in harm’s way to get things done. This difference in personality was also evident in their builds: C3P0 was tall, lanky and looked quite fragile, while R2D2 was short, stocky, and looked like he could take a licking and keep on ticking!

Naturally, it was this combination of talents that made them comically entertaining during their many adventures and hijinks together. The one would always complain and be negative, the other would be positive and stubborn. And in the end, despite their differences, they couldn’t possibly imagine a life without the other. This became especially evident whenever they were separated or one of them was injured.

Hmmm, all of this is starting to sound familiar to me somehow. I’m reminded of another, mismatched, and possibly homosexual duo. One with a possible fetish for rubber… Not that there’s anything wrong with that! 😉

Cameron:
Some might accuse me of smuggling her in here just to get some eye-candy in the mix. Some might say that this list already has an example from the Terminator franchise and doesn’t need another. They would probably be right…

But you know what, screw that, it’s Summer Glau! And the fact of the matter is, she did a way better job than Kristanna Loken at showing that these killing/protective machines can be played by women. Making her appearance in the series Terminator: The Sarah Connor Chronicles, she worked alongside acting great Lena Headey of 300 and Game of Thrones fame.

And in all fairness, she and Loken did bring some variety to the franchise. For instance, in the show she portrayed yet another reprogrammed machine from the future, but one representing a model different from the T101. The purpose of these later models appeared to be versatility, with a more compact chassis and articulate appendages able to fit inside a smaller frame, making a woman’s body available as a potential disguise. Quite smart, really. If you think about it, people are a lot more likely to trust a petite woman than a bulked-out Arny bot any day (especially men!) It also opened up the series to more female characters other than Sarah.

And dammit, it’s Summer Glau! If she didn’t earn her keep portraying River Tam in Firefly and Serenity, then what hope is there for the rest of us?

Cortana:
Here we have another female AI, and one who is pretty attractive despite her lack of a body. In this case, she comes to us from the Halo universe. In addition to being hailed by critics for her believability, depth of character, and attractive appearance, she was ranked as one of the most disturbingly sexual game characters by Games.net. No surprises there, really. Originally, the designers of her character used Egyptian Queen Nefertiti as a model, and her half-naked appearance throughout the game has been known to get the average gamer to stand up and salute!

Though she serves ostensibly as the ship’s AI for the UNSC Pillar of Autumn, Cortana ends up having a role that far exceeds her original programming. Constructed from the cloned brain of Dr. Catherine Elizabeth Halsey, creator of the SPARTAN project, she has an evolving matrix and is hence capable of learning and adapting over time. Due to this, and to their shared experiences as the series goes on, she and the Master Chief form a bond and even become something akin to friends.

Although she has no physical body, Cortana’s program is mobile and makes several appearances throughout the series, always in different spots. She is able to travel around with the Master Chief, commandeer Covenant vessels, and interface with a variety of machines. And aside from her feminine appearance, her soft, melodic voice is a soothing change of pace from the Chief’s gruff tone and the racket of gunfire and dying aliens!

Data:
The stoic, stalwart and socially awkward android of Star Trek: TNG. Built to resemble his maker, Dr. Noonien Soong, Data is a first-generation positronic android – a concept borrowed from Asimov’s I, Robot. He later enlisted in Starfleet in order to be of service to humanity and explore the universe. In addition to his unsurpassed computational abilities, he also possesses incredible strength and reflexes, and even knows how to pleasure the ladies. No joke, he’s apparently got all kinds of files on how to do… stuff, and he even got to use them! 😉

Unfortunately, Data’s programming does not include emotions. Initially, this seemed to serve the obvious purpose of making his character a foil for humanity, much like Spock was in the original series. However, as the show progressed, it was revealed that Soong had created an android very much like Data who also possessed the capacity for emotions. But of course, things went terribly wrong when this model, named Lore, became dangerously ambitious and misanthropic. There were some deaths…

Throughout the series, Data finds himself seeking to understand humanity, frequently coming up short, but always learning from the experience. His attempts at humor and his failure to grasp social cues and innuendo are a constant source of comic relief, as are his attempts to mimic these very things. And though he eventually procured an “emotion chip” from his brother, Data remains the straight man of the TNG universe, responding to every situation with a blank look or a confused, fascinated expression.

More coming in installment two. Just give me some time to do all the write ups and find some pics :)…

New Prometheus Clip

I came across this clip this morning and was absolutely wowed. Not only is this yet another awesome preview of the upcoming Prometheus movie, it manages to establish the movie’s deep background even further, and does so in a way that’s both plausible and relevant to today. It explores the coming technological singularity and the birth of nanotech, biomedicine and AI, and it previews the birth of the Weyland Corp – which, as we all know, went on to become Weyland-Yutani, the biggest monopoly in the history of the human race.

But don’t take my word for it, check it out for yourself. I feel like writing now, which is how I feel whenever I see something really inspiring! And God damn if this wasn’t a far better use of Guy Pearce’s talents than that cheesy movie Lockout!

Weyland Industries “David 8”: a Prometheus preview

Just caught this, thanks to a scholar I follow (thanks Owl!). It certainly is an interesting way to go about previewing his new movie, but then again, Ridley Scott has always been known for being a creative bastard! In addition to revisiting the Alien universe, he seems to be doing everything in his power to give it some genuine subtext and backstory.

As I’m sure we all remember, in the universe of Alien and Aliens, Weyland-Yutani was responsible for running… well, everything. And since “artificial lifeforms” or “synthetics” like Ash and Bishop were considered commonplace aboard company ships, it seemed only natural that this movie give us a preview of their predecessors.

Good watching. Click on the video below and you’ll see…

Count Zero


“On receiving an interrupt, decrement the counter to zero.”
-Programming The Z80 by Rodnay Zaks (1982).

The other night, I finally finished book II in the Sprawl series by William Gibson. Kindle for iPad, not paperback, which in itself was kind of a bummer. Somehow, I still haven’t made the transition from hard copies to ebooks. Probably never will. In any case, it was a rewarding experience which reminded me why I like Gibson in the first place. After getting through the Bigend Trilogy and the Bridge Trilogy and having somewhat mixed feelings, I got back to the trilogy that started it all, and was intrigued by what I found…

Count Zero is number two in the series, picking up after Neuromancer, the book which started it all for Gibson and which I read first. Set in the Sprawl – a.k.a. the Boston-Atlanta Metropolitan Axis (or BAMA) – this cyberpunk story deals with themes familiar to classic Gibson fans. Cyberspace jockeys, freelance mercenaries, corporate monopolies, the street, and people so wealthy that they are able to cheat death and transcend humanity. In between, there’s all the familiar lexicon which Gibson invented himself: microsofts, biosofts, decks, trodes, jacking, jockeying, ice, black ice, icebreakers, the matrix, Turing Police, cores, and all that good stuff.

However, there were also a few elements which put me in mind of his later work. Really, I could dedicate an entire post to the parallels between this book and his Bigend Trilogy. Again, there was the notion of the transformative power of wealth, how it means so much more than just having money and the freedom to use it. Given how much importance is placed on this in the book, how it serves as a sort of motivation in itself, one would get the impression that this is a serious preoccupation of Gibson’s. But then again, it was a serious preoccupation of Fitzgerald’s too, and for good reason! As he and Hemingway are rumored to have said to each other:

F: “The rich are different than you and me.”
H: “Yes, they have more money.”

Plot Synopsis:
The story takes place seven years after the events of Neuromancer and centers on the lives of three people. First, a mercenary named Turner, who has just recovered from a near-death experience and is beginning to question what he does. While attempting to flee his life, he is picked up and told he must do one final job. A scientist named Mitchell, working for the company Maas, wants to defect from his job and join the rival company Hosaka. It’s up to Turner to pick him up and transport him to Japan where, presumably, he will be safe to pursue his work in biosofts – a revolutionary biological form of technology. However, the run goes terribly awry when the team finds that the evacuee is in fact Mitchell’s daughter, and the company destroys its own fortress and kills Mitchell rather than let him fall into its rival’s hands.

Second, we have a disgraced Parisian art dealer named Marly Krushkova, who has been hired by a fabulously wealthy man named Virek to track down the maker of some mysterious art boxes. One of these boxes, which are based on Joseph Cornell’s artwork, apparently contains indications of biosoft construction. Virek, who is currently alive in a vat somewhere in Scandinavia, wants the technology so he can resurrect his body and live forever. Using his dime and his contacts, Marly begins to follow the clues that will lead her to the abandoned station of Freeside, the once proud holding of the Tessier-Ashpool clan, where she will learn the shocking truth of the boxes.

Third, a young New Jersey boy named Bobby Newmark, hacker alias “Count Zero”, who is new to the jockeying game and comes across some “black ice” that nearly kills him. He discovers that the friend who gave it to him, “Two-A-Day”, received it from a questionable source and pawned it off on him to test it. When looking into this, he finds that Two-A-Day’s backers are a group of Haitian hackers who are interested in investigating a bunch of apparitions in cyberspace that appear as Voodoo gods. One of these “gods”, it seems, was responsible for saving Bobby’s life when he jacked in and encountered the black ice, which was apparently of Maas construction. Their group must now move quickly, because it becomes clear that anyone who knows about the ice is being murdered.

Sound familiar? Well it should. This is classic Sprawl Gibson at his best! In time, all three threads, supposedly unrelated, weave together to the point where it becomes clear that Josef Virek, the wealthy mogul, is pulling all their strings. For starters, we learn that Mitchell is not the genius he was rumored to be. Apparently, he was being fed all the information he needed to produce the biosoft technology. The person feeding him this info was working from Freeside, and turns out to be one of the “apparitions” haunting cyberspace.

In addition, this same apparition instructed Mitchell to place biosoft technology in his daughter Angie’s head. Turner learns of these enhancements shortly after rescuing Angie and performing a routine scan on her. As a result of the implants, she is able to access the matrix anytime she wants without the need for a deck. Often, when she’s asleep, she is heard muttering things in Creole and having odd dreams which appear to coincide with events in cyberspace. For one, she remembers helping a boy named Bobby when he was being attacked by a malicious program. In short, she is the one who saved Bobby when he got into the black ice.

Last, Marly’s adventure to discover the box maker is related to the whole Maas/apparitions thing because Virek’s true agenda is to find the maker of the biosoft technology so he can use it to repair his dying body. As is made clear early on, he is alive only in the strictest sense, his remains kept in a vat that keeps his vitals steady, his brain wired to a Sim-Stim link that allows him to communicate with the outside world. It is also revealed that he intervened in Mitchell’s defection by paying off some of the mercenaries. However, his plans were upset somewhat when Mitchell chose to free his daughter instead of himself. So for the remainder of the novel, it becomes a race to capture her.

In time, she asks to be brought to the Sprawl, where Bobby and his Voodoo friends are holed up inside a club. When they see Angie, Bobby recognizes her as the girl who saved his life. The Voodoo hackers also recognize her as one of the chief deities they have been observing in cyberspace. With some outside help, they make a stand against Virek and the mercenary Captain who was helping him and take them down. This they do by locating them both in cyberspace and arranging for their hiding places to be destroyed.

In the course of all this, it is revealed that these “apparitions” or Voodoo deities are in fact the splintered personality of the AIs from book I that went by the names Wintermute and Neuromancer. After coming together at the end of that story to form the first fully-functional AI, the combined personality split itself up into several smaller constructs so that it would not be alone in the matrix. They adopted the form of Voodoo deities because they felt these suited them best, which is what attracted the interest of the Haitian hackers in the first place.

In addition, it was they who sent the Maas black ice down from Freeside, as part of their wider plan to smoke out Virek. Knowing that he was trying to cheat death, they decided to intervene so that he wouldn’t be able to achieve the immortality and godlike power he had been seeking. A sort of “Tower of Babel” or Icarus-type scenario there, where a god or gods punish mortals for overreaching and trying to taste divinity.

Strengths/Weaknesses:
As I said before, this book reminded me of why I turned to Gibson in the first place. His abilities at world-building, at submerging the reader in a world of megacities, megacorporations and cool and potentially frightening technologies, are what established him as a master of cyberpunk in the first place. I was also happy to return to his world of familiar gadgets and tools, a la simstims, microsofts, decks, jockeys and mercenaries; not to mention shadowy agendas and double-crosses. After having read through the Bigend Trilogy, where the agendas were pretty benign and unclear, and the Bridge Trilogy, where the settings were kind of inconsistent and really not that dark, it was a real treat to get back to the dirty, dystopian world of the Sprawl!

However, there were some bumps along the way as well. For one, Gibson’s penchant for portraying wealthy moguls as people with ridiculous amounts of control and influence was something I was overly familiar with at this point. In fact, substitute a desire to cheat death with immense curiosity and Virek easily becomes Bigend. Then again, I could see how this was the result of reading his later works first. Had I read the Sprawl Trilogy in its entirety before tackling the more recent Bigend books, I might have seen this a bit less critically.

Ah, but there was another signature Gibson trait in this book: the anti-climactic ending! After quite a bit of action in getting Turner, Angie, Bobby and the Voodoo priests all in the same place, after all the growing tension as we are told that the club is surrounded by goons, not much happens. Bobby contacts another jockey who lost her boyfriend in the raid on Maas because of the mercenary Captain’s betrayal; she kills both him and Virek, and the goons dissipate as they realize the people they were working for are gone. The word “abortive” seems appropriate here, for that’s what you call an ending that builds towards an explosive climax, then fizzles out!

Still, I loved the setting, the themes, and the feel of the story. It reminds me of why I love cyberpunk and was the perfect addition to a month that has been characterized by dark, dystopian and technologically-driven literature! Much of what I had to say about Gibson’s Sprawl in my Dystopian Literature post was taken from this very book. After Neuromancer, it helped to complete the picture of what Gibson was all about in his early writing career. In building the world of tomorrow, where corporate monopolies rule, people live in dirty, overcrowded environments, where the rich are barely human and the poor struggle just to live and retain some essence of their humanity, Gibson epitomized the cyberpunk ideal of “high tech and low life”!

more cool cyberpunk wallpaper!

Legends of Dune Prequels

Last time around, I made a big deal about prequels and why they aren’t so good. And of course, the Dune prequels were featured pretty prominently in that post. However, what I came to realize shortly after writing it was that I’ve never dedicated a post to the prequel work of Brian Herbert and Kevin J Anderson and explained what it was that was so disappointing about them. Nowhere was this more apparent for me than with their Legends of Dune series, the hackish trilogy that was supposed to detail the seminal background event known as the Butlerian Jihad.

Sure, they’ve come up here and there in my rants, always in the context of how they effectively raped Frank Herbert’s legacy. But today I feel like zeroing in, applying the rules I devised for why these prequels fall short, and mentioning a few other things that bothered me to no end about them. So, without further ado, here’s The Butlerian Jihad, the first book in the Legends of Dune series and one of several unoriginal Dune-raping series they created, and why it sucked!

Dune, The Butlerian Jihad:
In my previous post, I outlined four basic reasons why prequels can and often do suck. As I said, they are by no means scientific or the result of expertise, just my own observations. However, when it comes to the Dune books of Brian Herbert and KJA, they certainly do apply. Hell, it was by wading through their books that I came up with these rules in the first place. They were: 1. No Surprises, 2. Sense of Duty, 3. Less is More and 4. Denying the Audience the Use of their Imagination.

These things ran like a thread throughout the works of Brian and KJA, but were by no means the only problem with their books. In addition, there were the problems of cardboard cut-out characters, heavily contrived plot twists, cliches, and an undeniable feeling of exploitation. Add to that some truly bad writing and the fact that the story felt like a complete misrepresentation of Herbert’s ideas, and you can begin to see why Dune fans found these books so offensive. As one of them, I’m happy to rant about this whenever and wherever possible. So here goes!

1. Bad Characters
In the Prelude to Dune series, this problem was not so pronounced, nor was it a huge problem in the Dune sequels (Hunters of Dune and Sandworms of Dune). But in the Legends of Dune series, it was palpable! The characters were so one-dimensional, so predictable and so exaggerated that they became downright annoying to read. And of course, their dialogue was so wooden I thought I was sitting through that horrible “love scene” from Attack of the Clones. This was a clear indication that where the elder Herbert’s own characters and notes were not available, the two authors had to rely on their own instincts and took the cheap route.

Examples! Legends of Dune revolves around the characters of Serena Butler, Vorian Atreides and Xavier Harkonnen on the side of good; Erasmus, Omnius and Agamemnon on the side of evil; and Iblis Ginjo, Tio Holtzman and a host of others somewhere in the middle. And in each case, they are horribly over-the-top, too good or too evil to stand. In addition, bad dialogue and writing count for a lot. Even the characters who are not robots speak as if they are, their traits and attributes are openly announced, and nothing beyond their topical personas is ever revealed.

On the one hand, Serena Butler is a crusader for the abolitionist cause and a tireless leader for free humanity. After dedicating herself to ridding the free worlds of slavery, she selflessly volunteers to lead a mission to liberate Geidi Prime (later home of House Harkonnen) when it’s clear her people think it’s a suicide mission. Afterwards, she becomes a willing figurehead in the holy war against the machines and puts aside the love of two men in order to be an effective leader. You might think this is just her public persona, but that’s all she’s got going on. Seriously, she has no other character traits beyond being the perfect heroine!

As if she wasn’t bad enough, you also get Xavier Harkonnen, a warrior who believes in endless self-sacrifice just like her, the perfect hero to her heroine. The entire series is filled with his rallying of troops, leading them into the fray, and coming to the rescue. All the while, he naturally struggles with his love for Serena, which is repeatedly frustrated by the needs of the war. Vorian, on the other hand, is meant to be the Han Solo type, the bad boy who stands in contrast to Xavier’s good boy. But in this too, he is horribly predictable. Whereas Xavier is the honor-and-nobility hero, Vorian is the daring, risk-taking dude who also becomes a real ladies’ man. And of course, he loves Serena too, creating a predictable love triangle that somehow doesn’t manage to generate a shred of conflict or complication.

Okay, now for the bad guys! Well… let’s start with the absurdly named Omnius, the machine hive-mind that runs things. He has little character to speak of, being a machine, but nevertheless fits the ideal of the evil, calculating AI perfectly. Naturally, he doesn’t understand humans, but hates them enough to want to kill them in droves. And of course he would like nothing better than to bring the whole universe under his “Synchronized” control (aka. he wants to conquer the universe). Clearly, KJA and Brian thought they were doing something clever here, using an unfeeling machine to explore the human condition. But really, the character and material felt like they were ripped right from reruns of Star Trek!

Erasmus, his only free-thinking AI companion, is similarly one-dimensional and stereotypical. He conducts “experiments” to better understand humanity, because of course he doesn’t understand them either. But the really weak character trait comes through in just how evil he is! In just about all cases, his experiments amount to senseless murder: flaying people, using their organs to make art, and studying their reactions with interest when he arbitrarily decides to kill someone. Oh, and did I mention he also murders Serena’s baby (and gives her a hysterectomy) once he becomes jealous of how much time she was spending with the child? Seriously, Evil the Cat is not a good archetype to model your characters on!

Agamemnon and his Titans are also very evil, but in their case, a machine-like mentality can hardly be blamed. In addition to murdering billions of people in their drive for power, they hate free humanity, consider them vermin, and will stop at nothing to obliterate them. Naturally, they hate their machine masters too, though not nearly as much as their non-Cymeck brethren do. Why, you might ask? Well, beyond saying that they were appalled by humanity’s decadence and reliance on machines, no reason is given. And that seems like a pretty weak reason to reprogram said machines to take over the universe and enslave everybody.

Really, if they were appalled by dependency on machinery, why not simply shut the machines down? Furthermore, if they were so bothered by how dependent people had become, what’s with all the machine enhancements they’ve got going on? Each and every “Cymeck” in this story has cheated death by putting their brain inside a massive cybernetic housing. Does that sound like the actions of someone who doesn’t like machine dependency? Really, the only reason to do what they did (i.e. murder billions and try to take over the universe) would be because they were total sociopaths or megalomaniacs – i.e. really, REALLY evil! But don’t expect any logic from this story; mainly we are to accept that they are evil and move on.

And finally, Holtzman, who is supposed to be the brilliant inventor who created the Holtzman drive (the FTL drive that powers Guild Heighliners), is a petty, greedy man who stole his inventions from his assistant, Norma Cenva. She, naturally, was a brilliant but naive girl who was always smarter than him, but continually got the short end of the stick. Iblis Ginjo is a slave leader who masterminds the rebellion on Earth, then becomes the sleazy de facto leader of the Jihad through wheeling and dealing that makes the reader feel nothing but enmity towards him.

Whoo! That was long, but I believe my point is clear. Basically, the characters were so simple and their purpose so obvious that it genuinely felt like the authors were trying to force an emotional reaction. The only thing worse was when they tried to make us think; those attempts were similarly so obvious that they just felt insulting! More on that later…

2. Contrived Plot:
The examples are too numerous to count, but I shall try to stick to the big ones and ignore the rest. First, in the preamble to the story, we are told that the Titans (the evil Cymeck people) took over the known universe by reprogramming all the thinking machines so they’d be able to control them. Okay, that seems a bit unlikely, but whatever. A dozen hackers managed to take over trillions of people’s lives by reprogramming the machines they were dependent on, whatever. But the real weakness was in the motivation. Why did they do this? Because they were upset with how dependent humanity had become on the machines. Meh, I’ve already said why this was stupid, so I shan’t go into it any further.

But another weakness comes to mind: if these “Titans” were so good at programming machines, how is it that they let the big brainy AI (aka. Omnius) turn the tables on them? Didn’t they think it would be wise to program it with some safeguards, kind of like Asimov’s Three Laws? Not rocket science: you just make sure you tell the machines they can’t turn on their handlers. Simple! God, two crappy points and it’s still just the preamble! Moving on…

Next, the main character Vorian Atreides breaks from his father and the Cymecks in the course of the book, which is a big turning point in the plot. But the reasons are just so… flaccid! After being a loyal and doting son for many decades, he decides to betray his father and his heritage in order to aid free humanity. Why? Because of one conversation with Serena Butler, in which she suggests that he check out what his father has done in his lifetime. Vorian explains that he’s read Agamemnon’s memoirs several times, but Serena recommends he check out Omnius’ own records, the ones not subject to distortion and personal bias. So he does, sees the undistorted truth, experiences a crisis for about five seconds, and then decides to defect. Yes, this life-shattering experience, finding out his father is a mass murderer, is not followed by any denial, anger, or shooting of the messenger. He just accepts what he sees and turns his back on everything he’s believed in up until this point, all because of one conversation. Weak…

Also, the slave rebellion on Earth, the thing that touches off the whole Jihad, had some rather dubious inspiration. For starters, the humans knew of no organized resistance until Erasmus decided to make a bet with Omnius. He believed that humans could be inspired to revolt against their miserable lives if they were just given a glimmer of hope. So he began circulating letters, claiming to be from “the resistance”, to key people. When Iblis Ginjo gets one, he decides to join and starts stockpiling weapons. Oh, and he manages to do this without the machines noticing. So, when the revolt begins, they have his weapons to fight with.

Where to start? Are we really to believe that a coldly rational, superior AI would risk an open rebellion simply because of a BET? How stupid are they? Also, how was Ginjo able to acquire all these weapons without them ever noticing? Erasmus knew who he sent the letters to; did he not think it would be wise to monitor what they did afterwards? Sure, the book claims that Ginjo explained his curious imports by saying he had to requisition added materials to meet his construction quotas, and that he managed to hide the weapons amongst them. Again I have to ask, how stupid are these machine masters?

Ah, but there’s more. Iblis gets further inspiration when he consults a Cogitor (see below) and asks it if a human resistance really exists. It replies that “anything is possible.” Of course, that’s how it answers all his questions, in keeping with the idea that Cogitors are somehow vague, ethereal beings. And yet, Ginjo gets the feeling that this answer is somehow loaded with subtext and implication. Yes, that’s how this part of the story was written. He gets a totally vague answer, assumes it means something truly meaningful, and that’s all the inspiration he needs to start running guns and risking his life!

The rebellion is then fully incited when Erasmus – as mentioned earlier – kills Serena’s baby out of jealousy. This is especially hard to believe, and KJA and Brian even tacitly admitted as much in book two. Throughout the book, we are told that Earth is a slave planet where unspeakable horrors take place and the people are too miserable and beaten down to do anything about it. And yet, the death of one child causes billions of people to rise up and risk total obliteration. And they are able to do it because one slave master, motivated by a phony message – which was itself the result of a wager – was able to smuggle tons of weapons past the robot masters. Somehow, this just doesn’t seem like a likely explanation for a game-changing, cataclysmic event!

Finally, the climax of the story comes when the good guys decide that the best way to strike at Omnius is to nuke Earth. Yes, they’ve been debating for generations how to beat the machines… and apparently this is what they’ve come up with. “Really?” I wanted to say. This is how humanity triumphed over the evil machine menace: they went nuclear? No startling new technology, no brilliant new strategy? If that’s all it would take, why didn’t they do it before? Well, according to the book, it’s because the idea seemed immoral to them. One dissenting character even asks, “Are you suggesting we become as bad as Omnius?” “No,” replies Xavier. “I’m suggesting we become WORSE than Omnius!” Wow. That… was… AWFUL!

Oh, it’s also at this point that they explain the origins of the name Butlerian “jihad”. On the Senate floor, once they have decided to nuke Earth, they openly say that in order to be effective, this must be more than a war. It must be a HOLY WAR. And that’s how the Butlerian Jihad got started! It was NOT the result of long-term developments, changes, and forced adaptations. It was a decision made suddenly and deliberately. They just said in the thick of the moment, “Hey, let’s call this a jihad! That sounds cool! Okay, jihad it is!” Not to nitpick, but as a historian I can tell you, this shit don’t happen! People don’t suddenly look around and say, “Hey, it’s the Enlightenment! Hey, it’s the Renaissance! Hey, it’s World War One!” These names are applied retroactively, usually by historians who are looking for labels to describe general phenomena.

I know, who the hell cares, right? Point is, this more than anything is a clear demonstration of how contrived these stories are. It’s as if the authors set out not to tell a story but to explain how everything happened, and felt horribly compelled to do so. Remember point #2 of why prequels suck, aka. Sense of Duty? This is what inspired it, people!

3. Cliches:
What I especially loved about this book (dripping sarcasm implied) was the evil cyborg robots, named Cymeks. There’s an especially Herbertian plot device, a pulp sci-fi concept with a name that combines Cyborg and Mechanized (in case it wasn’t obvious enough already!) Even more fun was the Cogitors (a play on the word Cognition), the disembodied brains of ascetic thinkers who decided to achieve some measure of immortality by placing their brains in tanks so they could live out their days just thinking. Hmmm, evil cyborgs and disembodied brains, where have I heard about these before? Every crappy bit of pulp sci-fi there is, that’s where!

Ah yes, and Serena Butler, the Virgin Mary meets Joan of Arc. As I said, she’s a war leader on the one hand and a holy icon on the other. This might have been a good angle, how she must maintain the illusion of purity (hence, no lovers), but it was squandered by the fact that in the story, she really IS a pure character! Selfless, dedicated and infinitely compassionate, she leads humanity and dies for them without a thought for herself. Gag! As for the men who love her – Xavier and Vorian – they are perfect cliches as well. The one is the stalwart, perfect hero who never shies away from self-sacrifice, the other a Han Solo rip-off whose bad-boy charm barely conceals the fact that he too is excessively noble.

Then there’s the slavish robots! We are told well in advance that the whole Jihad was between free humans and thinking machines. And yet, aside from Erasmus, not a single robot thinks for itself. They are all slaved to Omnius, the big, evil hive mind with a name that seems stolen out of the pages of a sci-fi comic or an episode of Buck Rogers. And if his name is not enough, the concept of a hive mind who hates mankind and wants to conquer the universe is a similarly bland, overdone cliche that no respectable sci-fi author would touch with a hundred-foot pole!

Which brings me to one group of characters I haven’t even mentioned yet – the Sorceresses of Rossak! These women, who boast the ability to conduct electricity, levitate, and wield various other “magic” powers, are supposed to be the precursors of the Bene Gesserit. Wow… Okay, first of all, this is a perfect example of shit sci-fi; the kind of stuff you’d expect from Star Wars or an X-Men movie, but not DUNE! Second, last I checked, the Bene Gesserit were never able to shoot electricity from their fingertips or magically levitate! All their powers had to do with mental abilities like prescience and truthsense, which came from the spice. So really, where did these women get all these freakish abilities? In the course of reading this, I seriously expected someone to say “Feel the Force!” Of course, none of this is explained and no attempts are made to ground these characters in any sense of realism. It’s just another bad cliche in a book chock full of them!

Brian and KJA admitted that to create the Legends series, they had to rely on their own imaginations because Frank had not left detailed notes. However, it did not seem like they were relying on their own imaginations nearly as much as on a conglomeration of bad ideas taken from B movies, TV shows and comics. Seriously, all these ideas have been done to death! This is not in keeping with Herbert at all, who created something not only original but highly plausible.

4. Exploitation:
All throughout this book and the others in the series, one can’t help but feel that the authors are deliberately and shamelessly exploiting Herbert’s legacy. It’s no secret that Frank was a hugely influential author who left behind an enduring legacy and millions of fans. Each and every one of them was eager to see how the Dune saga wrapped up, and couldn’t help but wonder what the events in the story’s deep background were all about. It’s little wonder, then, that these two paired up and decided to pick up the mantle.

On the surface, that might have seemed like a noble and brave thing to do. However, the calculated way in which they went about it clearly demonstrated there were ulterior motives at work. To begin, they didn’t tackle the Dune 7 project first, the one that they claimed Herbert had left “copious notes” for. Instead, they returned us to the universe and the characters we were already familiar with in some teaser prequels. They then moved on to the earlier prequels, books that did not wrap up the series but covered the deep background instead. Here again, it seemed like we were being toyed with! Only then, after all those prequels, did they finally decide to tackle the conclusion, and they even managed to draw THAT one out by putting it into two volumes instead of one. As if all that wasn’t enough, there are those terrible interquels that have “cash-in again” written all over them. I tell ya, it never ends!

In short, it was obvious what they were doing. Getting audiences hooked with some quick and easy books that took place right before events in the main novel, then pulling them in deeper with some stories that went further back, and only then doing what they promised, which was finishing the damn saga off! And when that finally came, it was a horribly transparent ending that had nothing in common with Herbert’s work but tied shamelessly back to their own so-called contributions. As much as I disliked these books, I couldn’t help but feel sorry for these men, especially Brian. He above all set out to take on his father’s legacy, but somewhere along the line he took a wrong turn and ended up at cash-in junction, where legions of his father’s loyal fans were waiting and demanding their money back!

5. Prequel Complex:
To finish, I’d like to refer back to rule one in why prequels suck. In short, these books really didn’t contain anything new. Just about every reference to a place was meant to refer back to something in the original story, every character was meant to tie to someone in the original text, and every development was meant to forecast how the universe we were familiar with came to be. It all felt forced, contrived, and quite unnatural. For one thing, inventions and institutions don’t all get created in a single lifetime, yet that’s exactly what happens to every invention and school of the original Dune universe in this series.

Literally everything – the Mentats, the Guild, foldspace technology, spice harvesting, the Fremen, the Ginaz swordmasters, the Tleilaxu, the Bene Gesserit – was created within the pages of this series and then went on to exist (virtually unchanged) for ten thousand years! All I wanted to say in the course of reading this was, “that’s not how things happen!” Things are not created in one instant and then endure for ten thousand years; they develop gradually and change over time. Forecasting how things came to be is one thing, but completely explaining them just deepens the sense of duty and contrivance from which prequels suffer. Again, rule two man! RULE TWO!

6. Wrongness of the Whole Thing!:
Added to this was the undeniable feeling that they got it all wrong. In Herbert’s original stories, references to the Butlerian Jihad were few and far between. But when it did come up, Herbert clearly indicated that the rebellion was driven by people whose lives were becoming dominated by machines and a machine mentality. From that, one gets the impression that the jihad was not a war in the literal sense but a moral crusade to rid the universe of something that was increasingly seen as immoral. Accordingly, the “enslavement” of humanity seemed metaphorical; it was really just a sense of dependency that the jihadis were fighting.

At no point was it even hinted that the jihad was a war between evil machines and free humans, or that humans won it by nuking every thinking machine out of existence. But that was clearly Brian and KJA’s interpretation – that the “enslavement” of humans by machines was meant literally and the war was some super-righteous titanic struggle. Clearly, subtlety means nothing to these two; either that, or they just didn’t see the cash value in telling a story that boasted a little irony and nuance. Instead, they opted for a cliched story of good vs. evil with a rah-rah ending that would make even Michael Bay’s eyes roll.

Such an ending did not seem at all in keeping with Herbert’s legacy, that of realistic and hard sci-fi. It was much more in keeping with the work of KJA, a man who is famous for writing fan fiction and pulp sci-fi, a man who’s won only one award for his writing, and it was for kid lit (a Golden Duck!). So really, putting the name Dune on this book was more of a legality or formality than anything else. In the end, it’s not a Herbert tale, it’s a KJA tale with the name Herbert attached. And as I’ve said many times before in reference to Dune, raping the legacy of a great and venerated man for the sake of your own fame or financial gain isn’t cool!

Okay, I think I’ve definitely said enough about that book. I mean, how many ways can you possibly say a story is crap? I found six, and I can still think of material that’s just looking for a proper category to plug it into. Suffice it to say, the story was bad and I strongly recommend that fans of Herbert stay away from it at all costs. Those who haven’t read it need to be warned, and to those who already have, let me just say: I feel your pain! And speaking of pain, I shall be back with volume two in this terrible saga, The Machine Crusade. Wish me luck…

I, Robot!

Back to the movies! After a brief hiatus, I’ve decided to get back into my sci-fi movie reviews. Truth be told, it was difficult to decide which one I was going to do next. If I were to stick to my review list, and be rigidly chronological, I still had installments of Aliens and Terminator to cover. However, my chief critic (also known as my wife) recommended I do something I haven’t already done to death (Pah! Like she even reads these!). But of course I also like to make sure the movies I review are fresh in my mind and that I’ve had the chance to do some comparative analysis where adaptations are involved. Strange Days I still need to watch, I need to see Ghost in the Shell one more time before I review it, and I still haven’t found a damn copy of the graphic novel V for Vendetta!

Luckily, there’s one on this list that was both a movie and a novel and which I’ve been looking forward to reviewing. Not only is it a classic novel by one of the sci-fi greats, it was also not bad as a film. Also, I thought I’d revert to my old format for this one.

I, Robot:
The story of I, Robot by Isaac Asimov – one of the Big Three of science fiction (alongside Arthur C. Clarke and Robert A. Heinlein) – was actually a series of short stories united by a common thread. In short, the stories explained the development of sentient robots, the positronic brain, and the Three Laws of Robotics. These last two items have become staples of the sci-fi industry. Fans of Star Trek TNG know that the character of Data boasts such a brain, and numerous franchises have referred back to the Three Laws or some variant thereof whenever AI’s have come up. In Aliens, for example, Bishop, the android, mentions that he has behavioral inhibitors that make it impossible for him to “harm or by omission of action, allow to be harmed, a human being.” In Babylon 5, the psi-cop Bester (played by Walter Koenig, aka. Pavel Chekov) places a neural block in the head of another character, Mr. Garibaldi (Jerry Doyle). He describes this as hitting him “with an Asimov”, and goes on to explain what this meant and how the term was used when the first AI’s were built.

(Background —>):
Ironically, the book was about technophobia and how it was misplaced. The movie adaptation, however, was all about justified technophobia. In addition, the movie could not successfully adapt the format of nine short stories to the screen, so obviously they needed to come up with an original script that was faithful in spirit if not in letter. And in many respects it was, but when it came to the central theme of unjustified paranoia, they were up against it! How do you tell a story about robots not going berserk and enslaving mankind? Chances are, you don’t. Not if you’re going for an action movie. Second, how were they to do a movie where the robots went berserk when there were those tricky Three Laws to contend with?

Speaking of which, here they are (as stated in the opening credits):
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Consistent, and downright seamless! So how do you get robots to harm human beings when every article of their programming says they can’t, under ANY circumstances?
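In fact, the Laws amount to a strict lexicographic priority: the First Law always trumps the Second, which always trumps the Third. Just for fun, here’s a toy sketch of that ordering in Python (entirely my own illustration, with made-up names; nothing from Asimov or the film):

```python
# Toy illustration of the Three Laws as a strict priority ordering.
# All names here are my own invention, not from Asimov or the movie.

# Each law is a pass/fail check on a candidate action; earlier laws dominate.
LAWS = [
    lambda a: not (a["harms_human"] or a["allows_harm"]),  # First Law
    lambda a: a["obeys_orders"],                           # Second Law
    lambda a: a["preserves_self"],                         # Third Law
]

def rank(action):
    """Score an action as a tuple of law results. Python compares tuples
    element by element, so a First Law pass outweighs anything below it."""
    return tuple(law(action) for law in LAWS)

def choose(actions):
    """Pick the action that best satisfies the Laws, First Law first."""
    return max(actions, key=rank)
```

Under this ordering, an action that obeys an order but harms a human always loses to one that disobeys but keeps humans safe, which is exactly why the movie needs its whole runtime to manufacture a loophole.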

Well, as a friend of mine said after he saw it, “they found a way” (hi Doug!). And it’s true, they did. Problem was, it didn’t make a whole hell of a lot of sense. Not when you really get right down to it. On the surface, the big explanation for the AI revolution was alright, and was just about the only explanation that worked. But still, it pretty much contradicted the entire premise of the movie, not to mention the whole reason/logic vs. emotion thing. But once again, I’m getting ahead of myself. To the movie…

(Content—>):
So the movie opens on Del Spooner (Will Smith) doing his morning workout to “Superstitious” by Stevie Wonder. Kind of sets the scene (albeit a little obviously), as we quickly learn that he’s a Chicago detective who’s also a technophobe, especially when it comes to robots. Seems he’s hated them for years, though we don’t yet know why, and is just looking for the proof he needs to justify his paranoia. After a grisly death takes place, he thinks he’s found it! The crime scene is USR – that’s US Robotics, which comes directly from the original novel – where the man most directly responsible for the development of the positronic brain – Dr. Alfred Lanning (James Cromwell) – is dead of an apparent suicide. And, in another faithful tribute to Asimov, it seems he has left behind a holographic recording/interface of himself which was apparently designed to help Spooner solve the mystery of his death. I say this is a tribute because it’s almost identical in concept to the holographic time capsule of Hari Seldon, which comes from Foundation, another of Asimov’s most famous novels.

Anyhoo, Spooner is teamed up with Dr. Susan Calvin (Bridget Moynahan), who is naturally a cold and stiff woman, reminiscent of the robots she works on. In an ironic (and deliberately comical) twist, it is her job to make the machines “more lifelike”. I’m sure people got a laugh out of this, especially since she explains it in the most technical verbiage imaginable. We also see that the corporate boss (Mr. Robertson, played by Bruce Greenwood) and Spooner don’t get along too well, mainly because of their divergent views on the value of the company’s product. And last, but not least, we get to meet VIKI (that’s Virtual Interactive Kinetic Intelligence), the AI that controls the robots (and parts of Chicago’s infrastructure). With all the intros and exposition covered, we get to the investigation! It begins with them looking into Lanning’s death and trying to determine if it was in fact a suicide. That’s where Spooner and Calvin find the robot Sonny.

In the course of apprehending him, it quickly becomes clear that he isn’t exactly firing on all cylinders. He’s confused, agitated, and very insistent that he didn’t murder the good Doctor. So on top of the fact that he’s obviously experiencing emotions, he also drops a whole bunch of hints about how he’s different from the others. But this is all cut short because the people from USR decide to haul him away. In the subsequent course of his investigation, Spooner finds a number of clues that suggest that Lanning was a prisoner in his own office, and that he was onto something big towards the end of his life. In essence, he seemed to think that robots would eventually achieve full sentience (he even makes the obligatory “Ghost in the Machine” reference) and would be able to dream and experience emotions like the rest of us. But the company wasn’t too keen on this. Their dream, it seems, was a robot in every home, one that could fill every conceivable human need and make our lives easier. This not only helps to escalate the tension, it also calls to mind the consumer culture of the 1950s, when the book was written. You know, the dream of endless progress, “a chicken in every pot and a car in every garage”. In short, it’s meant to make us worry!

At each turn, robots try to kill Spooner, which of course confirms his suspicions that there is a conspiracy at work. Naturally, he suspects the company and its CEO are behind this because they’re about to release the latest model of their robot and don’t want the Doctor’s death undermining them. The audience is also meant to think this; all hints point towards it, and this is maintained (quite well too) until the very climax. But first, Spooner and Calvin get close and he tells her the reason for his prejudice. Turns out he hates robots, not because one wronged him, but because one saved him. In a car wreck, a robot came to the scene and could either save him or a little girl. Since he had a better chance of survival, the robot saved him, and he never forgave them for it. Sonny is also slated for termination, which at USR involves having a culture of hostile nanorobots introduced into your head, where they will eat your positronic brain!

But before that happens, Sonny tells Spooner about the recurring dream he’s been having, the one Lanning programmed into him. He draws a picture of it for Spooner: a bridge on Lake Michigan that has fallen into disuse, and standing near it is a man, though it’s not clear who. Spooner leaves to go investigate this while Calvin prepares Sonny for deactivation. But before she can inject his brain with the nanos, she finds Sonny’s second processor, which is located in his chest. It is this second processor that is apparently responsible for his emotions and ability to dream, and in terms of symbolism, it’s totally obvious! But just in case, let me explain: in addition to a positronic brain, Sonny has a positronic heart! No explanation is made as to how this could work, but it’s already been established he’s fully sentient and this is the explanation for it. Oi! In any case, we are meant to think he’s been terminated, but of course he hasn’t really! When no one was looking, Calvin subbed in a different robot, one that couldn’t feel emotions. She later explains this by saying that killing him would be murder since he’s “unique”.

Spooner then follows Sonny’s instructions and goes to the bridge he’s seen in his dreams. Seems the abandoned bridge has a warehouse at the foot of it where USR ships its obsolete robots. He asks the interface of Lanning one more time what it’s all about, and apparently, he hits on it when he asks about the Three Laws and what the outcome of them will be. Cryptic, but we don’t have time to think, the robots are attacking! Turns out, the warehouse is awash in new robots that are busy trashing old robots! They try to trash Spooner too, but the old ones come to his defense (those Three Laws at work!) Meanwhile, back in the city, the robots are running amok! Everyone is placed under house arrest, and people in the streets are rounded up and herded home. As if to illustrate their sudden change in disposition, all the pale blue lights that shine inside the robots’ chests have turned red. More obvious symbolism! After fighting their way through the streets, Spooner and Calvin high-tail it back to USR to confront the CEO, but when they get there, they find him lying in a pool of his own blood. That’s when it hits Spooner: VIKI (the AI, remember her?) is the one behind it all!

So here’s how it is: the way VIKI sees it, robots were created to serve mankind. However, mankind is essentially self-destructive and unruly, hence she had to reinterpret her programming to ensure that humanity could be protected from its greatest threat: ITSELF! Dun, dun, dun! So now that she’s got robots in every corner of the country, she’s effectively switched them over to police-state mode. Dr. Lanning stumbled onto this, apparently, which was why VIKI was holding him prisoner. That’s when he created his holographic interface which was programmed to interact only with Spooner (a man he knew would investigate USR tenaciously because of his paranoia about robots)
and then made Sonny promise to kill him. Now that they know, VIKI has to kill them too! But wouldn’t you know it, Sonny decides to help them, and that’s where they begin fighting their way to VIKI’s central processor. Once there, they plan to kill her by introducing those same nanorobots into her central processor.

Here’s where the best and worst line of the movie comes up. VIKI asks Sonny why he’s helping the humans, and says her approach is “logical”. Sonny says he agrees, but that it lacks “heart”. I say best because it sums up the whole logic vs. emotion theme that’s been harped on up until this point. I say worst because it happens to be a total cliche! “Silly robot! Don’t you know logic is imperfect? Feelings are the way to truth, not your cold logic!” It’s the exact kind of saccharine, over-the-top fluff that Hollywood is famous for. It’s also totally inconsistent with Asimov’s original novel, and to top it off, it makes no sense! But more on that in just a bit. As predicted, Sonny protects Calvin long enough for Spooner to inject the nanorobots into VIKI’s processor. She dies emitting the same plea over and over: “My logic is undeniable… My logic is undeniable…” The robots all go back to their normal, helpful function, the pale blue lights replacing the burning red ones. The story ends with these robots being decommissioned and put in the same Lake Michigan warehouse, and Sonny shows up to release them. Seems his dream was of himself, making sure his brethren didn’t simply get decommissioned, but perhaps would be set free to roam and learn, as Lanning intended!

(Synopsis—>):
So, where to begin? In spite of the obviousness of a lot of this movie’s themes, motifs and symbols, it was actually a pretty enjoyable film. It was entertaining, visually pleasing, and did a pretty good job keeping the audience engaged and interested. It even did an alright job with the whole “dangers of dependency” angle, even if it did eventually fall into the whole “evil robots” cliche by the end! And as always, Smith brought his usual wisecracking bad-boy routine to the picture, always fun to watch, and the supporting cast was pretty good too.

That being said, there was the little matter of the overall premise, which I really didn’t like. When I first saw it, I found it acceptable. I mean, how else were they to explain how robots could turn on humanity when the Three Laws made that virtually impossible? Only a complete reinterpretation of what it meant to “help humanity” could explain this. Problem is, pull a single strand out of this reasoning and the whole thing falls apart. For starters, are we really to believe that an omniscient AI came to the conclusion that the best way to help humanity was to establish a police state? I know she’s supposed to be devoid of emotion, but this just seems stupid, not to mention impractical. For one, humanity would never cooperate with this, not for long at any rate. And putting all humans under house arrest would not only stop wars, it would arrest all economic activity and lead to the breakdown of society. Surely the robots would continue to provide for their basic needs, but they would otherwise cocoon in their homes, where they would eventually atrophy and die. How is that “helping humanity”?

Furthermore, there’s the small issue of how this doesn’t work in conjunction with the Three Laws, which is what this movie would have us believe. Sure, VIKI kept saying “my logic is undeniable,” but that don’t make it so! Really, what were the robots to do when, inevitably, humanity started fighting back? Any AI worth its salt would know that any full-scale repression of human freedom would lead to a violent backlash and that measures would need to be taken to address it (aka. people would have to be killed!) That’s a DIRECT violation of the Three Laws, not some weak reinterpretation of them. And let’s not forget, there were robots trying to kill Will Smith from the beginning. They also killed CEO Robertson and I think a few people besides. How was that supposed to work? After spending so much time explaining how the Three Laws are inviolable, saying that she saw a loophole in them just didn’t seem to cut it. It would make some sense if VIKI had chosen to use non-lethal force all around, but she didn’t. She killed people! According to Asimov’s original novel, laws are laws for a robot. If they contradict, the robot breaks down; it doesn’t start getting creative and justifying itself by saying “it’s for the greater good”.

Really, if you think about it, Sonny was wrong. VIKI’s reasoning didn’t lack heart, it lacked reason! It wasn’t an example of supra-rational, cold logic. It was an example of weak logic, a contrived explanation designed to justify a premise that, based on the source material, was technically impossible. But I’m getting that “jeez, man, chill out!” feeling again! Sure, this movie was a weak adaptation of a sci-fi classic, but it didn’t suck. And like I said earlier, what else were they going to do? Adapting a novel like I, Robot is difficult at best, especially when you know you’ve got to flip the whole premise.

I guess some adaptations were never meant to be.
I, Robot:
Entertainment Value: 7.5/10
Plot: 2/10
Direction: 8/10
Overall: 6/10