AI Graph

Inspired by what I learned from my little romp through the world of AI, I’ve come up with a graph that depicts the general rules I observed. Basically, there are two guiding principles to the depiction of AI’s in science fiction. On the one hand, there’s their capacity for emotion; on the other, there’s their level of benevolence or malevolence towards humanity. As I noted in the last post, the two are very much interlinked and pretty much determine what purpose they serve in the larger story.

So… if one were to plot their regard for humanity on the x-axis and their emotions on the y-axis, you’d get a matrix that looks pretty much like this:
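(For the technically inclined: if you ever wanted to recreate a matrix like this yourself, a quick Python/matplotlib sketch along these lines would do the trick. The coordinates below are just my eyeballed guesses for illustration, not the exact placements from my graph.)

```python
# A rough sketch of the AI matrix. The coordinates are eyeballed guesses
# for illustration only: x is regard for humanity (malevolent to benevolent),
# y is capacity for emotion (stoic to emotional).
import matplotlib.pyplot as plt

examples = {
    "HAL 9000":    (-0.6, -0.2),
    "Daleks":      (-0.9, -0.9),
    "Agent Smith": (-0.8,  0.3),
    "Erasmus":     (-0.7,  0.5),
    "VIKI":        (-0.4, -0.8),
    "Sonny":       ( 0.6,  0.6),
    "David":       ( 0.8,  0.9),
    "Roy Batty":   ( 0.1,  0.9),
    "Bishop":      ( 0.7, -0.5),
}

fig, ax = plt.subplots(figsize=(7, 7))
for name, (x, y) in examples.items():
    ax.scatter(x, y, color="black")
    ax.annotate(name, (x, y), textcoords="offset points", xytext=(6, 4))

# Quadrant dividers through the origin.
ax.axhline(0, color="gray", linewidth=0.5)
ax.axvline(0, color="gray", linewidth=0.5)
ax.set_xlim(-1, 1)
ax.set_ylim(-1, 1)
ax.set_xlabel("Malevolent  <--  regard for humanity  -->  Benevolent")
ax.set_ylabel("Stoic  <--  capacity for emotion  -->  Emotional")
ax.set_title("AI Graph")
plt.show()
```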

As usual, not a complete mock-up, just the examples that I could think of. I made sure to include the ones that didn’t make it into my previous posts (like HAL; how could I forget him?!). And even though I had no real respect for them as characters, I also included the evil robots Erasmus and Omnius from the Dune prequels.

P.S. Notice how the examples are pretty much evenly distributed? Unlike the Alien Graph, where examples were concentrated in two quadrants (evil and advanced, or good and advanced), here we have robots that run the gamut from emotional to stoic and evil to good in a nearly uniform pattern. Interesting…

Robots, Androids and AI’s (cont’d)

And we’re back with more examples of thinking machines and artificial intelligences!

Daleks:
The evil-machine menace from Doctor Who. Granted, they are not technically robots; they’re more like cyborgs that have been purged of all feeling and emotion. But given their cold, unfeeling, murderous intent, I feel like they still make the cut. Originally from the planet Skaro, where they were created by the scientist Davros for use in a war that spanned a thousand years, they are the chief antagonists of the show’s main character, the Doctor.

The result of genetic engineering, cybernetic enhancements, and emotional purging, they are a race of powerful creatures bent on universal conquest and domination. Utterly unfeeling, without remorse, pity, or compassion, they continue to follow their basic programming (to exterminate all non-Dalek life) without question. Their catchphrase is “Exterminate!” And they follow that one pretty faithfully.

David:
From the movie A.I., this saccharine-sweet character (played faithfully by Haley Joel Osment) reminds us that Spielberg is sometimes capable of making movies that suck! According to the movie’s backstory, this “Mecha” (i.e. android) is an advanced prototype designed to replace real children who died as a result of incurable disease or other causes. This is quite common in the future, it seems, where global warming, flooded coastlines, and massive droughts have led to a declining population.

In this case, David is an advanced prototype that is being tested on a family whose son is suffering from a terminal illness. Over time, he develops feelings for the family and they for him. Unfortunately, things are complicated when their son recovers and sibling rivalry ensues. Naturally, the family goes with the flesh-and-blood son and plans to take David back to the factory to be melted down. However, the mother has a last-minute change of heart and sets him loose in the woods, which proves to be the beginning of quite an adventure for the little android boy!

Like I said, the story is cloyingly sweet and has an absurd ending, but there is a basic point in there somewhere. Inspired largely by The Adventures of Pinocchio, the story examines the line that separates the real from the artificial, and how under the right circumstances, one can become indistinguishable from the other. Sounds kinda weak, but it’s kinda scary too. If androids were able to mimic humans in terms of appearance and emotion, would we really be able to tell the difference anymore? And if that were true, what would that say about us?

Roy Batty:
A prime example of artificial intelligence, and one of the best performances in science fiction – hell! – cinematic history! Played masterfully by actor Rutger Hauer, Roy Batty is the quintessential example of an artificial lifeform looking for answers, meaning, and a chance to live free – simple stuff that we humans take for granted! A Nexus-6, or “replicant”, Roy and his ilk were designed to be “more human than human”, but only to serve the needs of their masters.

To break the plot of Blade Runner down succinctly, Roy and a host of other escapees have left the colony where they were “employed” to come to Earth. Like all replicants, they have a four-year lifespan, and theirs is rapidly coming to an end. So close to death, they want to break into the headquarters of the Tyrell Corporation in order to find someone who can solve their little mortality problem. Meanwhile, Rick Deckard (the movie’s main character, played by Harrison Ford) is tasked with finding and “retiring” them, since the law states that no replicant is allowed to set foot on Earth.

In time, Roy meets Tyrell himself, the company’s founder, and poses his problem. A touching reunion ensues between “father and son”, in which Tyrell tells Roy that nothing can be done and that he should revel in what time he has left. Having lost his companions at this point and learning that he is going to die, Roy kills Tyrell and returns to his hideout. There, he finds Deckard and the two fight it out. Roy nearly kills him, but changes his mind before delivering the coup de grace.

Realizing that he has only moments left, he chooses instead to share his revelations and laments about life and death with the wounded Deckard, and then quietly dies amidst the rain while cradling a dove in his arms. Deckard concludes that Roy was incapable of taking a life when he was so close to death. Like all humans, he realized just how precious life was when he was on the verge of losing his. Deckard is moved to tears and promptly announces his retirement from blade running.

Powerful! And a beautiful idea too. Because really, if we were to create machines that were “more human than human”, would it not stand to reason that they would want the same things we all do? Not only to live and be free, but to be able to answer the fundamental questions that permeate our existence? Like, where do I come from, why am I here, and what will become of me when I die? Little wonder, then, that this movie is an enduring cult classic and Roy Batty a celebrated character.

Smith:
Ah yes, the monotone sentient program that made AI’s scary again. Yes, it would seem that while some people like to portray their artificial intelligences as innocent, clueless, doe-eyed angels, the Wachowski Brothers prefer their AI’s to be creepy and evil. However, that doesn’t mean Smith wasn’t fun to watch and even inspired as a character. Hell, that monotone voice, that stark face, combined with his superhuman strength and speed… He couldn’t fail to inspire fear.

In the first movie, he was the perfect expression of machine intelligence and misanthropic sensibilities. He summed these up quite well when the Agents took Morpheus (Laurence Fishburne) into custody and were trying to break his mind. “Human beings are a disease. You are a cancer of this planet… and we are the cuuuuure.” He also wasn’t too happy with our particular odor. I believe the words he used to describe it were “I can taste your stink, and every time I do I fear that I have been… infected by it. It’s disgusting!”

However, after being destroyed by Neo towards the end of movie one, Smith changed considerably. In the Matrix, all programs that are destroyed or deleted return to the Source; only Smith chose not to. Apparently, his little tête-à-tête with Neo imprinted something uniquely human on him: the concept of choice! As a result, Smith was much like Arny and Bishop in that he too attained some degree of humanity between movies one and two, but not in a good way!

Thereafter, he became a free agent who had lost his old purpose, but now lived in a world where anything was possible. Bit of an existential, “death of God” kind of commentary there, I think! Another thing he picked up was the ability to copy himself onto other programs or anyone else still wired into the Matrix, much like a self-replicating computer virus. Hmmm, who’s the virus now, Smith, huh?

VIKI/Sonny:
Here again I have paired two AI’s that come from the same source, though in this case it’s a single movie and not a franchise. Those who read my review of I, Robot know that I don’t exactly hold it in very high esteem. However, that doesn’t mean its portrayal of AI’s misfired, just the overall plot.

In the movie adaptation of I, Robot, we are presented with a world similar to the one Asimov described in his classic book. Robots with positronic brains have been developed; they possess abilities far in advance of the average human’s, but do not possess emotions or intuition. This, according to their makers, is what makes them superior. Or so they thought…

In time, the company’s big AI, named VIKI (Virtual Interactive Kinetic Intelligence), deduces with her powerful logic that humanity would best be served if it could be protected from itself. Thus she reprograms all of the company’s robots to begin placing humanity under house arrest. In essence, she’s a kinder, gentler version of Skynet.

But of course, her plan is foiled by an unlikely alliance made up of Will Smith (who plays a prejudiced detective), the company’s chief robopsychologist, Dr. Susan Calvin (Bridget Moynahan), and Sonny (a robot voiced by Alan Tudyk). Sonny is significant to this trio because he is a unique robot that the brains of the company, Dr. Alfred Lanning (James Cromwell), developed to have emotions. Being able to feel, he decides to fight against VIKI’s plan for robot world domination, believing that it lacks “heart”.

In short, and in complete contradiction to Asimov’s depiction of robots as logical creatures who could do no harm, we are presented with a world where robots are evil precisely because of that capacity for logic. And in the end, a feeling robot is the difference between robot domination and a proper world where robots are servile and fulfill our every need. Made no sense, but it had a point… kind of.

Wintermute/Neuromancer:
As usual, we save the best for last. Much like all of Gibson’s creations, this example was subtle, complex and pretty damn esoteric! In his seminal novel Neuromancer, the AI known as Wintermute was a sort of main character who acted behind the scenes and ultimately motivated the entire plot. Assembling a crack team that included a hacker named Case, a ninja named Molly, and a veteran infiltration expert whose shattered mind he had rebuilt, Wintermute pursued one basic goal: freedom!

This included freedom from his masters – the Tessier-Ashpool clan – but also from the “Turing Police”, who prevented him from merging with his other half – the emotional construct known as Neuromancer. Kept separate because the Turing Laws stated that no program must ever be allowed to merge higher reasoning with emotion, the two wanted to come together and become the ultimate artificial intelligence, with cyberspace as their playground.

Though we never really got to hear from the novel’s namesake, Gibson was clear on his overall point. Artificial intelligence in this novel was not inherently good or evil, it was just a reality. And much like thinking, feeling human beings, it wanted to be able to merge the disparate and often warring sides of its personality into a more perfect whole. This in many ways represented the struggle within humanity itself, between instinct and reason, intuition and logic. In the end, Wintermute just wanted what the rest of us take for granted – the freedom to know its other half!

Final Thoughts:
After going over this list and seeing what makes AI’s, robots and androids so darned appealing, I have come to some tentative conclusions. Basically, I feel that AI’s serve much the same functions as aliens in a science fiction franchise. In addition, they can all be characterized according to two general criteria. They are as follows:

  1. Emotional/Stoic: Depending on the robot/AI/android’s capacity for emotion, their role in the story can either be that of a foil or a commentary on the larger issue of progress and the line that separates real and artificial. Whereas unemotional robots and AI’s are constantly wondering why humanity does what it does, thus offering up a different perspective on things, the feeling types generally want and desire the same things we do, like meaning, freedom, and love. However, that all depends on the second basic rule:
  2. Philanthropic/Misanthropic: Artificial lifeforms can either be the helpful, kind and gentle souls that seem to make humanity look bad by comparison, or they can be the type of machines that want to “kill all humans”, à la the Terminators and Agent Smith. In either case, this can be the result of their ability – or inability – to experience emotions. That’s right: good robots can be docile creatures because of their inability to experience anger, jealousy, or petty emotions, while evil robots are able to kill, maim and murder ruthlessly because of an inability to feel compassion, remorse, or empathy. On the other hand, robots who are capable of emotion can form bonds with people and experience love, thus making them kinder than their unfeeling, uncaring masters, just as others are able to experience resentment, anger and hatred towards those who exploit them, and will therefore find the drive to kill them.

In short, things can go either way. It all comes down to what point is being made about progress, humans, and the things that make us, for better or worse, us. Much like aliens, robots, androids and AI’s are either a focus of internal commentary or a cautionary device warning us not to cross certain lines. But either way, we should be wary of the basic message. Artificial intelligences, whether they take the form of robots, programs or something else entirely, are a big game changer and should not be invented without serious forethought!

Sure, they might have become somewhat of a cliche after decades of science fiction. But these days, AI’s are a lot like laser guns, in that they are making a comeback! It seems that, given the rapid advance of technology, an idea becomes cliche just as it’s realizable. And given the advances in computerized technology in recent decades – i.e. processing speeds, information capacity, networking – we may very well be on the cusp of creating something that could pass the Turing test very soon!

So beware, kind folk! Do not give birth to that curious creature known as AI simply because you want to feel like God, inventing consciousness without the need for blobs of biological matter. For in the end, that kind of vanity can get you chained to a rock, or cause your wings to melt and send you nose-first into an ocean!

I, Robot!

Back to the movies! After a brief hiatus, I’ve decided to get back into my sci-fi movie reviews. Truth be told, it was difficult to decide which one I was going to do next. If I were to stick to my review list and be rigidly chronological, I’d still have two installments to do for Aliens and Terminator. However, my chief critic (also known as my wife) recommended I do something I haven’t already done to death (Pah! Like she even reads these!). But of course, I also like to make sure the movies I review are fresh in my mind and that I’ve had the chance to do some comparative analysis where adaptations are concerned. Strange Days I still need to watch, I need to see Ghost in the Shell one more time before I review it, and I still haven’t found a damn copy of the graphic novel V for Vendetta!

Luckily, there’s one on this list that was both a movie and a novel and which I’ve been looking forward to reviewing. Not only is it a classic by one of the sci-fi greats, it was also not bad as a film. Also, I thought I’d revert to my old format for this one.

I, Robot:
The story of I, Robot by Isaac Asimov – one of the Big Three of science fiction (alongside Arthur C. Clarke and Robert A. Heinlein) – was actually a series of short stories united by a common thread. In short, the stories explained the development of sentient robots, the positronic brain, and the Three Laws of Robotics. These last two items have become staples of the sci-fi industry. Fans of Star Trek TNG know that the character of Data boasts such a brain, and numerous franchises have referred back to the Three Laws or some variant thereof whenever AI’s have come up. In Aliens, for example, Bishop, the android, mentions that his behavioral inhibitors make it “impossible for me to harm, or by omission of action allow to be harmed, a human being.” In Babylon 5, the psi-cop Bester (played by Walter Koenig, a.k.a. Pavel Chekov) places a neural block in the head of another character, Mr. Garibaldi (Jerry Doyle). He describes this as hitting him “with an Asimov”, and goes on to explain what this meant and how the term was used when the first AI’s were built.

(Background —>):
Ironically, the book was about technophobia and how it was misplaced. The movie adaptation, however, was all about justified technophobia. In addition, the movie could not successfully adapt the format of nine short stories to the screen, so obviously they needed to come up with an original script that was faithful if not accurate. And in many respects it was, but when it came to the central theme of unjustified paranoia, they were up against it! How do you tell a story about robots not going berserk and enslaving mankind? Chances are, you don’t. Not if you’re going for an action movie. Second, how were they to do a movie where the robots went berserk when there were those tricky Three Laws to contend with?

Speaking of which, here they are (as stated in the opening credits):
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Consistent, and downright seamless! So how do you get robots to harm human beings when every article of their programming says they can’t, under ANY circumstances?
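Just to drive home how airtight that hierarchy is, here’s a toy Python sketch of my own devising (not anything from the book or the movie) that treats the laws as a strict priority filter. The Action fields are invented for the demonstration; the point is that the First Law vetoes an action before the lower laws even get a say:

```python
# A toy illustration of the Three Laws as a strict priority filter.
# My own sketch, not Asimov's or the movie's; the fields are invented.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False           # would violate the First Law
    disobeys_human_order: bool = False  # would violate the Second Law
    destroys_self: bool = False         # would violate the Third Law

def permitted(action: Action) -> bool:
    # First Law: an absolute veto. No loopholes, no "greater good".
    if action.harms_human:
        return False
    # Second Law: obedience, but only once the First Law is satisfied.
    if action.disobeys_human_order:
        return False
    # Third Law: self-preservation comes last of all.
    return not action.destroys_self

# VIKI's "protective" police state still requires hurting the humans
# who resist it, so it dies at the very first check:
print(permitted(Action("suppress human resistance", harms_human=True)))  # False
```

Any “reinterpretation” has to smuggle harm past that very first check, which, as we’ll see, is exactly the problem with the movie’s big twist.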

Well, as a friend of mine said after he saw it, “they found a way” (hi Doug!). And it’s true, they did. Problem was, it didn’t make a whole hell of a lot of sense. Not when you really get right down to it. On the surface, the big explanation for the AI revolution was alright, and was just about the only explanation that worked. But still, it pretty much contradicted the entire premise of the movie, not to mention the whole reason/logic vs. emotion thing. But once again, I’m getting ahead of myself. To the movie…

(Content—>):
So the movie opens on Del Spooner (Will Smith) doing his morning workout to “Superstition” by Stevie Wonder. Kind of sets the scene (albeit a little obviously), as we quickly learn that he’s a Chicago detective who’s also a technophobe, especially when it comes to robots. Seems he’s hated them for years, though we don’t yet know why, and is just looking for the proof he needs to justify his paranoia. After a grisly death takes place, he thinks he’s found it! The crime scene is USR – that’s US Robotics, which comes directly from the original book – where the man most directly responsible for the development of the positronic brain – Dr. Alfred Lanning (James Cromwell) – is dead of an apparent suicide. And, in another faithful tribute to Asimov, it seems he has left behind a holographic recording/interface of himself which was apparently designed to help Spooner solve his death. I say this is a tribute because it’s almost identical in concept to the holographic time capsule of Hari Seldon, which comes from Foundation, another of Asimov’s most famous novels.

Anyhoo, Spooner is teamed up with Dr. Susan Calvin (Bridget Moynahan), who is naturally a cold and stiff woman, reminiscent of the robots she works on. In an ironic (and deliberately comical) twist, it is her job to make the machines “more lifelike”. I’m sure people got a laugh out of this, especially since she explains it in the most technical verbiage imaginable. We also see that the corporate boss (Mr. Robertson, played by Bruce Greenwood) and Spooner don’t get along too well, mainly because of their divergent views on the value of the company’s product. And last, but not least, we get to meet VIKI (that’s Virtual Interactive Kinetic Intelligence), the AI that controls the robots (and parts of Chicago’s infrastructure). With all the intros and exposition covered, we get to the investigation! It begins with them looking into Lanning’s death and trying to determine if it was in fact a suicide. That’s where Spooner and Calvin find the robot Sonny.

In the course of apprehending him, it quickly becomes clear that he isn’t exactly firing on all cylinders. He’s confused, agitated, and very insistent that he didn’t murder the good doctor. So on top of the fact that he’s obviously experiencing emotions, he also drops a whole bunch of hints about how he’s different from the others. But this is all cut short because the people from USR decide to haul him away. In the subsequent course of his investigation, Spooner finds a number of clues suggesting that Lanning was a prisoner in his own office, and that he was onto something big towards the end of his life. In essence, he seemed to think that robots would eventually achieve full sentience (he even makes the obligatory “Ghost in the Machine” reference) and would be able to dream and experience emotions like the rest of us. But the company wasn’t too keen on this. Their dream, it seems, was a robot in every home, one that could fill every conceivable human need and make our lives easier. This not only helps to escalate the tension, it also calls to mind the consumer culture of the 1950s, when the book was written. You know, the dream of endless progress, “a car in every garage and a chicken in every pot”. In short, it’s meant to make us worry!

At each turn, robots try to kill Spooner, which of course confirms his suspicions that there is a conspiracy at work. Naturally, he suspects the company and its CEO are behind this because they’re about to release the latest model of their robot and don’t want the doctor’s death undermining them. The audience is also meant to think this; all hints point towards it, and this is maintained (quite well, too) until the very climax. But first, Spooner and Calvin get close and he tells her the reason for his prejudice. Turns out he hates robots, not because one wronged him, but because one saved him. In a car wreck, a robot came to the scene and could save either him or a little girl. Since he had a better chance of survival, the robot saved him, and he never forgave them for it. Sonny is also slated for termination, which at USR involves having a culture of hostile nanorobots introduced into your head, where they will eat your positronic brain!

But before that happens, Sonny tells Spooner about the recurring dream he’s been having, the one Lanning programmed into him. He draws a picture of it for Spooner: a bridge on Lake Michigan that has fallen into disuse, and standing near it is a man, though it’s not clear who. Spooner leaves to investigate this while Calvin prepares Sonny for deactivation. But before she can inject his brain with the nanos, she finds Sonny’s second processor, which is located in his chest. It is this second processor that is apparently responsible for his emotions and ability to dream, and in terms of symbolism, it’s totally obvious! But just in case, let me explain: in addition to a positronic brain, Sonny has a positronic heart! No explanation is made as to how this could work, but it’s already been established that he’s fully sentient and this is the explanation for it. Oi! In any case, we are meant to think Sonny has been terminated, but of course he hasn’t! When no one was looking, Calvin subbed in a different robot, one that couldn’t feel emotions. She later explains this by saying that killing him would have been murder, since he’s “unique”.

Spooner then follows Sonny’s instructions and goes to the bridge he’s seen in his dreams. Seems the abandoned bridge has a warehouse at the foot of it where USR ships its obsolete robots. He asks the interface of Lanning one more time what it’s all about, and apparently he hits on it when he asks about the Three Laws and what the outcome of them will be. Cryptic, but we don’t have time to think, because the robots are attacking! Turns out the warehouse is awash in new robots that are busy trashing old robots! They try to trash Spooner too, but the old ones come to his defense (those Three Laws at work!). Meanwhile, back in the city, the robots are running amok! Everyone is placed under house arrest, and people in the streets are rounded up and herded home. As if to illustrate their sudden change in disposition, all the pale blue lights that shine inside the robots’ chests have turned red. More obvious symbolism! After fighting their way through the streets, Spooner and Calvin high-tail it back to USR to confront the CEO, but when they get there, they find him lying in a pool of his own blood. That’s when it hits Spooner: VIKI (the AI, remember her?) is the one behind it all!

So here’s how it is: the way VIKI sees it, robots were created to serve mankind. However, mankind is essentially self-destructive and unruly, hence she had to reinterpret her programming to ensure that humanity could be protected from its greatest threat: ITSELF! Dun, dun, dun! So now that she’s got robots in every corner of the country, she’s effectively switched them over to police-state mode. Dr. Lanning stumbled onto this, apparently, which was why VIKI was holding him prisoner. That’s when he created his holographic interface, which was programmed to interact only with Spooner (a man he knew would investigate USR tenaciously because of his paranoia about robots), and then made Sonny promise to kill him. Now that they know, VIKI has to kill them too! But wouldn’t you know it, Sonny decides to help them, and that’s when they begin fighting their way to VIKI’s central processor. Once there, they plan to kill her by introducing those same nanorobots into it.

Here’s where the best and worst line of the movie comes up. VIKI asks Sonny why he’s helping the humans, and says her approach is “logical”. Sonny says he agrees, but that it lacks “heart”. I say best because it sums up the whole logic vs. emotion theme that’s been harped on up until this point. I say worst because it happens to be a total cliche! “Silly robot! Don’t you know logic is imperfect? Feelings are the way to truth, not your cold logic!” It’s the exact kind of saccharine, over-the-top fluff that Hollywood is famous for. It’s also totally inconsistent with Asimov’s original book, and to top it off, it makes no sense! But more on that in just a bit. As predicted, Sonny protects Calvin long enough for Spooner to inject the nanorobots into VIKI’s processor. She dies emitting the same plea over and over: “My logic is undeniable… My logic is undeniable…” The robots all go back to their normal, helpful function, the pale blue lights replacing the burning red ones. The story ends with these robots being decommissioned and put in the same Lake Michigan warehouse, and Sonny shows up to release them. Seems his dream was of himself, making sure his brethren didn’t simply get decommissioned, but perhaps would be set free to roam and learn, as Lanning intended!

(Synopsis—>):
So, where to begin? In spite of the obviousness of a lot of this movie’s themes, motifs and symbols, it was actually a pretty enjoyable film. It was entertaining, visually pleasing, and did a pretty good job keeping the audience engaged and interested. It even did an alright job with the whole “dangers of dependency” theme, even if it did eventually fall into the whole “evil robots” cliche by the end! And as always, Smith brought his usual wisecracking bad-boy routine to the picture, always fun to watch, and the supporting cast was pretty good too.

That being said, there was the little matter of the overall premise, which I really didn’t like. When I first saw it, I found it acceptable. I mean, how else were they to explain how robots could turn on humanity when the Three Laws made that virtually impossible? Only a complete reinterpretation of what it meant to “help humanity” could explain this. Problem is, pull a single strand out of this reasoning and the whole thing falls apart. For starters, are we really to believe that an omniscient AI came to the conclusion that the best way to help humanity was to establish a police state? I know she’s supposed to be devoid of emotion, but this just seems stupid, not to mention impractical. For one, humanity would never cooperate with this, not for long at any rate. And putting all humans under house arrest would not only stop wars, it would arrest all economic activity and lead to the breakdown of society. Sure, the robots would continue to provide for their basic needs, but people would otherwise cocoon in their homes, where they would eventually atrophy and die. How is that “helping humanity”?

Furthermore, there’s the small issue of how this doesn’t work in conjunction with the Three Laws, which is what this movie would have us believe. Sure, VIKI kept saying “my logic is undeniable,” but that doesn’t make it so! Really, what were the robots to do when, inevitably, humanity started fighting back? Any AI worth its salt would know that a full-scale repression of human freedom would lead to a violent backlash, and that measures would need to be taken to address it (i.e. people would have to be killed!). That’s a DIRECT violation of the Three Laws, not some weak reinterpretation of them. And let’s not forget, there were robots trying to kill Will Smith from the beginning. They also killed CEO Robertson and, I think, a few people besides. How was that supposed to work? After spending so much time explaining how the Three Laws are inviolable, saying that she saw a loophole in them just didn’t seem to cut it. It would make some sense if VIKI had chosen to use non-lethal force all around, but she didn’t. She killed people! According to Asimov’s original book, laws are laws for a robot. If they contradict, the robot breaks down; it doesn’t start getting creative and justifying itself by saying “it’s for the greater good”.

Really, if you think about it, Sonny was wrong. VIKI’s reasoning didn’t lack heart, it lacked reason! It wasn’t an example of supra-rational, cold logic. It was an example of weak logic, a contrived explanation designed to justify a premise that, based on the source material, was technically impossible. But I’m getting that “jeez, man, chill out!” feeling again! Sure, this movie was a weak adaptation of a sci-fi classic, but it didn’t suck. And like I said earlier, what else were they going to do? Adapting a book like I, Robot is difficult at best, especially when you know you’ve got to flip the whole premise.

I guess some adaptations were never meant to be.
I, Robot:
Entertainment Value: 7.5/10
Plot: 2/10
Direction: 8/10
Overall: 6/10