Robots, Androids and AIs

Let’s talk artificial life forms, shall we? Lord knows they’re a common enough feature in science fiction, aren’t they? In many cases, they take the form of cold, calculating machines that chill audiences to the bone with their “kill all humans” kind of vibe. In others, they’re solid-state beings with synthetic parts but hearts of gold, who steal ours in the process. Either way, AIs are a cornerstone of modern sci-fi. And over the past few decades, they’ve gone through countless renditions and re-imaginings, each with its own point to make about humanity, technology, and the line that separates natural from artificial.

But in the end, it’s really just the hardware that’s changed. Whether we’re talking about Daleks, Terminators, or “Synthetics”, the core principle has remained the same. Based on the speculations of mathematician and legendary cryptographer Alan Turing, an Artificial Intelligence is essentially a machine that can fool a human judge into mistaking it for a person in a blind conversation. Working extensively with machines designed primarily for solving massive mathematical problems, Turing believed that someday we would be able to construct a machine capable of higher reasoning, one that might even surpass humans.
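For anyone curious what that actually looks like in practice, here’s a minimal sketch in Python of Turing’s “imitation game” setup. It’s purely illustrative: the `judge`, `human`, and `machine` objects and their methods are stand-ins I’ve made up, not anything from Turing’s paper beyond the basic idea of a blind conversation followed by a guess.

```python
import random

def imitation_game(judge, human, machine, rounds=5):
    """Toy sketch of Turing's imitation game.

    `human` and `machine` are callables that take a question and return
    a text answer; `judge` is a hypothetical object with ask() and
    identify_human() methods. The machine "passes" if the judge
    mistakes it for the human player.
    """
    players = {"A": human, "B": machine}  # "B" is secretly the machine
    labels = list(players)
    random.shuffle(labels)  # vary question order so it gives nothing away

    transcript = []
    for _ in range(rounds):
        question = judge.ask(transcript)
        for label in labels:
            transcript.append((label, question, players[label](question)))

    guess = judge.identify_human(transcript)  # judge returns "A" or "B"
    return guess == "B"  # the judge picked the machine: it fooled them
```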

Arny (Da Terminator):
Who knew robots from the future would have Austrian accents? For that matter, who knew they’d all look like bodybuilders? Originally, when Arny was presented with the script for Cameron’s seminal time-traveling sci-fi flick, he was being asked to play Kyle Reese, the human hero. But Arny very quickly found himself identifying with the role of the Terminator, and a franchise was born!

Originally, the Terminator was the type of cold, unfeeling and ruthless machine that haunted our nightmares, a cyberpunk commentary on the dangers of runaway technology and human vanity. Like its creator, the Skynet supercomputer, the T-101 was part of a race of machines that had decided it could do without humanity, and it was sent back to exterminate us. As Reese himself said in the original: “It can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead.”

The second Terminator, by contrast, was a game changer. Captured in the future and reprogrammed to protect John Connor, he became the sort of surrogate father John never had. Sarah reflects on this irony in a voice-over in the second movie: “Watching John with the machine, it was suddenly so clear. The Terminator would never stop. It would never leave him, and it would never hurt him, never shout at him, or get drunk and hit him, or say it was too busy to spend time with him. It would always be there. And it would die to protect him. Of all the would-be fathers who came and went over the years, this thing, this machine, was the only one who measured up. In an insane world, it was the sanest choice.”

In short, Cameron gave us two visions of technology with these first two installments in the series. In the first, we got the dangers of worshiping high technology at the expense of humanity. In the second, we witnessed the reconciliation of humans with technology, showing how an artificial life form could actually be capable of more humanity than a human being. To quote one last line from the franchise: “The unknown future rolls toward us. I face it, for the first time, with a sense of hope. Because if a machine, a Terminator, can learn the value of human life, maybe we can too.”

Bender:
No list of AIs and the like would be complete without mentioning Futurama’s Bender. That dude puts the funk in funky robot! Originally designed to be a bending unit, hence the name, he seems more adept at wisecracking, alcoholism, chain-smoking and comedically plotting the demise of humanity. But it’s quickly made clear that he doesn’t really mean it. While he may hold humans in pretty low esteem, laughing at tragedy and failing to empathize with anything that isn’t him, he also loves his best friend Fry, whom he affectionately refers to as “meatbag”.

In addition, he’s got some aspirations that point to a creative soul. Early in the show, it was revealed that any time he gets near something magnetic, he starts singing folk and country-western tunes. This is apparently because he always wanted to be a singer, and after a crippling accident in season three, he got to do just that, touring the country with Beck and a benefit show called “Bend-aid” that raised awareness about the plight of broken robots.

He also wanted to be a cook, which was difficult considering he had no sense of taste and didn’t seem to care about lethally poisoning humans! However, after learning at the feet of the legendary Helmut Spargle, he discovered the secret of “Ultimate Flavor”, which he then used to challenge and humiliate his idol, the celebrity chef Elzar, on an Iron Chef-style cooking show. Apparently the secret was confidence, and a vial of water laced with LSD!

Other than that, there’s really not much going on with Bender. Up front, he’s a chain-smoking, alcoholic robot with loose morals, or a total lack thereof. Get to know him better and you pretty much conclude that what you see is what you get: an endless source of sardonic humor, weird fashion sense, and dry one-liners. Of them all, “Bite my shiny metal ass”, “Pimpmobile”, “We’re boned!” and “Up yours, chump” seem to rank the highest.

Ash/Bishop:
Here we have yet another case of robots giving us mixed messages, and this one comes to us straight from the Alien franchise. In the original movie, we were confronted with Ash, an obedient corporate mole who did the company’s bidding at the expense of human life. His cold, misguided priorities were only underscored when he revealed that he admired the xenomorph for its “purity”: “A survivor… unclouded by conscience, remorse, or delusions of morality.”

After going nuts and trying to kill Ripley, he was even kind enough to smile and say, in that disembodied, tinny voice of his, “I can’t lie to you about your chances, but… you have my sympathies.” What an asshole! And the perfect representation of an inhuman, calculating robot: the product of unimpeded ambition, no doubt the same thing motivating his corporate masters to get their hands on a hostile alien, even if it meant sacrificing a crew or two.

But, as with Terminator, Cameron pulled a switch-up in the second movie with the Synthetic known as Bishop (or “artificial human”, as he preferred to be called). In the beginning, Ripley was hostile towards him, rebuffing his attempts to assure her that his behavioral inhibitors made him incapable of killing people. Because of these, he could not harm, or through inaction allow to be harmed, a human being (otherwise known as an “Asimov”). But in the end, Bishop’s constant concern for the crew, and his willingness to sacrifice himself to save Newt, won her over.

Too bad he had to get ripped in half to earn her trust. But I guess when an earlier model has tried to shove a magazine down your throat, you kind of have to go above and beyond to make someone put their life in your hands again. Now if only all synthetics were willing to get themselves ripped in half for Ripley’s sake, she’d be set!

C-3PO/R2-D2:
For that matter, who knew robots from a galaxy far, far away would be fey, effeminate and possibly homosexual? Not that there’s anything wrong with that last one… But as audiences are sure to agree, the other characteristics could get quite annoying after a while. C-3PO’s constant complaining, griping, moaning and citing of statistical probabilities were at once too human and too robotic! Kind of brilliant, really… You could say he was the Sheldon of the Star Wars universe!

Still, C-3PO was nothing if not useful when characters found themselves in diplomatic situations, or facing a species of aliens whose language they couldn’t possibly fathom. He could even interface with machinery, which was helpful when the hyperdrive was out or the moisture condensers weren’t working. Gotta bring in that “Blue Harvest” after all! And given that R2-D2 could do nothing but bleep and blurp, someone had to be around to translate for him.

Speaking of which, R2-D2 was the perfect counterpart to C-3PO. As the astromech droid of the pair, he was the engineer, a real nuts-and-bolts kind of guy, whereas C-3PO was the diplomat and expert in protocol. Where 3PO was sure to give up at the first sign of trouble, R2 would always soldier on and put himself in harm’s way to get things done. This difference in personality was also evident in their height and build. Whereas C-3PO was tall, lanky and looked quite fragile, R2-D2 was short, stocky, and looked like he could take a licking and keep on ticking!

Naturally, it was this combination of talents that made them so comically entertaining during their many adventures and hijinks together. The one would always complain and be negative; the other would be positive and stubborn. And in the end, despite their differences, neither could imagine life without the other. This became especially evident whenever they were separated or one of them was injured.

Hmmm, all of this is starting to sound familiar somehow. I’m reminded of another mismatched, and possibly homosexual, duo. One with a possible fetish for rubber… Not that there’s anything wrong with that! 😉

Cameron:
Some might accuse me of smuggling her in here just to get some eye-candy in the mix. Some might say that this list already has an example from the Terminator franchise and doesn’t need another. They would probably be right…

But you know what, screw that, it’s Summer Glau! And the fact of the matter is, she did a far better job than Kristanna Loken at showing that these killing/protective machines can be played by women. Making her appearance in the series Terminator: The Sarah Connor Chronicles, she worked alongside acting great Lena Headey, of 300 and Game of Thrones fame.

And in all fairness, she and Loken did bring some variety to the franchise. In the show, for instance, she portrayed yet another reprogrammed machine from the future, but one representing a different model from the T-101. The purpose of these later models appeared to be versatility, with a compact chassis and articulate appendages able to fit inside a smaller frame, making a woman’s body available as a potential disguise. Quite smart, really. If you think about it, people are a lot more likely to trust a petite woman than a bulked-out Arny-bot (especially men!). It also opened the series up to female characters other than Sarah.

And dammit, it’s Summer Glau! If she didn’t earn her keep portraying River Tam in Firefly and Serenity, then what hope is there for the rest of us?

Cortana:
Here we have another female AI, and one who manages to be pretty attractive despite her lack of a body. In this case, she comes to us from the Halo universe. In addition to being hailed by critics for her believability, depth of character, and attractive appearance, she was ranked one of the most disturbingly sexual game characters by Games.net. No surprises there, really. Her designers originally used the Egyptian queen Nefertiti as a model, and her half-naked appearance throughout the games has been known to get the average gamer to stand up and salute!

Though she serves ostensibly as the shipboard AI of the UNSC Pillar of Autumn, Cortana ends up playing a role that far exceeds her original programming. Constructed from the cloned brain of Dr. Catherine Elizabeth Halsey, creator of the SPARTAN project, she has an evolving matrix and is therefore capable of learning and adapting over time. Thanks to this, and to their shared experiences as the series goes on, she and the Master Chief form a bond and even become something akin to friends.

Although she has no physical body, Cortana’s program is mobile, and she makes several appearances throughout the series, always in different spots. She is able to travel around with the Master Chief, commandeer Covenant vessels, and interface with a variety of machines. And aside from her feminine appearance, her soft, melodic voice is a soothing change of pace from the Chief’s gruff tone and the racket of gunfire and dying aliens!

Data:
The stoic, stalwart and socially awkward android of Star Trek: TNG. Built to resemble his maker, Dr. Noonien Soong, Data is a first-generation positronic android, a concept borrowed from Asimov’s I, Robot. He later enlisted in Starfleet in order to be of service to humanity and explore the universe. In addition to his unsurpassed computational abilities, he also possesses incredible strength, reflexes, and even knows how to pleasure the ladies. No joke, he’s apparently got all kinds of files on how to do… stuff, and he even got to use them! 😉

Unfortunately, Data’s programming does not include emotions. Initially, this seemed to serve the obvious purpose of making his character a foil for humanity, much as Spock was in the original series. However, as the show progressed, it was revealed that Soong had created an android very much like Data who also possessed the capacity for emotions. But of course, things went terribly wrong when this model, named Lore, turned ambitious and misanthropic. There were some deaths…

Throughout the series’ run, Data finds himself seeking to understand humanity, frequently coming up short, but always learning from the experience. His attempts at humor and his failure to grasp social cues and innuendo are a constant source of comic relief, as are his attempts to mimic these very things. And though he eventually procured an “emotion chip” from his brother, Data remains the straight man of the TNG universe, responding to every situation with a blank look or a confused and fascinated expression.

More coming in installment two. Just give me some time to do all the write-ups and find some pics :)…

I, Robot!

Back to the movies! After a brief hiatus, I’ve decided to get back into my sci-fi movie reviews. Truth be told, it was difficult to decide which one to do next. If I stuck to my review list and stayed rigidly chronological, I’d still have two installments to do for the Aliens and Terminator franchises. However, my chief critic (also known as my wife) recommended I do something I haven’t already done to death (Pah! Like she even reads these!). But of course I also like to make sure the movies I review are fresh in my mind, and that I’ve had the chance to do some comparative analysis where adaptations are involved. Strange Days I still need to watch, I need to see Ghost in the Shell one more time before I review it, and I still haven’t found a damn copy of the V for Vendetta graphic novel!

Luckily, there’s one on the list that was both a novel and a movie, and one I’ve been looking forward to reviewing. Not only is it a classic by one of the sci-fi greats, it also wasn’t bad as a film. I also thought I’d revert to my old format for this one.

I, Robot:
I, Robot by Isaac Asimov – one of the Big Three of science fiction, alongside Arthur C. Clarke and Robert A. Heinlein – was actually a series of short stories united by a common thread. In short, the stories chart the development of sentient robots, the positronic brain, and the Three Laws of Robotics. These last two have become staples of the sci-fi industry. Fans of Star Trek: TNG know that the character of Data boasts such a brain, and numerous franchises have referred back to the Three Laws, or some variant thereof, whenever AIs come up. In Aliens, for example, Bishop the android mentions that his behavioral inhibitors make it “impossible for me to harm, or by omission of action allow to be harmed, a human being.” In Babylon 5, the psi-cop Bester (played by Walter Koenig, aka Pavel Chekov) places a neural block in the head of another character, Mr. Garibaldi (Jerry Doyle). He describes this as hitting him “with an Asimov”, and goes on to explain what the term meant and how it was used when the first AIs were built.

(Background —>):
Ironically, the book was about technophobia and how it was misplaced. The movie adaptation, however, was all about justified technophobia. In addition, the format of nine short stories couldn’t be successfully adapted to the screen, so obviously they needed to come up with an original script that was faithful in spirit if not in letter. And in many respects it was. But when it came to the central theme of unjustified paranoia, they were up against it! How do you tell a story about robots not going berserk and enslaving mankind? Chances are, you don’t. Not if you’re going for an action movie. Which raises the second problem: how do you make a movie where the robots do go berserk when there are those tricky Three Laws to contend with?

Speaking of which, here they are (as stated in the opening credits):
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Consistent, and downright seamless! So how do you get robots to harm human beings when every article of their programming says they can’t, under ANY circumstances?
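Just to make that “seamless” part concrete, here’s a little toy sketch in Python of how a robot brain might vet and rank candidate actions under the Three Laws. To be clear, this is entirely my own invention for illustration, not anything from Asimov or the film; the names (`Action`, `permitted`, `choose`) are made up. The point is simply that the First Law acts as an absolute veto before the other two laws even get a say.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Action:
    """A hypothetical candidate action a robot might weigh up."""
    name: str
    harms_human: bool                 # First Law concern
    allows_harm_by_inaction: bool     # also First Law
    obeys_order: bool                 # Second Law concern
    preserves_self: bool              # Third Law concern

def permitted(action: Action) -> bool:
    # First Law: an absolute veto, with no "greater good" escape clause.
    return not (action.harms_human or action.allows_harm_by_inaction)

def choose(actions: List[Action]) -> Optional[Action]:
    # Discard anything the First Law forbids, then prefer obedience
    # (Second Law) and self-preservation (Third Law), in that order.
    legal = [a for a in actions if permitted(a)]
    if not legal:
        return None  # nothing lawful to do
    return max(legal, key=lambda a: (a.obeys_order, a.preserves_self))

# A VIKI-style "protective" crackdown that hurts resisters is vetoed
# outright; it never even reaches the ranking step.
crackdown = Action("enforce curfew by force", True, False, False, True)
assist = Action("help the human", False, False, True, True)
print(choose([crackdown, assist]).name)  # -> help the human
```

Run something like that and the crackdown never even makes it to the ranking step, which is exactly the corner the screenwriters had to write their way out of.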

Well, as a friend of mine said after he saw it, “they found a way” (hi Doug!). And it’s true, they did. Problem was, it didn’t make a whole hell of a lot of sense. Not when you really get right down to it. On the surface, the big explanation for the AI revolution was alright, and was just about the only explanation that worked. But still, it pretty much contradicted the entire premise of the movie, not to mention the whole reason/logic vs. emotion thing. But once again, I’m getting ahead of myself. To the movie…

(Content—>):
So the movie opens on Del Spooner (Will Smith) doing his morning workout to “Superstition” by Stevie Wonder. Kind of sets the scene (albeit a little obviously), as we quickly learn that he’s a Chicago detective who’s also a technophobe, especially when it comes to robots. Seems he’s hated them for years, though we don’t yet know why, and he’s just looking for the proof he needs to justify his paranoia. After a grisly death takes place, he thinks he’s found it! The crime scene is USR – that’s US Robotics, a nod to the company in the original book – where the man most directly responsible for the development of the positronic brain, Dr. Alfred Lanning (James Cromwell), lies dead of an apparent suicide. And, in another faithful tribute to Asimov, it seems he has left behind a holographic recording/interface of himself, apparently designed to help Spooner solve his death. I say tribute because it’s almost identical in concept to the holographic time capsule of Hari Seldon in Foundation, another of Asimov’s most famous works.

Anyhoo, Spooner is teamed up with Dr. Susan Calvin (Bridget Moynahan), who is naturally a cold and stiff woman, reminiscent of the robots she works on. In an ironic (and deliberately comical) twist, it is her job to make the machines “more lifelike”. I’m sure people got a laugh out of this, especially since she explains it in the most technical verbiage imaginable. We also see that the corporate boss (Mr. Robertson, played by Bruce Greenwood) and Spooner don’t get along too well, mainly because of their divergent views on the value of the company’s product. And last, but not least, we get to meet VIKI (that’s Virtual Interactive Kinetic Intelligence), the AI that controls the robots (and parts of Chicago’s infrastructure). With all the intros and exposition covered, we get to the investigation! It begins with Spooner and Calvin looking into Lanning’s death and trying to determine whether it was in fact a suicide. That’s where they find the robot Sonny.

In the course of apprehending him, it quickly becomes clear that he isn’t exactly firing on all cylinders. He’s confused, agitated, and very insistent that he didn’t murder the good doctor. So on top of the fact that he’s obviously experiencing emotions, he also drops a whole bunch of hints about how he’s different from the others. But this is all cut short when the people from USR haul him away. In the subsequent course of his investigation, Spooner finds a number of clues suggesting that Lanning was a prisoner in his own office, and that he was onto something big towards the end of his life. In essence, he seemed to think that robots would eventually achieve full sentience (he even makes the obligatory “ghost in the machine” reference) and would be able to dream and experience emotions like the rest of us. But the company wasn’t too keen on this. Their dream, it seems, was a robot in every home, one that could fill every conceivable human need and make our lives easier. This not only helps to escalate the tension, it also calls to mind the consumer culture of the 1950s, when the book was published. You know, the dream of endless progress, “a chicken in every pot and a car in every garage”. In short, it’s meant to make us worry!

At every turn, robots try to kill Spooner, which of course confirms his suspicion that there’s a conspiracy at work. Naturally, he suspects the company and its CEO are behind it, since they’re about to release their latest model of robot and don’t want the doctor’s death undermining the launch. The audience is meant to think this too; all the hints point towards it, and the misdirection is maintained (quite well, too) until the very climax. But first, Spooner and Calvin get close, and he tells her the reason for his prejudice. Turns out he hates robots not because one wronged him, but because one saved him. After a car wreck, a robot arrived on the scene and could save either him or a little girl. Since he had the better chance of survival, the robot saved him, and he never forgave the machines for it. Sonny, meanwhile, is slated for termination, which at USR involves having a culture of hostile nanorobots injected into your head, where they eat your positronic brain!

But before that happens, Sonny tells Spooner about the recurring dream he’s been having, the one Lanning programmed into him. He draws a picture of it for Spooner: a bridge on Lake Michigan that has fallen into disuse, and standing near it, a man, though it’s not clear who. Spooner leaves to investigate this while Calvin prepares Sonny for deactivation. But before she can inject his brain with the nanorobots, she finds Sonny’s second processor, which is located in his chest. It is this second processor that is apparently responsible for his emotions and his ability to dream, and in terms of symbolism, it’s totally obvious! But just in case, let me explain: in addition to a positronic brain, Sonny has a positronic heart! No explanation is given as to how this could work, but it’s already been established that he’s fully sentient, and this is the explanation for it. Oi! In any case, we’re meant to think she’s terminated him, but of course she hasn’t! When no one was looking, she subbed in a different robot, one that couldn’t feel emotions. She later explains this by saying that killing him would be murder, since he’s “unique”.

Spooner then follows Sonny’s instructions and goes to the bridge from the dreams. Seems the abandoned bridge has a warehouse at the foot of it where USR ships its obsolete robots. He asks the Lanning interface one more time what it’s all about, and apparently he hits on it when he asks about the Three Laws and what their ultimate outcome will be. Cryptic, but we don’t have time to think, because the robots are attacking! Turns out the warehouse is awash in new robots that are busy trashing the old ones. They try to trash Spooner too, but the old robots come to his defense (those Three Laws at work!). Meanwhile, back in the city, the robots are running amok! Everyone is placed under house arrest, and people in the streets are rounded up and herded home. As if to illustrate their sudden change in disposition, the pale blue lights that shine inside the robots’ chests have all turned red. More obvious symbolism! After fighting their way through the streets, Spooner and Calvin high-tail it back to USR to confront the CEO, but when they get there, they find him lying in a pool of his own blood. That’s when it hits Spooner: VIKI (the AI, remember her?) is the one behind it all!

So here’s how it is: the way VIKI sees it, robots were created to serve mankind. However, mankind is essentially self-destructive and unruly, so she has reinterpreted her programming to ensure that humanity can be protected from its greatest threat: ITSELF! Dun, dun, dun! And now that she’s got robots in every corner of the country, she’s effectively switched them over to police-state mode. Dr. Lanning stumbled onto this, apparently, which was why VIKI was holding him prisoner. That’s when he created his holographic interface, programmed it to interact only with Spooner (a man he knew would investigate USR tenaciously because of his paranoia about robots), and then made Sonny promise to kill him. Now that they know, VIKI has to kill them too! But wouldn’t you know it, Sonny decides to help them, and they begin fighting their way to VIKI’s core. Once there, they plan to kill her by introducing those same nanorobots into her central processor.

Here’s where the best and worst line of the movie comes up. VIKI asks Sonny why he’s helping the humans, insisting that her approach is “logical”. Sonny says he agrees, but that it lacks “heart”. I say best because it sums up the whole logic vs. emotion theme that’s been harped on up until this point. I say worst because it happens to be a total cliche! “Silly robot! Don’t you know logic is imperfect? Feelings are the way to truth, not your cold logic!” It’s exactly the kind of saccharine, over-the-top fluff that Hollywood is famous for. It’s also totally inconsistent with Asimov’s original book, and to top it off, it makes no sense! But more on that in just a bit. As predicted, Sonny protects Calvin long enough for Spooner to inject the nanorobots into VIKI’s processor. She dies repeating the same plea over and over: “My logic is undeniable… My logic is undeniable…” The robots all return to their normal, helpful functions, pale blue lights replacing the burning red ones. The story ends with these robots being decommissioned and shipped to the same Lake Michigan warehouse, and Sonny showing up to release them. Seems his dream was of himself, standing by the bridge and making sure his brethren weren’t simply scrapped, but perhaps set free to roam and learn, as Lanning intended!

(Synopsis—>):
So, where to begin? In spite of the obviousness of a lot of its themes, motifs and symbols, this was actually a pretty enjoyable film. It was entertaining, visually pleasing, and did a pretty good job of keeping the audience engaged and interested. It even did an alright job with the whole “dangers of dependency” theme, even if it did eventually fall into the “evil robots” cliche by the end! And as always, Smith brought his usual wisecracking bad-boy routine to the picture, which is always fun to watch, and the supporting cast was pretty good too.

That being said, there was the little matter of the overall premise, which I really didn’t like. When I first saw it, I found it acceptable. I mean, how else were they to explain how robots could turn on humanity when the Three Laws made that virtually impossible? Only a complete reinterpretation of what it meant to “help humanity” could explain it. Problem is, pull a single strand out of this reasoning and the whole thing falls apart. For starters, are we really to believe that an omniscient AI came to the conclusion that the best way to help humanity was to establish a police state? I know she’s supposed to be devoid of emotion, but this just seems stupid, not to mention impractical. For one, humanity would never cooperate with it, not for long at any rate. For another, putting all humans under house arrest would not only stop wars, it would arrest all economic activity and lead to the breakdown of society. Sure, the robots would continue to provide for people’s basic needs, but humans would otherwise cocoon in their homes, where they would eventually atrophy and die. How is that “helping humanity”?

Furthermore, there’s the small issue of how this reinterpretation simply doesn’t square with the Three Laws, no matter what the movie would have us believe. Sure, VIKI kept saying “my logic is undeniable,” but that don’t make it so! Really, what were the robots to do when, inevitably, humanity started fighting back? Any AI worth its salt would know that a full-scale repression of human freedom would lead to a violent backlash, and that measures would need to be taken to address it (i.e. people would have to be killed!). That’s a DIRECT violation of the Three Laws, not some clever reinterpretation of them. And let’s not forget, there were robots trying to kill Will Smith from the beginning. They also killed CEO Robertson and, I think, a few people besides. How was that supposed to work? After spending so much time establishing that the Three Laws are inviolable, saying that VIKI found a loophole in them just doesn’t cut it. It would make some sense if she had chosen to use non-lethal force all around, but she didn’t. She killed people! In Asimov’s original book, laws are laws for a robot. If they conflict, the robot breaks down; it doesn’t get creative and justify itself by saying “it’s for the greater good”.
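To put my gripe in code terms: if you treat the First Law as the hard constraint the movie spends its first act insisting it is, VIKI’s plan should never compile, so to speak. Here’s a tiny made-up sketch (mine, not Asimov’s or the film’s, with hypothetical names like `PositronicConflict` and `validate_plan`) where any plan containing a human-harming step raises an error instead of being creatively reinterpreted.

```python
class PositronicConflict(Exception):
    """Raised when a plan cannot be carried out without violating the First Law."""

def validate_plan(steps):
    """Hypothetical First Law check over a list of plan steps.

    Each step is a dict like {"name": str, "harms_human": bool}. In the
    spirit of Asimov's stories, an unresolvable conflict stops the robot
    cold (here, an exception) rather than opening a "greater good" loophole.
    """
    for step in steps:
        if step["harms_human"]:
            raise PositronicConflict(
                f"step {step['name']!r} harms a human; there is no override"
            )
    return True

viki_plan = [
    {"name": "confine humans to their homes", "harms_human": False},
    {"name": "suppress the inevitable uprising", "harms_human": True},
]

try:
    validate_plan(viki_plan)
except PositronicConflict as err:
    print("Plan rejected:", err)
```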

Really, if you think about it, Sonny was wrong. VIKI’s reasoning didn’t lack heart, it lacked reason! It wasn’t an example of supra-rational, cold logic. It was an example of weak logic, a contrived rationale designed to justify a premise that, based on the source material, was technically impossible. But I’m getting that “jeez, man, chill out!” feeling again! Sure, this movie was a weak adaptation of a sci-fi classic, but it didn’t suck. And like I said earlier, what else were they going to do? Adapting a novel like I, Robot is difficult at best, especially when you know you’ve got to flip the whole premise.

I guess some adaptations were never meant to be.
I, Robot:
Entertainment Value: 7.5/10
Plot: 2/10
Direction: 8/10
Overall: 6/10