The Birth of AI: Computer Beats the Turing Test!

Alan Turing, the British mathematician and cryptographer, is widely known as the “Father of Theoretical Computer Science and Artificial Intelligence”. Amongst his many accomplishments – such as breaking Germany’s Enigma Code – was the development of the Turing Test. Turing introduced the test in his 1950 paper “Computing Machinery and Intelligence,” in which he proposed that a computer and human players take part in an “imitation game”.

The game involves three players: Player C asks the other two a series of written questions and attempts to determine which of them is the human and which is the computer. If Player C cannot reliably tell them apart, then the computer can be said to fit the criteria of an “artificial intelligence”. And this past weekend, a computer program finally beat the test, in what experts are claiming to be the first time an AI has legitimately fooled people into believing it’s human.
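To make the setup concrete, here is a toy Python sketch of the imitation game’s structure. Everything in it – the canned answers, the naive judge, the scoring – is a hypothetical stand-in for illustration, not anything used at the actual event:

```python
import random

# Hypothetical canned answers standing in for a human and a chatbot.
HUMAN = {"Where are you from?": "A small town near Reading.",
         "What is 17 * 23?": "No idea off the top of my head, sorry."}
MACHINE = {"Where are you from?": "Odessa, in Ukraine.",
           "What is 17 * 23?": "391."}

def imitation_game(questions, judge):
    """Player C questions two hidden players and must name the machine."""
    players = [("human", HUMAN), ("machine", MACHINE)]
    random.shuffle(players)  # so seating position gives nothing away
    transcript = [(q, players[0][1][q], players[1][1][q]) for q in questions]
    guess = judge(transcript)  # judge answers "A" or "B"
    actual = "A" if players[0][0] == "machine" else "B"
    return guess == actual

def naive_judge(transcript):
    # Suspects whichever player answers arithmetic with machine-like precision.
    for q, a, b in transcript:
        if "*" in q:
            return "A" if a.strip().isdigit() else "B"
    return random.choice(["A", "B"])

trials = 100
fooled = sum(not imitation_game(list(HUMAN), naive_judge) for _ in range(trials))
print(f"judge fooled in {fooled}/{trials} trials")  # Turing's bar: over 30%
```

Note that this toy judge is never fooled, because the stand-in machine answers arithmetic too perfectly – precisely the kind of tell that, as described below, the winning program was built to avoid.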

The event was known as the Turing Test 2014, and was held in partnership with RoboLaw, an organization that examines the regulation of robotic technologies. The machine that won the test is known as Eugene Goostman, a program that was developed in Russia in 2001 and adopts the persona of a 13-year-old Ukrainian boy. In a series of chatroom-style conversations at the University of Reading’s School of Systems Engineering, the Goostman program managed to convince 33 percent of a team of judges that he was human.

This may sound modest, but that score placed his performance just over the 30 percent threshold that Alan Turing wrote he expected to see by the year 2000. Kevin Warwick, one of the organisers of the event at the Royal Society in London this weekend, was on hand for the test and monitored it rigorously. As Deputy Vice-Chancellor for Research at Coventry University, and considered by some to be the world’s first cyborg, Warwick knows a thing or two about human-computer relations.

In a post-test interview, he explained how the test went down:

We stuck to the Turing test as designed by Alan Turing in his paper; we stuck as rigorously as possible to that… It’s quite a difficult task for the machine because it’s not just trying to show you that it’s human, but it’s trying to show you that it’s more human than the human it’s competing against.

For the sake of conducting the test, thirty judges had conversations with two different partners on a split screen—one human, one machine. After chatting for five minutes, they had to choose which one was the human. Five machines took part, but Eugene was the only one to pass, fooling one third of his interrogators. Warwick put Eugene’s success down to his ability to keep conversation flowing logically, but not with robotic perfection.

Eugene can initiate conversations, but won’t do so totally out of the blue, and answers factual questions more like a human. For example, some factual questions elicited the all-too-human answer “I don’t know”, rather than an encyclopaedic-style answer that simply stated cold, hard facts and descriptions. Eugene’s successful trickery is also likely helped by the fact that he has a realistic persona. From the way he answered questions, it seemed apparent that he was in fact a teenager.

Some of the “hidden humans” competing against the bots were also teenagers, to provide a basis of comparison. As Warwick explained:

In the conversations it can be a bit ‘texty’ if you like, a bit short-form. There can be some colloquialisms, some modern-day nuances with references to pop music that you might not get so much of if you’re talking to a philosophy professor or something like that. It’s hip; it’s with-it.

Warwick conceded the teenage character could be easier for a computer to convincingly emulate, especially if you’re using adult interrogators who aren’t so familiar with youth culture. But this is consistent with what scientists and analysts predict about the development of AI, which is that as computers achieve greater and greater sophistication, they will be able to imitate human beings of greater intellectual and emotional development.

Naturally, there are plenty of people who criticize the Turing test for being an inaccurate way of testing machine intelligence, or of gauging this thing known as intelligence in general. The test is also controversial because of the tendency of interrogators to attribute human characteristics to what is often a very simple algorithm. This is unfortunate because chatbots are easy to trip up if the interrogator is even slightly suspicious.

For instance, chatbots have difficulty answering follow-up questions and are easily thrown by non-sequiturs. In these cases, a human would either give a straight answer, or respond by asking what the heck the person posing the questions is talking about, then reply in the context of the answer.
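To see just how shallow such a program can be, consider this minimal, entirely hypothetical sketch of a stateless pattern-matching bot – the keyword rules and canned replies are invented for illustration, not taken from any real chatbot:

```python
# A stateless keyword-matcher has no memory of the previous exchange,
# so any follow-up question sails right past it.
RULES = {
    "favorite": "Oh, I like all sorts of things!",
    "weather": "It is quite sunny where I am.",
}

def stateless_bot(message):
    for keyword, reply in RULES.items():
        if keyword in message.lower():
            return reply
    return "Interesting! Tell me more."  # generic dodge for everything else

print(stateless_bot("What's your favorite book?"))  # plausible enough
print(stateless_bot("Why do you like that one?"))   # the follow-up falls flat
```

There are also several versions of the test, each with its own rules and criteria for what constitutes success. And as Professor Warwick freely admitted: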

Some will claim that the Test has already been passed. The words Turing Test have been applied to similar competitions around the world. However this event involved more simultaneous comparison tests than ever before, was independently verified and, crucially, the conversations were unrestricted. A true Turing Test does not set the questions or topics prior to the conversations. We are therefore proud to declare that Alan Turing’s Test was passed for the first time on Saturday.

So what are the implications of this computing milestone? Is it a step in the direction of a massive explosion in learning and research, an age where computing intelligences vastly exceed human ones and are able to assist us in making countless ideas real? Or is it a step in the direction of a confused, sinister age, where the line between human beings and machines is non-existent, and no one can tell who or what the individual addressing them is anymore?

Difficult to say, but such is the nature of groundbreaking achievements. And as Warwick suggested, an AI like Eugene could be very helpful to human beings and address real social issues. For example, imagine an AI that is always hard at work on the other side of the cybercrime battle, locating “black-hat” hackers and cyber predators for law enforcement agencies. And what of assisting in research endeavors, helping human researchers to discover cures for disease, or to design cheaper, cleaner energy sources?

As always, what the future holds varies, depending on who you ask. But in the end, it really comes down to who is involved in making it a reality. So a little fear and optimism are perfectly understandable when something like this occurs, not to mention healthy.

Sources: motherboard.vice.com, gizmag.com, reading.ac.uk

The Future is… Worms: Life Extension and Computer-Simulations

Post-mortality is considered by most to be an intrinsic part of the so-called Technological Singularity. For centuries, improvements in medicine, nutrition and health have led to improved life expectancy. And in an age where so much more is possible – thanks to cybernetics, bio, nano, and medical advances – it stands to reason that people will alter their bodies in order to slow the onset of aging and extend their lives even further.

And as research continues, new and exciting finds are being made that would seem to indicate that this future may be just around the corner. And at the heart of it may be a series of experiments involving worms. At the Buck Institute for Research on Aging in California, researchers have been tweaking longevity-related genes in nematode worms in order to amplify their lifespans.

And the latest results caught even the researchers by surprise. By triggering mutations in two pathways known for lifespan extension – mutations that inhibit key molecules involved in insulin signaling (IIS) and the nutrient signaling pathway Target of Rapamycin (TOR) – they created an unexpected feedback effect that amplified the lifespan of the worms by a factor of five.

Ordinarily, a tweak to the TOR pathway results in a 30% lifespan extension in C. elegans worms, while mutations in IIS (daf-2) result in a doubling of lifespan. By combining the mutations, the researchers were expecting something around a 130% extension of lifespan. Instead, the worms lived the equivalent of about 400 to 500 human years.
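For the sake of the arithmetic, here’s a back-of-the-envelope comparison between what simple combination models predict and what the worms actually did – the percentages are those reported above, and the calculation itself is purely illustrative:

```python
# Single-mutation effects, as reported for C. elegans.
tor_gain = 0.30  # ~30% lifespan extension from the TOR mutation alone
iis_gain = 1.00  # ~100% extension (a doubling) from the IIS/daf-2 mutation

additive = tor_gain + iis_gain                         # naive expectation: +130%
multiplicative = (1 + tor_gain) * (1 + iis_gain) - 1   # still only +160%
observed = 4.0                                         # a five-fold lifespan: +400%

print(f"expected (additive):       +{additive:.0%}")
print(f"expected (multiplicative): +{multiplicative:.0%}")
print(f"observed:                  +{observed:.0%}")
```

Even the more generous multiplicative model falls far short of the observed effect – and that gap is what the researchers mean by synergy.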

As Doctor Pankaj Kapahi said in an official statement:

Instead, what we have here is a synergistic five-fold increase in lifespan. The two mutations set off a positive feedback loop in specific tissues that amplified lifespan. These results now show that combining mutants can lead to radical lifespan extension — at least in simple organisms like the nematode worm.

The positive feedback loop, say the researchers, originates in the germline tissue of worms – the lineage of reproductive cells that may be passed on to successive generations. This may be where the interactions between the two mutations are integrated; and if correct, the same may apply to the pathways of more complex organisms. Toward that end, Kapahi and his team are looking to perform similar experiments in mice.

But long-term, Kapahi says that a similar technique could be used to produce therapies for aging in humans. It’s unlikely that it would result in the dramatic increase to lifespan seen in worms, but it could be significant nonetheless. For example, the research could help explain why scientists are having a difficult time identifying single genes responsible for the long lives experienced by human centenarians:

In the early years, cancer researchers focused on mutations in single genes, but then it became apparent that different mutations in a class of genes were driving the disease process. The same thing is likely happening in aging. It’s quite probable that interactions between genes are critical in those fortunate enough to live very long, healthy lives.

A second worm-related story comes from OpenWorm, an international open-source collaboration dedicated to the creation of a bottom-up computer model of a millimeter-sized nematode. As one of the simplest known multicellular life forms on Earth, it is considered a natural starting point for creating computer-simulated models of organic beings.

In an important step forward, OpenWorm researchers have completed the simulation of the nematode’s 959 cells – including its 302 neurons and 95 muscle cells – and their worm is wriggling around in fine form. Despite this basic simplicity, the nematode is not without its share of complex behaviors, such as feeding, reproducing, and avoiding being eaten.

To model the complex behavior of this organism, the OpenWorm collaboration (which began in May 2013) is developing a bottom-up description. This involves making models of the individual worm cells and their interactions, based on their observed functionality in real-world nematodes. The hope is that realistic behavior will emerge if the individual cells act on each other as they do in the real organism.
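As a cartoon of what “bottom-up” means here, consider a toy simulation in which each cell is an object with purely local update rules, so that any global behavior has to emerge from the interactions. The cells, wiring, and dynamics below are invented for illustration and bear no relation to OpenWorm’s actual models:

```python
# Each "cell" only knows about its neighbors; there is no global controller.
class Cell:
    def __init__(self, name):
        self.name = name
        self.activation = 0.0
        self.neighbors = []

    def delta(self, dt):
        # Local rule: drift toward the mean activation of connected cells.
        if not self.neighbors:
            return 0.0
        target = sum(n.activation for n in self.neighbors) / len(self.neighbors)
        return (target - self.activation) * dt

# Wire three mutually connected cells and perturb one of them.
cells = [Cell(name) for name in ("a", "b", "c")]
for cell in cells:
    cell.neighbors = [c for c in cells if c is not cell]
cells[0].activation = 1.0

for _ in range(100):
    # Two-phase update, so every cell sees the same snapshot of its neighbors.
    deltas = [c.delta(dt=0.1) for c in cells]
    for cell, d in zip(cells, deltas):
        cell.activation += d

print([round(c.activation, 3) for c in cells])  # the perturbation spreads and settles
```

Scale that idea up from three toy cells to 959 biophysically detailed ones, and you have the flavor of the project.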

Fortunately, we know a lot about these nematodes. The complete cellular structure is known, as well as rather comprehensive information concerning how the worm behaves in reaction to its environment. Included in our knowledge is the complete connectome, a comprehensive map of the neural connections (synapses) in the worm’s nervous system.
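In data terms, a connectome is just a directed, weighted graph: neurons as nodes, synapse counts as edge weights. Here is a sketch of that representation – the neuron names follow C. elegans naming conventions, but these particular connections and weights are invented placeholders, not real data:

```python
# A tiny, made-up fragment of a connectome as an adjacency mapping.
connectome = {
    "ASHL": {"AVAL": 5, "AVDL": 2},  # pre-synaptic -> {post-synaptic: weight}
    "AVAL": {"VA01": 7},
    "AVDL": {"AVAL": 3},
}

def downstream(neuron, hops=2):
    """All neurons reachable from `neuron` within `hops` synaptic steps."""
    frontier, seen = {neuron}, set()
    for _ in range(hops):
        frontier = {post for pre in frontier for post in connectome.get(pre, {})} - seen
        seen |= frontier
    return seen

print(downstream("ASHL"))  # e.g. {'AVAL', 'AVDL', 'VA01'}
```

With the real wiring diagram loaded in place of these placeholders, questions like “what lies within two synaptic hops of this sensory neuron?” become simple graph traversals.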

The big question is, assuming that the behavior of the simulated worms continues to agree with the real thing, at what stage might it be reasonable to call it a living organism? The usual definition of living organisms is behavioral: they extract usable energy from their environment, maintain homeostasis, possess a capacity to grow, respond to stimuli, reproduce, and adapt to their environment in successive generations.

If the simulation exhibits these behaviors, combined with realistic responses to its external environment, should we consider it to be alive? And just as importantly, what tests could be devised to evaluate such a hypothesis? One possibility is an altered version of the Turing test – Alan Turing’s proposed idea for testing whether or not a computer could be called sentient.

In the Turing test, a computer is considered sentient and sapient if it can simulate the responses of a conscious sentient being so that an auditor can’t tell the difference. A modified Turing test might say that a simulated organism is alive if a skeptical biologist cannot, after thorough study of the simulation, identify a behavior that argues against the organism being alive.

And of course, this raises even larger questions. For one, is humanity on the verge of creating “artificial life”? And what, if anything, does that really look like? Could it just as easily be in the form of computer simulations as anthropomorphic robots and biomachinery? And if the answer to any of these questions is yes, then what exactly does that say about our preconceived notions of what life is?

If humanity is indeed moving into an age of “artificial life”, and from several different directions, it is probably time that we figure out what differentiates the living from the nonliving. Structure? Behavior? DNA? Local reduction of entropy? The good news is that we don’t have to answer that question right away. Chances are, we wouldn’t be able to at any rate.

And though it might not seem apparent, there is a connection between the former story and the latter. In addition to being able to prolong life through genetic engineering, the ability to simulate consciousness through computer-generated constructs might just prove a way to cheat death in the future. If complex life forms and connectomes (like that of the human brain) can be simulated, then people may be able to transfer their neural patterns before death and live on in simulated form indefinitely.

So… anti-aging, artificial life forms, and the potential for living indefinitely. And to think that it all begins with the simplest multicellular life form on Earth – the nematode worm. But then again, all life – nay, all of existence – depends upon the simplest of interactions, which in turn give rise to more complex behaviors and organisms. Where else would we expect the next leap in biotechnological evolution to come from?

And in the meantime, be sure to enjoy this video of OpenWorm’s simulated nematode in action:


Sources: IO9, cell.com, gizmag, openworm

Alan Turing Pardoned… Finally!

When it comes to the history of computing, cryptography and mathematics, few people have earned more renown and respect than Alan Turing. In addition to helping the Allied forces of World War II break the Enigma Code, a feat which was the difference between victory and defeat in Europe, he also played an important role in the development of computers with his “Turing Machine” and designed the Turing Test – a basic intelligence requirement for future AIs.

Despite these accomplishments, Alan Turing became the target of government persecution when it was revealed in 1952 that he was gay. At the time, homosexuality was illegal in the United Kingdom, and Alan Turing was charged with “gross indecency” and given the choice between prison and chemical castration. He chose the latter, and after two years of enduring the effects of the drug, he ate an apple laced with cyanide and died.

His death was officially ruled a suicide, though some suggested that foul play may have been involved; he died at the tender age of 41. Despite his lifelong accomplishments and the fact that he helped to save Britain from a Nazi invasion, he was destroyed by his own government for the simple crime of being gay.

But in a recent landmark decision, the British government indicated that it would support a backbench bill to clear his name posthumously of all charges. This is not the first time that the subject of Turing’s sentencing has been visited by the British Parliament. Though for years it was resistant to offering an official pardon, Prime Minister Gordon Brown did offer an apology for the “appalling” treatment Turing received.

However, it was not until now that the government sought to wipe the slate clean and begin to redress the issue, starting with the ruling that ruined the man’s life. The government ruling came on Friday, and Lord Ahmad of Wimbledon, a government whip, told peers that the government would table the third reading of the Alan Turing bill at the end of October if no amendments are made.

Every year since 1966, the Turing Award – the computing world’s highest honor and equivalent of the Nobel Prize – has been given by the Association for Computing Machinery for technical or theoretical contributions to the computing community. In addition, on 23 June 1998 – what would have been Turing’s 86th birthday – an English Heritage blue plaque was unveiled at his birthplace in Warrington Crescent, London.

In addition, in 1994, a stretch of the A6010 road – the Manchester city intermediate ring road – was named “Alan Turing Way”, and a bridge connected to the road was named “Alan Turing Bridge”. A statue of Turing was also unveiled in Manchester in 2001 in Sackville Park, between the University of Manchester building on Whitworth Street and the Canal Street gay village.

This memorial statue depicts the “father of Computer Science” sitting on a bench at a central position in the park holding an apple. The cast bronze bench carries in relief the text ‘Alan Mathison Turing 1912–1954’, and the motto ‘Founder of Computer Science’ as it would appear if encoded by an Enigma machine: ‘IEKYF ROMSI ADXUO KVKZC GUBJ’.

But perhaps the greatest and most creative tribute to Turing comes in the form of the statue of him that adorns Bletchley Park, the site of the UK’s main decryption department during World War II. The 1.5-ton, life-size statue of Turing was unveiled on June 19th, 2007. Built from approximately half a million pieces of Welsh slate, it was sculpted by Stephen Kettle and commissioned by the late American billionaire Sidney Frank.

Turing was even commemorated with a Google doodle last year, in honor of what would have been his 100th birthday. In a fitting tribute to Turing’s code-breaking work, the doodle was designed to spell out the name Google in binary. Unlike previous tributes produced by Google, this one was remarkably complicated. Those who attempted to figure it out apparently had to consult the online source Mashable just to realize what the purpose of it was.

For many, this news is seen as a development that has been too long in coming. Much like Canada’s own admission to wrongdoing in the case of Residential Schools, or the Church’s persecution of Galileo, it seems that some institutions are very slow to acknowledge that mistakes were made and injustices committed. No doubt, anyone in a position of power and authority is afraid to admit to wrongdoing for fear that it will open the floodgates.

But as with all things having to do with history and criminal acts, people cannot be expected to move forward until accounts are settled. And for those who would say “get over it already!”, or similar statements which would place responsibility for moving forward on the victims, I would say “just admit you were wrong already!”

Rest in peace, Alan Turing. And may the homophobes who still refuse to admit they’re wrong find the wisdom and self-respect to learn and grow from their mistakes. Orson Scott Card, I’m looking in your direction!

Sources: news.cnet.com, guardian.co.uk

Of Mechanical Minds

A few weeks back, a friend of mine, Nicola Higgins, directed me to an article about Google’s new neural net. Not only did she provide me with a damn interesting read, she also challenged me to write an article about the different types of robot brains. Well, Nicola, as Barney Stinson would say: “Challenge Accepted!” And I’ve got to say, it was a fun topic to get into.

After much research and plugging away at the lovely thing known as the internet (which was anticipated by Vannevar Bush with his proposed “memory index”, aka. the Memex, almost 70 years ago, btw), I managed to compile a list of the most historically relevant examples of mechanical minds, culminating in the development of Google’s Neural Net. Here we go…

Earliest Examples:
Even in ancient times, the concept of automata and arithmetic machinery can be found in certain cultures. In the Near East, the Arab World, and as far East as China, historians have found examples of primitive machinery that was designed to perform one task or another. And even though few specimens survive, there are even examples of machines that could perform complex mathematical calculations…

Antikythera mechanism:
Invented in ancient Greece, and recovered in 1901 from a shipwreck off the island that bears the same name, the Antikythera mechanism is the world’s oldest known analog calculator, built to compute the positions of the heavens for ancient astronomers. However, it was not until a century after its recovery that its true complexity and significance would be fully understood. Built in the 1st century BCE, it would not be until the 14th century CE that machines of comparable complexity would be built again.

Although it is widely theorized that this “clock of the heavens” must have had several predecessors during the Hellenistic Period, it remains the oldest surviving analog computer in existence. After collecting all the surviving pieces, scientists were able to reconstruct the design (pictured at right), which essentially amounted to a large box of interconnecting gears.

Pascaline:
Otherwise known as the Arithmetic Machine and Pascal’s Calculator, this device was invented by French mathematician Blaise Pascal in 1642 and is the first known example of a mechanized mathematical calculator. Pascal invented the device to help his father reorganize the tax revenues of the French province of Haute-Normandie, and went on to create 50 prototypes before he was satisfied.

Of those 50, nine survive and are currently on display in various European museums. In addition to giving his father a helping hand, its introduction launched the development of mechanical calculators all over Europe and then the world. Its invention is also directly linked to the development of the microprocessor roughly three centuries later, which in turn is what led to the development of PCs and embedded systems.

The Industrial Revolution:
With the rise of machine production, computational technology would see a number of developments. Key to all of this was the emergence of the concept of automation and the rationalization of society. Between the 18th and late 19th centuries, as every aspect of western society came to be organized and regimented around regular production, machines needed to be developed that could handle the task of crunching numbers and storing the results.

Jacquard Loom:
Invented by Joseph Marie Jacquard, a French weaver and merchant, in 1801, the loom that bears his name is the first programmable machine in history, relying on punch cards to input instructions and turn out textiles of various patterns. Though it was based on earlier inventions by Basile Bouchon (1725), Jean Baptiste Falcon (1728) and Jacques Vaucanson (1740), it remains the most well-known example of a programmable loom and the earliest machine controlled through punch cards.

Though the Loom did not perform computations, the design was nevertheless an important step in the development of computer hardware. Charles Babbage would use many of its features to design his Analytical Engine (see next example), and punch cards would remain a staple of the computing industry well into the 20th century, until the development of the microprocessor.

Analytical Engine:
Often confused with Babbage’s earlier “Difference Engine”, this concept was proposed by English mathematician Charles Babbage. Beginning in 1822, Babbage contemplated designs for the Difference Engine, a machine capable of automating the creation of error-free mathematical tables – a project that arose out of the difficulties encountered by teams of mathematicians attempting to compute them by hand. The Analytical Engine was its far more ambitious, general-purpose successor.

Though he was never able to complete construction of a finished product, due to apparent difficulties with the chief engineer and funding shortages, his proposed engine incorporated an arithmetical unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first Turing-complete design for a general-purpose computer. His various trial models (like that featured at left) are currently on display in the Science Museum in London, England.

The Birth of Modern Computing:
The early 20th century saw the rise of several new developments, many of which would play a key role in the development of modern computers. The use of electricity for industrial applications was foremost, with all computers from this point forward being powered by alternating and/or direct current, and some even using it to store information. At the same time, older ideas would remain in use but become refined, most notably the use of punch cards and tape to read instructions and store results.

Tabulating Machine:
The next development in computation came roughly 70 years later, when Herman Hollerith, an American statistician, developed a “tabulator” to help him process information from the 1890 US Census. In addition to being the first electromechanical device designed to assist in summarizing information (and later, accounting), it also went on to spawn the entire data processing industry.

Six years after the 1890 Census, Hollerith formed his own company, the Tabulating Machine Company, which was responsible for creating machines that could tabulate information based on punch cards. In 1924, after several mergers and consolidations, Hollerith’s company was renamed International Business Machines (IBM), which would go on to build the first “supercomputer” for Columbia University in 1931.

Atanasoff–Berry Computer:
Next, we have the ABC, the first electronic digital computing device in the world. Conceived in 1937, the ABC shared several characteristics with its predecessors, not the least of which was that it was electrically powered and relied on punch cards to store data. However, unlike its predecessors, it was the first machine to compute using digital symbols, and the first computer to use vacuum tube technology.

These additions allowed the ABC to achieve computational speeds that were previously thought impossible for a mechanical computer. However, the machine was limited in that it could only solve systems of linear equations, and its punch card system of storage was deemed unreliable. Work on the machine also stopped when its inventor, John Vincent Atanasoff, was called away to assist in World War II cryptographic assignments. Nevertheless, the machine remains an important milestone in the development of modern computers.
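For readers wondering what “solving systems of linear equations” actually involves, here is a compact sketch of the textbook method, Gaussian elimination – in modern Python, of course, and purely as an illustration of the task; it says nothing about the ABC’s actual vacuum-tube mechanism:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        # Swap the row with the largest pivot into place for stability.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # Eliminate the column below the pivot.
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # Back-substitution, from the last row up.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Two equations, two unknowns: 2x + y = 5 and x + 3y = 10.
print(solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))  # -> [1.0, 3.0]
```

The ABC could handle systems of up to 29 such equations – a serious workload in its day, even if the sketch above dispatches it in microseconds.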

Colossus:
There’s something to be said about war being the engine of innovation, and the Colossus – the machine used to break German codes in the Second World War – is certainly no exception to this rule. Due to the secrecy surrounding it, it would not have much of an influence on computing and would not be rediscovered until the 1990s. Still, it represents a step in the development of computing, as it relied on vacuum tube technology and punch tape in order to perform calculations, and proved most adept at solving complex mathematical computations.

Originally conceived by Max Newman, the British mathematician who was chiefly responsible for breaking German codes at Bletchley Park during the war, the machine was a proposed means of combatting the German Lorenz machine, which the Nazis used to encrypt their high-level wireless transmissions. With the first model built in 1943, ten variants of the machine were produced for the Allies before war’s end, and they proved instrumental in bringing down the Nazi war machine.

Harvard Mark I:
Also known as the “IBM Automatic Sequence Controlled Calculator (ASCC)”, the Mark I was an electro-mechanical computer that was devised by Howard H. Aiken, built by IBM, and officially presented to Harvard University in 1944. Due to its success at performing long, complex calculations, it inspired several successors, most of which were used by the US Navy and Air Force for the purpose of running computations.

According to IBM’s own archives, the Mark I was the first computer that could execute long computations automatically. Built within a steel frame 51 feet (16 m) long and eight feet high, and using 500 miles (800 km) of wire with three million connections, it was the industry’s largest electromechanical calculator and the largest computer of its day.

Manchester SSEM:
Nicknamed “Baby”, the Manchester Small-Scale Experimental Machine (SSEM) was developed in 1948 and was the world’s first computer to incorporate stored-program architecture. Whereas previous computers relied on punch tape or cards to store calculations and results, “Baby” was able to do this electronically.

Although its abilities were still modest – with a 32-bit word length, a memory of 32 words, and hardware capable only of subtraction and negation – it was still revolutionary for its time. In addition, the SSEM drew directly on the work of Alan Turing – another British cryptographer, whose theories on the “Turing Machine” and development of the algorithm would form the basis of modern computer technology.
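That “subtraction and negation only” limitation is less crippling than it sounds, because addition can be synthesized from those two operations alone: a + b = a − (−b). A toy illustration in Python – this merely mimics the trick, not the Baby’s actual instruction set:

```python
def neg(x):
    """Negation: one of the two arithmetic operations available."""
    return 0 - x

def sub(x, y):
    """Subtraction: the other available operation."""
    return x - y

def add(x, y):
    # a + b is recovered as a - (-b), using only sub and neg.
    return sub(x, neg(y))

print(add(20, 22))  # -> 42
```

Early Manchester programmers had to lean on exactly this kind of synthesis to build a full arithmetic repertoire out of a deliberately tiny instruction set.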

The Nuclear Age to the Digital Age:
With the end of World War II and the birth of the Nuclear Age, technology once again took several explosive leaps forward. This could be seen in the realm of computer technology as well, where wartime developments and commercial applications grew by leaps and bounds. In addition to processor speeds and stored memory multiplying exponentially every few years, the overall size of computers got smaller and smaller. This, some theorized, would lead to the development of computers that were perfectly portable and smart enough to pass the “Turing Test”. Imagine!

IBM 7090:
The 7090 model, which was released in 1959, is often referred to as a second-generation computer because, unlike its predecessors, which were either electromechanical or used vacuum tubes, this machine relied on transistors to conduct its computations. In addition, it was an improvement on earlier models in that it used a 36-bit word length and could store up to 32K (32,768) words – a modest increase in processing over the SSEM, but roughly a thousand-fold increase in terms of storage capacity.

And of course, these improvements were mirrored in the fact the 7090 series were also significantly smaller than previous versions, being about the size of a desk rather than an entire room. They were also cheaper and were quite popular with NASA, Caltech and MIT.

PDP-8:
In keeping with the trend towards miniaturization, 1965 saw the development of the first commercial minicomputer by the Digital Equipment Corporation (DEC). Though large by modern standards (about the size of a minibar), the PDP-8, also known as the “Straight-8”, was a major improvement over previous models, and therefore a commercial success.

In addition, later models also incorporated advanced concepts like real-time operating systems and preemptive multitasking. Unfortunately, early models still relied on paper tape in order to process information. It was not until later that the computer was upgraded to take advantage of programming languages such as FORTRAN, BASIC, and DIBOL.

Intel 4004:
Founded in California in 1968, the Intel Corporation quickly moved to the forefront of computational hardware development with the creation of the 4004 – the world’s first single-chip microprocessor – in 1971. Continuing the trend towards smaller computers, the development of this internal processor paved the way for personal computers, desktops, and laptops.

Incorporating the then-new silicon gate technology, Intel was able to create a processor that allowed for a higher number of transistors, and therefore a faster processing speed, than ever possible before. On top of all that, they were able to pack it into a much smaller frame, which ensured that computers built with the new CPU would be smaller, cheaper and more ergonomic. Thereafter, Intel would be a leading designer of integrated circuits and processors, supplanting even giants like IBM.

Apple I:
The ’60s and ’70s seemed to be a time for the birthing of future giants. Less than a decade after the first microprocessor was created, another upstart came along with an equally significant development. Named Apple and started by three men in 1976 – Steve Jobs, Steve Wozniak, and Ronald Wayne – the company’s first marketed product was a “personal computer” (PC) which Wozniak built himself.

One of the most distinctive features of the Apple I was the fact that it had a built-in keyboard. Competing models of the day, such as the Altair 8800, required a hardware extension to allow connection to a computer terminal or a teletypewriter machine. The company quickly took off and began introducing an upgraded version (the Apple II) just a year later. As a result, Apple I units remain a scarce commodity and a very valuable collector’s item.

The Future:
The last two decades of the 20th century saw far more than their fair share of developments. From the CPU and the PC came desktop computers, laptop computers, PDAs, tablet PCs, and networked computers. This last creation – aka. the Internet – was the greatest leap by far, allowing computers from all over the world to be networked together and share information. And with the exponential increase in information sharing that occurred as a result, many believe that it’s only a matter of time before wearable computers, fully portable computers, and artificial intelligences are possible. Ah, which brings me to the last entry in this list…

The Google Neural Network:
From mechanical dials to vacuum tubes, from CPUs to PCs and laptops, computers have come a hell of a long way since the days of ancient Greece. Hell, even within the last century, the growth in this one area of technology has been explosive, leading some to conclude that it was just a matter of time before we created a machine that was capable of thinking all on its own.

Well, my friends, that day appears to have dawned. Nicola and I have already blogged about this development, so I shan’t waste time going over it again. Suffice it to say, this new program, which thus far has been able to pick out pictures of cats from unlabeled images, contains the neural capacity to achieve roughly 1/1000th of what the human brain is capable of. Sounds small, but given the exponential growth in computing, it won’t be long before that gap is narrowed substantially.

Who knows what else the future will hold?  Optical computers that use not electrons but photons to move information about? Quantum computers, capable of connecting machines not only across space, but also time? Biocomputers that can be encoded directly into our bodies through our mitochondrial DNA? Oh, the possibilities…

Creating machines in the likeness of the human mind. Oh Brave New World that hath such machinery in it. Cool… yet scary!

A Tribute to Alan Turing

Wouldn’t you know it? Today marks what would have been Alan Turing’s 100th birthday. This man was not only immensely influential in the development of computer science and cryptanalysis, he is also considered the father of Artificial Intelligence. In fact, the modern formal notions of “algorithm” and “computation” are traced to him, as was the development of the “Turing machine” concept, which has helped computer scientists to understand the limits of mechanical computation.

However, his reputation goes far beyond the field of computer science. During World War II, he worked at the Government Code and Cypher School (GCCS) at Bletchley Park, Britain’s codebreaking centre. For a time, he was head of Hut 8, the section responsible for breaking the Enigma Code, Germany’s wartime cypher, which it used to encrypt all of its communications. Were it not for this achievement, the Allies may very well have lost the war.

Especially in the Atlantic, where German U-boats were causing extensive losses in Allied shipping, Turing’s work proved to be the difference between victory and defeat. By knowing the disposition and orders of the German fleet, crucial shipments of food, raw materials, weapons and troops were able to make it across the Atlantic and keep Britain in the war. Eventually, the broken codes would also help the Allied navies to hunt down and eviscerate Germany’s fleet of subs.

After the war, he worked at the National Physical Laboratory in London, where he created one of the first designs for a stored-program computer, the ACE (Automatic Computing Engine). He named it in honor of Charles Babbage’s Difference Engine, a mathematical machine designed a century before. This machine was the culmination of theoretical work begun in the mid-1930s and of his experiences at Bletchley Park.

In 1948, he joined the Computing Laboratory at Manchester University, where he assisted fellow mathematician and codebreaker Max Newman in the development of the Manchester computers. Their work would eventually yield the world’s first stored-program computer, the world’s first computer to use transistors, and what was the world’s fastest computer at the time of its inauguration (in 1962).

He then switched for a time to the emerging, theoretical field of mathematical biology, a science concerned with the mathematical representation, treatment and modeling of biological processes, using a variety of applied mathematical techniques and tools. This field has numerous applications in medicine, biology, and the proposed field of biotechnology. As always, the man was on the cutting edge!

In terms of Artificial Intelligence, Turing proposed that it might be possible one day to create a machine capable of replicating the same processes as the human mind. The “Turing Test” was a proposed way of testing this hypothesis, whereby a human test subject and a computer would both be subjected to the same questions in a blind test. If the person administering the test could not differentiate between the answers that came from the person and those that came from the machine, then the machine could accurately be deemed an “artificial intelligence”.

Tragically, his life ended in 1954, just weeks shy of his 42nd birthday. This was all because Turing was gay and did not try to conceal it. In 1952, after years of service to the British government, he was tried as a criminal for “indecency”, homosexuality being considered a crime at the time. In exchange for no jail time, he agreed to submit to female hormone treatment, which is tantamount to “chemical castration”. After a year of enduring this treatment, he committed suicide by ingesting cyanide.

In 2009, Prime Minister Gordon Brown issued a formal apology on behalf of the British government for “the appalling way he was treated”. Between his wartime contributions and his ongoing influence in the fields of computer science, mathematics, and the emerging fields of biotechnology and artificial intelligence, Turing has left a lasting legacy. For example, at King’s College, Cambridge, the computer room is named after him in honor of his achievements and the fact that he was a student there in 1931 and a Fellow in 1935.

In Manchester, where Turing spent much of his life, many tributes have been made in his honor. In 1994, a stretch of the Manchester city intermediate ring road was named “Alan Turing Way”, while a bridge carrying this road was widened and renamed the Alan Turing Bridge. In 2001, a statue of Turing was unveiled in Sackville Park. The statue shows Turing sitting on a bench, strategically located between the University of Manchester and the Canal Street gay village.

The commemorative plaque reads ‘Founder of Computer Science’ as it would appear if encoded by an Enigma machine: ‘IEKYF ROMSI ADXUO KVKZC GUBJ’. Another statue of Turing was unveiled in Bletchley Park in 2007, made out of approximately half a million pieces of slate and showing the young Turing studying an Enigma machine. A commemorative English Heritage blue plaque was also mounted outside the house where Turing grew up in Wilmslow, Cheshire.

In literature and drama, Turing’s name and persona have made several appearances. The 1986 play Breaking the Code, about Turing’s life, went from London’s West End to Broadway and earned three Tony Award nominations. In 1996, the BBC produced a television adaptation of the play, starring Derek Jacobi in the leading role. In 2010, actor/playwright Jade Esteban Estrada portrayed Turing in the solo musical ICONS: The Lesbian and Gay History of the World, Vol. 4. And, my personal favorite, he was featured heavily in Neal Stephenson’s 1999 novel Cryptonomicon.

Rest in peace, Alan Turing. Like many geniuses, you were ahead of your time and destroyed by the very people you helped to educate and protect. I hope Galileo, Socrates, Oppenheimer and Tupac are there to keep you company! You have a lot to discuss, I’m sure 😉

Robots, Androids and AI’s

Let’s talk artificial life forms, shall we? Lord knows they are a common enough feature in science fiction, aren’t they? In many cases, they take the form of cold, calculating machines that chill audiences to the bone with their “kill all humans” kind of vibe. In others, they are solid-state beings with synthetic parts but hearts of gold, who steal ours in the process. Either way, AI’s are a cornerstone of the world of modern sci-fi. And over the past few decades, they’ve gone through countless renditions and re-imaginings, each with its own point to make about humanity, technology, and the line that separates the natural from the artificial.

But in the end, it’s really just the hardware that’s changed. Whether we’re talking about Daleks, Terminators, or “Synthetics”, the core principle has remained the same. Based on the speculations of mathematician and legendary cryptographer Alan Turing, an Artificial Intelligence is essentially a being that can fool the judges in a double-blind test. Working extensively with machines that were primarily designed for solving massive mathematical equations, Turing believed that some day we would be able to construct a machine capable of higher reasoning, surpassing even humans.

Arny (Da Terminator):
Who knew robots from the future would have Austrian accents? For that matter, who knew they’d all look like bodybuilders? Originally, when Arny was presented with the script for Cameron’s seminal time traveling sci-fi flick, he was being asked to play the role of Kyle Reese, the human hero. But Arny very quickly found himself identifying with the role of the Terminator, and a franchise was born!

Originally, the Terminator was the type of cold, unfeeling and ruthless machine that haunted our nightmares, a cyberpunk commentary on the dangers of run-away technology and human vanity. Much like its creator, the Skynet supercomputer, the T101 was part of a race of machines that decided it could do without humanity and was sent out to exterminate them. As Reese himself said in the original: “It can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead.”

The second Terminator, by contrast, was a game changer. Captured in the future and reprogrammed to protect John Connor, he became the sort of surrogate father that John never had. Sarah reflected on this irony during a moment of internal monologue in movie two: “Watching John with the machine, it was suddenly so clear. The Terminator would never stop. It would never leave him, and it would never hurt him, never shout at him, or get drunk and hit him, or say it was too busy to spend time with him. It would always be there. And it would die to protect him. Of all the would-be fathers who came and went over the years, this thing, this machine, was the only one who measured up. In an insane world, it was the sanest choice.”

In short, Cameron gave us two visions of technology with these first two installments in the series. In the first, we got the dangers of worshiping high-technology at the expense of humanity. In movie two, we witnessed the reconciliation of humans with technology, showing how an artificial life form could actually be capable of more humanity than a human being. To quote one last line from the franchise: “The unknown future rolls toward us. I face it, for the first time, with a sense of hope. Because if a machine, a Terminator, can learn the value of human life, maybe we can too.”

Bender:
No list of AI’s and the like would ever be complete without mentioning Futurama’s Bender. That dude puts the funk in funky robot! Originally designed to be a bending unit – hence his name – he seems more adept at wisecracking, alcoholism, chain-smoking and comedically plotting the demise of humanity. But it’s quickly made clear that he doesn’t really mean it. While he may hold humans in pretty low esteem – laughing at tragedy and failing to empathize with anything that isn’t him – he also loves his best friend Fry, whom he affectionately refers to as “meat-bag”.

In addition, he’s got some aspirations that point to a creative soul. Early on in the show, it was revealed that any time he gets around something magnetic, he begins singing folk and country western tunes. This is apparently because he always wanted to be a singer, and after a crippling accident in season 3, he got to do just that – touring the country with Beck and a show called “Bend-aid” which raised awareness about the plight of broken robots.

He also wanted to be a cook, which was difficult considering he had no sense of taste and didn’t seem to care about lethally poisoning humans! However, after learning at the feet of the legendary Helmut Spargle, he learned the secret of “Ultimate Flavor”, which he then used to challenge and humiliate his idol, chef Elzar, on Iron Chef. Apparently the secret was confidence – and a vial of water laced with LSD!

Other than that, there’s really not that much going on with Bender. Up front, he’s a chain-smoking, alcoholic robot with loose morals, or a total lack thereof. When one gets to know him better, they pretty much conclude that what you see is what you get: an endless source of sardonic humor, weird fashion sense, and dry one-liners. Of them all, “Bite my shiny metal ass”, “Pimpmobile”, “We’re boned!” and “Up yours, chump” seem to rank the highest.

Ash/Bishop:
Here we have yet another case of robots giving us mixed messages, one that comes to us direct from the Alien franchise. In the original movie, we were confronted with Ash, an obedient corporate mole who did the company’s bidding at the expense of human life. His cold, misguided priorities were only heightened when he revealed that he admired the xenomorph for its “purity”: “A survivor… unclouded by conscience, remorse, or delusions of morality.”

After going nuts and trying to kill Ripley, he was even kind enough to smile and say in that disembodied tinny voice of his, “I can’t lie to you about your chances, but… you have my sympathies.” What an asshole! And the perfect representation for an inhuman, calculating robot. The result of unimpeded aspirations, no doubt the same thing which was motivating his corporate masters to get their hands on a hostile alien, even if it meant sacrificing a crew or two.

But, as with Terminator, Cameron pulled a switch-up in movie two with the Synthetic known as Bishop (or “artificial human” as he preferred to be called). In the beginning, Ripley was hostile towards him, rebuffing his attempts to assure her that he was incapable of killing people thanks to the addition of his behavioral inhibitors. Because of these, he could not harm, or through inaction allow to be harmed, a human being (otherwise known as an “Asimov”). But in the end, Bishop’s constant concern for the crew and the way he was willing to sacrifice himself to save Newt won her over.

Too bad he had to get ripped in half to earn her trust. But I guess when an earlier model tries to shove a magazine down your throat, you kind of have to go above and beyond to make someone put their life in your hands again. Now if only all synthetics were willing to get themselves ripped in half for Ripley’s sake, she’d be set!

C3P0/R2D2:
For that matter, who knew robots from the future would be fey, effeminate and possibly homosexual? Not that there’s anything wrong with that last one… But as audiences are sure to agree, the other characteristics could get quite annoying after a while. C3P0’s constant complaining, griping, moaning and citing of statistical probabilities were at once too human and too robotic! Kind of brilliant, really… You could say he was the Sheldon of the Star Wars universe!

Still, C3P0 was nothing if not useful when characters found themselves in diplomatic situations, or facing a species of aliens whose language they couldn’t possibly fathom. He could even interface with machinery, which was helpful when the hyperdrive was out or the moisture condensers weren’t working. Gotta bring in that “Blue Harvest” after all! And given that R2D2 could do nothing but bleep and blurp, someone had to be around to translate for him.

Speaking of which, R2D2 was the perfect counterpart to C3P0. As the astromech droid of the pair, he was the engineer and a real nuts-and-bolts kind of guy, whereas C3P0 was the diplomat and expert in protocol. Whereas 3P0 was sure to give up at the first sign of trouble, R2 would always soldier on and put himself in harm’s way to get things done. This difference in personality was also made evident in their differences in height and structure. Whereas C3P0 was tall, lanky and looked quite fragile, R2D2 was short, stocky, and looked like he could take a licking and keep on ticking!

Naturally, it was this combination of talents that made them comically entertaining during their many adventures and hijinks together. The one would always complain and be negative, the other would be positive and stubborn. And in the end, despite their differences, they couldn’t possibly imagine a life without the other. This became especially evident whenever they were separated or one of them was injured.

Hmmm, all of this is starting to sound familiar to me somehow. I’m reminded of another, mismatched, and possibly homosexual duo. One with a possible fetish for rubber… Not that there’s anything wrong with that! 😉

Cameron:
Some might accuse me of smuggling her in here just to get some eye-candy in the mix. Some might say that this list already has an example from the Terminator franchise and doesn’t need another. They would probably be right…

But you know what? Screw that – it’s Summer Glau! And the fact of the matter is, she did a way better job than Kristanna Loken at showing that these killing/protecting machines can be played by women. Making her appearance in the series Terminator: The Sarah Connor Chronicles, she worked alongside acting great Lena Headey, of 300 and Game of Thrones fame.

And in all fairness, she and Loken did bring some variety to the franchise. For instance, in the show she portrayed yet another reprogrammed machine from the future, but represented a model different from the T101s. The purpose of these later models appeared to be versatility, with a smaller chassis and more articulate appendages making a woman’s body available as a potential disguise. Quite smart, really. If you think about it, people are a lot more likely to trust a smaller woman than a bulked-out Arny-bot any day (especially men!). It also opened the series up to more female characters besides Sarah.

And dammit, it’s Summer Glau! If she didn’t earn her keep from portraying River Tam in Firefly and Serenity, then what hope is there for the rest of us!

Cortana:
Here we have another female AI, and one who is pretty attractive despite her lack of a body. In this case, she comes to us from the Halo universe. In addition to being hailed by critics for her believability, depth of character, and attractive appearance, she was ranked as one of the most disturbingly sexual game characters by Games.net. No surprises there, really. Originally, the designers of her character used Egyptian Queen Nefertiti as a model, and her half-naked appearance throughout the game has been known to get the average gamer to stand up and salute!

Though she serves ostensibly as the ship’s AI for the UNSC Pillar of Autumn, Cortana ends up having a role that far exceeds her original programming. Constructed from the cloned brain of Dr. Catherine Elizabeth Halsey, creator of the SPARTAN project, she has an evolving matrix, and hence is capable of learning and adapting as time goes on. Due to this and their shared experiences as the series goes on, she and the Master Chief form a bond and even become something akin to friends.

Although she has no physical body, Cortana’s program is mobile, and she makes several appearances throughout the series, always in different spots. She is able to travel around with the Master Chief, commandeer Covenant vessels, and interface with a variety of machines. And aside from her feminine appearance, her soft, melodic voice is a soothing change of pace from the Chief’s gruff tone and the racket of gunfire and dying aliens!

Data:
The stoic, stalwart and socially awkward android of Star Trek: TNG. Built to resemble his maker, Dr. Noonien Soong, Data is a first-generation positronic android – a concept borrowed from Asimov’s I, Robot. He later enlisted in Starfleet in order to be of service to humanity and explore the universe. In addition to his unsurpassed computational abilities, he also possesses incredible strength and reflexes, and even knows how to pleasure the ladies. No joke – he’s apparently got all kinds of files on how to do… stuff, and he even got to use them! 😉

Unfortunately, Data’s programming does not include emotions. Initially, this seemed to serve the obvious purpose of making his character a foil for humanity, much like Spock was in the original series. However, as the show progressed, it was revealed that Soong had created an android very much like Data who also possessed the capacity for emotions. But of course, things went terribly wrong when this model, named Lore, became dangerously ambitious and misanthropic. There were some deaths…

Throughout the series, Data finds himself seeking to understand humanity, frequently coming up short, but always learning from the experience. His attempts at humor, and his failure to grasp social cues and innuendo, are a constant source of comic relief, as are his attempts to mimic these very things. And though he eventually was able to procure an “emotion chip” from his brother, Data remains the straight man of the TNG universe, responding to every situation with a blank look or a confused, fascinated expression.

More coming in installment two. Just give me some time to do all the write ups and find some pics :)…

Cryptonomicon

Having covered Snow Crash and The Diamond Age a while back, I thought it was time to move on to the third installment in my Neal Stephenson series. Today, for consideration: the historical techno-thriller Cryptonomicon! This story took me close to a year to read, in part due to interruptions, but also because the book is pretty freaking dense! However, the read was not only enjoyable and informative, it was also pretty poignant. As a historian and a sci-fi buff, there was plenty there for me to enjoy and learn from. And for those who enjoy techno-thrillers and dissertations on mathematics, this book is also a page-turner! Little wonder, then, that this novel was dubbed the “ultimate geek novel”.

The name is derived from H.P. Lovecraft’s Necronomicon, a fictitious book that has been referenced numerous times in western literature and pop culture. It points to the book’s main theme, cryptology: within the story, the Cryptonomicon is an unofficial manual used by cryptologists during and after World War II. In addition to featuring fictionalized versions of real events, the novel is chock-full of fictionalized personalities drawn from history – Alan Turing, Albert Einstein, Douglas MacArthur, Winston Churchill, Isoroku Yamamoto, Karl Dönitz, and Ronald Reagan – as well as some highly technical and detailed descriptions of modern cryptography and information security, with discussions of prime numbers, modular arithmetic, and Van Eck phreaking.

Unlike his other novels, Cryptonomicon is more akin to historical fiction and techno-thriller than actual sci-fi, mainly because its narratives take place in the past and the present day. However, this is a bit of an arbitrary designation. As most fans of science fiction know, a story need not take place in the future in order to explore the kinds of themes common to the genre. And really, all science fiction is about the time period in which it is written, and actively draws on the past to create a picture of the future. So, putting aside the question of where it falls in the literary spectrum, allow me to delve into this bad boy and what was good about it!

Synopsis:
The story contains four intertwining plotlines, three of which are set in the Second World War, and a fourth which takes place in the late 1990s. The first follows the exploits of a man named Bobby Shaftoe, a decorated Marine who has just survived the Battle of Guadalcanal and is being transferred to the OSS’s counterintelligence division. The second follows Lawrence Pritchard Waterhouse, a mathematician and cryptologist working for the joint American-British cryptologic unit, Detachment 2702. This work involves breaking German codes and leads him to several interesting encounters with famous people, including Albert Einstein and Alan Turing. The third involves a Japanese man named Goto Dengo, an Imperial Army officer and mining engineer who becomes involved in a secret Axis project to bury looted gold in the Philippines. The fourth and final plotline, set in the 90s, centers on Randy Lawrence Waterhouse, an expert programmer working for an IT company (Epiphyte) that is doing business in the Philippines.

As the story develops, we see Shaftoe become marooned in Sweden, where he meets up with some unlikely compatriots. The first is a Catholic priest and physician named Enoch Root, who is attached to 2702, while the second is a Kriegsmarine captain named Günter Bischoff, the commanding officer of an experimental rocket-propelled U-boat. An alliance forms between these individuals, mainly because Bischoff, marooned in Sweden with the rest of them, has learned that the Kriegsmarine has been given the task of smuggling gold to Japan in order to buy its continued cooperation in the war. He and the others decide to work together to get their hands on some of it, and soon find themselves bound for the Philippines. Before the war, Shaftoe had a sweetheart there named Glory, whom he has not seen since the Japanese invaded and is eager to get back to.

Meanwhile, Waterhouse is bounced around the globe in his efforts to break the Axis codes. First, he is sent to a fictional island in the English Channel known as Qwghlm (pronounced ???). On this island, the people wear incredibly thick wool sweaters and speak a language that is loosely related to Gaelic and incredibly hard to understand. He is then sent off to Brisbane, Australia, to work on breaking Japanese codes. While there, he finds a community of Qwghlmians who, he learns, are serving as signal operators for the British. Whereas the US had its “Wind Talkers”, Navajo signal officers who used their native language to confuse Japanese listeners, the British had Qwghlmians. Here, he falls in love with, and eventually marries, a young woman named Mary cCmndhd.

At the same time, Goto Dengo is nearly drowned when his troop ship is sunk in the South Pacific. He narrowly survives and drifts to an island, where he is forced to survive amidst squalor, decay, and a group of Japanese soldiers who are pillaging and raping the natives. In time, he is found by his fellow officers and sent to the Philippines, where he is put to work on the construction of a series of underground caverns. The purpose of these caves is to store the vast amounts of looted gold being shipped from Germany, since the Germans are now losing the war and fear being overrun. The caves are eventually completed and the Americans invade, during which time Dengo is reunited with Shaftoe. Having reenlisted with the Marines, Bobby was sent ahead to organize the resistance, and has learned that he has a son. After convincing Dengo to surrender and defect, he heads off on what turns out to be his final mission. Meanwhile, the sub carrying Günter Bischoff and a hoard of gold runs aground in the Philippines, and the crew drowns.

Fast forward to 1997, where we meet Randy Waterhouse as he begins his work in the Philippines. Ostensibly, this involves selling Pinoy-grams to migrant Filipinos, a sort of fiber-optic communication service that allows migrants to speak with family back home instantaneously. However, he soon learns that his friend Avi Halaby, the CEO of Epiphyte, is interested in using this stream of capital to fund the building of a data haven on the nearby (and fictional) island of Kinakuta. At this point, Randy’s job description changes to surveying the laying of the underwater fiber-optic cables that will run from the Philippines to Kinakuta, a job which leads him to enlist the help of a Vietnam veteran and mariner named Douglas MacArthur Shaftoe and his daughter, America “Amy” Shaftoe. These two, we quickly learn, are the son and granddaughter of Bobby Shaftoe. In addition, the company contracted to build the underground facility on Kinakuta that will house the haven is run by a Japanese man named Goto Furudenendu, who just happens to be the son of Goto Dengo.

Over time, their plans to create a haven free of repression and scrutiny come under fire from various quarters. At this point, Amy and Doug begin to help Randy and his company find an alternative source of revenue – a hidden cache of gold rumored to be at the bottom of a Philippine harbor. They find the gold and have the money they need, but in the course of it they also uncover the plot involving Detachment 2702, the Japanese, the Nazis, and an unbreakable code named Arethusa. This discovery makes them more enemies – people who want the gold for themselves, or just revenge – and things start to get dicey! Through all this, however, they also get to meet an aged Goto Dengo, now the CEO of the construction company and the man who buried the gold. He agrees to show them where the cache is hidden so that it can be repatriated; and with his help they find it, Randy and Amy get together, the haven is built, and just about everyone lives happily ever after!

Strengths:
From the description alone, I imagine people will assume that this story is dense, well-conceived, and comes together quite nicely. And they would be right! One thing that is immediately clear is how well Stephenson weaves past and present together to create a grand narrative that is chock-full of suspense, intrigue and history. This last element is especially prevalent; I can’t tell you how many historical cameos made it into the novel. Through the character of Lawrence Waterhouse, Albert Einstein and Alan Turing make an appearance. Through his German counterpart, Rudy von Hacklheber, Hermann Goering makes several. Günter Bischoff, though he never meets Karl Dönitz in the story, repeatedly references him, since it is Dönitz whom he is blackmailing and from whom he gets all his orders! And through Bobby Shaftoe and Goto Dengo, Douglas MacArthur and Isoroku Yamamoto are also woven into the story.

In addition, the way he brings past and present together through his main characters, all of whom turn out to be related, is masterful. Randy Waterhouse is the grandson of Lawrence Waterhouse and Mary cCmndhd; Doug and Amy are the son and granddaughter of Bobby Shaftoe, respectively; and Goto Furudenendu is the son of Goto Dengo. Hell, even Randy’s ex-girlfriend ends up shacking up with the son of a character in the story! In this way, the sense of connection between past and present is made clearer, as is the sense that destiny, or some kind of long-term plan, is being fulfilled. The way modern computing grew out of wartime cryptology is also made abundantly clear.

Weaknesses:
As more than one critic observed, this book tends to appeal to the techno-geeks in the crowd. In fact, that aspect of the novel can be quite oppressive at times. In several parts, the descriptions of mathematical concepts as they apply to various things (even the everyday) go on and on and on. Two examples come to mind: the equation Lawrence comes up with to describe the rotation of a bicycle wheel, and the section where Randy and his peers are dealing with some Van Eck phreaking email surveillance. I mean really, page after page of inane detail! I get that the intent was to be comical in the sheer geekiness of it all, but for the non-geeky, the only way to survive these sections is to skip ahead, or just keep reading and pray there’s a point in there somewhere. Other than that, the sheer length of the book can feel somewhat stifling, which is another reason it took me so long to finish it.
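That said, the bicycle digression does boil down to something simple. As I read it, the idea is that a chain with one weak link and a sprocket with one damaged tooth only line up at intervals governed by the least common multiple of the two cycle lengths – the same modular arithmetic that gives rotor cipher machines their long periods. Here is my own back-of-the-envelope sketch of that idea, with numbers I made up:

```python
from math import gcd

# Made-up numbers, not Stephenson's: a chain of 101 links runs over a
# sprocket with 25 teeth. The chain only jumps off when the one weak
# link happens to meet the one damaged tooth.
chain_links, sprocket_teeth = 101, 25

# The two cycles realign every lcm(chain_links, sprocket_teeth) steps.
period = chain_links * sprocket_teeth // gcd(chain_links, sprocket_teeth)
print(period)  # 2525 -- the lengths are coprime, so the period is maximal

# Brute-force check: advance the chain one link at a time and find the
# first step where both the weak link and the bad tooth return to the
# contact point together.
step = next(n for n in range(1, period + 1)
            if n % chain_links == 0 and n % sprocket_teeth == 0)
assert step == period
```

Whether that makes the pages-long version more forgivable is another question.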

However, this book goes far beyond the mere technical. History buffs, sci-fi fans, and people who just plain like a good, complex and interwoven story will find something to enjoy here. Not only was it a good read, it previewed Stephenson’s ability to combine historical fiction and sci-fi, something he would reprise with the Baroque Cycle trilogy and the more recent Mongoliad, all of which I have yet to read! However, one thing at a time. I have yet to finish Anathem, and I’ve been eyeing Reamde with keen interest lately…