The Birth of AI: Computer Beats the Turing Test!

Alan Turing, the British mathematician and cryptographer, is widely known as the “Father of Theoretical Computer Science and Artificial Intelligence”. Amongst his many accomplishments – such as breaking Germany’s Enigma Code – was the development of the Turing Test. The test was introduced in Turing’s 1950 paper “Computing Machinery and Intelligence,” in which he proposed an imitation game to be played between a computer and human participants.

The game involves three players: Player C asks the other two a series of written questions and attempts to determine which of them is a human and which is a computer. If Player C cannot distinguish one from the other, the computer can be said to meet the criteria of an “artificial intelligence”. And this past weekend, a computer program finally beat the test, in what experts are calling the first time an AI has legitimately fooled people into believing it is human.

The event was known as the Turing Test 2014, and was held in partnership with RoboLaw, an organization that examines the regulation of robotic technologies. The machine that won the test is known as Eugene Goostman, a program that was developed in Russia in 2001 and presents itself as a 13-year-old Ukrainian boy. In a series of chatroom-style conversations organized by the University of Reading’s School of Systems Engineering, the Goostman program managed to convince 33 percent of a team of judges that he was human.

This may sound modest, but that score placed his performance just over the 30 percent threshold that Alan Turing wrote he expected to see met by the year 2000. Kevin Warwick, one of the organisers of the event at the Royal Society in London this weekend, was on hand for the test and monitored it rigorously. As Deputy Vice-Chancellor for Research at Coventry University, and considered by some to be the world’s first cyborg, Warwick knows a thing or two about human-computer relations.

In a post-test interview, he explained how the test went down:

We stuck to the Turing test as designed by Alan Turing in his paper; we stuck as rigorously as possible to that… It’s quite a difficult task for the machine because it’s not just trying to show you that it’s human, but it’s trying to show you that it’s more human than the human it’s competing against.

To conduct the test, thirty judges had conversations with two different partners on a split screen – one human, one machine. After chatting for five minutes, they had to choose which one was the human. Five machines took part, but Eugene was the only one to pass, fooling one third of his interrogators. Warwick put Eugene’s success down to his ability to keep the conversation flowing logically, but not with robotic perfection.
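Put in concrete terms, the pass criterion is simple: a machine passes if it fools more than 30 percent of its interrogators, so with thirty judges the reported 33 percent works out to roughly ten judges misidentifying Eugene as the human. The Python sketch below is a hypothetical illustration of that protocol only – the judge’s decision is modelled as a stand-in probability, not the actual event data.

```python
import random

# Hypothetical sketch of the judging protocol described above. The judge's
# decision is a stand-in probability, not a model of the actual 2014 event.

NUM_JUDGES = 30        # judges at the Royal Society event
PASS_THRESHOLD = 0.30  # Turing's predicted 30 percent bar

def judge_decides_machine_is_human() -> bool:
    """Placeholder for one five-minute, split-screen interrogation.

    Returns True when the judge mistakes the machine for the human.
    A real judge would decide from the conversation; here we assume a
    fixed probability purely for illustration.
    """
    return random.random() < 0.33

def run_turing_test() -> None:
    fooled = sum(judge_decides_machine_is_human() for _ in range(NUM_JUDGES))
    rate = fooled / NUM_JUDGES
    verdict = "passes" if rate > PASS_THRESHOLD else "fails"
    print(f"{fooled}/{NUM_JUDGES} judges fooled ({rate:.0%}) -> machine {verdict}")

if __name__ == "__main__":
    run_turing_test()
```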

Eugene can initiate conversations, but won’t do so totally out of the blue, and answers factual questions more like a human. For example, some factual questions elicited the all-too-human answer “I don’t know”, rather than an encyclopaedia-style answer that simply states cold, hard facts and descriptions. Eugene’s successful trickery is also likely helped by the fact that he has a realistic persona. From the way he answered questions, it seemed apparent that he was in fact a teenager.

Some of the “hidden humans” competing against the bots were also teenagers, to provide a basis of comparison. As Warwick explained:

In the conversations it can be a bit ‘texty’ if you like, a bit short-form. There can be some colloquialisms, some modern-day nuances with references to pop music that you might not get so much of if you’re talking to a philosophy professor or something like that. It’s hip; it’s with-it.

Warwick conceded that the teenage character could be easier for a computer to convincingly emulate, especially when the interrogators are adults who aren’t so familiar with youth culture. But this is consistent with what scientists and analysts predict about the development of AI: as computers achieve greater and greater sophistication, they will be able to imitate human beings of greater intellectual and emotional maturity.

Naturally, there are plenty of people who criticize the Turing Test as an inaccurate way of testing machine intelligence, or of gauging this thing known as intelligence in general. The test is also controversial because of the tendency of interrogators to attribute human characteristics to what is often a very simple algorithm. This is unfortunate because chatbots are easy to trip up if the interrogator is even slightly suspicious.

For instance, chatbots have difficulty answering follow-up questions and are easily thrown by non-sequiturs. In these cases, a human would either give a straight answer, or respond by specifically asking what the heck the person posing the questions is talking about, and then reply in the context of the answer. There are also several versions of the test, each with its own rules and criteria for what constitutes success. And as Professor Warwick freely admitted:

Some will claim that the Test has already been passed. The words Turing Test have been applied to similar competitions around the world. However this event involved more simultaneous comparison tests than ever before, was independently verified and, crucially, the conversations were unrestricted. A true Turing Test does not set the questions or topics prior to the conversations. We are therefore proud to declare that Alan Turing’s Test was passed for the first time on Saturday.

So what are the implications of this computing milestone? Is it a step in the direction of a massive explosion in learning and research, an age where computing intelligences vastly exceed human ones and are able to assist us in making countless ideas real? Or is it a step in the direction of a confused, sinister age, where the line between human beings and machines is non-existent, and no one can tell who or what the individual addressing them is anymore?

Difficult to say, but such is the nature of groundbreaking achievements. And as Warwick suggested, an AI like Eugene could be very helpful to human beings and address real social issues. For example, imagine an AI that is always hard at work on the other side of the cybercrime battle, locating “black-hat” hackers and cyber predators for law enforcement agencies. And what of assisting in research endeavors, helping human researchers discover cures for disease or design cheaper, cleaner energy sources?

As always, what the future holds varies, depending on who you ask. But in the end, it really comes down to who is involved in making it a reality. So a little fear and optimism are perfectly understandable when something like this occurs, not to mention healthy.

Sources: motherboard.vice.com, gizmag.com, reading.ac.uk

The Future of Computing: Brain-Like Computers

It’s no secret that computer scientists and engineers are looking to the human brain as a means of achieving the next great leap in computer evolution. Already, machines are being developed that rely on a kind of electronic “blood”, can continue working despite being damaged, and can recognize images and speech. And soon, a computer chip that is capable of learning from its mistakes will also be available.

The new computing approach, already in use by some large technology companies, is based on the biological nervous system – specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.

The first commercial version of the new kind of computer chip is scheduled for release in 2014, and is the result of a collaborative effort between I.B.M. and Qualcomm, as well as a Stanford research team. This “neuromorphic processor” can not only automate tasks that once required painstaking programming, but can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.

In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That can hold enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.

For example, computer vision systems only “recognize” objects that can be identified by the statistics-oriented algorithms programmed into them. An algorithm is like a recipe, a set of step-by-step instructions for performing a calculation. But last year, Google researchers were able to get a machine-learning algorithm, known as the “Google Neural Network”, to perform an identification task (involving cats) without supervision.
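The significance of that result is that the categories emerged from the data itself rather than from labelled examples. Google’s system was a very large neural network trained on millions of YouTube stills; as a far smaller, purely illustrative stand-in for the idea of grouping inputs with no labels supplied, the sketch below runs plain k-means clustering over toy feature vectors. The data and parameters are invented for illustration and have nothing to do with Google’s actual implementation.

```python
import numpy as np

# Toy illustration of unsupervised learning: k-means clustering of synthetic
# "feature vectors" with no labels ever supplied. This is NOT the Google
# Neural Network (a massive network trained on YouTube stills); it only
# demonstrates the principle of discovering groups without supervision.

rng = np.random.default_rng(0)

# Two hidden categories of points; the algorithm is never told which is which.
group_a = rng.normal(loc=0.0, scale=0.5, size=(100, 2))
group_b = rng.normal(loc=3.0, scale=0.5, size=(100, 2))
features = np.vstack([group_a, group_b])

def kmeans(x: np.ndarray, k: int, iters: int = 20) -> np.ndarray:
    """Plain k-means: assign points to the nearest centroid, then move each
    centroid to the mean of its assigned points, and repeat."""
    centroids = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        centroids = np.array([
            x[labels == i].mean(axis=0) if np.any(labels == i) else centroids[i]
            for i in range(k)
        ])
    return labels

labels = kmeans(features, k=2)
print("cluster sizes:", np.bincount(labels))  # two groups found, no labels used
```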

And this past June, the company said it had used those neural network techniques to develop a new search service to help customers find specific photos more accurately. And this past November, researchers at Stanford University came up with a new algorithm that could give computers the power to interpret language more reliably. It’s known as the Neural Analysis of Sentiment (NaSent).

A similar concept, known as Deep Learning, also looks to endow software with a measure of common sense. Google is using this technique with its voice recognition technology to aid in performing searches. In addition, the social media giant Facebook is looking to use deep learning to help it improve Graph Search, an engine that allows users to search activity on their network.

Until now, the design of computers has been dictated by ideas originated by the mathematician John von Neumann about 65 years ago. Microprocessors perform operations at lightning speed, following instructions programmed using long strings of binary code (0s and 1s). The information is stored separately in what is known as memory, either in the processor itself, in adjacent storage chips or in higher-capacity magnetic disk drives.

By contrast, the new processors consist of electronic components that can be connected by wires that mimic biological synapses. Because they are based on large groups of neuron-like elements, they are known as neuromorphic processors, a term credited to the California Institute of Technology physicist Carver Mead, who pioneered the concept in the late 1980s.

These processors are not “programmed” in the conventional sense. Instead, the connections between the circuits are “weighted” according to correlations in data that the processor has already “learned.” Those weights are then altered as data flows into the chip, causing them to change their values and to “spike.” This, in turn, strengthens some connections and weakens others, reacting much the same way the human brain does.
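As a rough software analogy of that description – and only an analogy, since chips of this kind implement the behaviour directly in silicon and differ considerably in detail – the sketch below integrates incoming events into a leaky “membrane potential”, emits a “spike” when a threshold is crossed, and applies a simple Hebbian-style update that strengthens the connections that contributed to the spike while slightly weakening the rest. Every parameter here is an illustrative assumption.

```python
import numpy as np

# Minimal software analogy of a spiking, self-adjusting unit: inputs arrive,
# a leaky "membrane potential" integrates them, the unit "spikes" past a
# threshold, and a Hebbian-style rule strengthens the connections that were
# active during the spike while slightly weakening the rest. Real neuromorphic
# hardware differs in detail; this only illustrates the principle.

rng = np.random.default_rng(1)

n_inputs = 8
weights = rng.uniform(0.0, 0.5, size=n_inputs)  # the "weighted" connections
threshold = 1.5                                  # potential needed to spike
leak = 0.9                                       # leaky integration factor
lr = 0.05                                        # learning rate

potential = 0.0
for step in range(200):
    # A stream of binary input events; the first four inputs fire far more often.
    fire_prob = np.where(np.arange(n_inputs) < 4, 0.6, 0.1)
    inputs = (rng.random(n_inputs) < fire_prob).astype(float)

    potential = leak * potential + weights @ inputs
    if potential >= threshold:                   # the unit "spikes"
        potential = 0.0
        # Strengthen active connections, nudge inactive ones down.
        weights += lr * (inputs - 0.2)
        weights = np.clip(weights, 0.0, 1.0)

print("learned weights:", np.round(weights, 2))  # frequent inputs end up stronger
```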

In the words of Dharmendra Modha, an I.B.M. computer scientist who leads the company’s cognitive computing research effort:

Instead of bringing data to computation as we do today, we can now bring computation to data. Sensors become the computer, and it opens up a new way to use computer chips that can be everywhere.

One great advantage of the new approach is its ability to tolerate glitches, whereas traditional computers cannot work around the failure of even a single transistor. With the biological designs, the algorithms are ever-changing, allowing the system to continuously adapt and work around failures to complete tasks. Another benefit is energy efficiency, an inspiration also drawn from the human brain.

The new computers, which are still based on silicon chips, will not replace today’s computers, but augment them – at least for the foreseeable future. Many computer designers see them as coprocessors, meaning they can work in tandem with other circuits that can be embedded in smartphones and in the centralized computers that run computing clouds.

However, the new approach is still limited, thanks to the fact that scientists still do not fully understand how the human brain functions. As Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, put it:

We have no clue. I’m an engineer, and I build things. There are these highfalutin theories, but give me one that will let me build something.

Luckily, there are efforts underway designed to remedy this, with the specific intention of directing that knowledge towards the creation of better computers and AIs. One such effort is the National Science Foundation-financed Center for Brains, Minds and Machines, a new research center based at the Massachusetts Institute of Technology in partnership with Harvard and Cornell.

Another is the California Institute for Telecommunications and Information Technology (aka Calit2) – a center dedicated to innovation in nanotechnology, life sciences, information technology, and telecommunications. As Larry Smarr, an astrophysicist and director of the Institute, put it:

We’re moving from engineering computing systems to something that has many of the characteristics of biological computing.

And last, but certainly not least, is the Human Brain Project, an international group of 200 scientists from 80 different research institutions, based in Lausanne, Switzerland. Having secured the $1.6 billion they need to fund their efforts, these researchers will spend the next ten years conducting research that cuts across multiple disciplines.

This initiative, which has been compared to the Large Hadron Collider, will attempt to reconstruct the human brain piece-by-piece and gradually bring these cognitive components into an overarching supercomputer. The expected result of this research will be new platforms for “neuromorphic computing” and “neurorobotics,” allowing for the creation of computing and robotic architectures that mimic the functions of the human brain.

When future generations look back on this decade, no doubt they will refer to it as the birth of the neuromorphic computing revolution. Or maybe just the Neuromorphic Revolution for short, but that sort of depends on the outcome. With so many technological revolutions well underway, it is difficult to imagine how the future will look back on and characterize this time.

Perhaps, as Charles Stross suggests, it will simply be known as “the teens”: that time in pre-Singularity history when it was all starting to come together, but had yet to explode and violently change everything we know. I for one am looking forward to being around to witness it all!

Sources: nytimes.com, technologyreview.com, calit2.net, humanbrainproject.eu

Alan Turing Pardoned… Finally!

When it comes to the history of computing, cryptography and mathematics, few people have earned more renown and respect than Alan Turing. In addition to helping the Allied forces of World War II break the Enigma Code – a feat which was the difference between victory and defeat in Europe – he also played an important role in the development of computers with his “Turing Machine” and devised the Turing Test, a basic intelligence requirement for future AIs.

Despite these accomplishments, Alan Turing became the target of government persecution when it was revealed in 1952 that he was gay. At the time, homosexuality was illegal in the United Kingdom, and Alan Turing was charged with “gross indecency” and given the choice between prison and chemical castration. He chose the latter, and after two years of enduring the effects of the drug, he ate an apple laced with cyanide and died.

Turing died at the tender age of 41; his death was officially ruled a suicide, though some have suggested that foul play may have been involved. Despite his lifelong accomplishments and the fact that he helped to save Britain from a Nazi invasion, he was destroyed by his own government for the simple crime of being gay.

But in a recent landmark decision, the British government made a historic ruling by indicating that it would support a backbench bill that would posthumously clear his name of all charges. This ruling is not the first time that the subject of Turing’s sentencing has been visited by the British Parliament. Though for years the government was resistant to offering an official pardon, Prime Minister Gordon Brown did offer an apology in 2009 for the “appalling” treatment Turing received.

However, it was not until now that it sought to wipe the slate clean and begin to redress the issue, starting with the ruling that ruined the man’s life. The government ruling came on Friday, and Lord Ahmad of Wimbledon, a government whip, told peers that the government would table the third reading of the Alan Turing bill at the end of October if no amendments are made.

Every year since 1966, the Turing Award – the computing world’s highest honor and equivalent of the Nobel Prize – has been given by the Association for Computing Machinery for technical or theoretical contributions to the computing community. In addition, on 23 June 1998 – what would have been Turing’s 86th birthday – an English Heritage blue plaque was unveiled at his birthplace and childhood home in Warrington Crescent, London.

In addition, in 1994, a stretch of the A6010 road – the Manchester city intermediate ring road – was named “Alan Turing Way”, and a bridge connected to the road was named “Alan Turing Bridge”. A statue of Turing was also unveiled in Manchester in 2001 in Sackville Park, between the University of Manchester building on Whitworth Street and the Canal Street gay village.

This memorial statue depicts the “father of Computer Science” sitting on a bench at a central position in the park holding an apple. The cast bronze bench carries in relief the text ‘Alan Mathison Turing 1912–1954’, and the motto ‘Founder of Computer Science’ as it would appear if encoded by an Enigma machine: ‘IEKYF ROMSI ADXUO KVKZC GUBJ’.

But perhaps the greatest and most creative tribute to Turing comes in the form of the statue of him that adorns Bletchley Park, the site of the UK’s main decryption department during World War II. The 1.5-ton, life-size statue of Turing was unveiled on June 19th, 2007. Built from approximately half a million pieces of Welsh slate, it was sculpted by Stephen Kettle and commissioned by the late American billionaire Sidney Frank.

Last year, Turing was even commemorated with a Google doodle in honor of what would have been his 100th birthday. In a fitting tribute to Turing’s code-breaking work, the doodle was designed to spell out the name Google in binary. Unlike previous tributes produced by Google, this one was remarkably complicated. Those who attempted to figure it out apparently had to consult the online source Mashable just to realize what the purpose of it was.


For many, this news is seen as a development that has been too long in coming. Much like Canada’s own admission to wrongdoing in the case of Residential Schools, or the Church’s persecution of Galileo, it seems that some institutions are very slow to acknowledge that mistakes were made and injustices committed. No doubt, anyone in a position of power and authority is afraid to admit to wrongdoing for fear that it will open the floodgates.

But as with all things having to do with history and criminal acts, people cannot be expected to move forward until accounts are settled. And for those who would say “get over it already!”, or similar statements which would place responsibility for moving forward on the victims, I would say “just admit you were wrong already!”

Rest in peace, Alan Turing, and may the homophobes who continue to refuse to admit they’re wrong find the wisdom and self-respect to learn and grow from their mistakes. Orson Scott Card, I’m looking in your direction!

Sources: news.cnet.com, guardian.co.uk