Moore’s Law, the observation made by Intel co-founder Gordon E. Moore back in 1965, states that the number of transistors on integrated circuits doubles approximately every two years. In keeping with this trend, computers have evolved to the point where they are over 650 times faster than they were in the early 1970s.
Given this trend, scientists and futurists have long suspected that it is only a matter of time before computers surpass human beings in terms of raw intelligence. Already, systems like IBM’s Watson, the University of Illinois’ “Blue Waters” supercomputer, and MIT’s ConceptNet 4 have demonstrated that they are superior in terms of raw knowledge and retention. But when it comes to actual thinking, that is, independent reasoning and common sense, they are seriously lacking.
Yes, despite all the progress made in the field of computing in the past 40 years, bridging that gap between computational power and artificial intelligence has remained elusive. And at this point, no one is quite sure what it will take to make that leap happen. However, a number of developers and designers are coming up with breakthroughs that may constitute a small jump.
One such breakthrough comes from Stanford University, where computer engineers have come up with a new algorithm that could give computers the power to interpret language more reliably. Called Neural Analysis of Sentiment (NaSent for short), the algorithm seeks to improve on current methods of written language analysis by drawing inspiration from the human brain.
NaSent is part of a movement in computer science known as deep learning, a new field that seeks to build programs that can process data in much the same way the brain does. According to Richard Socher, the Stanford University grad student who helped develop NaSent:
In the past, sentiment analysis has largely focused on models that ignore word order or rely on human experts. While this works for really simple examples, it will never reach human-level understanding because word meaning changes in context and even experts cannot accurately define all the subtleties of how sentiment works. Our deep learning model solves both problems.
Socher was joined by artificial-intelligence researchers Chris Manning, a professor of computer science and linguistics at Stanford, and Andrew Ng, one of the engineers behind Google’s deep learning project, the so-called “Google Brain”. Their aim is to develop algorithms that can operate without continued help from humans.
What sets this algorithm apart is its ability to identify the meaning of words in the context of phrases and sentences. This is a big change from previous methods of sentiment analysis, which have been limited to parsing through a collection of words and ranking them as either positive or negative, without taking word order into account.
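To see why word order matters, here is a minimal sketch (written for illustration, not code from the Stanford team) of the older bag-of-words style of sentiment analysis: each word carries a fixed score, the scores are summed, and negation such as “not good” slips through. The word list and scores are made-up placeholders.

```python
# Illustrative sketch of a bag-of-words sentiment scorer that ignores word order,
# the kind of approach NaSent aims to improve on. Scores below are hypothetical.
WORD_SCORES = {"good": 1, "great": 1, "bad": -1, "boring": -1, "not": 0}

def bag_of_words_sentiment(sentence: str) -> str:
    """Sum per-word scores with no notion of context or word order."""
    score = sum(WORD_SCORES.get(w, 0) for w in sentence.lower().split())
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# Both sentences get the same label, because "not" carries no weight here,
# even though the second sentence reverses the sentiment of the first.
print(bag_of_words_sentiment("the movie was good"))      # positive
print(bag_of_words_sentiment("the movie was not good"))  # positive (wrong)
```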
To create NaSent, Socher and his team used 12,000 sentences taken from the movie reviews website Rotten Tomatoes. They split these sentences into roughly 214,000 phrases, each labeled as very negative, negative, neutral, positive, or very positive, and then fed this labeled data into the system, which NaSent used to learn to predict the sentiment of new sentences on its own.
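For a rough sense of the recursive idea behind this kind of model, the toy sketch below assumes a simple tree-structured composition: phrase vectors are built bottom-up from word vectors, and every phrase node gets its own prediction over the five labels described above. It is not the actual NaSent code; the embeddings and weight matrices here are random placeholders rather than trained parameters.

```python
import numpy as np

LABELS = ["very negative", "negative", "neutral", "positive", "very positive"]
DIM = 8  # toy embedding size; a real model would use larger, trained vectors

rng = np.random.default_rng(0)
word_vecs = {}                                   # hypothetical word embeddings
W = rng.normal(scale=0.1, size=(DIM, 2 * DIM))   # composition weights (untrained)
Ws = rng.normal(scale=0.1, size=(5, DIM))        # sentiment classifier weights

def vec(word):
    # Look up (or lazily create) a toy embedding for a word.
    if word not in word_vecs:
        word_vecs[word] = rng.normal(scale=0.1, size=DIM)
    return word_vecs[word]

def compose(left, right):
    # Combine two child phrase vectors into a parent phrase vector.
    return np.tanh(W @ np.concatenate([left, right]))

def sentiment(phrase_vec):
    # Softmax over the five sentiment classes for any phrase node.
    scores = Ws @ phrase_vec
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return LABELS[int(np.argmax(probs))]

# Toy binary parse of "not very good": ("not", ("very", "good"))
inner = compose(vec("very"), vec("good"))
root = compose(vec("not"), inner)
print(sentiment(inner), "|", sentiment(root))  # untrained, so labels are arbitrary
```

In a trained version, the composition and classifier weights would be fit to the labeled phrases, which is what lets the model treat “good” and “not very good” differently.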
According to the researchers, NaSent is about 85 percent accurate, roughly a 5 percent improvement over previous models. However, Socher and his team are working on increasing that success rate by feeding the system more data from Twitter and the Internet Movie Database, and they have also set up a live demo where people can type in their own sentences.
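As a rough illustration of what an accuracy figure like this means, the snippet below (with hypothetical labels, not the researchers’ evaluation data) simply compares predicted sentence labels against held-out gold labels.

```python
# Hypothetical illustration of how a sentence-level accuracy figure is computed:
# the fraction of predictions that match the human-assigned gold labels.
def accuracy(predicted, gold):
    correct = sum(p == g for p, g in zip(predicted, gold))
    return correct / len(gold)

gold      = ["positive", "negative", "neutral", "positive", "negative"]
predicted = ["positive", "negative", "positive", "positive", "negative"]
print(f"{accuracy(predicted, gold):.0%}")  # 80% on this toy sample
```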
And whereas the field of deep learning began in the academic world, it has since spread to web giants such as Google and Facebook. Both companies have taken to hiring people familiar with the field to sort through the growing mountains of data they index and to improve their products, particularly those that already rely on machine learning.
In Google’s case, these include voice recognition technology as well as its ongoing work on neural networks. Facebook, for its part, is looking to deep learning to improve Graph Search, a search engine that allows people to search activity on their own network. This presents a challenge, mainly because machines have a hard time interpreting the nuances of human language. Just ask Watson, IBM’s own supercomputer!
Sources: Wired, (2), (3), engineering.stanford.edu