The Birth of AI: Computer Beats the Turing Test!

Alan Turing, the British mathematician and cryptographer, is widely known as the “Father of Theoretical Computer Science and Artificial Intelligence”. Amongst his many accomplishments – such as breaking Germany’s Enigma code – was the development of the Turing Test. The test was introduced in Turing’s 1950 paper “Computing Machinery and Intelligence,” in which he proposed an imitation game to be played between a computer and human players.

The game involves three players: Player C asks the other two a series of written questions and attempts to determine which of the other two players is a human and which one is a computer. If Player C cannot distinguish one from the other, then the computer can be said to meet the criteria of an “artificial intelligence”. And this past weekend, a computer program finally beat the test, in what experts are claiming to be the first time an AI has legitimately fooled people into believing it’s human.

The event was known as the Turing Test 2014, and was held in partnership with RoboLaw, an organization that examines the regulation of robotic technologies. The machine that won the test is known as Eugene Goostman, a program that was developed in Russia in 2001 and poses as a 13-year-old Ukrainian boy. In a series of chatroom-style conversations at the University of Reading’s School of Systems Engineering, the Goostman program managed to convince 33 percent of a team of judges that he was human.

This may sound modest, but that score placed his performance just over the 30 percent requirement that Alan Turing wrote he expected to see by the year 2000. Kevin Warwick, one of the organisers of the event at the Royal Society in London this weekend, was on hand for the test and monitored it rigorously. As deputy vice-chancellor for research at Coventry University, and considered by some to be the world’s first cyborg, Warwick knows a thing or two about human-computer relations.

In a post-test interview, he explained how the test went down:

We stuck to the Turing test as designed by Alan Turing in his paper; we stuck as rigorously as possible to that… It’s quite a difficult task for the machine because it’s not just trying to show you that it’s human, but it’s trying to show you that it’s more human than the human it’s competing against.

For the sake of conducting the test, thirty judges had conversations with two different partners on a split screen—one human, one machine. After chatting for five minutes, they had to choose which one was the human. Five machines took part, but Eugene was the only one to pass, fooling one third of his interrogators. Warwick put Eugene’s success down to his ability to keep conversation flowing logically, but not with robotic perfection.
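To put the numbers in perspective, here is a quick back-of-the-envelope check in Python of how the reported 33 percent figure relates to Turing’s 30 percent threshold, assuming the thirty judges described above each gave a single verdict (the exact judging protocol may have differed):

```python
# Rough check of the reported Turing Test 2014 result.
# Assumption: 30 judges, each giving one human/machine verdict on Eugene.
judges = 30
fooled = 10  # roughly one third of the judges, per the reported 33 percent

pass_rate = fooled / judges * 100
threshold = 30.0  # the bar Turing predicted machines would clear by 2000

print(f"Eugene fooled {pass_rate:.1f}% of judges "
      f"({'above' if pass_rate > threshold else 'below'} the {threshold}% mark)")
```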

Eugene can initiate conversations, but won’t do so totally out of the blue, and answers factual questions more like a human. For example, some factual questions elicited the all-too-human answer “I don’t know”, rather than an encyclopaedic-style answer that simply states cold, hard facts and descriptions. Eugene’s successful trickery is also likely helped by the fact that he has a realistic persona. From the way he answered questions, it seemed apparent that he was in fact a teenager.

Some of the “hidden humans” competing against the bots were also teenagers, to provide a basis of comparison. As Warwick explained:

In the conversations it can be a bit ‘texty’ if you like, a bit short-form. There can be some colloquialisms, some modern-day nuances with references to pop music that you might not get so much of if you’re talking to a philosophy professor or something like that. It’s hip; it’s with-it.

Warwick conceded the teenage character could be easier for a computer to convincingly emulate, especially if you’re using adult interrogators who aren’t so familiar with youth culture. But this is consistent with what scientists and analysts predict about the development of AI, which is that as computers achieve greater and greater sophistication, they will be able to imitate human beings of greater intellectual and emotional development.

Naturally, there are plenty of people who criticize the Turing test for being an inaccurate way of testing machine intelligence, or of gauging this thing known as intelligence in general. The test is also controversial because of the tendency of interrogators to attribute human characteristics to what is often a very simple algorithm. This is unfortunate because chatbots are easy to trip up if the interrogator is even slightly suspicious.

For instance, chatbots have difficulty answering follow-up questions and are easily thrown by non-sequiturs. In these cases, a human would either give a straight answer, or respond by specifically asking what the heck the person posing the questions is talking about, then replying in context to the answer. There are also several versions of the test, each with its own rules and criteria for what constitutes success. And as Professor Warwick freely admitted:

Some will claim that the Test has already been passed. The words Turing Test have been applied to similar competitions around the world. However this event involved more simultaneous comparison tests than ever before, was independently verified and, crucially, the conversations were unrestricted. A true Turing Test does not set the questions or topics prior to the conversations. We are therefore proud to declare that Alan Turing’s Test was passed for the first time on Saturday.

So what are the implications of this computing milestone? Is it a step in the direction of a massive explosion in learning and research, an age where computing intelligences vastly exceed human ones and are able to assist us in making countless ideas real? Or is it a step in the direction of a confused, sinister age, where the line between human beings and machines is non-existent, and no one can tell who or what the individual addressing them is anymore?

Difficult to say, but such is the nature of groundbreaking achievements. And as Warwick suggested, an AI like Eugene could be very helpful to human beings and address real social issues. For example, imagine an AI that is always hard at work on the other side of the cybercrime battle, locating “black-hat” hackers and cyber predators for law enforcement agencies. And what of assisting in research endeavors, helping human researchers to discover cures for disease, or design cheaper, cleaner, energy sources?

As always, what the future holds varies, depending on who you ask. But in the end, it really comes down to who is involved in making it a reality. So a little fear and optimism are perfectly understandable when something like this occurs, not to mention healthy.

Sources: motherboard.vice.com, gizmag.com, reading.ac.uk

The Future is Here: The Thumbles Robot Touch Screen

Smartphones and tablets, with their high-resolution touchscreens and ever-increasing number of apps, are all very impressive. And though some apps are even able to jump from the screen in 3D, the vast majority are still confined to two dimensions and offer limited interaction. More and more, interface designers are attempting to break this fourth wall and make information something that you can really feel and move with your own two hands.

Take the Thumbles, an interactive screen created by James Patten of Patten Studio. Rather than your conventional 2D touchscreen that responds to the touch of your fingers, this desktop interface combines touch screens with tiny robots that act as interactive controls. Whenever a new button would normally pop up on the screen, a robot drives up instead, precisely parking for the user to grab it, turn it, or rearrange it. And the idea is surprisingly versatile.

As the video below demonstrates, the robots serve all sorts of functions. In various applications, they appear as grabbable hooks at the ends of molecules, twistable knobs in a sound and video editor, trackable police cars on traffic maps, and swappable space ships in a video game. If you move or twist one robot, another robot can mirror the movement perfectly. And thanks to their omnidirectional wheels, the robots always move with singular intent, driving in any direction without turning first.
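As a rough illustration of the mirroring behaviour described above, here is a hypothetical sketch in Python. The robot class, its fields, and the drive call are all invented for illustration; this is not Patten Studio’s actual API:

```python
from dataclasses import dataclass

@dataclass
class TableRobot:
    """Hypothetical model of one Thumbles-style robot on the tabletop."""
    x: float        # position on the table surface (cm)
    y: float
    heading: float  # rotation of the knob/robot (degrees)

    def drive_to(self, x: float, y: float, heading: float) -> None:
        # Omnidirectional wheels: move straight to the target pose
        # without having to turn toward it first.
        self.x, self.y, self.heading = x, y, heading

def mirror(leader: TableRobot, follower: TableRobot) -> None:
    """Make the follower copy the leader's pose (offset to one side),
    as when one knob mirrors another's movement in the demo."""
    follower.drive_to(leader.x, leader.y + 10.0, leader.heading)

# Example: twisting the leader by 45 degrees is echoed by the follower.
a, b = TableRobot(0, 0, 0), TableRobot(0, 10, 0)
a.heading += 45
mirror(a, b)
print(b.heading)  # 45
```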

Naturally, there are concerns about the practicality of this technology where size is concerned. While it makes sense for instances where space isn’t a primary concern, it doesn’t exactly work for a smartphone or tablet touchscreen. In that case, the means simply don’t exist to create robots small enough to wander around the tiny screen space and act as interfaces. But in police stations, architecture firms, industrial design settings, or military command centers, the Thumbles and systems like it are sure to be all the rage.

Consider another example shown in the video, where we see a dispatcher who is able to pick up and move a police car to a new location to dispatch it. Whereas a dispatcher is currently required to listen for news of a disturbance, check an available list of vehicles, see who is close to the scene, and then call that police officer to go to that scene, this tactile interface streamlines such tasks into quick movements and manipulations.

The same holds true for architects who want to move design features around on a CAD model; corporate officers who need to visualize their business model; landscapers who want to see what a stretch of earth will look like once they’ve raised a section of land, changed the drainage, planted trees or bushes, etc.; and military planners who can actively tell different units on a battlefield (or in a natural disaster zone) what to do in real-time, responding to changing circumstances more quickly and effectively, and with far less confusion.

Be sure to check out the demo video below, showing the Thumbles in action. And be sure to check out Patten Studio on their website.


Sources: fastcodesign.com, pattenstudio.com

Cyberwars: The Heartbleed Bug and Web Security

A little over two years ago, a tiny piece of code was introduced to the internet that contained a bug. This bug was known as Heartbleed, and in the two years it has taken for the world to recognize its existence, it has caused quite a few headaches. In addition to allowing cybercriminals to steal passwords and usernames from Yahoo, it has also allowed people to steal from online bank accounts, infiltrate government institutions (such as Revenue Canada), and generally undermine confidence in the internet.

What’s more, in an age of cyberwarfare and domestic surveillance, its appearance would give conspiracy theorists a field day. And since it was first disclosed a month to the day ago, some rather interesting theories as to how the NSA and China have been exploiting this to spy on people have surfaced. But more on that later. First off, some explanation as to what Heartbleed is, where it came from, and how people can protect themselves from it, seems in order.

First off, Heartbleed is not a virus or a type of malware in the traditional sense, though it can be exploited by malware and cybercriminals to achieve similar results. Basically, it is a security bug or programming error in popular versions of OpenSSL, a software code that encrypts and protects the privacy of your password, banking information and any other sensitive data you provide in the course of checking your email or doing a little online banking.

Though it was only made public a month ago, the origins of the bug go back just over two years – to New Year’s Eve 2011, to be exact. It was at this time that Stephen Henson, one of the collaborators on the OpenSSL Project, received the code from Robin Seggelmann – a respected academic who’s an expert in internet protocols. Henson reviewed the code – an update for the OpenSSL internet security protocol — and by the time he and his colleagues were ringing in the New Year, he had added it to a software repository used by sites across the web.

What’s interesting about the bug, which is named for the “heartbeat” part of the code that it affects, is how it works: it allows people to read the memory of systems running the affected code, which accounts for two-thirds of the internet. That way, cybercriminals can get the keys they need to decode and read the encrypted data they want.
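The mechanics are easier to see in a simplified sketch. The following Python fragment is not OpenSSL code; it merely models the widely reported nature of the flaw, in which the server trusted the length field in a heartbeat request instead of checking it against the payload actually sent:

```python
MEMORY = bytearray(b"secret-session-key...more-private-data..." * 10)

def heartbeat_vulnerable(payload: bytes, claimed_len: int) -> bytes:
    """Echo back 'claimed_len' bytes starting at the payload's location
    in memory -- with no check that claimed_len <= len(payload)."""
    buf = bytearray(payload) + MEMORY          # stand-in for adjacent process memory
    return bytes(buf[:claimed_len])            # may leak far more than was sent

def heartbeat_patched(payload: bytes, claimed_len: int) -> bytes:
    """The fix: refuse requests whose claimed length exceeds the payload."""
    if claimed_len > len(payload):
        return b""                             # silently drop malformed heartbeats
    return payload[:claimed_len]

# An attacker sends 3 bytes but claims 64: the vulnerable version leaks memory.
print(heartbeat_vulnerable(b"hat", 64))
print(heartbeat_patched(b"hat", 64))
```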

The bug was independently discovered recently by Codenomicon – a Finnish web security firm – and Google Security researcher Neel Mehta. The official name for the vulnerability is CVE-2014-0160. Since information about its discovery was disclosed on April 7th, 2014, it is estimated that some 17 percent (around half a million) of the Internet’s secure web servers certified by trusted authorities have been left vulnerable.

Several institutions have also come forward in that time to declare that they were subject to attack. For instance, the Canada Revenue Agency reported that it was accessed through an exploit of the bug during a 6-hour period on April 8th, resulting in the theft of Social Insurance Numbers belonging to 900 taxpayers. When the attack was discovered, the agency shut down its website and extended the taxpayer filing deadline from April 30 to May 5.

The agency also said it would provide anyone affected with credit protection services at no cost, and it appears that the guilty parties were apprehended. This was announced on April 16, when the RCMP claimed that they had charged an engineering student in relation to the theft with “unauthorized use of a computer” and “mischief in relation to data”. In another incident, the UK parenting site Mumsnet had several user accounts hijacked, and its CEO was impersonated.

Another consequence of the bug is the impetus it has given to conspiracy theorists who believe it may be part of a government-sanctioned ploy. Given recent revelations about the NSA’s extensive efforts to eavesdrop on internet activity and engage in cyberwarfare, this is hardly a surprise. Nor would it be the first time, as anyone who recalls the case of the NIST SP800-90 Dual EC DRBG – a pseudorandom number generator used extensively in cryptography – allegedly serving as a “backdoor” for the NSA will attest.

In that, and this latest bout of speculation, it is believed that the vulnerability in the encryption itself may have been intentionally created to allow spy agencies to steal the private keys that vulnerable web sites use to encrypt your traffic to them. And cracking SSL to decrypt internet traffic has long been on the NSA’s wish list. Last September, the Guardian reported that the NSA and Britain’s GCHQ had “successfully cracked” much of the online encryption we rely on to secure email and other sensitive transactions and data.

According to documents the paper obtained from Snowden, GCHQ had specifically been working to develop ways into the encrypted traffic of Google, Yahoo, Facebook, and Hotmail to decrypt traffic in near-real time; and in 2010, there was documentation that suggested that they might have succeeded. Although this was two years before the Heartbleed vulnerability existed, it does serve to highlight the agency’s efforts to get at encrypted traffic.

For some time now, security experts have speculated about whether the NSA cracked SSL communications; and if so, how the agency might have accomplished the feat. But now, the existence of Heartbleed raises the possibility that in some cases, the NSA might not have needed to crack SSL at all. Instead, it’s possible the agency simply used the vulnerability to obtain the private keys of web-based companies to decrypt their traffic.

Though security vulnerabilities come and go, this one is deemed catastrophic because it’s at the core of SSL, the encryption protocol trusted by so many to protect their data. And beyond abuse by government sources, the bug is also worrisome because it could possibly be used by hackers to steal usernames and passwords for sensitive services like banking, ecommerce, and email. In short, it empowers individual troublemakers everywhere by ensuring that the locks on our information can be exploited by anyone who knows how to do it.

Matt Blaze, a cryptographer and computer security professor at the University of Pennsylvania, claims that “It really is the worst and most widespread vulnerability in SSL that has come out.” The Electronic Frontier Foundation, Ars Technica, and Bruce Schneier all deemed the Heartbleed bug “catastrophic”, and Forbes cybersecurity columnist Joseph Steinberg even went as far as to say that:

Some might argue that [Heartbleed] is the worst vulnerability found (at least in terms of its potential impact) since commercial traffic began to flow on the Internet.

Regardless, Heartbleed does point to a much larger problem with the design of the internet. Some of its most important pieces are controlled by just a handful of people, many of whom aren’t paid well — or aren’t paid at all. In short, Heartbleed has shown that more oversight is needed to protect the internet’s underlying infrastructure. And the sad truth is that open source software — which underpins vast swathes of the net — has a serious sustainability problem.

Another problem is money, in that important projects just aren’t getting enough of it. Whereas well-known projects such as Linux, Mozilla, and the Apache web server enjoy hundreds of millions of dollars in annual funding, projects like the OpenSSL Software Foundation – which are forced to raise money for the project’s software development – have never raised more than $1 million in a year. To top it all off, there are issues when it comes to the open source ecosystem itself.

Typically, projects start when developers need to fix a particular problem; and when they open source their solution, it’s instantly available to everyone. If the problem they address is common, the software can become wildly popular overnight. As a result, some projects never get the full attention from developers they deserve. Steve Marquess, one of the OpenSSL foundation’s partners, believes that part of the problem is that whereas people can see and touch their web browsers and Linux, they are out of touch with the cryptographic library.

In the end, the only real solution is informing the public. Since internet security affects us all, and the processes by which we secure our information are entrusted to too few hands, the immediate solution is to widen the scope of inquiry and involvement. It also wouldn’t hurt to commit additional resources to the process of monitoring and securing the web, thereby ensuring that spy agencies and private individuals are not exercising too much control over it, or able to do clandestine things with it.

In the meantime, the researchers from Codenomicon have set up a website with more detailed information. Click here to access it and see what you can do to protect yourself.

Sources: cbc.ca, wired.com, (2), heartbleed.com

The Future of WiFi: Solar-Powered Internet Drones

Facebook, that massive social utility company that is complicit in just about everything internet-related, recently announced that it is seeking to acquire Titan Aerospace. This company is famous for the development of UAVs, the most recent of which is their solar-powered Solara 50. In what they describe as “bringing internet access to the underconnected,” their aim is to use an army of Solaras to bring wireless internet access to the roughly 5 billion people who live without it worldwide.

Titan Aerospace has two products – the Solara 50 and Solara 60 – which the company refers to as “atmospheric satellites.” Both aircraft are powered by a large number of solar cells, have a service ceiling of up to 20,000 meters (65,000 feet), and can circle over a specific region for up to five years. This length of service is based on the estimated lifespan of the on-board lithium-ion batteries that are required for night-time operation.

The high altitude is important, as the FAA only regulates airspace up to 18,000 meters (60,000 feet). Above that, pretty much anything goes, which is essential if you’re a company that is looking to do something incredibly audacious and soaked in self-interest. As an internet company and social utility, Facebook’s entire business model is based on continued expansion. Aiming to blanket the world in wireless access would certainly ensure that much, so philanthropy isn’t exactly the real aim here!

Nevertheless, once these atmospheric satellites are deployed, there is a wide range of possible applications to be had. Facebook is obviously interested in internet connectivity, but mapping, meteorology, global positioning, rapid response to disasters and wildfires, and a whole slew of other scientific and military applications would also be possible. As for what level of connectivity Facebook hopes to provide with these drones, it’s too early to say.

However, TechCrunch reports that Facebook would launch 11,000 Solara 60 drones. Their coverage would begin with Africa, and then spread out from there. There’s no word on how fast these connections might be, nor how much such a connection would cost per user. Perhaps more importantly, there’s also no word on how Facebook intends to connect these 11,000 satellites to the internet, though it is obvious that Facebook would need to build a series of ground stations.

Many of these might have to be built in very remote and very hard-to-administer areas, which would also require fiber optic cables running to them to hook them up to the internet. In addition, Titan hasn’t produced a commercial UAV yet, having confined itself to technology demonstrations. What the company refers to as “initial commercial operations” will start sometime in 2015, which is perhaps why Facebook is only paying $60 million for Titan, rather than the $19 billion it paid for WhatsApp.

As already noted, this move is hardly purely altruistic. In many ways, Facebook is a victim of its own success, as its rapid, early growth quickly became impossible to maintain. Acquiring Instagram and WhatsApp were savvy moves to bring in a few hundred million more users, but ultimately they were nothing more than stopgap measures. Bringing the next billion users online and into Facebook’s monopolistic grasp will be a very hard task, but one which it must figure out if it wants its stock not to plummet.

To be fair, this idea is very similar to Google’s Project Loon, a plan that involves a series of high-altitude, solar-powered balloons that would provide wireless to roughly two-thirds of the world’s population. The idea was unveiled back in June of 2013 and has since begun testing in New Zealand. And given their hold on the market in the developed world, bringing broadband access to the developing world is seen as the next logical step for companies like Verizon, Time Warner, Comcast, and every other internet and telecom provider.

One can only imagine the kind of world our children and grandchildren will be living in, when virtually everyone on the planet (and keeping in mind that there will be between 9 and 11 billion of them by that time) will be able to communicate instantaneously with each other. The sheer amount of opinions exchanged, information shared, and background noise produced is likely to make today’s world seem quiet, slow and civilized by comparison!

Incidentally, I may need to call a lawyer, as it seems that someone has been ripping off my ideas… again! Before reading up on this story, the only time I ever heard the name Titan Aerospace was in a story… MY STORY! Yes, in the Legacies universe, the principal developer of space ships and aerospace fighters carried this very name. They say it’s a guilty pleasure when stuff you predict comes true while you are writing about it. But really, if you can’t cash in on it, what’s the point?

Consider yourself warned, Titan! J.J. Abrams may have gotten off the hook with that whole Revolution show of his, but you are not nearly as rich and powerful… yet! 😉 In the meantime, be sure to check out these videos of Titan’s Solara 50 and Google’s Project Loon below:

Titan Aerospace Solara 50:


Project Loon:


Source: extremetech.com

Breaking Moore’s Law: Graphene Nanoribbons

Ask a technician or a computer science major, and they will likely tell you that the next great leap in computing will only come once Moore’s Law is overcome. This law, which states that the number of transistors on a single chip doubles every 18 months to two years, is heading towards a bottleneck. For decades, CPUs and computer chips have been getting smaller, but they are fast approaching their physical limitations.
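For a sense of what that doubling means in practice, here is a small Python sketch that projects transistor counts forward from an assumed starting point (the 2-billion figure and the 2-year period are illustrative round numbers, not a specific chip):

```python
# Moore's Law as a simple geometric progression:
# transistor count doubles roughly every 18-24 months.
start_year = 2014
start_transistors = 2_000_000_000   # illustrative round figure for a 2014-era CPU
doubling_period_years = 2

for years_ahead in range(0, 11, 2):
    count = start_transistors * 2 ** (years_ahead / doubling_period_years)
    print(f"{start_year + years_ahead}: ~{count / 1e9:.0f} billion transistors")
```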

One of the central problems arising from the Moore’s Law bottleneck has to do with the materials we use to create microchips. Short of continued miniaturization, there is simply no way to keep placing more and more components on a microchip. And copper wires can only be miniaturized so much before they lose the ability to conduct electricity effectively.

This has led scientists and engineers to propose that new materials be used, and graphene appears to be the current favorite. And researchers at the University of California at Berkeley are busy working on a form of so-called nanoribbon graphene that could increase the density of transistors on a computer chip by as much as 10,000 times.

Graphene, for those who don’t know, is a miracle material that is basically a sheet of carbon only one layer of atoms thick. This two-dimensional physical configuration gives it some incredible properties, like extreme electrical conductivity at room temperature. Researchers have been working on producing high quality sheets of the material, but nanoribbons ask more of science than it can currently deliver.

Work on nanoribbons over the past decade has revolved around using lasers to carefully sculpt ribbons 10 or 20 atoms wide from larger sheets of graphene. On the scale of billionths of an inch, that calls for incredible precision. If the makers are even a few carbon atoms off, it can completely alter the properties of the ribbon, preventing it from working as a semiconductor at room temperature.

Fortunately, Berkeley chemist Felix Fischer thinks he might have found a solution. Rather than carving ribbons out of larger sheets like a sculptor, Fischer has begun creating nanoribbons from carbon atoms using a chemical process. Basically, he’s working on a new way to produce graphene that happens to already be in the right configuration for nanoribbons.

He begins by synthesizing rings of carbon atoms similar in structure to benzene, then heats the molecules to encourage them to form a long chain. A second heating step strips away most of the hydrogen atoms, freeing up the carbon to form bonds in a honeycomb-like graphene structure. This process allows Fischer and his colleagues to control where each atom of carbon goes in the final nanoribbon.

On the scale Fischer is making them, graphene nanoribbons could be capable of transporting electrons thousands of times faster than a traditional copper conductor. They could also be packed very close together since a single ribbon is 1/10,000th the thickness of a human hair. Thus, if the process is perfected and scaled up, everything from CPUs to storage technology could be much faster and smaller.

Source: extremetech.com

The Future of Computing: Brain-Like Computers

It’s no secret that computer scientists and engineers are looking to the human brain as a means of achieving the next great leap in computer evolution. Already, machines are being developed that rely on electronic blood, can continue working despite being damaged, and can recognize images and speech. And soon, a computer chip that is capable of learning from its mistakes will also be available.

The new computing approach, already in use by some large technology companies, is based on the biological nervous system – specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.

The first commercial version of the new kind of computer chip is scheduled to be released in 2014, and was the result of a collaborative effort between I.B.M. and Qualcomm, as well as a Stanford research team. This “neuromorphic processor” can not only automate tasks that once required painstaking programming, but can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.

In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That can hold enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.

For example, computer vision systems only “recognize” objects that can be identified by the statistics-oriented algorithms programmed into them. An algorithm is like a recipe, a set of step-by-step instructions to perform a calculation. But last year, Google researchers were able to get a machine-learning algorithm, known as a “Google Neural Network”, to perform an identification task (involving cats) without supervision.

And this past June, the company said it had used those neural network techniques to develop a new search service to help customers find specific photos more accurately. And this past November, researchers at Stanford University came up with a new algorithm that could give computers the power to more reliably interpret language. It’s known as the Neural Analysis of Sentiment (NaSent).

A similar concept known as Deep Learning is also looking to endow software with a measure of common sense. Google is using this technique with their voice recognition technology to aid in performing searches. In addition, the social media giant Facebook is looking to use deep learning to help them improve Graph Search, an engine that allows users to search activity on their network.

Until now, the design of computers was dictated by ideas originated by the mathematician John von Neumann about 65 years ago. Microprocessors perform operations at lightning speed, following instructions programmed using long strings of binary code (0s and 1s). The information is stored separately in what is known as memory, either in the processor itself, in adjacent storage chips or in higher capacity magnetic disk drives.

By contrast, the new processors consist of electronic components that can be connected by wires that mimic biological synapses. Because they are based on large groups of neuron-like elements, they are known as neuromorphic processors, a term credited to the California Institute of Technology physicist Carver Mead, who pioneered the concept in the late 1980s.

These processors are not “programmed”, in the conventional sense. Instead, the connections between the circuits are “weighted” according to correlations in data that the processor has already “learned.” Those weights are then altered as data flows into the chip, causing them to change their values and to “spike.” This, in turn, strengthens some connections and weakens others, reacting much the same way the human brain does.
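A toy example helps make the “weighting” idea concrete. The sketch below is a generic Hebbian-style update in Python, not IBM’s actual chip logic: connections between elements that fire together are strengthened, while the others slowly decay, which is the behaviour the paragraph above describes.

```python
import random

# Toy network: 4 neuron-like elements, fully connected by weighted links.
n = 4
weights = [[0.5] * n for _ in range(n)]
LEARN_RATE, DECAY, THRESHOLD = 0.1, 0.01, 1.0

def step(inputs):
    """One pass of data through the toy chip: spike, then re-weight."""
    # An element "spikes" when its weighted input crosses the threshold.
    activity = [sum(weights[j][i] * inputs[j] for j in range(n)) for i in range(n)]
    spikes = [1 if a >= THRESHOLD else 0 for a in activity]
    # Hebbian-style update: strengthen links between co-active elements,
    # let all other links decay slightly.
    for i in range(n):
        for j in range(n):
            if spikes[i] and inputs[j]:
                weights[j][i] += LEARN_RATE
            else:
                weights[j][i] = max(0.0, weights[j][i] - DECAY)
    return spikes

random.seed(0)
for _ in range(20):
    step([random.choice([0, 1]) for _ in range(n)])
print(weights)  # some connections strengthened, others weakened
```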

In the words of Dharmendra Modha, an I.B.M. computer scientist who leads the company’s cognitive computing research effort:

Instead of bringing data to computation as we do today, we can now bring computation to data. Sensors become the computer, and it opens up a new way to use computer chips that can be everywhere.

One great advantage of the new approach is its ability to tolerate glitches, whereas traditional computers cannot work around the failure of even a single transistor. With the biological designs, the algorithms are ever-changing, allowing the system to continuously adapt and work around failures to complete tasks. Another benefit is energy efficiency, another inspiration drawn from the human brain.

The new computers, which are still based on silicon chips, will not replace today’s computers, but augment them; at least for the foreseeable future. Many computer designers see them as coprocessors, meaning they can work in tandem with other circuits that can be embedded in smartphones and the centralized computers that run computing clouds.

However, the new approach is still limited, thanks to the fact that scientists still do not fully understand how the human brain functions. As Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, put it:

We have no clue. I’m an engineer, and I build things. There are these highfalutin theories, but give me one that will let me build something.

Luckily, there are efforts underway designed to remedy this, with the specific intention of directing that knowledge towards the creation of better computers and AIs. One such effort is the National Science Foundation-financed Center for Brains, Minds and Machines, a new research center based at the Massachusetts Institute of Technology, with Harvard and Cornell.

Another is the California Institute for Telecommunications and Information Technology (aka. Calit2) – a center dedicated to innovation in nanotechnology, life sciences, information technology, and telecommunications. As Larry Smarr, an astrophysicist and director of the Institute, put it:

We’re moving from engineering computing systems to something that has many of the characteristics of biological computing.

And last, but certainly not least, is the Human Brain Project, an international group of 200 scientists from 80 different research institutions and based in Lausanne, Switzerland. Having secured the $1.6 billion they need to fund their efforts, these researchers will spend the next ten years conducting research that cuts across multiple disciplines.

This initiative, which has been compared to the Large Hadron Collider, will attempt to reconstruct the human brain piece-by-piece and gradually bring these cognitive components into an overarching supercomputer. The expected result of this research will be new platforms for “neuromorphic computing” and “neurorobotics,” allowing for the creation of computing and robotic architectures that mimic the functions of the human brain.

When future generations look back on this decade, no doubt they will refer to it as the birth of the neuromorphic computing revolution. Or maybe just the Neuromorphic Revolution for short, but that sort of depends on the outcome. With so many technological revolutions well underway, it is difficult to imagine how the future will look back and characterize this time.

Perhaps, as Charles Stross suggests, it will simply be known as “the teens”, that time in pre-Singularity history when it was all starting to come together, but had yet to explode and violently change everything we know. I for one am looking forward to being around to witness it all!

Sources: nytimes.com, technologyreview.com, calit2.net, humanbrainproject.eu

Top Stories from CES 2014

The Consumer Electronics Show has been in full swing for two days now, and already the top spots for most impressive technology of the year have been selected. Granted, opinion is divided, and there are many top contenders, but between displays, gaming, smartphones, and personal devices, there’s been no shortage of technologies to choose from.

And having sifted through some news stories from the front lines, I have decided to compile a list of what I think the most impressive gadgets, displays and devices of this year’s show were. And as usual, they range from the innovative and creative, to the cool and futuristic, with some quirky and fun things holding up the middle. And here they are, in alphabetical order:

As an astronomy enthusiast, and someone who enjoys hearing about new and innovative technologies, Celestron’s Cosmos 90GT WiFi Telescope was quite the story. Hoping to make astronomy more accessible to the masses, this new telescope is the first that can be controlled by an app over WiFi. Once paired, the system guides stargazers through the cosmos as directions flow from the app to the motorized scope base.

In terms of computing, Lenovo chose to breathe some new life into the oft-declared dying industry of desktop PCs this year, thanks to the unveiling of their Horizon 2. Its 27-inch touchscreen can go fully horizontal, becoming both a gaming and media table. The large touch display has a novel pairing technique that lets you drop multiple smartphones directly onto the screen, as well as group, share, and edit photos from them.

Next up is the latest set of display glasses to take the world by storm, courtesy of the Epson Smart Glass project. Ever since Google Glass was unveiled in 2012, other electronics and IT companies have been racing to produce a similar product, one that can make heads-up display tech, WiFi connectivity, internet browsing, and augmented reality portable and wearable.

Epson was already moving in that direction back in 2011 when they released their BT100 augmented reality glasses. And now, with their Moverio BT200, they’ve clearly stepped up their game. In addition to being 60 percent lighter than the previous generation, the system has two parts – consisting of a pair of glasses and a control unit.

The glasses feature a tiny LCD-based projection lens system and optical light guide which project digital content onto a transparent virtual display (960 x 540 resolution), and have a camera for video and stills capture, or AR marker detection. With the incorporation of third-party software, and taking advantage of the internal gyroscope and compass, a user can even create 360 degree panoramic environments.

At the other end, the handheld controller runs on Android 4.0, has a textured touchpad control surface, built-in Wi-Fi connectivity for video content streaming, and up to six hours of battery life.


The BT-200 smart glasses are currently being demonstrated at Epson’s CES booth, where visitors can experience a table-top virtual fighting game with AR characters, a medical imaging system that allows wearers to see through a person’s skin, and an AR assistance app to help perform unfamiliar tasks.

This year’s CES also featured a ridiculous number of curved screens. Samsung seemed particularly proud of its garish, curved LCD TVs, and even booked headliners like Mark Cuban and Michael Bay to promote them. In the latter case, this didn’t go so well. However, one curved screen device actually seemed appropriate – the LG G Flex 6-inch smartphone.

When it comes to massive curved screens, only one person can benefit from the sweet spot of the display – that focal point in the center where they feel enveloped. But in the case of the LG G Flex, the subtle bend in the screen allows for less light intrusion from the sides, and it distorts your own reflection just enough to obscure any distracting glare. Granted, it’s not exactly the flexible tech I was hoping to see, but it’s something!

In the world of gaming, two contributions made a rather big splash this year. These included the Playstation Now, a game streaming service just unveiled by Sony that lets gamers instantly play their games from a PS3, PS4, or PS Vita without downloading and always in the most updated version. Plus, it gives users the ability to rent titles they’re interested in, rather than buying the full copy.

Then there was the Maingear Spark, a gaming desktop designed to run Valve’s gaming-centric SteamOS (and Windows) that measures just five inches square and weighs less than a pound. This is a big boon for gamers, who usually have to deal with gaming desktops that are bulky, heavy, and don’t fit well on an entertainment stand next to other gaming devices, an HD box, and anything else you might have there.

Next up, there is a device that brings iris identification – a technology that is becoming all the rage – to the consumer. It’s known as the Myris Eyelock, a simple, straightforward gadget that takes a quick video of your eyeball, has you log in to your various accounts, and then automatically signs you in, without you ever having to type in a password.

So basically, you can utilize this new biometric ID system by carrying your iris scan on your person wherever you go. And then, rather than go through the process of remembering multiple (and no doubt complicated) passwords – as identity theft becomes increasingly problematic – you can present a marker that leaves no doubt as to your identity. And at less than $300, it’s an affordable option, too.
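To illustrate how iris login can work without a typed password, here is a hedged Python sketch of the classic approach used in iris recognition generally – comparing binary “iris codes” by Hamming distance against a threshold. This is an illustration of the technique, not a description of how the Myris Eyelock actually works internally:

```python
def hamming_fraction(code_a: str, code_b: str) -> float:
    """Fraction of differing bits between two equal-length binary iris codes."""
    assert len(code_a) == len(code_b)
    diffs = sum(a != b for a, b in zip(code_a, code_b))
    return diffs / len(code_a)

# Enrolled template (illustrative 32-bit code; real iris codes run to ~2048 bits).
enrolled = "11010011101011100100111010110010"

def verify(candidate: str, threshold: float = 0.30) -> bool:
    """Accept the login if the new scan is close enough to the template.
    Some mismatch is expected between scans, hence the threshold."""
    return hamming_fraction(enrolled, candidate) <= threshold

same_eye  = "11010011101011100100111010110110"  # nearly identical scan
other_eye = "00101100010100011011000101001101"  # mostly different
print(verify(same_eye), verify(other_eye))      # True False
```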

And what would an electronics show be without showcasing a little drone technology? And the Parrot MiniDrone was this year’s crowd pleaser: a palm-sized, camera-equipped, remotely-piloted quad-rotor. However, this model has the added feature of two six-inch wheels, which affords it the ability to zip across floors, climb walls, and even move across ceilings! A truly versatile personal drone.

 

Another very interesting display this year was the Scanadu Scout, the world’s first real-life tricorder. First unveiled back in May of 2013, the Scout represents the culmination of years of work by the NASA Ames Research Center to produce the world’s first, non-invasive medical scanner. And this year, they chose to showcase it at CES and let people test it out on themselves and each other.

All told, the Scanadu Scout can measure a person’s vital signs – including their heart rate, blood pressure, temperature – without ever touching them. All that’s needed is to place the scanner above your skin, wait a moment, and voila! Instant vitals. The sensor will begin a pilot program with 10,000 users this spring, the first key step toward FDA approval.

And of course, no CES would be complete without a toy robot or two. This year, it was the WowWee MiP (Mobile Inverted Pendulum) that put on a big show. Basically, it is an eight-inch bot that balances itself on dual wheels (like a Segway), is controllable by hand gestures or a Bluetooth-connected phone, or can autonomously roll around.

Its sensitivity to commands and its ability to balance while zooming across the floor are super impressive. While on display, many were shown carrying a tray around (sometimes with another MiP on a tray). And, a real crowd pleaser, the MiP can even dance. Always got to throw in something for the retro 80’s crowd, the people who grew up with the SICO robot, Jinx, and other friendly automatons!

But perhaps most impressive of all, at least in my humble opinion, was the display of the prototype for the iOptik AR Contact Lens. While most of the attention on high-tech eyewear has lately been focused on wearables like Google Glass, other developers have been steadily working towards display devices that are small enough to wear over your pupil.

Developed by the Washington-based company Innovega with support from DARPA, the iOptik is a heads-up display built into a set of contact lenses. And this year, the first fully-functioning prototypes are being showcased at CES. Acting as a micro-display, the glasses project a picture onto the contact lens, which works as a filter to separate the real-world from the digital environment and then interlaces them into the one image.

Embedded in the contact lenses are micro-components that enable the user to focus on near-eye images. Light projected by the display (built into a set of glasses) passes through the center of the pupil and then works with the eye’s regular optics to focus the display on the retina, while light from the real-life environment reaches the retina via an outer filter.

This creates two separate images on the retina which are then superimposed to create one integrated image, or augmented reality. It also offers an alternative solution to traditional near-eye displays, which create the illusion of an object in the distance so as not to hinder regular vision. At present, it still requires clearance from the FDA before it becomes commercially available, which may come in late 2014 or early 2015.


Well, it’s certainly been an interesting year, once again, in the world of electronics, robotics, personal devices, and wearable technology. And it manages to capture the pace of change that is increasingly coming to characterize our lives. According to the tech site Mashable, this year’s show was characterized by televisions with 4K resolution, wearables, biometrics, the internet of personalized and data-driven things, and of course, 3-D printing and imaging.

And as always, there were plenty of videos showcasing tons of interesting concepts and devices that were featured this year. Here are a few that I managed to find and thought were worthy of passing on:

Internet of Things Highlights:


Motion Tech Highlights:


Wearable Tech Highlights:


Sources: popsci.com, (2), cesweb, mashable, (2), gizmag, (2), news.cnet

Year-End Tech News: Stanene and Nanoparticle Ink

The year 2013 was also a boon for the high-tech industry, especially where electronics and additive manufacturing were concerned. In fact, several key developments took place last year that may help scientists and researchers move beyond Moore’s Law, as well as ring in a new era of manufacturing and production.

In terms of computing, developers have long feared that Moore’s Law – which states that the number of transistors on integrated circuits doubles approximately every two years – could be reaching a bottleneck. While the law (really it’s more of an observation) has certainly held true for the past forty years, it has been understood for some time that the use of silicon and copper wiring would eventually impose limits.

Basically, one can only miniaturize circuits made from these materials so much before resistance rises and they become too fragile to be effective. Because of this, researchers have been looking for replacement materials to substitute for the silicon that makes up the 1 billion transistors, and the one hundred or so kilometers of copper wire, that currently make up an integrated circuit.

Various materials have been proposed, such as graphene, carbyne, and even carbon nanotubes. But now, a group of researchers from Stanford University and the SLAC National Accelerator Laboratory in California are proposing another material. It’s known as Stanene, a theorized material fabricated from a single layer of tin atoms that is theoretically extremely efficient, even at high temperatures.

Compared to graphene, which is stupendously conductive, the researchers at Stanford and the SLAC claim that stanene should be a topological insulator. Topological insulators, due to their arrangement of electrons/nuclei, are insulators on their interior, but conductive along their edge and/or surface. Being only a single atom in thickness along its edges, this topological insulator can conduct electricity with 100% efficiency.

The Stanford and SLAC researchers also say that stanene would not only have 100%-efficiency edges at room temperature, but with a bit of fluorine, would also have 100% efficiency at temperatures of up to 100 degrees Celsius (212 Fahrenheit). This is very important if stanene is ever to be used in computer chips, which have operational temps of between 40 and 90 C (104 and 194 F).

Though the claim of perfect efficiency seems outlandish to some, others admit that near-perfect efficiency is possible. And while no stanene has been fabricated yet, it should not be hard to fashion some on a small scale, as the technology to do so currently exists. However, it will likely be a very, very long time before stanene is used in the production of computer chips.

In the realm of additive manufacturing (aka. 3-D printing), several major developments were made during the year of 2013. This one came from Harvard University, where a materials scientist named Jennifer Lewis – using current technology – has developed new “inks” that can be used to print batteries and other electronic components.

3-D printing is already at work in the field of consumer electronics with casings and some smaller components being made on industrial 3D printers. However, the need for traditionally produced circuit boards and batteries limits the usefulness of 3D printing. If the work being done by Lewis proves fruitful, it could make fabrication of a finished product considerably faster and easier.

The Harvard team is calling the material “ink,” but in fact, it’s a suspension of nanoparticles in a dense liquid medium. In the case of the battery printing ink, the team starts with a vial of deionized water and ethylene glycol and adds nanoparticles of lithium titanium oxide. The mixture is homogenized, then centrifuged to separate out any larger particles, and the battery ink is formed.

This process is possible because of the unique properties of the nanoparticle suspension. It is mostly solid as it sits in the printer ready to be applied, then begins to flow like liquid when pressure is increased. Once it leaves the custom printer nozzle, it returns to a solid state. From this, Lewis’ team was able to lay down multiple layers of this ink with extreme precision at 100-nanometer accuracy.
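That solid-when-still, fluid-under-pressure behaviour is characteristic of a shear-thinning material. As a hedged illustration (the power-law model and the numbers below are generic rheology, not the Harvard team’s actual measurements), here is how viscosity falling with shear rate can be sketched in Python:

```python
# Power-law (Ostwald-de Waele) model of a shear-thinning fluid:
#   viscosity = K * shear_rate**(n - 1),  with n < 1 meaning shear-thinning.
K = 500.0   # consistency index (Pa*s^n) -- illustrative value
n = 0.3     # flow-behaviour index < 1   -- illustrative value

def apparent_viscosity(shear_rate: float) -> float:
    """Apparent viscosity (Pa*s) at a given shear rate (1/s)."""
    return K * shear_rate ** (n - 1)

# At rest in the syringe the ink barely flows; under nozzle pressure it thins out.
for rate in (0.01, 1.0, 100.0):
    print(f"shear rate {rate:>6} 1/s -> viscosity {apparent_viscosity(rate):10.1f} Pa*s")
```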

The tiny batteries being printed are about 1mm square, and could pack even higher energy density than conventional cells thanks to their intricate construction. This approach is much more realistic than other metal printing technologies because it happens at room temperature, with no need for microwaves, lasers or high temperatures at all.

More importantly, it works with existing industrial 3D printers that were built to work with plastics. Because of this, battery production can be done cheaply using printers that cost on the order of a few hundred dollars, and not industrial-sized ones that can cost upwards of $1 million.

Smaller computers, and smaller, more efficient batteries. It seems that miniaturization, which some feared would be plateauing this decade, is safe for the foreseeable future! So I guess we can keep counting on our electronics getting smaller, harder to use, and easier to lose for the next few years. Yay for us!

Sources: extremetech.com, (2)

Cyberwars: Stuxnet and Cryptolocker

It’s been quite the year for cybercops, cybercriminals, and all those of us who are caught in between. Between viruses which continue to evolve and viruses that target sensitive information in new ways, it seems clear that the information age is fraught with peril. In addition to cyberwars raging between nations, there is also the danger of guerrilla warfare and digital weapons running amok.

Consider the Stuxnet virus, a piece of programming that made headlines last year by sabotaging the Iranian nuclear enrichment program. At the time, the target – not to mention its source (within the US) – seemed all too convenient to have been unintentional. However, this year, Stuxnet is once again garnering attention thanks to its latest target: the International Space Station.

Apparently, this has been the result of the virus having gone rogue, or at least become too big for its creators to control. In addition to the ISS, the latest reports state that Stuxnet is hitting nuclear plants in countries for which the virus was not originally intended. In one case, the virus even managed to infect an internal network at a Russian power plant that wasn’t even connected to the internet.

According to Eugene Kaspersky, famed head of IT security at Kaspersky Labs, the virus can travel through methods other than internet connectivity, such as via optical media or a USB drive. Kaspersky claims that this is apparently how it made its way aboard the ISS, and that it was brought aboard on more than one occasion through infected USB drives.

For the moment, it is unclear how this virus will be taken care of, or whether or not it will continue to grow beyond any single organization’s ability to control it. All that is clear at this point is that this particular virus has come back to haunt its original handlers. For the time being, various nations and multinational corporations are looking to harden their databases and infrastructure against cyber attack, with Stuxnet in mind.

And they are not the only ones who need to be on their guard about protecting against intrusion. Average consumers are also at risk of having their data accessed by an unwanted digital visitor, one that goes by the name of Cryptolocker. Designed with aggressive salesmanship – and blackmail – in mind, this virus is bringing fears about personal information being accessed to new heights.

Basically, Cryptolocker works by finding people’s most important and sensitive files and selling them back to them. After obtaining the files it needs, it then contacts a remote server to create a 2048-bit key pair with which to encrypt them so they cannot be recovered, and then contacts the owner with an ultimatum. People are told to pay up, or the virus will begin deleting the info.

When the virus first emerged in October of this year, victims were given three days to cough up roughly $200 via BitCoin or MoneyPak currency transfer. If the virus’ authors did not receive payment within 72 hours, they said, a single line would be deleted from a text file on some hidden foreign server, forever erasing the only string of numbers that could ever bring the affected files back from the dead.

Some users responded by simply setting their system’s internal clock back. A temporary measure, to be sure, but one which worked by tricking the virus into thinking the deadline had not expired. In addition, the three-day deadline worked against the virus’s makers, since it proved restrictive to the types of people who mostly contract a virus like this – i.e. senior citizens and people working on corporate networks.

Such people are more vulnerable to such scams, but seldom have the computer-savvy skills to set up BitCoin or other such accounts and transfer the money in time. Meanwhile, infecting a corporate server means that a bloated corporate bureaucracy will be responsible for making the decision of whether or not to pay, not an individual who can decide quickly.

So basically, the designers of Cryptolocker were facing a catch-22. They could not extend the deadline on the virus without diminishing the sense of panic that makes many people pay, but would continue to lose money as long as people couldn’t pay. Their solution: If a victim does not pay up in time, the hackers simply raise the ransom – by a factor of 10!

This allows people more time to mull over the loss of sensitive data and make a decision, but by that time – should they decide to pay up – the price tag has gone up to a bloated $2000. Luckily, this has revealed a crucial bluff in the virus’s workings by showing that all the keys to the encrypted files are in fact not deleted after the three day time limit.
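Put as a simple model, the scheme described above looks something like the following Python sketch. The figures come from the article; the logic is a simplified reconstruction of the reported behaviour, not Cryptolocker’s actual code:

```python
from datetime import datetime, timedelta

INITIAL_RANSOM = 200          # roughly $200 in BitCoin/MoneyPak, per reports
DEADLINE = timedelta(hours=72)
LATE_MULTIPLIER = 10          # ransom reportedly raised tenfold after the deadline

def ransom_demand(infected_at: datetime, now: datetime) -> int:
    """Return the demanded amount in dollars at a given moment."""
    if now - infected_at <= DEADLINE:
        return INITIAL_RANSOM
    # After the 72-hour window the keys are NOT actually deleted --
    # the price is simply raised, which is what revealed the bluff.
    return INITIAL_RANSOM * LATE_MULTIPLIER

infected = datetime(2013, 10, 1, 9, 0)
print(ransom_demand(infected, infected + timedelta(hours=24)))   # 200
print(ransom_demand(infected, infected + timedelta(hours=100)))  # 2000
```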

As such, the security industry is encouraging people to hold on to the useless, encrypted files and wait for the criminal server to someday be seized by the authorities. Since any ransom paid is a de-facto encouragement to hackers to write a similar virus again – or indeed to re-infect the same companies twice – people are currently being told to simply hold out and not pay up.

What’s more, regular backups are the key to protecting your database from viruses like Cryptolocker. Regular backups to off-network machines that do not auto-sync will minimize the virus’ potential for damage. The best defense is even simpler: Cryptolocker infects computers via a bogus email attachment disguised as a PDF file, so simple email safety should keep you immune.
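For the backup advice above, a minimal sketch of the idea in Python follows: copy important files to a destination that is mounted only for the duration of the backup and is not auto-synced, so an infection on the main machine cannot silently encrypt the copies. The paths and the timestamping scheme are illustrative assumptions.

```python
import shutil
from datetime import datetime
from pathlib import Path

SOURCE = Path.home() / "Documents"          # what to protect (illustrative)
BACKUP_ROOT = Path("/mnt/offline-backup")   # plug in / mount only while backing up

def backup() -> Path:
    """Copy SOURCE into a new timestamped folder on the offline drive.
    Keeping dated snapshots (rather than syncing in place) means an
    already-encrypted file cannot overwrite the last good copy."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    destination = BACKUP_ROOT / f"documents-{stamp}"
    shutil.copytree(SOURCE, destination)
    return destination

if __name__ == "__main__":
    print(f"Backed up to {backup()}")
```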

Alas, it's a world of digital warfare, and there are no discernible sides – just millions of perpetrators, dozens of authorities, and billions of people fearing for the safety and integrity of their data. One can only wonder what an age of quantum computers, graphene and nanotube processors will bring. But more on that later!

Sources: extremetech.com, (2), fastcoexist.com

Judgement Day Update: Bionic Computing!

IBM has always been at the forefront of cutting-edge technology. Whether it was with the development of computers that could guide ICBMs and rockets into space during the Cold War, or its contributions to the early Internet in the 1990s, the company has managed to stay on the vanguard by constantly looking ahead. So it comes as no surprise that it had plenty to say last month on the subject of the next big leap.

During a media tour of their Zurich lab in late October, IBM presented some of the company's latest concepts. According to the company, the key to creating supermachines that are 10,000 times faster and more efficient is to build bionic computers cooled and powered by electronic blood. The end result of this plan is what is known as “Big Blue”, a proposed biocomputer that they anticipate will take 10 years to build.

Intrinsic to the design is the merger of computing and biological forms – specifically, the human brain. In terms of computing, IBM is relying on the human brain as its template, hoping to achieve processing power that is densely packed into 3D volumes rather than spread out across flat 2D circuit boards with slow communication links.

On the biological side of things, IBM is supplying computing equipment to the Human Brain Project (HBP) – a $1.3 billion European effort that uses computers to simulate the actual workings of an entire brain. Beginning with mice and then working up to human beings, the simulations examine the inner workings of the mind all the way down to the biochemical level of the neuron.

It's all part of what IBM calls “the cognitive systems era”, a future where computers aren't just programmed, but also perceive what's going on, make judgments, communicate in natural language, and learn from experience. As the description suggests, it is closely related to artificial intelligence, and may very well prove to be the curtain-raiser of the AI era.

One of the key challenges behind this work is matching the brain's power consumption. The ability to process the subtleties of human language helped IBM's Watson supercomputer win at “Jeopardy!” That was a high-profile step on the road to cognitive computing, but from a practical perspective it also showed how much farther computing has to go: whereas Watson uses 85 kilowatts of power, the human brain uses only 20 watts.

Already, a shift has been occurring in computing, and it is evident in the way engineers and technicians now measure computer progress. For the past few decades, the metric of choice for gauging performance was operations per second – the rate at which a machine could perform mathematical calculations.

But as computers began to require prohibitive amounts of power and generate far too much waste heat, a new measurement was called for: operations per joule of energy consumed. In short, progress came to be measured in terms of a computer's energy efficiency.

But now, IBM is contemplating yet another way of measuring progress, known as “operations per liter”. Under this new paradigm, the success of a computer will be judged by how much data-processing can be squeezed into a given volume of space. This is where the brain really serves as a source of inspiration, being the most efficient computer in terms of performance per cubic centimeter.
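To make the three metrics concrete, here is a small back-of-the-envelope Python sketch. Only the 85-kilowatt (Watson) and 20-watt (brain) power figures come from the article; the throughput and volume numbers are purely hypothetical placeholders, there just to show how operations per second, per joule, and per liter are computed.

```python
# Back-of-the-envelope comparison of the three metrics discussed above.
# Throughput and volume figures are hypothetical placeholders; only the
# 85 kW (Watson) and 20 W (brain) power figures come from the article.
machines = {
    #              ops/second, power (watts), volume (liters)
    "Watson-like": (80e12,      85_000,        10_000),
    "Human brain": (1e16,       20,            1.2),
}

for name, (ops, watts, liters) in machines.items():
    print(f"{name}:")
    print(f"  operations per second: {ops:.1e}")
    print(f"  operations per joule:  {ops / watts:.1e}")
    print(f"  operations per liter:  {ops / liters:.1e}")

# Power ratio quoted in the article: 85 kW versus 20 W.
print("Watson draws roughly", 85_000 / 20, "times more power than a brain.")
```

Even with made-up throughput numbers, the per-joule and per-liter columns make it obvious why the brain is the benchmark here.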

As it stands, today’s computers consist of transistors and circuits laid out on flat boards that ensure plenty of contact with air that cools the chips. But as Bruno Michel – a biophysics professor and researcher in advanced thermal packaging for IBM Research – explains, this is a terribly inefficient use of space:

In a computer, processors occupy one-millionth of the volume. In a brain, it’s 40 percent. Our brain is a volumetric, dense object.

In short, communication links between processing elements can't keep up with data-transfer demands, and they consume too much power as well. The proposed solution is to stack and link chips into dense 3D configurations, something that is impossible today because stacking even two chips creates crippling overheating problems. That's where the “liquid blood” comes in, at least as far as cooling is concerned.

The process is demonstrated with the company's prototype system, Aquasar. By threading the chips with a network of liquid cooling channels that funnel fluid into ever-smaller tubes, the chips can be stacked together in large configurations without overheating. The liquid passes not next to the chip but through it, drawing away heat in the thousandth of a second it takes to make the trip.

In addition, IBM is developing a system called a redox flow battery that uses liquid to distribute power instead of wires. Two types of electrolyte fluid, each with oppositely charged electrical ions, circulate through the system to distribute power, much in the same way that the human body provides oxygen, nutrients and cooling to the brain through the blood.

The electrolytes travel through ever-smaller tubes that are about 100 microns wide at their smallest – the width of a human hair – before handing off their power to conventional electrical wires. Flow batteries can produce between 0.5 and 3 volts, and that in turn means IBM can use the technology today to supply 1 watt of power for every square centimeter of a computer’s circuit board.

Already, the IBM Blue Gene supercomputer has been used for brain research by the Blue Brain Project at the Ecole Polytechnique Federale de Lausanne (EPFL) in Lausanne, Switzerland. Working with the HBP, the next step will be to augment a Blue Gene/Q with additional flash memory at the Swiss National Supercomputing Center.

After that, they will begin simulating the inner workings of the mouse brain, which consists of 70 million neurons. By the time they are conducting human brain simulations, they plan to be using an “exascale” machine – one that performs 1 exaflops, or a quintillion floating-point operations per second. This will take place at the Juelich Supercomputing Center in northern Germany.
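For a rough sense of scale – and assuming, purely for illustration, that the entire exaflops budget were devoted to the neurons themselves, which is my assumption and not a published IBM figure – a quick Python calculation using the neuron counts quoted in this article:

```python
# Rough scale check: how much compute an exascale machine could devote
# to each simulated neuron. Purely illustrative arithmetic.
exaflops = 1e18          # 1 exaflops = a quintillion floating-point ops/sec
mouse_neurons = 70e6     # figure quoted above
human_neurons = 100e9    # figure quoted in the next paragraph

print(f"FLOPS per mouse neuron: {exaflops / mouse_neurons:.1e}")  # ~1.4e10
print(f"FLOPS per human neuron: {exaflops / human_neurons:.1e}")  # ~1.0e7
```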

This is no easy challenge, mainly because the brain is so complex. In addition to 100 billion neurons and 100 trillion synapses, there are 55 different varieties of neuron and 3,000 ways they can interconnect. That complexity is multiplied by the differences that appear with 600 different diseases, genetic variation from one person to the next, and changes that go along with the age and sex of humans.

As Henry Markram, the co-director of EPFL who has worked on the Blue Brain Project for years, explains:

If you can’t experimentally map the brain, you have to predict it — the numbers of neurons, the types, where the proteins are located, how they’ll interact. We have to develop an entirely new science where we predict most of the stuff that cannot be measured.

With the Human Brain Project, researchers will use supercomputers to reproduce how brains form in a virtual vat. Then they will see how those brains respond to input signals from simulated senses and a simulated nervous system. If it works, actual brain behavior should emerge from the fundamental framework inside the computer; and where it doesn't work, scientists will know where their knowledge falls short.
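To give a flavour of what simulating "down to the level of the neuron" involves, here is a deliberately tiny Python sketch of a single leaky integrate-and-fire neuron driven by a constant input. It is a toy model of my own choosing – orders of magnitude simpler than the biochemically detailed models the HBP actually uses – but it shows the basic loop of integrating input, crossing a threshold, and spiking.

```python
# Toy leaky integrate-and-fire neuron: a caricature of the far more
# detailed neuron models used in full-brain simulations.
dt, tau = 0.1, 10.0                                # time step, membrane time constant (ms)
v_rest, v_reset, v_thresh = -65.0, -70.0, -50.0    # millivolts
v, input_current = v_rest, 2.0                     # arbitrary constant drive

spike_times = []
for step in range(1000):                 # simulate 100 ms
    # The membrane potential leaks back toward rest while integrating input.
    dv = (-(v - v_rest) + input_current * tau) / tau
    v += dv * dt
    if v >= v_thresh:                    # threshold crossed: emit a spike
        spike_times.append(step * dt)
        v = v_reset                      # then reset the membrane potential

if spike_times:
    print(f"{len(spike_times)} spikes in 100 ms, first at {spike_times[0]:.1f} ms")
else:
    print("no spikes")
```

A real simulation multiplies something like this (with vastly richer dynamics) across tens of millions of neurons and their trillions of synaptic connections, which is why exascale hardware is needed in the first place.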

The end result of all this will also be computers that are “neuromorphic” – capable of imitating human brains, thereby ushering in an age when machines will be able to truly think, reason, and make autonomous decisions. No more supercomputers that are tall on knowledge but short on understanding. The age of artificial intelligence will be upon us. And I think we all know what will follow, don’t we?

Yep, Cylons – that's what! And may God help us all!

Sources: news.cnet.com, extremetech.com