The Future of Computing: Brain-Like Computers

It’s no secret that computer scientists and engineers are looking to the human brain as a means of achieving the next great leap in computer evolution. Already, machines are being developed that rely on “machine blood,” can continue working despite being damaged, and can recognize images and speech. And soon, a computer chip that is capable of learning from its mistakes will also be available.

The new computing approach, already in use by some large technology companies, is based on the biological nervous system – specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.

The first commercial version of the new kind of computer chip is scheduled to be released in 2014, and is the result of a collaborative effort between I.B.M. and Qualcomm, as well as a Stanford research team. This “neuromorphic processor” can not only automate tasks that once required painstaking programming, but can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.

In the coming years, this approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That could have enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.

For example, computer vision systems only “recognize” objects that can be identified by the statistics-oriented algorithms programmed into them. An algorithm is like a recipe, a set of step-by-step instructions to perform a calculation. But last year, Google researchers were able to get a machine-learning algorithm, known as a “Google Neural Network”, to perform an identification task (involving cats) without supervision.

And this past June, the company said it had used those neural network techniques to develop a new search service to help customers find specific photos more accurately. And this past November, researchers at Stanford University came up with a new algorithm that could give computers the power to more reliably interpret language. It’s known as the Neural Analysis of Sentiment (NaSent).

A similar concept, known as Deep Learning, also aims to endow software with a measure of common sense. Google is using this technique with its voice recognition technology to aid in performing searches. In addition, the social media giant Facebook is looking to use deep learning to help it improve Graph Search, an engine that allows users to search activity on their network.

Until now, the design of computers has been dictated by ideas originated by the mathematician John von Neumann about 65 years ago. Microprocessors perform operations at lightning speed, following instructions programmed using long strings of binary code (0s and 1s). The information is stored separately in what is known as memory, either in the processor itself, in adjacent storage chips or in higher capacity magnetic disk drives.
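For readers who have never seen the von Neumann model spelled out, here is a minimal, purely illustrative sketch in Python: program and data sit in one shared memory, and a processor loop fetches and executes instructions one at a time (the tiny instruction set and program are invented for the example):

    # A toy von Neumann machine: instructions and data live in the same memory,
    # and the processor fetches and executes one instruction per step.
    memory = [
        ("LOAD", 7),     # copy the value at address 7 into the accumulator
        ("ADD", 8),      # add the value at address 8
        ("STORE", 9),    # write the result back to address 9
        ("HALT", None),
        None, None, None,
        5,               # address 7: data
        37,              # address 8: data
        None,            # address 9: result ends up here
    ]

    accumulator = 0
    pc = 0  # program counter

    while True:
        op, addr = memory[pc]           # fetch the next instruction
        pc += 1
        if op == "LOAD":                # execute it
            accumulator = memory[addr]
        elif op == "ADD":
            accumulator += memory[addr]
        elif op == "STORE":
            memory[addr] = accumulator
        elif op == "HALT":
            break

    print(memory[9])  # prints 42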

By contrast, the new processors consist of electronic components that can be connected by wires that mimic biological synapses. Because they are based on large groups of neuron-like elements, they are known as neuromorphic processors, a term credited to the California Institute of Technology physicist Carver Mead, who pioneered the concept in the late 1980s.

These processors are not “programmed” in the conventional sense. Instead, the connections between the circuits are “weighted” according to correlations in data that the processor has already “learned.” Those weights are altered as data flows into the chip, causing the neuron-like elements to change their values and “spike.” This, in turn, strengthens some connections and weakens others, much the same way the human brain reacts to new information.
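To give a feel for what “weighting” and “spiking” mean in practice, here is a minimal, hypothetical sketch of a single neuron-like element whose input weights strengthen or weaken as data flows through it. It is only an illustration of the general idea, not IBM’s or Qualcomm’s actual chip logic:

    import random

    # A toy neuron: it sums weighted inputs, "spikes" when a threshold is crossed,
    # and nudges its weights depending on which inputs were active.
    weights = [0.2, 0.2, 0.2]
    threshold = 0.5
    learning_rate = 0.05

    def step(inputs):
        activation = sum(w * x for w, x in zip(weights, inputs))
        spiked = activation >= threshold
        for i, x in enumerate(inputs):
            if spiked and x:
                weights[i] += learning_rate        # strengthen connections that fired together
            elif not spiked and x:
                weights[i] -= learning_rate * 0.5  # weaken active connections that did not lead to a spike
        return spiked

    # Feed in a stream of binary input patterns and watch the weights adapt.
    for _ in range(20):
        step([random.randint(0, 1) for _ in range(3)])

    print(weights)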

In the words of Dharmendra Modha, an I.B.M. computer scientist who leads the company’s cognitive computing research effort:

Instead of bringing data to computation as we do today, we can now bring computation to data. Sensors become the computer, and it opens up a new way to use computer chips that can be everywhere.

One great advantage of the new approach is its ability to tolerate glitches, whereas traditional computers cannot work around the failure of even a single transistor. With the biological designs, the algorithms are ever changing, allowing the system to continuously adapt and work around failures to complete tasks. Another benefit is energy efficiency, an inspiration also drawn from the human brain.

The new computers, which are still based on silicon chips, will not replace today’s computers, but augment them, at least for the foreseeable future. Many computer designers see them as coprocessors, meaning they can work in tandem with other circuits that can be embedded in smartphones and the centralized computers that run computing clouds.

However, the new approach is still limited, thanks to the fact that scientists still do not fully understand how the human brain functions. As Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, put it:

We have no clue. I’m an engineer, and I build things. There are these highfalutin theories, but give me one that will let me build something.

Luckily, there are efforts underway that are designed to remedy this, with the specific intention of directing that knowledge towards the creation of better computers and AIs. One such effort is the National Science Foundation-financed Center for Brains, Minds and Machines, a new research center based at the Massachusetts Institute of Technology, with Harvard and Cornell.

Another is the California Institute for Telecommunications and Information Technology (aka Calit2) – a center dedicated to innovation in nanotechnology, life sciences, information technology, and telecommunications. As Larry Smarr, an astrophysicist and director of the Institute, put it:

We’re moving from engineering computing systems to something that has many of the characteristics of biological computing.

And last, but certainly not least, is the Human Brain Project, an international group of 200 scientists from 80 different research institutions, based in Lausanne, Switzerland. Having secured the $1.6 billion they need to fund their efforts, these researchers will spend the next ten years conducting research that cuts across multiple disciplines.

This initiative, which has been compared to the Large Hadron Collider, will attempt to reconstruct the human brain piece-by-piece and gradually bring these cognitive components into an overarching supercomputer. The expected result of this research will be new platforms for “neuromorphic computing” and “neurorobotics,” allowing for the creation of computing and robotic architectures that mimic the functions of the human brain.

When future generations look back on this decade, no doubt they will refer to it as the birth of the neuromorphic computing revolution. Or maybe just the Neuromorphic Revolution for short, but that sort of depends on the outcome. With so many technological revolutions well underway, it is difficult to imagine how the future will look back on and characterize this time.

Perhaps, as Charles Stross suggests, it will simply be known as “the teens”, that time in pre-Singularity history when it was all starting to come together, but had yet to explode and violently change everything we know. I for one am looking forward to being around to witness it all!

Sources: nytimes.com, technologyreview.com, calit2.net, humanbrainproject.eu

By 2014: According to Asimov and Clarke

Amongst the sci-fi greats of old, there were few authors, scientists and futurists more influential than Isaac Asimov and Arthur C. Clarke. And as individuals who constantly had one eye on the world of their day and one eye on the future, they had plenty to say about what the world would look like by the 21st century. And interestingly enough, 2014 just happens to be the year when much of what they predicted was meant to come true.

For example, 50 years ago, Asimov wrote an article for the New York Times that listed his predictions for what the world would be like in 2014. The article was titled “Visit to the World’s Fair of 2014”, and contained many accurate, and some not-so-accurate, guesses as to how people would be living today and what kinds of technology would be available to us.

Here are some of the accurate predictions:

1. “By 2014, electroluminescent panels will be in common use.”
In short, electroluminescent displays are thin, bright panels that are used in retail displays, signs, lighting and flat panel TVs. What’s more, personal devices are incorporating this technology, in the form of OLED and AMOLED displays, which are both paper-thin and flexible, giving rise to handheld devices you can bend and flex without fear of damaging them.

2. “Gadgetry will continue to relieve mankind of tedious jobs.”
Oh yes indeed! In the last thirty years, we’ve seen voicemail replace personal assistants, secretaries and message boards. We’ve seen fax machines replace couriers. We’ve seen personal devices and PDAs that are able to handle more and more in the way of tasks, making it unnecessary for people to consult written sources or perform their own shorthand calculations. It’s a hallmark of our age that personal technology is doing more and more of the legwork, supposedly freeing us to do more with our time.

3. “Communications will become sight-sound and you will see as well as hear the person you telephone.”
This was a popular prediction in Asimov’s time, usually taking the form of a videophone or conversations that happened through display panels. And the rise of social media and telepresence has certainly delivered on that. Services like Skype, Google Hangout, FaceTime and more have made video chatting very common, and a viable alternative to a phone line you need to pay for.

4. “The screen can be used not only to see the people you call but also for studying documents and photographs and reading passages from books.”
Multitasking is one of the hallmarks of modern computers, handheld devices, and tablets, and has been the norm for operating systems for some time. By simply calling up new windows, new tabs, or opening up multiple apps simultaneously and simply switching between them, users are able to start multiple projects, or conduct work and view video, take pictures, play games, and generally behave like a kid with ADHD on crack if they so choose.

5. “Robots will neither be common nor very good in 2014, but they will be in existence.”
If you define “robot” as a computer that looks and acts like a human, then this guess is definitely true. While we do not have robot servants or robot friends per se, we do have Roombas, robots capable of performing menial tasks, and even ones capable of imitating animal and human movements and participating in hazardous-duty exercises (Google the DARPA Robot Challenge to see what I mean).

Alas, he was off on several other fronts. For example, kitchens do not yet prepare “automeals”, meaning entire meals prepared for us at the click of a button. What’s more, the vast majority of our education systems are not geared towards the creation and maintenance of robotics. All surfaces have not yet been converted into display screens, though they could be if we wanted. And the world population is actually higher than he predicted (6,500,000,000 was his estimate).

As for what he got wrong, well… our appliances are not powered by radioactive isotopes, and thereby able to be entirely wireless (though wireless recharging is becoming a reality). Only a fraction of students are currently proficient in computer language, contrary to his expectation that all would be. And last, society is not a place of “enforced leisure”, where work is considered a privilege and not a burden. Too bad too!

And when it comes to the future, there are few authors whose predictions are more trusted than Arthur C. Clarke. In addition to being a prolific science fiction writer, he wrote nearly three dozen nonfiction books and countless articles about the future of space travel, undersea exploration and daily life in the 21st century.

And in a recently released clip from a 1974 ABC News program filmed in Australia, Clarke is shown talking to a reporter next to a massive bank of computers. With his son in tow, the reporter asks Clarke to talk about what computers will be like when his son is an adult. In response, Clarke offers some eerily prophetic, if not quite spot-on, predictions:

The big difference when he grows up, in fact it won’t even wait until the year 2001, is that he will have, in his own house, not a computer as big as this, but at least a console through which he can talk to his friendly local computer and get all the information he needs for his everyday life, like his bank statements, his theater reservations, all the information you need in the course of living in a complex modern society. This will be in a compact form in his own house.

In short, Clarke predicted not only the rise of the personal computer, but also online banking, shopping and a slew of internet services. Clarke was then asked about the possible danger of becoming a “computer-dependent” society, and while he acknowledged that in the future humanity would rely on computers “in some ways,” computers would also open up the world:

It’ll make it possible for us to live really anywhere we like. Any businessman, any executive, could live almost anywhere on Earth and still do his business through his device like this. And this is a wonderful thing.

Clarke certainly had a point about computers giving us the ability to communicate from almost anywhere on the globe, also known as telecommunication, telecommuting and telepresence. But as to whether or not our dependence on this level of technology is a good or bad thing, the jury is still out on that one. The point is, his predictions proved to be highly accurate, forty years in advance.

Granted, Clarke’s predictions were not summoned out of thin air. Ever since computers were used in World War II to crack Germany’s ciphers, miniaturization has been the trend in computing. By the 1970s, machines were still immense and clunky, but punch cards and vacuum tubes had already given way to transistors, which were getting smaller all the time.

And in 1969, the first operational packet-switching network was established, one that would go on to adopt the Transmission Control Protocol and Internet Protocol (TCP/IP). Known as the Advanced Research Projects Agency Network (or ARPANET), this U.S. Department of Defense network was set up to connect the DOD’s various research projects at universities and laboratories all across the US, and was the precursor to the modern internet.

Being a man who was so on top of things technologically, Clarke accurately predicted that these two trends would continue into the foreseeable future, giving rise to computers small enough to fit on our desks (rather than taking up an entire room) and networked with other computers all around the world via TCP/IP, enabling real-time data sharing and communications.

And in the meantime, be sure to check out the Clarke interview below:


Sources: huffingtonpost.com, blastr.com

Big News in Quantum Computing!

For many years, scientists have looked at the field of quantum machinery as the next big wave in computing. Whereas conventional computing encodes information in bits carried by streams of particles (electrons), quantum computing exploits the quantum states of the particles themselves, which can exist in superpositions and become entangled with one another. Harnessing these states would make computers exponentially faster and more efficient at certain kinds of problems, and could lead to an explosion in machine intelligence. And while the technology has yet to be realized, every day brings us one step closer…

One important step happened earlier this month with the installation of the D-Wave Two at the Quantum Artificial Intelligence Lab (QAIL) at NASA’s Ames Research Center in Silicon Valley. Not surprisingly, the ARC is only the second lab in the world to have a quantum computer. The only other lab to possess the 512-qubit, cryogenically cooled machine is the defense contractor Lockheed Martin, which bought its first D-Wave system in 2011 and has since upgraded to a D-Wave Two.

D-Wave’s new 512-qubit Vesuvius chip

And while there are still some who question the categorization of the D-Wave Two as a true quantum computer, most critics have acquiesced, since many of its components function in accordance with the basic principle. And NASA, Google, and the people at the Universities Space Research Association (USRA) even ran some tests to confirm that the quantum computer offered a speed boost over conventional supercomputers — and it passed.

The new lab, which will be situated at NASA’s Advanced Supercomputing Facility at the Ames Research Center, will be operated by NASA, Google, and the USRA. NASA and Google will each get 40% of the system’s computing time, with the remaining 20% being divvied up by the USRA to researchers at various American universities. NASA and Google will primarily use the quantum computer to advance a branch of artificial intelligence called machine learning, which is tasked with developing algorithms that optimize themselves with experience.
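As a rough illustration of what “optimizing themselves with experience” means, here is a minimal sketch of a model with a single adjustable parameter that improves its guesses a little after every example it sees. The data and update rule are invented purely for the example:

    # Toy "learning from experience": fit y = w * x by nudging w after every example.
    examples = [(1, 3.1), (2, 5.9), (3, 9.2), (4, 11.8)]  # roughly y = 3x
    w = 0.0
    learning_rate = 0.01

    for _ in range(200):                     # each pass over the data adds "experience"
        for x, y in examples:
            error = w * x - y                # how wrong the current guess is
            w -= learning_rate * error * x   # adjust the parameter to shrink the error

    print(round(w, 2))                       # ends up close to 3.0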

As for what specific machine learning tasks NASA and Google actually have in mind, we can only guess. But it’s a fair bet that NASA will be interested in optimizing flight paths to other planets, or devising a safer/better/faster landing procedure for the next Mars rover. As for Google, the smart money says they will be using their time to develop complex AI algorithms for their self-driving cars, as well as optimizing their search engines and Google+.

But in the end, it’s the long-range possibilities that offer the most excitement here. With NASA and Google now firmly in command of a quantum processor, some of the best and brightest minds in the world will be working to advance the fields of artificial intelligence, space flight, and high-tech. It will be quite exciting to see what they produce…

Another important step took place back in March, when researchers at Yale University announced that they had developed a new way to change the quantum state of photons, the elementary particles researchers hope to use for quantum memory. This is good news, because true quantum computing – the kind that utilizes qubits for all of its processes – has continually eluded scientists and researchers in recent years, and reliable quantum memory is one of its biggest missing pieces.

To break it down, today’s computers are restricted in that they store information as bits – where each bit holds either a “1” or a “0.” But a quantum computer is built around qubits (quantum bits) that can store a 1, a 0 or any combination of both at the same time. And while the qubits would make up the equivalent of a processor in a quantum computer, some sort of quantum Random Access Memory (RAM) is also needed.
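For the curious, here is a minimal sketch, using nothing but plain Python complex numbers and purely for illustration, of how a qubit differs from a classical bit: its state is a pair of amplitudes, and only when measured does it yield a definite 0 or 1:

    import random

    # A classical bit is simply 0 or 1. A qubit is a pair of complex amplitudes
    # (alpha, beta) with |alpha|^2 + |beta|^2 = 1; measuring it gives 0 with
    # probability |alpha|^2 and 1 with probability |beta|^2.
    alpha = complex(1 / 2 ** 0.5)
    beta = complex(1 / 2 ** 0.5)             # an equal superposition of 0 and 1
    assert abs(abs(alpha) ** 2 + abs(beta) ** 2 - 1) < 1e-9   # the state is normalized

    def measure():
        return 0 if random.random() < abs(alpha) ** 2 else 1

    samples = [measure() for _ in range(1000)]
    print(sum(samples) / len(samples))       # roughly 0.5: about half the measurements give 1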

Gerhard Kirchmair, one of the Yale researchers, explained in a recent interview with Nature magazine that photons are a good choice for this because they can retain a quantum state for a long time over a long distance. But you’ll want to change the quantum information stored in the photons from time to time. What the Yale team has developed is essentially a way to temporarily make the photons used for memory “writeable,” and then switch them back into a more stable state.

To do this, Kirchmair and his associates took advantage of what’s known as a “Kerr medium”, a material that refracts light differently depending on the intensity of the light shining through it. This is different from ordinary materials, which refract light (and any other form of electromagnetic field) the same way regardless of how intense it is.
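For reference, this intensity dependence (the optical Kerr effect) is commonly written as a refractive index that grows with the intensity of the light, along the lines of:

    n(I) = n₀ + n₂·I

where n₀ is the medium’s ordinary refractive index, n₂ is its Kerr coefficient, and I is the intensity of the applied field. In an ordinary material, n₂ is effectively zero.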

Thus, by exposing photons to a microwave field in a Kerr medium, they were able to manipulate the quantum states of the photons, making them a viable means of quantum memory storage. At the same time, they knew that storing these memory photons in a Kerr medium would prove unstable, so they added a vacuum-filled aluminum resonator to act as a coupler. When the resonator is decoupled, the photons are stable. When the resonator is coupled, the photons are “writeable”, allowing a user to input information and store it effectively.

This is not the first or only instance of researchers finding ways to toy with the state of photons, but it is currently the most stable and effective. And coupled with other efforts, such as the development of photonic transistors and other such components, or new ways to create photons seemingly out of thin air, we could be just a few years away from the first full and bona fide quantum processor!

Sources: Extremetech.com, Wired.com, Nature.com

Microsoft Concept Video: The Future of Smartphones and Computers

Ah, I imagine people are getting tired of these. But permit just one more! In the midst of so many new products and developments in the fields of smartphones, tablets, augmented reality, and wireless technology, Microsoft was sure to add its two cents. Released back in 2011, shortly after the Consumer Electronics Show and amidst all the buzz over flexible screens and paper-thin displays, this short concept video is entitled “Productivity Future Vision”.

In addition to showcasing their Windows Phone (shameless!), the video also features display glasses, “smart” windows, self-driving cars, 3D display technology, virtual interfacing, paper-thin and flexible display tablets, touchscreens, teleconferencing, and a ton of internet browsing and wireless connectivity. All of the technologies featured are currently under development, so the video is apt in addition to being visually appealing.

But of course, the real purpose of this video is to demonstrate to the world that Microsoft can bring these technologies together and build the future of business, travel, education and play. Or at the very least, it seeks to lay claim to a good portion of it. It’s Microsoft, people; they didn’t get to be a mega-corporation by writing checks or playing nice.

And based on this video, what can be said about the future? All in all, it looks a lot like today, only with a lot more bells and whistles!

The Future is Here: The Prescient Surveillance Camera!

Consider the possibility that surveillance cameras might someday be automated, that there would be no need for security clerks to sit and wade through endless hours of footage to find evidence of criminal behavior. That’s the reasoning behind the Pentagon and DARPA’s new project, named Mind’s Eye. Far from just being automated, the camera will be the first “smart” camera ever built, capable of predicting human behavior as well as monitoring it.

Using a concept known as “visual intelligence”, the project draws on a research proposal made by researchers working for the Carnegie Mellon School of Computer Science. The proposal calls for the creation of a “high-level artificial visual intelligence system” which, once operational, will be able to recognize human activities and predict what might happen next. Should it encounter a potentially threatening scene or dangerous behavior, it could sound the alarm and notify a human agent.

In essence, the camera system will rely on a series of computer vision algorithms that will allow it to classify behavior, discriminate between different actions in a scene, and predict their outcomes. That might sound like a case of coldly rational machine intelligence evaluating human actions; but in fact, the algorithms were designed to approximate human-level visual intelligence.

According to Alessandro Oltramari and Christian Lebiere, the researchers responsible for the proposal, humans evolved the ability to scan and process their environment for risks, at times relying on experience and guessing correctly what a person might do next. By using a linguistic infrastructure that operates in conjunction with a set of “action verbs”, along with a “cognitive engine,” the researchers are trying to get their camera to do the same thing.
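To make the idea a little more concrete, here is a deliberately simplified, hypothetical sketch of the “action verb” approach. It assumes some upstream vision system has already labeled what it sees with verbs, which is the genuinely hard part the DARPA program is after; the verbs and rules below are invented for illustration:

    # Hypothetical illustration only: map verbs reported by a vision system to an
    # alert decision. The real Mind's Eye "cognitive engine" is far more involved.
    THREATENING_VERBS = {"chase", "strike", "grab", "flee"}

    def assess(scene_verbs):
        """Return an alert if any recognized action looks threatening."""
        flagged = THREATENING_VERBS.intersection(scene_verbs)
        if flagged:
            return "ALERT: possible incident (" + ", ".join(sorted(flagged)) + ")"
        return "no action required"

    print(assess({"walk", "carry"}))         # -> no action required
    print(assess({"run", "chase", "grab"}))  # -> ALERT: possible incident (chase, grab)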

Sound scary? Well that’s natural considering the implications. Any such technology is sure to bolster private and public security efforts by relieving human beings of the humdrum activity of watching security cameras while at the same time keeping them notified about potential risks. On the other hand, a machine intelligence would be responsible for monitoring human beings and judging their actions, which raises many issues. Sure, it’s not exactly PreCrime, but it does raise some ethical and legal concerns, not to mention worries over accountability.

Luckily, the AI that would run such a system is still several years away, which leaves us time to debate and regulate any system that uses “smart surveillance”.

Source: IO9.com

The Future is Here: The Roll Out Laptop!

Presenting the Rolltop laptop, a proposed next-generation portable computer that is made to look and act like a scroll. The concept was first launched in 2009 by the people at Rolltop, a team of researchers, IT developers and business administrators. By combining recent advancements in the fields of OLED display and multi-touchscreen technology, the plan was to create a flexible computer that would combine the utility of a laptop computer with the weight of a mini notebook.

In addition, it can be switched from a laptop with a 13-inch diagonal screen to a 17-inch graphics tablet. Or it can be stood up against its rear-mounted support arm and used as a primary monitor. When rolled up, it measures a mere 8.3 centimeters in width and 28 centimeters in length, and has a carrying strap which allows it to be carried around like a small case. When unrolled, the laptop is separated from a central core which contains the battery, power plug-in, and loudspeaker.

Initially, the project was merely a proposal by the Rolltop team to demonstrate their vision and ideas. However, due to the overwhelming response from the technical and consumer community, they set to work on making it happen. As it stands, the device is still in the planning and development phase, but Rolltop has everything it needs to make it a reality. Well almost… The technology exists, the concept is feasible; all that’s needed is a little more time and investment capital.

In the meantime, check out this promotional video of the Rolltop at work. And if you’re really keen, click on this link to get to the company website to pledge a donation.

The Future…

A recent article from The Futurist concerning trends in the coming decade got me thinking… If we can expect major shifts in the technological and economic landscape, but at the same time be experiencing worries about climate change and resource shortages, what will the future look like? Two competing forces are warring for possession of our future; which one will win?

To hear Singularitarians and Futurists tell it, in the not-too-distant future we will be capable of downloading our consciousness and merging our brains with machine technology. At about the same time, we’re likely to perfect nanobots that will be capable of altering matter at the atomic level. We will be living in a post-mortal, post-scarcity future where just about anything is possible and we will be able to colonize the Solar System and beyond.

But to hear environmentalists and crisis planners tell it, we will be looking at a worldwide shortage of basic commodities and food due to climate change. The world’s breadbaskets, like the American Midwest, Canada’s Prairies, and the Russian Steppe, will suffer from repeated droughts, putting a strain on food production and food prices. Places that are already hard pressed to feed their growing populations, like China and India, will be even harder pressed. Many countries in the mid-latitudes that are already suffering from instability due to lack of irrigation and hunger – Pakistan, North Africa, the Middle East, Saharan Africa – will become even more unstable.

Polar ice regions will continue to melt, wreaking havoc with the Gulf Stream and forcing Europe to experience freezing winters and their own crop failures. And to top it off, tropical regions will suffer from increased tropical storm activity and flooding. This will create a massive refugee crisis, where up to 25% of the world’s population will try to shift north and south to occupy the cooler climes and more developed parts of the world. And this, of course, will lead to all kinds of political upheaval and incidents as armed forces are called out to keep them away.

Makes you wonder…

To hear the future characterized in such dystopian and utopian terms is nothing new. But at this juncture, it now seems like both of these visions are closer to coming true than ever before. With the unprecedented growth in computing, information technology, and biology, we could very well be making DNA-based computers and AIs in a few decades. But the climate crisis is already happening, with record heat, terrible wildfires, tropical storms and food shortages already gripping the world. Two tidal waves are rising and heading on a collision course, both threatening to sweep humanity up in their wake. Which will win out, or will one arrive first and render the other moot?

Hard to say. In the meantime, check out the article; it proves to be an interesting read!

The Futurist – Seven Themes For the Coming Decade