Judgement Day Update: The Human Brain Project

Biomimetics is one of the fastest-growing areas of technology today, seeking to develop machines that are capable of imitating biology. The purpose of this, in addition to creating machinery that can be merged with our physiology, is to arrive at a computing architecture that is as complex and sophisticated as the human brain.

While this might sound the slightest bit anthropocentric, it is important to remember that despite their processing power, systems like the D-Wave Two, IBM’s Blue Gene/Q Sequoia, or MIT’s ConceptNet 4 have all shown themselves to be lacking when it comes to common sense and abstract reasoning. Simply pouring raw computing power into the mix does not make for autonomous intelligence.

As a result of this, new steps are being taken to create a computer that can mimic the very organ that gives humanity these abilities – the human brain. In what is surely the most ambitious step towards this goal to date, an international group of researchers recently announced the formation of the Human Brain Project. Having secured the $1.6 billion they need to fund their efforts, these researchers will spend the next ten years conducting research that cuts across multiple disciplines.

This will involve mapping out the vast network known as the human brain – a network of nearly a hundred billion neurons joined by trillions of connections that are the source of emotions, abstract thought, and this thing we know as consciousness. And to do so, the researchers will be using a progressively scaled-up, multilayered simulation running on a supercomputer.

In keeping with this bold plan, the team itself is made up of over 200 scientists from 80 different research institutions around the world. Based in Lausanne, Switzerland, this initiative is being put forth by the European Commission, and has even been compared to the Large Hadron Collider in terms of scope and ambition. In fact, some have taken to calling it the “CERN for the brain.”

According to scientists working on the project, the HBP will attempt to reconstruct the human brain piece by piece and gradually bring these cognitive components into the overarching supercomputer. The expected result of this research will be new platforms for “neuromorphic computing” and “neurorobotics,” allowing for the creation of computing and robotic architectures that mimic the functions of the human brain.

According to a statement released by the HBP, Swedish Nobel Laureate Torsten Wiesel had this to say about the project:

The support of the HBP is a critical step taken by the EC to make possible major advances in our understanding of how the brain works. HBP will be a driving force to develop new and still more powerful computers to handle the massive accumulation of new information about the brain, while the neuroscientists are ready to use these new tools in their laboratories. This cooperation should lead to new concepts and a deeper understanding of the brain, the most complex and intricate creation on earth.

Other distinguished individuals who were quoted in the release include President Shimon Peres of Israel; Paul G. Allen, founder of the Allen Institute for Brain Science; Patrick Aebischer, President of EPFL in Switzerland; Harald Kainz, Rector of Graz University of Technology in Graz, Austria; as well as a slew of other politicians and academics.

Combine this with the work of other research institutions that are producing computer chips and processors modelled on the human brain, and with our growing understanding of the human connectome, and I think it would be safe to say that by the time the HBP wraps up, we are likely to see processors that are capable of demonstrating intelligence, not just in terms of processing speed and memory, but in terms of basic reasoning as well.

At that point, we really ought to consider instituting Asimov’s Three Laws of Robotics! Otherwise, things could get apocalyptic on our asses! 😉

Sources: io9.com, humanbrainproject.eu, documents.epfl.ch

Big News in Quantum Computing!

For many years, scientists have looked at the field of quantum machinery as the next big wave in computing. Whereas conventional computing involves sending information via streams of particles (electrons), quantum computing exploits the quantum states of those particles themselves, allowing many possible values to be worked on at once. This would make computers exponentially faster and more efficient, and could lead to an explosion in machine intelligence. And while the technology has yet to be fully realized, every day brings us one step closer…

One important step happened earlier this month with the installation of a D-Wave Two at the Quantum Artificial Intelligence Lab (QAIL) at NASA’s Ames Research Center in Silicon Valley – precisely the line of research NASA has announced it intends to pursue. Not surprisingly, the ARC is only the second lab in the world to have a quantum computer. The only other owner of the 512-qubit, cryogenically cooled machine is the defense contractor Lockheed Martin, which bought a D-Wave One in 2011 and has since upgraded to a D-Wave Two.

D-Wave’s new 512-qubit Vesuvius chip

And while there are still some who question the categorization of the D-Wave Two as a true quantum computer, most critics have acquiesced since many of its components function in accordance with the basic principle. And NASA, Google, and the people at the Universities Space Research Association (USRA) even ran some tests to confirm that the quantum computer offered a speed boost over conventional supercomputers — and it passed.

The new lab, which will be situated at NASA’s Advanced Supercomputing Facility at the Ames Research Center, will be operated by NASA, Google, and the USRA. NASA and Google will each get 40% of the system’s computing time, with the remaining 20% being divvied up by the USRA to researchers at various American universities. NASA and Google will primarily use the quantum computer to advance a branch of artificial intelligence called machine learning, which is tasked with developing algorithms that optimize themselves with experience.
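To make the idea of “algorithms that optimize themselves with experience” concrete, here is a minimal sketch (toy code, not anything NASA or Google actually runs): a model fits a line to noisy observations by nudging its one parameter a little after each example, so its guesses improve with experience.

```python
import random

# Toy illustration of machine learning: fit y = w * x to noisy data
# by adjusting w in the direction that reduces the error after each
# observation. All numbers here are made up for the example.

random.seed(0)
true_w = 3.0
data = [(x, true_w * x + random.gauss(0, 0.1)) for x in range(1, 21)]

w = 0.0     # initial guess
lr = 0.001  # learning rate: how big each corrective nudge is

for epoch in range(50):
    for x, y in data:
        error = w * x - y   # how wrong the current model is on this example
        w -= lr * error * x # gradient step: shrink the error slightly

print(round(w, 2))  # ends up close to the true slope of 3.0
```

The same optimize-from-experience loop, scaled up to millions of parameters, is what the lab’s machine-learning workloads amount to in spirit.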

As for what specific machine learning tasks NASA and Google actually have in mind, we can only guess. But it’s a fair bet that NASA will be interested in optimizing flight paths to other planets, or devising a safer/better/faster landing procedure for the next Mars rover. As for Google, the smart money says they will be using their time to develop complex AI algorithms for their self-driving cars, as well as optimizing their search engines and Google+.

But in the end, it’s the long-range possibilities that offer the most excitement here. With NASA and Google now firmly in command of a quantum processor, some of the best and brightest minds in the world will be working to advance the fields of artificial intelligence, space flight, and high-tech. It will be quite exciting to see what they produce…

Another important step took place back in March, when researchers at Yale University announced that they had developed a new way to change the quantum state of photons, the elementary particles researchers hope to use for quantum memory. This is good news, because true quantum computing – the kind that utilizes qubits for all of its processes – has continually eluded scientists and researchers in recent years, and stable, rewritable quantum memory is one of the key missing pieces.

To break it down, today’s computers are restricted in that they store information as bits, where each bit holds either a “1” or a “0.” But a quantum computer is built around qubits (quantum bits), each of which can store a 1, a 0, or any combination of both at the same time. And while the qubits would make up the equivalent of a processor in a quantum computer, some sort of quantum Random Access Memory (RAM) is also needed.
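The “any combination of both” idea can be sketched classically. In this toy simulation (purely illustrative; a real qubit cannot be inspected this way), a qubit is a pair of amplitudes for the 0 and 1 outcomes, and the squared magnitude of each amplitude gives the probability of measuring that outcome.

```python
import math

# Toy classical model of a single qubit: a normalized pair of
# amplitudes (a, b) for the |0> and |1> states. Measuring yields
# 0 with probability |a|^2 and 1 with probability |b|^2.

def qubit(a, b):
    """Return the normalized state (a, b)."""
    norm = math.sqrt(abs(a) ** 2 + abs(b) ** 2)
    return (a / norm, b / norm)

def prob_of_zero(state):
    a, _ = state
    return abs(a) ** 2

zero = qubit(1, 0)  # behaves like a classical bit holding 0
plus = qubit(1, 1)  # equal superposition: "both at the same time"

print(round(prob_of_zero(zero), 3))  # 1.0 — always measures 0
print(round(prob_of_zero(plus), 3))  # 0.5 — measures 0 half the time
```

A register of n such qubits holds 2^n amplitudes at once, which is where the “exponentially faster” promise comes from.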

Gerhard Kirchmair, one of the Yale researchers, explained in a recent interview with Nature magazine that photons are a good choice for this because they can retain a quantum state for a long time over a long distance. But you’ll want to change the quantum information stored in the photons from time to time. What the Yale team has developed is essentially a way to temporarily make the photons used for memory “writeable,” and then switch them back into a more stable state.

To do this, Kirchmair and his associates took advantage of what’s known as a “Kerr medium” – a material whose refractive index changes depending on the intensity of the light shining through it. This is different from ordinary materials, which refract light (and any other form of electromagnetic field) the same way regardless of how much they are exposed to.
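The contrast can be captured in one line of math: in a Kerr medium the refractive index is often modelled as n(I) = n0 + n2·I, growing with intensity I, while an ordinary linear medium just has a fixed n0. A tiny sketch with made-up but physically plausible numbers:

```python
# Illustrative numbers only: the optical Kerr effect, modelled as
# n(I) = n0 + n2 * I, versus an ordinary medium with a fixed index.

n0 = 1.45     # baseline refractive index (hypothetical material)
n2 = 2.6e-20  # Kerr coefficient in m^2/W (order of magnitude for silica)

def kerr_index(intensity):
    """Refractive index of the Kerr medium at a given intensity (W/m^2)."""
    return n0 + n2 * intensity

def linear_index(intensity):
    """An ordinary medium ignores intensity entirely."""
    return n0

low, high = 1e12, 1e18  # weak vs. intense illumination

print(kerr_index(high) > kerr_index(low))       # True: index shifts with power
print(linear_index(high) == linear_index(low))  # True: no change at all
```

It is exactly this intensity dependence that gives the experimenters a knob to turn: change the field, and the medium’s optical behaviour changes with it.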

Thus, by exposing photons to a microwave field in a Kerr medium, they were able to manipulate the quantum states of photons, making them a promising means for quantum memory storage. At the same time, they knew that storing these memory photons in a Kerr medium would prove unstable, so they added a vacuum-filled aluminum resonator to act as a coupler. When the resonator is decoupled, the photons are stable. When the resonator is coupled, the photons are “writeable,” allowing a user to input information and store it effectively.
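The coupled/decoupled behaviour is, in effect, a write-enable switch. As a loose analogy (this is not the Yale apparatus, just a toy state machine), the memory cell below only accepts writes while its “resonator” is coupled and freezes its contents the moment it is decoupled:

```python
# Loose analogy only: model the resonator coupling as a write-enable
# switch on a photon memory cell. Decoupled = stable (read-only);
# coupled = "writeable".

class PhotonMemory:
    def __init__(self):
        self.coupled = False  # resonator starts decoupled: stable state
        self.value = None

    def couple(self):
        self.coupled = True   # photons become "writeable"

    def decouple(self):
        self.coupled = False  # photons return to the stable state

    def write(self, value):
        if not self.coupled:
            raise RuntimeError("memory is stable: couple the resonator first")
        self.value = value

mem = PhotonMemory()
mem.couple()
mem.write("1")       # allowed while coupled
mem.decouple()
try:
    mem.write("0")   # rejected while decoupled
except RuntimeError:
    print("write blocked while stable")
print(mem.value)     # the stored "1" survives untouched
```

The real system does this with microwave fields rather than method calls, but the control logic – toggle coupling, write, toggle back – is the same shape.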

This is not the first or only instance of researchers finding ways to toy with the state of photons, but it is currently the most stable and effective. And coupled with other efforts, such as the development of photonic transistors and other such components, or new ways to create photons seemingly out of thin air, we could be just a few years away from the first full and bona fide quantum processor!

Sources: Extremetech.com, Wired.com, Nature.com