Frontiers of Neuroscience: Neurohacking and Neuromorphics

It is one of the hallmarks of our rapidly accelerating times: looking at the state of technology, at how it is increasingly being merged with our biology, and contemplating the ultimate leap of merging mind and machinery. The concept has been popular for many decades now, and with experimental procedures showing promise, neuroscience being used to inspire the next great leap in computing, and the advance of biomedicine and bionics, it seems like just a matter of time before people can “hack” their neurology too.

Take Kevin Tracey, a researcher working for the Feinstein Institute for Medical Research in Manhasset, N.Y., as an example. Back in 1998, he began conducting experiments to show that an interface existed between the immune and nervous systems. Building on ten years’ worth of research, he was able to show how inflammation – which is associated with rheumatoid arthritis and Crohn’s disease – can be fought by administering electrical stimuli, in the right doses, to the vagus nerve cluster.

In so doing, he demonstrated that the nervous system was like a computer terminal through which you could deliver commands to stop a problem, like acute inflammation, before it starts, or repair the body after it gets sick. His work also seemed to indicate that electricity delivered to the vagus nerve in just the right intensity and at precise intervals could reproduce a drug’s therapeutic effect, but with greater effectiveness, minimal health risks, and at a fraction of the cost of “biologic” pharmaceuticals.

Paul Frenette, a stem-cell researcher at the Albert Einstein College of Medicine in the Bronx, is another example. After discovering the link between the nervous system and prostate tumors, he and his colleagues created SetPoint – a startup dedicated to finding ways to manipulate neural input to delay the growth of tumors. These and other efforts are part of the growing field of bioelectronics, where researchers are creating implants that can communicate directly with the nervous system in order to try to fight everything from cancer to the common cold.

Impressive as this may seem, bioelectronics are just part of the growing discussion about neurohacking. In addition to the leaps and bounds being made in the field of brain-to-computer interfacing (and brain-to-brain interfacing), which would allow people to control machinery and share thoughts across vast distances, there is also a field of neurosurgery that is seeking to use the miracle material graphene to solve some of the most challenging issues in the field.

Given graphene’s rather amazing properties, this should not come as much of a surprise. In addition to being incredibly thin, lightweight, and light-sensitive (it is able to absorb light in both the UV and IR ranges), graphene also has a very high surface area (2,630 square meters per gram), which leads to remarkable conductivity. It also has the ability to bind or bioconjugate with various modifier molecules, and hence transform its behavior.

Already, it is being considered as a possible alternative to copper wires to break the energy-efficiency barrier in computing, and it may even prove useful in quantum computing. But it is in the field of neurosurgery, where researchers are looking to develop materials that can bridge and even stimulate nerves, that it may prove most valuable. In a story featured in the latest issue of Neurosurgery, the authors suggest that graphene may be ideal as an electroactive scaffold when configured as a three-dimensional porous structure.

That might be a preferable solution compared with other ideas currently in vogue, like using liquid metal alloys as bridges. Thanks to Samsung’s recent research into using graphene in its portable devices, it has also been shown to make an ideal E-field stimulator. And recent experiments on mice in Korea showed that a flexible, transparent graphene skin could be used as an electrical field stimulator to treat cerebral hypoperfusion by stimulating blood flow through the brain.

And what look at the frontiers of neuroscience would be complete without mentioning neuromorphic engineering? Whereas neurohacking and neurosurgery are looking for ways to merge technology with the human brain to combat disease and improve health, neuromorphic engineering (NE) is looking to the human brain to create computational technology with improved functionality. The result thus far has been a wide range of neuromorphic chips and components, such as memristors and neuristors.

However, as a whole, the field has yet to define for itself a clear path forward. That may be about to change thanks to Jennifer Hasler and a team of researchers at Georgia Tech, who recently published a roadmap to the future of neuromorphic engineering with the end goal of creating the human-brain equivalent of processing. This consisted of Hasler sorting through the many different approaches to the ultimate embodiment of neurons in silico and coming up with the technology she thinks is the way forward.

Her answer is not digital simulation, but rather the lesser-known technology of FPAAs (Field-Programmable Analog Arrays). FPAAs are similar to digital FPGAs (Field-Programmable Gate Arrays), but also include reconfigurable analog elements. They have been around on the sidelines for a few years, used primarily as so-called “analog glue logic” in system integration. In short, they handle a variety of analog functions that don’t fit on a traditional integrated circuit.

Hasler outlines an approach in which desktop neuromorphic systems will use System-on-a-Chip (SoC) approaches to emulate billions of low-power neuron-like elements that compute using learning synapses. Each synapse has an adjustable strength associated with it and is modeled using just a single transistor. Her own design for an FPAA board houses hundreds of thousands of programmable parameters, enabling systems-level computing on a scale that dwarfs other FPAA designs.
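
To make this concrete, here is a minimal sketch in Python – not Hasler’s actual FPAA design, which does this in analog hardware rather than software – of the kind of element being emulated: leaky integrate-and-fire neurons joined by synapses whose adjustable strengths are simply the weights below. All parameter values are invented for illustration.

```python
import random

class Neuron:
    """A leaky integrate-and-fire neuron, the classic minimal model."""
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0
        self.threshold = threshold  # firing threshold
        self.leak = leak            # per-step decay of membrane potential

    def step(self, input_current):
        """Integrate input, apply leak, and fire if the threshold is crossed."""
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after a spike
            return 1
        return 0

# Each synapse is just an adjustable strength -- the quantity Hasler's
# approach stores in a single transistor rather than in digital memory.
pre = [Neuron() for _ in range(3)]
post = Neuron()
weights = [0.4, 0.6, 0.2]           # hypothetical synaptic strengths

for t in range(20):
    spikes = [n.step(random.uniform(0.0, 0.5)) for n in pre]
    drive = sum(w * s for w, s in zip(weights, spikes))
    if post.step(drive):
        print(f"t={t}: postsynaptic neuron fired")
```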

At the moment, she predicts that human brain-equivalent systems will require a reduction in power usage to the point where they consume just one-eighth of what the digital supercomputers currently used to simulate neuromorphic systems require. Her own design accounts for a four-fold reduction in power usage, but the remaining two-fold savings will have to come from somewhere else – possibly through the use of better materials (i.e. graphene or one of its derivatives).

Hasler also forecasts that using soon-to-be-available 10nm processes, a desktop system with human-like processing power that consumes just 50 watts of electricity may eventually be a reality. These will likely take the form of chips with millions of neuron-like elements connected by billions of synapses firing to push each other over the edge, and who’s to say what they will be capable of accomplishing or what other breakthroughs they will make possible?

In the end, neuromorphic chips and technology are merely one half of the equation. In the grand scheme of things, the aim of all this research is not only to produce technology that can ensure better biology, but also technology inspired by biology to create better machinery. The end result of this, according to some, is a world in which biology and technology increasingly resemble each other, to the point that there is barely a distinction to be made and they can be merged.

Charles Darwin would roll over in his grave!

Sources: nytimes.com, extremetech.com, (2), journal.frontiersin.org, pubs.acs.org

Cyberwars: NSA Building Quantum Computer

As documents that illustrate the NSA’s clandestine behavior continue to be leaked, the extent to which the agency has been going to gain supremacy over cyberspace is becoming ever more clear. Thanks to a new series of documents released by Snowden, it now seems that these efforts included two programs whose purpose was to create a “useful quantum computer” that would be capable of breaking all known forms of classical encryption.

According to the documents, which were published by The Washington Post earlier this month, there are at least two programs that deal with quantum computers and their use in breaking classical encryption — “Penetrating Hard Targets” and “Owning the Net.” The first program is funded to the tune of $79.7 million and includes efforts to build “a cryptologically useful quantum computer” that can:

sustain and enhance research operations at NSA/CSS Washington locations, including the Laboratory for Physical Sciences facility in College Park, MD.

The second program, Owning the Net, deals with developing new methods of intercepting communications, including the use of quantum computers to break encryption. Given that quantum machinery is considered the next great leap in computer science, offering unprecedented speed and the ability to conduct operations at many times the efficiency of normal computers, this should not come as a surprise.

Such a computer would give the NSA unprecedented access to encrypted files and communications, enabling it to break any protective cipher, access anyone’s data with ease, and mount cyber attacks with impunity. But a working model would also be vital for defensive purposes. Much in the same way that the Cold War involved ongoing escalation in nuclear armament production, cybersecurity wars are also subject to constant one-upmanship.

In short, if China, Russia, or some other potentially hostile power were to obtain a quantum computer before the US, all of its encrypted information would be laid bare. Under the circumstances, and given its mandate to protect the US’s infrastructure, data, and people from harm, the NSA would much rather come into possession of one first. Hence why so much attention is dedicated to the issue, since whoever builds the world’s first quantum computer will enjoy full-court dominance for a time.

The mathematical, cryptographic, and quantum mechanical communities have long known that quantum computing should be able to crack classical encryption very easily. To crack RSA, the world’s prevailing cryptosystem, you need to be able to factor very large numbers into their constituent primes – a task that is very difficult with a normal, classical-physics CPU, but might be very easy for a quantum computer. But of course, the emphasis is still very much on the word might, as no one has built a fully functioning multi-qubit quantum computer yet.
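
A toy illustration in Python of why factoring breaks RSA – with numbers far too small to be secure (a real modulus is hundreds of digits long, which is exactly what makes the factoring step infeasible for classical machines, and what Shor’s algorithm on a quantum computer would make easy):

```python
def factor(n):
    """Brute-force factoring -- trivial here, infeasible at real key sizes."""
    for p in range(2, int(n**0.5) + 1):
        if n % p == 0:
            return p, n // p

p, q = 61, 53                        # the secret primes
n, e = p * q, 17                     # the public key: modulus 3233, exponent 17

# An attacker who can factor n can derive the private exponent d:
fp, fq = factor(n)
phi = (fp - 1) * (fq - 1)
d = pow(e, -1, phi)                  # modular inverse (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)      # encrypted with the public key
recovered = pow(ciphertext, d, n)    # decrypted with the derived private key
print(recovered == message)          # True -- the encryption is broken
```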

As for when that might be, no one can say for sure. But the smart money is apparently anticipating one soon, since researchers are getting to the point where coherence at the single-qubit level is becoming feasible, allowing them to move on to the trickier subject of stringing multiple fully-entangled qubits together, as well as the necessary error-checking/fault-tolerance measures that go along with multi-qubit setups.
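
For a feel of what “fully entangled” means at the two-qubit level, here is a minimal state-vector simulation in Python with numpy. It simulates the arithmetic of entanglement on a classical machine, which only works for a handful of qubits since the cost doubles with each one added – precisely why real quantum hardware matters:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                 # controlled-NOT gate
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1.0, 0, 0, 0])               # two qubits in |00>
state = CNOT @ (np.kron(H, I) @ state)         # Bell state (|00> + |11>)/sqrt(2)

for basis, amp in zip(["00", "01", "10", "11"], state):
    print(f"|{basis}>: probability {amp**2:.2f}")
# |00>: 0.50, |01>: 0.00, |10>: 0.00, |11>: 0.50 -- the qubits always agree
```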

But from what it has published so far, the Laboratory for Physical Sciences – which is carrying out the NSA’s quantum computing work under contract – doesn’t seem to be leading the pack in terms of building a quantum computer. In this respect, it’s IBM, with its superconducting waveguide-cavity qubits, that appears to be closer to realizing a quantum computer, with other major IT firms and their own quantum computer models not far behind.

Despite what this recent set of leaks demonstrates, then, the public should take comfort in knowing that the NSA is not ahead of the rest of the industry. In reality, something like a working quantum computer would be so hugely significant that it would be impossible for the NSA to develop it internally and keep it a secret. And by the time the NSA does have a working quantum computer with which to intercept all of our encrypted data, it won’t be the only one, which would ensure it lacked dominance in this field.

So really, these latest leaks ought not to worry people too much, and should instead put the NSA’s ongoing struggle to control cyberspace in perspective. One might go so far as to say that the NSA is trying to remain relevant in an age where it is becoming increasingly outmatched. With billions of terabytes traversing the globe on any given day and trillions of devices and sensors creating a “second skin” of information over the globe, no one organization is capable of controlling or monitoring it all.

So to those in the habit of dredging up 1984 every time they hear about the latest NSA and domestic surveillance scandal, I say: Suck on it, Big Brother!

Source: wired.com

Judgement Day Update: Bionic Computing!

IBM has always been at the forefront of cutting-edge technology. Whether it was with the development of computers that could guide ICBMs and rockets into space during the Cold War, or its contributions to the growth of the Internet during the early 90’s, the company has managed to stay on the vanguard by constantly looking ahead. So it comes as no surprise that it had plenty to say last month on the subject of the next big leap.

During a media tour of its Zurich lab in late October, IBM presented some of the company’s latest concepts. According to the company, the key to creating supermachines that are 10,000 times faster and more efficient is to build bionic computers cooled and powered by electronic blood. The end result of this plan is what is known as “Big Blue”, a proposed biocomputer that the company anticipates will take 10 years to make.

Intrinsic to the design is the merger of computing and biological forms, specifically the human brain. In terms of computing, IBM is relying on the human brain as its template. Through this, the company hopes to enable processing power that’s densely packed into 3D volumes rather than spread out across flat 2D circuit boards with slow communication links.

On the biological side of things, IBM is supplying computing equipment to the Human Brain Project (HBP) – a $1.3 billion European effort that uses computers to simulate the actual workings of an entire brain. Beginning with mice, but then working their way up to human beings, their simulations examine the inner workings of the mind all the way down to the biochemical level of the neuron.

It’s all part of what IBM calls “the cognitive systems era”, a future where computers aren’t just programmed, but also perceive what’s going on, make judgments, communicate in natural language, and learn from experience. As the description suggests, it is closely related to artificial intelligence, and may very well prove to be the curtain-raiser of the AI era.

One of the key challenges behind this work is matching the brain’s power consumption. The ability to process the subtleties of human language helped IBM’s Watson supercomputer win at “Jeopardy.” That was a high-profile step on the road to cognitive computing, but from a practical perspective, it also showed how much farther computing has to go. Whereas Watson uses 85 kilowatts of power, the human brain uses only 20 watts – a gap of more than 4,000-fold.

Already, a shift has been occurring in computing, which is evident in the way engineers and technicians are now measuring computer progress. For the past few decades, the method of choice for gauging performance was operations per second, or the rate at which a machine could perform mathematical calculations.

But as computers began to require prohibitive amounts of power to perform various functions and generated far too much waste heat, a new measurement was called for. The measurement that emerged as a result was expressed in operations per joule of energy consumed. In short, progress has come to be measured in terms of a computer’s energy efficiency.
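
The two yardsticks can rank machines differently. A sketch with invented numbers: machine A wins on raw operations per second, while machine B does more work per joule and so wins under the newer metric.

```python
# watts are joules per second, so ops/joule = (ops/second) / watts
machines = {
    "A": {"ops_per_sec": 2.0e15, "watts": 8.0e6},   # fast but power-hungry
    "B": {"ops_per_sec": 5.0e14, "watts": 5.0e5},   # slower but frugal
}

for name, m in machines.items():
    ops_per_joule = m["ops_per_sec"] / m["watts"]
    print(f"{name}: {m['ops_per_sec']:.1e} ops/s, {ops_per_joule:.1e} ops/J")
# A: 2.0e15 ops/s, 2.5e+08 ops/J
# B: 5.0e14 ops/s, 1.0e+09 ops/J  <- four times the work per joule
```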

But now, IBM is contemplating another method for measuring progress, known as “operations per liter”. In accordance with this new paradigm, the success of a computer will be judged by how much data-processing can be squeezed into a given volume of space. This is where the brain really serves as a source of inspiration, being the most efficient computer in terms of performance per cubic centimeter.

As it stands, today’s computers consist of transistors and circuits laid out on flat boards that ensure plenty of contact with air that cools the chips. But as Bruno Michel – a biophysics professor and researcher in advanced thermal packaging for IBM Research – explains, this is a terribly inefficient use of space:

In a computer, processors occupy one-millionth of the volume. In a brain, it’s 40 percent. Our brain is a volumetric, dense object.

In short, communication links between processing elements can’t keep up with data-transfer demands, and they consume too much power as well. The proposed solution is to stack and link chips into dense 3D configurations, a process which is impossible today because stacking even two chips means crippling overheating problems. That’s where the “electronic blood” comes in, at least as far as cooling is concerned.

This process is demonstrated with the company’s prototype system called Aquasar. By branching chips into a network of liquid cooling channels that funnel fluid into ever-smaller tubes, the chips can be stacked together in large configurations without overheating. The liquid passes not next to the chip, but through it, drawing away heat in the thousandth of a second it takes to make the trip.

In addition, IBM is also developing a system called a redox flow battery, which uses liquid to distribute power instead of wires. Two types of electrolyte fluid, each with oppositely charged ions, circulate through the system to distribute power, much in the same way that the human body provides oxygen, nutrients, and cooling to the brain through the blood.

The electrolytes travel through ever-smaller tubes that are about 100 microns wide at their smallest – the width of a human hair – before handing off their power to conventional electrical wires. Flow batteries can produce between 0.5 and 3 volts, which in turn means IBM can use the technology today to supply 1 watt of power for every square centimeter of a computer’s circuit board.
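
A quick back-of-the-envelope using those figures (a sketch, not an IBM specification): since power is voltage times current, delivering 1 watt to each square centimeter means the electrolyte must carry between a third of an amp and two amps per square centimeter, depending on where in the 0.5-3 volt range the cell operates.

```python
target_power = 1.0                      # watts per square centimeter
for volts in (0.5, 1.0, 3.0):
    amps = target_power / volts         # I = P / V
    print(f"{volts:>3} V  ->  {amps:.2f} A per cm^2")
# 0.5 V -> 2.00 A, 1.0 V -> 1.00 A, 3.0 V -> 0.33 A
```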

Already, the IBM Blue Gene supercomputer has been used for brain research by the Blue Brain Project at the Ecole Polytechnique Federale de Lausanne (EPFL) in Lausanne, Switzerland. Working with the HBP, the next step will be to augment a Blue Gene/Q with additional flash memory at the Swiss National Supercomputing Center.

After that, they will begin simulating the inner workings of the mouse brain, which consists of 70 million neurons. By the time they are conducting human brain simulations, they plan to be using an “exascale” machine – one that performs 1 exaflops, or a quintillion (10^18) floating-point operations per second. This will take place at the Juelich Supercomputing Center in northern Germany.

This is no easy challenge, mainly because the brain is so complex. In addition to 100 billion neurons and 100 trillion synapses, there are 55 different varieties of neuron, and 3,000 ways they can interconnect. That complexity is multiplied by differences that appear with 600 different diseases, genetic variation from one person to the next, and changes that go along with the age and sex of humans.

As Henry Markram, the co-director of EPFL who has worked on the Blue Brain project for years, explains:

If you can’t experimentally map the brain, you have to predict it — the numbers of neurons, the types, where the proteins are located, how they’ll interact. We have to develop an entirely new science where we predict most of the stuff that cannot be measured.

With the Human Brain Project, researchers will use supercomputers to reproduce how brains form in a virtual vat. Then, they will see how these brains respond to input signals from simulated senses and a simulated nervous system. If it works, actual brain behavior should emerge from the fundamental framework inside the computer, and where it doesn’t work, scientists will know where their knowledge falls short.

The end result of all this will also be computers that are “neuromorphic” – capable of imitating human brains, thereby ushering in an age when machines will be able to truly think, reason, and make autonomous decisions. No more supercomputers that are tall on knowledge but short on understanding. The age of artificial intelligence will be upon us. And I think we all know what will follow, don’t we?

Yep, that’s what! And may God help us all!

Sources: news.cnet.com, extremetech.com

Judgement Day Update: Using AI to Predict Flu Outbreaks

It’s a rare angle for those who’ve been raised on a heady diet of movies where the robot goes mad and tries to kill all humans: an artificial intelligence using its abilities to help humankind! But that’s the idea being explored by researchers like Raul Rabadan, a theoretical physicist working in biology at Columbia University. Using a new form of machine learning, they are seeking to unlock the mysteries of flu strains.

Basically, they are hoping to find out why flu strains like H1N1, which ordinarily infect pigs and cows, are managing to make the jump to human hosts. Key to understanding this is finding the specific mutations that transform them into human pathogens. Traditionally, answering this question would require painstaking comparisons of the DNA and protein sequences of different viruses.

But thanks to rapidly growing databases of virus sequences and advances made in computing, scientists are now using sophisticated machine learning techniques – a branch of artificial intelligence in which computers develop algorithms based on the data they are given – to identify key properties in viruses like bird flu and swine flu and to see how they go about transmitting from animals to humans.
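
As a rough illustration of the general approach – not the researchers’ actual pipeline, and with invented sequences and labels – each protein sequence can be fingerprinted by its k-mer counts and fed to an off-the-shelf classifier (scikit-learn here):

```python
from collections import Counter
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def kmer_counts(seq, k=2):
    """Counts of overlapping k-mers -- a crude numeric fingerprint."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

# Toy training data: invented protein fragments and invented labels,
# where 1 = human-transmissible and 0 = animal-restricted.
train_seqs = ["MKAILVVLLY", "MKTIIALSYI", "MNPNQKIITI", "MSLLTEVETY"]
labels = [1, 1, 0, 0]

vec = DictVectorizer()
X = vec.fit_transform(kmer_counts(s) for s in train_seqs)
model = LogisticRegression().fit(X, labels)

# Score a new, unseen sequence:
query = vec.transform([kmer_counts("MKAIIALSYI")])
print(model.predict_proba(query))   # [P(animal-only), P(human-transmissible)]
```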

This is especially important since every few decades, a pandemic flu virus emerges that not only infects humans but also passes rapidly from person to person. The H7N9 avian flu that infected more than 130 people in China is just the latest example. While it has not been as infectious as others, the fact that humans lack the antibodies to combat it led to a high lethality rate, with 44 of the infected dying. What’s more, it is expected to emerge again this fall or winter.

Knowing the key properties of this and other viruses will help researchers identify the most dangerous new flu strains and could lead to more effective vaccines. Most importantly, scientists can now look at hundreds or thousands of flu strains simultaneously, which could reveal common mechanisms across different viruses or a broad diversity of transformations that enable human transmission.

Researchers are also using these approaches to investigate other viral mysteries, including what makes some viruses more harmful than others and factors influencing a virus’s ability to trigger an immune response. The latter could ultimately aid the development of flu vaccines. Machine learning techniques might even accelerate future efforts to identify the animal source of mystery viruses.

This technique was first employed in 2011 by Nir Ben-Tal, a computational biologist at Tel Aviv University in Israel, and Richard Webby, a virologist at St. Jude Children’s Research Hospital in Memphis, Tennessee. Together, Ben-Tal and Webby used machine learning to compare protein sequences of the 2009 H1N1 pandemic swine flu with those of hundreds of other swine viruses.

Machine learning algorithms have been used to study DNA and protein sequences for more than 20 years, but only in the past few years have scientists applied them to viruses. Inspired by the growing amount of viral sequence data available for analysis, the machine learning approach is likely to expand as even more genomic information becomes available.

As Webby has said, “Databases will get much richer, and computational approaches will get much more powerful.” That in turn will help scientists better monitor emerging flu strains and predict their impact, ideally forecasting when a virus is likely to jump to people and how dangerous it is likely to become.

Perhaps Asimov had the right of it. Perhaps humanity will actually derive many benefits from turning our world increasingly over to machines. Either that, or Cameron will be right, and we’ll invent a supercomputer that’ll kill us all!

Source: wired.com

Judgement Day Update: The Human Brain Project

Biomimetics is one of the fastest-growing areas of technology today, seeking to develop machinery that is capable of imitating biology. The purpose of this, in addition to creating machinery that can be merged with our physiology, is to arrive at a computing architecture that is as complex and sophisticated as the human brain.

While this might sound the slightest bit anthropocentric, it is important to remember that despite their processing power, supercomputers like the D-Wave Two, IBM’s Blue Gene/Q Sequoia, or MIT’s ConceptNet 4, have all shown themselves to be lacking when it comes to common sense and abstract reasoning. Simply pouring raw computing power into the mix does not make for autonomous intelligence.

As a result of this, new steps are being taken to create a computer that can mimic the very organ that gives humanity these abilities – the human brain. In what is surely the most ambitious step towards this goal to date, an international group of researchers recently announced the formation of the Human Brain Project. Having secured the $1.6 billion they need to fund their efforts, these researchers will spend the next ten years conducting research that cuts across multiple disciplines.

This will involve mapping out the vast network known as the human brain – a network composed of over a hundred billion neurons whose connections are the source of emotions, abstract thought, and this thing we know as consciousness. And to do so, the researchers will be using a progressively scaled-up multilayered simulation running on a supercomputer.

Concordant with this bold plan, the team itself is made up of over 200 scientists from 80 different research institutions around the world. Based in Lausanne, Switzerland, this initiative is being put forth by the European Commission, and has even been compared to the Large Hadron Collider in terms of scope and ambition. In fact, some have taken to calling it the “CERN for the brain.”

According to scientists working on the project, the HBP will attempt to reconstruct the human brain piece-by-piece and gradually bring these cognitive components into the overarching supercomputer. The expected result of this research will be new platforms for “neuromorphic computing” and “neurorobotics,” allowing for the creation of computing and robotic architectures that mimic the functions of the human brain.

According to a statement released by the HBP, Swedish Nobel Laureate Torsten Wiesel had this to say about the project:

The support of the HBP is a critical step taken by the EC to make possible major advances in our understanding of how the brain works. HBP will be a driving force to develop new and still more powerful computers to handle the massive accumulation of new information about the brain, while the neuroscientists are ready to use these new tools in their laboratories. This cooperation should lead to new concepts and a deeper understanding of the brain, the most complex and intricate creation on earth.

Other distinguished individuals quoted in the release include President Shimon Peres of Israel; Paul G. Allen, the founder of the Allen Institute for Brain Science; Patrick Aebischer, the President of EPFL in Switzerland; and Harald Kainz, Rector of Graz University of Technology in Graz, Austria; as well as a slew of other politicians and academics.

Combined with the work of other research institutions that are producing computer chips and processors modelled on the human brain, and with our growing understanding of the human connectome, I think it would be safe to say that by the time the HBP wraps up, we are likely to see processors that are capable of demonstrating intelligence – not just in terms of processing speed and memory, but in terms of basic reasoning as well.

At that point, we really ought to consider instituting Asimov’s Three Laws of Robotics! Otherwise, things could get apocalyptic on our asses! 😉


Sources: io9.com, humanbrainproject.eu, documents.epfl.ch

The NSA’s New Supercomputer Facilities

The extent and depth of the NSA’s snooping has been the subject of much scrutiny and controversy of late. And it seems that the more we come to learn about the issue, the worse it gets. In addition to the extensive access the NSA seems to have to our personal data, there’s also the staggering amount of power being concentrated in so few hands, coupled with a serious lack of oversight. Worse yet, it appears the NSA is showing no signs of slowing down.

Just two months ago, the Army Corps of Engineers began breaking ground on a new supercomputing facility in Fort Meade, Maryland – the center of the NSA’s cyber operations. Known as the High Performance Computing Center-2, this $860 million data center will span more than 600,000 square feet of space, including 70,000 square feet of technical space. The center is expected to be completed in 2016.

But worse yet is the fact that this is not the only center being built, nor is it even the largest. In addition to the Fort Meade facility, the NSA is also building a massive data center in Utah, a project that will feature up to 1 million square feet of facilities and cost a hefty $1.5 billion. The computers alone will take up over 100,000 square feet, and the facility will require its own electrical substation to power all the air conditioning required.

In truth, the Fort Meade location is only necessary because of the planned facility being built in Utah. Once it is up and running, the NSA will need a separate location where analysts can look over the growing amounts of processed information and material, and in turn make reports and provide recommendations for policy-makers.

Of course, the purpose of these facilities goes beyond the mere analysis and storage of information. The Utah Data Center will also employ new code-breaking capabilities. Given the extent to which modern, high-value information is encrypted – everything from commerce to diplomacy to personal information – the center will be employing the latest code-cracking tools developed by the NSA.

Naturally, the NSA’s tightly-controlled PR department has stated that the purpose of these centers is to protect national security networks and provide U.S. authorities with intelligence and warnings about cyber threats, as part of the Comprehensive National Cybersecurity Initiative (CNCI). However, this has done little to allay fears, and seems like the same song being played on repeat.

As always, the NSA’s stated objectives do not address the growing awareness that the NSA has conducted, and continues to conduct, cyber attacks in foreign countries. As Snowden’s testimony and recent revelations about the US super-secret Cyber Command revealed, American agencies have been conducting far more than just defensive operations in recent years.

All of these efforts began in earnest during the 1990’s and expanded greatly after September 11th, 2001. Much of this has had to do with the staggering increase in the amount of data being transmitted and shared on a daily basis, and not just the issue of terrorism. But what is disturbing is the near-total removal of oversight that began after 9/11 and has continued unabated ever since.

Despite promises that the era of warrantless surveillance was at an end, all attempts to resolve the issue have been marred by disputes over what is meant by “electronic surveillance”. In the meantime, the NSA continues to enjoy some rather broad freedoms to monitor and process the information we transmit. And as those abilities continue to grow, we can only hold our breath and pray they mean it when they say “innocent people need not be worried”.

Sources: policymic.com, datacenterknowledge.com, seattleweekly.com, wired.com

Supercomputer Creates Atomic Model of HIV

The ongoing fight to end HIV has been a long and arduous one, but progress is being made. In addition to potential treatments being created that have shown promise, there are also efforts being mounted to understand how the virus works at an atomic level. This is great news, for as any practitioner of medicine will tell you, understanding a disease and knowing how to strike at the heart of it is the key to stopping it and making sure future generations don’t have to fear it.

In recent years, several major breakthroughs were announced for the treatment of HIV, treatments which many heralded as cures. In January of last year, the Danish Research Council awarded funding to a group of researchers who demonstrated that HIV could be “flushed” from infected cells where it tends to congregate and protect itself. Combined with vaccinations that turbocharge the body’s immune system, this method proved effective at eliminating the HIV virus in infected cells.

Another came back in November, when researchers at Caltech were able to successfully spawn a significant amount of HIV antibodies in lab mice using a new approach known as Vectored ImmunoProphylaxis (VIP). An inversion of the traditional vaccination method, this new approach produced plenty of HIV-preventing antibodies which, the researchers believed, could be fashioned into a vaccine.

And finally, there were the experiments being conducted over at the Washington University School of Medicine, where researchers designed a solution that employed bee venom and a nanoparticle delivery system. Knowing that bee venom is capable of killing HIV, and that the virus is thousands of times smaller than your average cell, the solution proved quite effective at filtering out the virus and killing it while leaving surrounding tissue unharmed. Taken together, these proposed solutions have left many thinking a cure is just around the corner.

Nevertheless, in order for this virus to truly be beaten, we need to understand it better. Hence why a group of scientists – using the University of Illinois’ “Blue Waters” supercomputer – has developed a new series of computer models that are finally giving researchers an atomic-level look at the formidable barrier mechanism enclosing the heart of the virus.

For example, it’s been known for some time that the HIV virus is covered in several layers of protective proteins. But beneath that outer shell resides a conical structure called the capsid, which houses the virus’ payload of genetic material. When HIV invades a cell, it’s the capsid that opens up to initiate the takeover process, allowing the virus to replicate inside the healthy host cell. A better understanding of how this mysterious delivery system operates could be one of the final steps to finding a cure.

And that’s where the modelling software really comes into play. How and when the capsid opens to deliver the virus’ genetic payload has long eluded researchers, and as Klaus Schulten, a physicist who was part of the team that modeled the virus, pointed out: “The timing of the opening of the capsid is essential for the degree of virulence of the virus.”

Using Blue Waters, Schulten and his associates managed to model all 64 million of the capsid’s atoms. Through countless simulations, they also discovered that the capsid’s microscopic outer casing is composed of 216 hexagon-shaped proteins that fit together in a honeycomb formation. These hexagonal structures are what give the capsid its tough outer shell and allow it to be such a harmful and insidious killer.

This painstakingly delicate process would have been unthinkable until just a few years ago, and represents the most complete picture of the HIV virus to date. What’s more, knowing what HIV looks like at the atomic level will help scientists to understand the timing of the virus’ delivery system. Since the opening of the virus’ protective layer is when it’s most vulnerable, Schulten and his colleagues hope to determine the precise timing of this event so a treatment can be developed that attacks the virus at this exact moment.

Think of it as throwing a bomb into the mouth of a terrible war machine, right as it opens up its armored maw to bite you! Better yet, think of it as another step on the road to ending one of the greatest plagues humankind has ever had to deal with. Safety for the future, and justice for the victims!

Sources: popularscience.com, theweek.com, (2)

The Future is Here: The Neuromimetic Processor

It’s known as mimetic technology: machinery that mimics the function and behavior of organic life. For some time, scientists have been using this philosophy to further develop computing, a process which many believe to be paradoxical. In gleaning inspiration from the organic world to design better computers, scientists are basically creating the machinery that could lead to better organics.

But when it comes to neuromorphic processors – computers that mimic the function of the human brain – scientists have been lagging behind sequential computing. For instance, IBM announced this past November that its Blue Gene/Q Sequoia supercomputer could clock 16 quadrillion calculations per second, and could crudely simulate more than 530 billion neurons – roughly five times as many as a human brain. However, doing this required 8 megawatts of power, enough to power 1,600 homes.

However, Kwabena Boahen, a bioengineering professor at Stanford University, recently developed a new computing platform that he calls the “Neurogrid”. Each Neurogrid board, running at only 5 watts, can simulate the detailed neuronal activity of one million neurons – and it can now do so in real time. Given the processing-to-power ratio, this means that his new chip is roughly 100,000 times more efficient than the supercomputer.
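
It is worth seeing what goes into a figure like that. Using only the numbers quoted above, the raw power-per-neuron gap is modest; most of the claimed advantage comes from Neurogrid modeling detailed neurons in real time, while the supercomputer’s cruder simulation runs far slower than real time (a slowdown factor the quoted figure folds in but which is not given here):

```python
sequoia_watts, sequoia_neurons = 8.0e6, 530e9      # 8 MW, 530 billion neurons
neurogrid_watts, neurogrid_neurons = 5.0, 1.0e6    # 5 W, 1 million neurons

watts_per_neuron_sequoia = sequoia_watts / sequoia_neurons        # ~1.5e-5 W
watts_per_neuron_neurogrid = neurogrid_watts / neurogrid_neurons  # 5.0e-6 W

print(watts_per_neuron_sequoia / watts_per_neuron_neurogrid)      # ~3x, before
# accounting for the simulation's slowdown relative to real time
```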

What’s more, it’s likely to mean the wide-scale adoption of processors that mimic human neuronal behavior over traditional computer chips. Whereas sequential computing relies on simulated ion channels to create software-generated “neurons”, the neuromorphic approach involves the flow of ions through channels in a way that emulates the flow of electrons through transistors. Basically, the difference is between software that mimics the behavior and hardware that embodies it.

It’s also likely to be a major stepping stone towards the creation of AI and MMI – that’s Artificial Intelligence and Man-Machine Interface, for those who don’t speak geek. With computer chips imitating human brains and achieving a measure of intelligence that can be measured in terms of neurons and connections, the likelihood that they will be able to merge with a person’s brain, and thus augment their intelligence, becomes that much greater.

Source: Extremetech.com