Frontiers of Neuroscience: Neurohacking and Neuromorphics

It is one of the hallmarks of our rapidly accelerating times: looking at the state of technology, how it is increasingly being merged with our biology, and contemplating the ultimate leap of merging mind and machinery. The concept has been popular for many decades now, and with experimental procedures showing promise, neuroscience being used to inspire the next great leap in computing, and the advance of biomedicine and bionics, it seems like just a matter of time before people can “hack” their neurology too.

Take Kevin Tracey, a researcher working for the Feinstein Institute for Medical Research in Manhasset, N.Y., as an example. Back in 1998, he began conducting experiments to show that an interface existed between the immune and nervous systems. Building on ten years’ worth of research, he was able to show how inflammation – which is associated with rheumatoid arthritis and Crohn’s disease – can be fought by administering electrical stimuli, in the right doses, to the vagus nerve cluster.

In so doing, he demonstrated that the nervous system is like a computer terminal through which you can deliver commands to stop a problem, like acute inflammation, before it starts, or to repair the body after it gets sick. His work also seemed to indicate that electricity delivered to the vagus nerve in just the right intensity and at precise intervals could reproduce a drug’s therapeutic effect, but with greater effectiveness, minimal health risks, and at a fraction of the cost of “biologic” pharmaceuticals.

Paul Frenette, a stem-cell researcher at the Albert Einstein College of Medicine in the Bronx, is another example. After discovering the link between the nervous system and prostate tumors, he and his colleagues created SetPoint – a startup dedicated to finding ways to manipulate neural input to delay the growth of tumors. These and other efforts are part of the growing field of bioelectronics, where researchers are creating implants that can communicate directly with the nervous system in order to try to fight everything from cancer to the common cold.

Impressive as this may seem, bioelectronics are just one part of the growing discussion about neurohacking. In addition to the leaps and bounds being made in brain-to-computer interfacing (and brain-to-brain interfacing), which would allow people to control machinery and share thoughts across vast distances, there is also a field of neurosurgery that is seeking to use the miracle material of graphene to solve some of the most challenging issues in their field.

Given graphene’s rather amazing properties, this should not come as much of a surprise. In addition to being incredibly thin, lightweight, and light-sensitive (it is able to absorb light in both the UV and IR ranges), graphene also has a very high surface area (2,630 square meters per gram) and remarkable conductivity. It also has the ability to bind or bioconjugate with various modifier molecules, and hence transform its behavior.

Already, it is being considered as a possible alternative to copper wiring to break the energy-efficiency barrier in computing, and it may even prove useful in quantum computing. In the field of neurosurgery, meanwhile, researchers are looking to develop materials that can bridge and even stimulate nerves. And in a story featured in the latest issue of Neurosurgery, the authors suggest that graphene may be ideal as an electroactive scaffold when configured as a three-dimensional porous structure.

That might be a preferable solution compared with other currently in-vogue ideas, like using liquid metal alloys as bridges. Thanks to Samsung’s recent research into using graphene in its portable devices, the material has also been shown to make an ideal E-field stimulator. And recent experiments on mice in Korea showed that a flexible, transparent graphene skin could be used as an electrical field stimulator to treat cerebral hypoperfusion by stimulating blood flow through the brain.

And what look at the frontiers of neuroscience would be complete without mentioning neuromorphic engineering? Whereas neurohacking and neurosurgery are looking for ways to merge technology with the human brain to combat disease and improve its health, NE is looking to the human brain to create computational technology with improved functionality. The result thus far has been a wide range of neuromorphic chips and components, such as memristors and neuristors.

However, as a whole, the field has yet to define for itself a clear path forward. That may be about to change thanks to Jennifer Hasler and a team of researchers at Georgia Tech, who recently published a roadmap to the future of neuromorphic engineering, with the end goal of creating the processing equivalent of a human brain. This consisted of Hasler sorting through the many different approaches to embodying neurons in silico and coming up with the technology that she thinks is the way forward.

Her answer is not digital simulation, but rather the lesser-known technology of FPAAs (Field-Programmable Analog Arrays). FPAAs are similar to digital FPGAs (Field-Programmable Gate Arrays), but also include reconfigurable analog elements. They have been around on the sidelines for a few years, used primarily as so-called “analog glue logic” in system integration. In short, they handle a variety of analog functions that don’t fit on a traditional integrated circuit.

Hasler outlines an approach where desktop neuromorphic systems will use System on a Chip (SoC) approaches to emulate billions of low-power neuron-like elements that compute using learning synapses. Each synapse has an adjustable strength associated with it and is modeled using just a single transistor. Her own design for an FPAA board houses hundreds of thousands of programmable parameters which enable systems-level computing on a scale that dwarfs other FPAA designs.
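To get a feel for what such hardware emulates, here is a minimal, purely illustrative Python sketch of a leaky integrate-and-fire network in which each synapse is a single adjustable weight. Every number and name in it is an invented placeholder, not something drawn from Hasler’s designs.

```python
import numpy as np

# Toy leaky integrate-and-fire network: each synapse is one adjustable weight,
# loosely analogous to the single-transistor synapses described above.
# All parameters are illustrative placeholders, not values from Hasler's work.
rng = np.random.default_rng(0)
n_neurons = 100
weights = rng.normal(0.0, 0.3, size=(n_neurons, n_neurons))  # synaptic strengths
potential = np.zeros(n_neurons)                               # membrane potentials
leak, threshold = 0.9, 1.0

def step(input_current):
    """Advance the network one time step and return the current each neuron receives."""
    global potential
    potential = leak * potential + input_current  # leaky integration
    spikes = potential >= threshold               # which neurons fire this step
    potential[spikes] = 0.0                       # reset the neurons that fired
    return weights @ spikes.astype(float)         # spikes routed through the synapses

current = rng.random(n_neurons)
for _ in range(50):
    current = step(current) + 0.1 * rng.random(n_neurons)  # small external drive
```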

At the moment, she predicts that human brain-equivalent systems will require a reduction in power usage to the point where they consume just one-eighth of what the digital supercomputers currently used to simulate neuromorphic systems require. Her own design can account for a four-fold reduction in power usage, but the rest is going to have to come from somewhere else – possibly through the use of better materials (i.e. graphene or one of its derivatives).

Hasler also forecasts that using soon-to-be-available 10 nm processes, a desktop system with human-like processing power that consumes just 50 watts of electricity may eventually be a reality. These will likely take the form of chips with millions of neuron-like elements connected by billions of synapses firing to push each other over the edge, and who’s to say what they will be capable of accomplishing or what other breakthroughs they will make possible?
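Taken together, those power figures amount to a simple bit of arithmetic – sketched below with the quoted numbers treated as given and everything else as assumption.

```python
# Rough arithmetic on the power claims above; purely illustrative.
target_reduction = 8        # brain-equivalent systems need ~1/8 the power of digital simulation
from_fpaa_design = 4        # the factor Hasler's FPAA approach is said to account for
remaining = target_reduction / from_fpaa_design
print(f"Factor still needed from better materials/processes: {remaining:.0f}x")

desktop_watts = 50          # projected desktop system on a 10 nm process
print(f"Projected desktop power budget: {desktop_watts} W")
```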

In the end, neuromorphic chips and technology are merely one half of the equation. In the grand scheme of things, the aim of all of this research is not only to produce technology that can ensure better biology, but also technology inspired by biology to create better machinery. The end result of this, according to some, is a world in which biology and technology increasingly resemble each other, to the point that there is barely a distinction to be made and they can be merged.

Charles Darwin would roll over in his grave!

Sources: nytimes.com, extremetech.com, journal.frontiersin.org, pubs.acs.org

The Future is… Worms: Life Extension and Computer-Simulations

Post-mortality is considered by most to be an intrinsic part of the so-called Technological Singularity. For centuries, improvements in medicine, nutrition and health have led to improved life expectancy. And in an age where so much more is possible – thanks to cybernetics, bio, nano, and medical advances – it stands to reason that people will alter their physiques in order to slow the onset of aging and extend their lives even more.

And as research continues, new and exciting finds are being made that would seem to indicate that this future may be just around the corner. And at the heart of it may be a series of experiments involving worms. At the Buck Institute for Research on Aging in California, researchers have been tweaking longevity-related genes in nematode worms in order to amplify their lifespans.

And the latest results caught even the researchers by surprise. By triggering mutations in two pathways known for lifespan extension – mutations that inhibit key molecules involved in insulin signaling (IIS) and the nutrient signaling pathway Target of Rapamycin (TOR) – they created an unexpected feedback effect that amplified the lifespan of the worms by a factor of five.

Ordinarily, a tweak to the TOR pathway results in a 30% lifespan extension in C. elegans worms, while mutations in IIS (daf-2) result in a doubling of lifespan. By combining the mutations, the researchers were expecting something around a 130% extension to lifespan. Instead, the worms lived the equivalent of about 400 to 500 human years.
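A quick calculation makes the surprise clear – assuming, for the sake of the sketch, that the two effects would simply add (the percentages are the ones quoted above; the human-year comparison is only a rough analogy):

```python
# If the two mutations combined additively, you'd expect roughly:
tor_gain = 0.30    # ~30% extension from the TOR pathway tweak
iis_gain = 1.00    # ~100% extension (a doubling) from the IIS/daf-2 mutation
expected = tor_gain + iis_gain
observed = 4.0     # a five-fold lifespan is a ~400% extension

print(f"Expected extension (additive): {expected:.0%}")   # ~130%
print(f"Observed extension (synergy):  {observed:.0%}")   # ~400%
```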

As Doctor Pankaj Kapahi said in an official statement:

Instead, what we have here is a synergistic five-fold increase in lifespan. The two mutations set off a positive feedback loop in specific tissues that amplified lifespan. These results now show that combining mutants can lead to radical lifespan extension — at least in simple organisms like the nematode worm.

The positive feedback loop, say the researchers, originates in the germline tissue of the worms – a sequence of reproductive cells that may be passed on to successive generations. This may be where the interactions between the two mutations are integrated; and if correct, it might apply to the pathways of more complex organisms. Towards that end, Kapahi and his team are looking to perform similar experiments in mice.

But long-term, Kapahi says that a similar technique could be used to produce therapies for aging in humans. It’s unlikely that it would result in the dramatic increase to lifespan seen in worms, but it could be significant nonetheless. For example, the research could help explain why scientists are having a difficult time identifying single genes responsible for the long lives experienced by human centenarians:

In the early years, cancer researchers focused on mutations in single genes, but then it became apparent that different mutations in a class of genes were driving the disease process. The same thing is likely happening in aging. It’s quite probable that interactions between genes are critical in those fortunate enough to live very long, healthy lives.

A second worm-related story comes from the OpenWorm project, an international open-source project dedicated to the creation of a bottom-up computer model of a millimeter-sized nematode. As one of the simplest known multicellular life forms on Earth, it is considered a natural starting point for creating computer-simulated models of organic beings.

In an important step forward, OpenWorm researchers have completed the simulation of the nematode’s 959 cells – including its 302 neurons and 95 muscle cells – and their worm is wriggling around in fine form. However, despite this basic simplicity, the nematode is not without its share of complex behaviors, such as feeding, reproducing, and avoiding being eaten.

To model the complex behavior of this organism, the OpenWorm collaboration (which began in May 2013) is developing a bottom-up description. This involves making models of the individual worm cells and their interactions, based on their observed functionality in the real-world nematodes. Their hope is that realistic behavior will emerge if the individual cells act on each other as they do in the real organism.

Fortunately, we know a lot about these nematodes. The complete cellular structure is known, as well as rather comprehensive information about how the worm behaves in reaction to its environment. Included in our knowledge is the complete connectome, a comprehensive map of the neural connections (synapses) in the worm’s nervous system.
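In spirit, the bottom-up approach looks something like the toy sketch below: take a connectome (here a randomly generated one, not the real C. elegans wiring diagram), give each cell a simple update rule, and let the collective behavior emerge from the interactions. The real OpenWorm models are far richer – ion channels, muscle physics, body mechanics – so treat this purely as an illustration of the structure of the idea.

```python
import numpy as np

# Toy "bottom-up" simulation: a made-up connectome drives simple cell dynamics.
rng = np.random.default_rng(1)
n_neurons, n_muscles = 302, 95                      # counts quoted above
connectome = (rng.random((n_neurons, n_neurons)) < 0.02).astype(float)
neuron_to_muscle = (rng.random((n_muscles, n_neurons)) < 0.05).astype(float)

activity = rng.random(n_neurons)
for t in range(100):
    # Each neuron integrates input from its presynaptic partners, then saturates.
    activity = np.tanh(connectome @ activity * 0.1 + 0.05)
    muscle_drive = neuron_to_muscle @ activity      # muscles sum their motor-neuron input

print("Mean muscle drive after 100 steps:", round(float(muscle_drive.mean()), 3))
```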

The big question is, assuming that the behavior of the simulated worm continues to agree with the real thing, at what stage might it be reasonable to call it a living organism? The usual definition of a living organism is behavioral: it extracts usable energy from its environment, maintains homeostasis, possesses a capacity to grow, responds to stimuli, reproduces, and adapts to its environment over successive generations.

If the simulation exhibits these behaviors, combined with realistic responses to its external environment, should we consider it to be alive? And just as importantly, what tests could be used to evaluate such a hypothesis? One possibility is an altered version of the Turing test – Alan Turing’s proposed method for testing whether or not a computer could be called sentient.

In the Turing test, a computer is considered sentient and sapient if it can simulate the responses of a conscious sentient being so that an auditor can’t tell the difference. A modified Turing test might say that a simulated organism is alive if a skeptical biologist cannot, after thorough study of the simulation, identify a behavior that argues against the organism being alive.

And of course, this raises even larger questions. For one, is humanity on the verge of creating “artificial life”? And what, if anything, does that really look like? Could it just as easily take the form of computer simulations as anthropomorphic robots and biomachinery? And if the answer to any of these questions is yes, then what exactly does that say about our preconceived notions of what life is?

If humanity is indeed moving into an age of “artificial life”, and from several different directions, it is probably time that we figure out what differentiates the living from the nonliving. Structure? Behavior? DNA? Local reduction of entropy? The good news is that we don’t have to answer that question right away. Chances are, we wouldn’t be able to at any rate.

And though it might not seem apparent, there is a connection between the former and the latter stories here. In addition to being able to prolong life through genetic engineering, the ability to simulate consciousness through computer-generated constructs might just prove a way to cheat death in the future. If complex life forms and connectomes (like that of the human brain) can be simulated, then people may be able to transfer their neural patterns before death and live on in simulated form indefinitely.

So… anti-aging, artificial life forms, and the potential for living indefinitely. And to think that it all begins with the simplest multicellular life form on Earth – the nematode worm. But then again, all life – nay, all of existence – depends upon the simplest of interactions, which in turn give rise to more complex behaviors and organisms. Where else would we expect the next leap in biotechnological evolution to come from?

And in the meantime, be sure to enjoy this video of OpenWorm’s simulated nematode in action.


Sources: IO9, cell.com, gizmag, openworm

Biggest Scientific Breakthroughs of 2013

The new year is literally right around the corner, folks. And I thought what better way to celebrate 2013 than by acknowledging its many scientific breakthroughs. And there were so many to be had – ranging in fields from bioresearch and medicine, space and extra-terrestrial exploration, computing and robotics, and biology and anthropology – that I couldn’t possibly do them all justice.

Luckily, I have found a lovely, condensed list which managed to capture what are arguably the biggest hits of the year. Many of these were ones I managed to write about as they were happening, and many were not. But that’s what’s good about retrospectives: they make us take account of things we missed and what we might like to catch up on. And of course, I threw in a few stories that weren’t included, but which I felt belonged.

So without further ado, here are the top 15 biggest breakthroughs of 2013:

1. Voyager 1 Leaves the Solar System:

For 36 years, NASA’s Voyager 1 spacecraft has been travelling farther and farther away from Earth, often at speeds approaching 18 km (11 miles) per second. At a pace like that, scientists knew Voyager would sooner or later breach the fringe of the heliosphere that surrounds and defines our solar neighborhood and enter the bosom of our Milky Way Galaxy. But when it would finally break that threshold was a question no one could answer. And after months of uncertainty, NASA finally announced in September that the space probe had done it. As Don Gurnett, lead author of the paper announcing Voyager’s departure, put it: “Voyager 1 is the first human-made object to make it into interstellar space… we’re actually out there.”

2. The Milky Way is Filled with Habitable Exoplanets:

After years of planet hunting, scientists were able to determine from all the data gathered by the Kepler space probe that there could be as many as 2 billion potentially habitable exoplanets in our galaxy. That works out to roughly 22% of the Milky Way’s sunlike stars, with the nearest candidate just 12 light years away (Tau Ceti). The astronomers’ results, which were published in October of 2013, showed that roughly one in five sunlike stars harbors an Earth-size planet orbiting in its habitable zone – a figure much higher than previously thought.
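For a sense of where a number like two billion comes from, here is the rough arithmetic. Only the 22% figure is from the Kepler analysis quoted above; the star counts are loose assumptions for illustration.

```python
# Back-of-the-envelope estimate; the first two numbers are loose assumptions.
stars_in_milky_way = 200e9      # commonly quoted range is ~100-400 billion stars
sunlike_fraction = 0.05         # rough share of sunlike (G/K-type) stars
habitable_fraction = 0.22       # the ~one-in-five figure from the Kepler analysis

habitable_planets = stars_in_milky_way * sunlike_fraction * habitable_fraction
print(f"Estimated Earth-size planets in habitable zones: ~{habitable_planets/1e9:.1f} billion")
```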

3. First Brain-to-Brain Interface:

In February of 2013, scientists announced that they had successfully established an electronic link between the brains of two rats. Even when the animals were separated by thousands of kilometers, signals from the mind of one could help the second solve basic puzzles in real time. By July, a connection was made between the minds of a human and a rat. And by August, two researchers at the University of Washington were able to demonstrate that signals could be transmitted between two human brains, effectively making brain-to-brain interfacing (BBI), and not just brain-computer interfacing (BCI), truly possible.

4. Long-Lost Continent Discovered:

In February of this year, geologists from the University of Oslo reported that a small Precambrian continent known as Mauritia had been found. At one time, this continent resided between Madagascar and India, but it was then pushed beneath the ocean by a multi-million-year breakup spurred by tectonic rifts and a yawning sea floor. But now, volcanic activity has driven the remnants of the long-lost continent right through to the Earth’s surface.

Not only is this an incredibly rare find, but the arrival of this continent at the surface has given geologists a chance to study lava sands and minerals which are millions and even billions of years old. In addition to the volcanic lava sands, the majority of which are around 9 million years old, the Oslo team also found deposits of zircon xenocrysts that were anywhere from 660 million to 1.97 billion years old. Studies of these and the land mass will help us learn more about Earth’s deep past.

5. Cure for HIV Found!:

For decades, medical researchers and scientists have been looking to create a vaccine that could prevent one from being infected with HIV. But in 2013, they not only developed several vaccines that demonstrated this ability, but went a step further and found several potential cures. The first bit of news came in March, when researchers at Caltech demonstrated – using HIV antibodies and an approach known as Vectored ImmunoProphylaxis (VIP) – that it was possible to block the virus.

Then came the SAV001 vaccine from the Schulich School of Medicine & Dentistry at Western University in London, Ontario, which aced its clinical trials. This was punctuated by researchers at the University of Illinois, who in May used the “Blue Waters” supercomputer to develop a new series of computer models to get at the heart of the virus.

But even more impressive was the range of potential cures that were developed. The first came in March, when researchers at the Washington University School of Medicine in St. Louis showed that a solution of bee venom and nanoparticles was capable of killing off the virus while leaving surrounding tissue unharmed. The second came in the same month, when doctors from Johns Hopkins University Medical School were able to cure a child of HIV thanks to the very early use of antiretroviral therapy (ART).

And in September, two major developments occurred. The first came from Rutgers New Jersey Medical School, where researchers showed that an antiviral foot cream called Ciclopirox was capable of eradicating infectious HIV when applied to cell cultures of the virus. The second came from the Vaccine and Gene Therapy Institute at the Oregon Health and Science University (OHSU), where researchers developed a vaccine that was also able to cure HIV in about 50% of test subjects. Taken together, these developments may signal the beginning of the end of the HIV pandemic.

6. Newly Discovered Skulls Alter Thoughts on Human Evolution:

The discovery of an incredibly well-preserved skull from Dmanisi, Georgia has made anthropologists rethink human evolution. This 1.8 million-year-old skull basically suggests that our evolutionary tree may have fewer branches than previously thought. Compared with other skulls discovered nearby, it suggests that the earliest known members of the Homo genus (H. habilis, H. rudolfensis and H. erectus) may not have been distinct, coexisting species, but instead were part of a single, evolving lineage that eventually gave rise to modern humans.

7. Curiosity Confirms Mars Was Once Capable of Harboring Life:

Over the past two years, the Curiosity and Opportunity rovers have provided a seemingly endless stream of scientific revelations. But in March of 2013, NASA scientists released perhaps the most compelling evidence to date that the Red Planet was once capable of harboring life. This consisted of drilling samples out of the sedimentary rock in a river bed in the area known as Yellowknife Bay.

Using its battery of onboard instruments, NASA scientists were able to detect some of the critical elements required for life – including sulfur, nitrogen, hydrogen, oxygen, phosphorus, and carbon. The rover is currently on a trek to its primary scientific target – a three-mile-high peak at the center of Gale Crater named Mount Sharp – where it will attempt to further reinforce its findings.

8. Scientists Turn Brain Matter Invisible:

Since its inception as a science, neuroanatomy – the study of the brain’s makeup and functions – has been hampered by the fact that the brain is composed of opaque “grey matter”. For one, microscopes cannot look beyond a millimeter into biological matter before images in the viewfinder get blurry. And the common technique of “sectioning” – where a brain is frozen in liquid nitrogen and then sliced into thin sheets for analysis – results in tissue being deformed, connections being severed, and information being lost.

But a new technique, known as CLARITY, works by stripping away all of a tissue’s light-scattering lipids, while leaving all of its significant structures – i.e. neurons, synapses, proteins and DNA – intact and in place. Given that this technique allows researchers to study samples of the brain without having to cut them up, it is already being hailed as one of the most important advances for neuroanatomy in decades.


9. Scientists Detect Neutrinos from Another Galaxy:

In April of this year, physicists working at the IceCube South Pole Observatory took part in an expedition which drilled a hole some 2.4 km (1.5 miles) deep into an Antarctic glacier. At the bottom of this hole, they managed to capture 28 neutrinos – mysterious and extremely powerful subatomic particles that can pass straight through solid matter. But the real kicker was the fact that these particles likely originated from beyond our solar system – and possibly even our galaxy.

That was impressive in and of itself, but it was made even more so when it was learned that these particular neutrinos are over a billion times more powerful than the ones originating from our sun. So whatever created them would have had to have been cataclysmically powerful – such as a supernova explosion. This find, combined with the detection technique used to find them, has ushered in a new age of astronomy.


10. Human Cloning Becomes a Reality:

Ever since Dolly the sheep was cloned via somatic cell nuclear transfer, scientists have wondered if a similar technique could be used to produce human embryonic stem cells. And as of May, researchers at Oregon Health and Science University managed to do just that. This development is not only a step toward developing replacement tissue to treat diseases, but one that might also hasten the day when it will be possible to create cloned, human babies.


11. World’s First Lab Grown Meat:

In May of this year, after years of research and hundreds of thousands of dollars invested, researchers at the University of Maastricht in the Netherlands created the world’s first in vitro burger. It was fashioned from stem cells taken from a cow’s neck, which were placed in a growth medium, grown into strips of muscle tissue, and then assembled into a burger. This development may prove to be a viable solution to world hunger, especially in the coming decades as the world’s population increases by several billion.

12. The Amplituhedron Discovered:

If 2012 will be remembered as the year that the Higgs Boson was finally discovered, 2013 will forever be remembered as the year of the Amplituhedron. After many decades of trying to reformulate quantum field theory to account for gravity, scientists at Harvard University discovered a jewel-like geometric object that they believe will not only simplify quantum science, but forever alter our understanding of the universe.

This geometric shape, which is a representation of the coherent mathematical structure behind quantum field theory, has simplified scientists’ notions of the universe by postulating that space and time are not fundamental components of reality, but merely consequences of the “jewel’s” geometry. By removing locality and unitarity, this discovery may finally lead to an explanation as to how all the fundamental forces of the universe coexist.

These forces are the weak nuclear force, the strong nuclear force, electromagnetism, and gravity. For decades, scientists have been forced to treat them according to separate principles – using Quantum Field Theory to explain the first three, and General Relativity to explain gravity. But now, a Grand Unifying Theory or Theory of Everything may actually be possible.

13. Bioprinting Explodes:

2013 was also a boom year for bioprinting – namely, using the technology of additive manufacturing to create samples of living tissue. This began in earnest in February, when a team of researchers at Heriot-Watt University in Scotland used a new printing technique to deposit live embryonic stem cells onto a surface in a specific pattern. Using this process, they were able to create entire cultures which could be morphed into specific types of tissue.

Later that month, researchers at Cornell University used a technique known as “high-fidelity tissue engineering” – which involved using artificial living cells deposited by a 3-D printer over shaped cow cartilage – to create a replacement human ear. This was followed some months later in April, when a San Diego-based firm named Organovo announced that it was able to create samples of liver cells using 3D printing technology.


And then in August, researchers at Huazhong University of Science and Technology were able to use the same technique to create the world’s first living kidneys. All of this is pointing the way towards a future where human body parts can be created simply by culturing cells from a donor’s DNA, and replacement organs can be synthetically created, revolutionizing medicine forever.

14. Bionic Machinery Expands:

If you’re a science buff, or someone who has had to go through life with a physical disability, 2013 was also a very big year for the field of bionic machinery. This consisted not only of machinery that could meld with the human body in order to perform fully-human tasks – thus restoring ambulatory ability to people dealing with disabling injuries or diseases – but also biomimetic machinery.

The first took place in February, when researchers from the University of Tübingen unveiled the world’s first high-resolution, user-configurable bionic eye. Known officially as the “Alpha IMS retinal prosthesis”, the device helps to restore vision by converting light into electrical signals in the retina, which are then transmitted to the brain via the optic nerve. This was followed in August by the Argus II “retinal prosthetic system” being approved by the FDA, after 20 years of research, for distribution in the US.

Later that same month, the Ecole Polytechnique Federale de Lausanne in Switzerland unveiled the world’s first sensory prosthetic hand. Whereas existing mind-controlled prosthetic devices used nerve signals from the user to control the movements of the limb, this new device sends electrostimulus to the user’s nerves to simulate the sensation of touch.

Then in April, the University of Georgia announced that it had created a form of “smart skin” – a transparent, flexible film that uses 8,000 touch-sensitive transistors – that is just as sensitive as the real thing. In July, researchers in Israel took this a step further, showing how a gold-polyester nanomaterial would be ideal as a material for artificial skin, since its conductivity changes as it is bent.

15. 400,000-Year-Old DNA Confuses Humanity’s Origin Story:

Another discovery made this year has forced anthropologists to rethink human evolution. This occurred early in December, when a team from the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany recovered DNA from a 400,000-year-old thigh bone found in Spain. Initially thought to belong to a forerunner of the Neanderthal branch of hominids, the bone was later found to be related to the little-understood branch of hominins known as the Denisovans.

The discordant findings are leading anthropologists to reconsider the last several hundred thousand years of human evolution. In short, they indicate that there may yet be many extinct human populations that scientists have yet to discover. What’s more, their DNA may prove to be part of modern humans’ genetic makeup, as interbreeding is a possibility.

Of Mechanical Minds

A few weeks back, a friend of mine, Nicola Higgins, directed me to an article about Google’s new neural net. Not only did she provide me with a damn interesting read, she also challenged me to write an article about the different types of robot brains. Well, Nicola, as Barney Stinson would say: “Challenge Accepted!” And I’ve got to say, it was a fun topic to get into.

After much research and plugging away at the lovely thing known as the internet (which was predicted by Vannevar Bush with his proposed memory-index system – aka the Memex – nearly 70 years ago, btw), I managed to compile a list of the most historically relevant examples of mechanical minds, culminating in the development of Google’s Neural Net. Here we go…

Earliest Examples:
Even in ancient times, the concepts of automata and arithmetic machinery can be found in certain cultures. In the Near East, the Arab World, and as far east as China, historians have found examples of primitive machinery that was designed to perform one task or another. And though few specimens survive, there are even examples of machines that could perform complex mathematical calculations…

Antikythera mechanism:
Invented in ancient Greece, and recovered in 1901 from a shipwreck off the island that bears the same name, the Antikythera mechanism is the world’s oldest known analog calculator, built to compute the positions of the heavens for ancient astronomers. However, it was not until a century later that its true complexity and significance would be fully understood. Having been built in the 1st century BCE, it would not be until the 14th century CE that machines of comparable complexity would be built again.

Although it is widely theorized that this “clock of the heavens” must have had several predecessors during the Hellenistic Period, it remains the oldest surviving analog computer in existence. After collecting all the surviving pieces, scientists were able to reconstruct the design, which essentially amounted to a large box of interconnecting gears.
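As an illustration of how a box of gears can “compute”: an analog gear train encodes an astronomical ratio as a fixed ratio of tooth counts. The Metonic relation below (19 solar years ≈ 235 lunar months) is one of the cycles the mechanism is known to model; expressing it as code is, of course, purely a modern analogy.

```python
from fractions import Fraction

# A gear train "computes" by fixing a ratio of tooth counts. The Antikythera
# mechanism encoded astronomical cycles this way; the Metonic relation
# (19 solar years ~= 235 synodic months) is one of the ratios it modeled.
months_per_year = Fraction(235, 19)

for years in (1, 5, 19):
    months = years * months_per_year
    print(f"{years:>2} turn(s) of the year wheel -> {float(months):7.3f} lunar months")
```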

Pascaline:
Otherwise known as the Arithmetic Machine and Pascal’s Calculator, this device was invented by French mathematician Blaise Pascal in 1642 and is the first known example of a mechanized mathematical calculator. Apparently, Pascal invented this device to help his father reorganize the tax revenues of the French province of Haute-Normandie, and he went on to create 50 prototypes before he was satisfied.

Of those 50, nine survive and are currently on display in various European museums. In addition to giving his father a helping hand, its introduction launched the development of mechanical calculators all over Europe and then the world. Its invention is also directly linked to the development of the microprocessing circuit roughly three centuries later, which in turn is what led to the development of PCs and embedded systems.
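The Pascaline’s digit wheels carried into one another mechanically; in software, the same idea is ordinary ripple-carry addition. The sketch below is only an analogy for how the wheels behaved – the machine itself, of course, ran no code.

```python
def pascaline_add(a_digits, b_digits):
    """Add two numbers given as lists of base-10 digits (least significant first),
    carrying from wheel to wheel the way the Pascaline's gears did."""
    result, carry = [], 0
    for a, b in zip(a_digits, b_digits):
        total = a + b + carry
        result.append(total % 10)   # the digit this wheel now displays
        carry = total // 10         # the mechanical carry pushed to the next wheel
    result.append(carry)
    return result

# 347 + 195 = 542, with digits stored least-significant-first
print(pascaline_add([7, 4, 3], [5, 9, 1]))   # -> [2, 4, 5, 0]
```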

The Industrial Revolution:
With the rise of machine production, computational technology would see a number of developments. Key to all of this was the emergence of the concept of automation and the rationalization of society. Between the 18th and late 19th centuries, as every aspect of western society came to be organized and regimented based on the idea of regular production, machines needed to be developed that could handle this task of crunching numbers and storing the results.

Jacquard Loom:
Invented by Joseph Marie Jacquard, a French weaver and merchant, in 1801, the Loom that bears his name is the first programmable machine in history, relying on punch cards to input orders and turn out textiles of various patterns. Though it was based on earlier inventions by Basile Bouchon (1725), Jean Baptiste Falcon (1728) and Jacques Vaucanson (1740), it remains the most well-known example of a programmable loom and the earliest machine that was controlled through punch cards.

Though the Loom did not perform computations, its design was nevertheless an important step in the development of computer hardware. Charles Babbage would use many of its features to design his Analytical Engine (see next example), and the use of punch cards would remain a staple of the computing industry well into the 20th century, until the development of the microprocessor.
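Conceptually, each punch card is just a row of on/off holes selecting which warp threads lift on a given pass of the shuttle – which is to say, a stored program read one instruction at a time. The card patterns in this sketch are invented for illustration.

```python
# Each "card" is a row of holes (1 = hole = thread lifted). Feeding the cards
# through in sequence "programs" the woven pattern; these cards are invented.
cards = [
    [1, 0, 1, 0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0, 1, 0, 1],
    [1, 1, 0, 0, 1, 1, 0, 0],
]

for row, card in enumerate(cards, start=1):
    woven = "".join("#" if hole else "." for hole in card)  # lifted vs. lowered threads
    print(f"pass {row}: {woven}")
```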

Analytical Engine:
A successor to his earlier “Difference Engine”, this concept was proposed by English mathematician Charles Babbage. Beginning in 1822, Babbage had contemplated designs for machines that could automate the creation of error-free mathematical tables – a response to the difficulties encountered by teams of mathematicians attempting to do it by hand – and the Analytical Engine grew out of that work as a far more general machine.

Though he was never able to complete construction of a finished product, due to apparent difficulties with his chief engineer and funding shortages, his proposed engine incorporated an arithmetical unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first Turing-complete design for a general-purpose computer. His various trial models are currently on display in the Science Museum in London, England.

The Birth of Modern Computing:
The early 20th century saw the rise of several new developments, many of which would play a key role in the development of modern computers. The use of electricity for industrial applications was foremost among them, with all computers from this point forward being powered by alternating and/or direct current, and some even using it to store information. At the same time, older ideas would remain in use but become more refined, most notably the use of punch cards and tape to read instructions and store results.

Tabulating Machine:
The next development in computation came roughly 70 years later, when Herman Hollerith, an American statistician, developed a “tabulator” to help him process information from the 1890 US Census. In addition to being the first electromechanical computational device designed to assist in summarizing information (and later, accounting), it also went on to spawn the entire data processing industry.

Six years after the 1890 Census, Hollerith formed his own company, known as the Tabulating Machine Company, which was responsible for creating machines that could tabulate information based on punch cards. In 1924, after several mergers and consolidations, Hollerith’s company was renamed International Business Machines (IBM), which would go on to build the first “supercomputer” for Columbia University in 1931.
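In modern terms, Hollerith’s tabulator was a bank of counters keyed off punched fields. The sketch below tallies a few invented census-style records the same way; the fields and values are made up for illustration.

```python
from collections import Counter

# Invented census-style records: each tuple plays the role of one punch card
# with a couple of categorical fields, tallied the way Hollerith's counters
# accumulated running totals.
records = [
    ("NY", "farmer"), ("NY", "clerk"), ("OH", "farmer"),
    ("NY", "farmer"), ("OH", "miner"),
]

by_state = Counter(state for state, _ in records)
by_occupation = Counter(job for _, job in records)
print(by_state)        # Counter({'NY': 3, 'OH': 2})
print(by_occupation)   # Counter({'farmer': 3, 'clerk': 1, 'miner': 1})
```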

Atanasoff–Berry Computer:
Next, we have the ABC, the first electronic digital computing device in the world. Conceived in 1937, the ABC shared several characteristics with its predecessors, not the least of which was the fact that it was electrically powered and relied on punch cards to store data. However, unlike its predecessors, it was the first machine to use digital symbols to compute and the first computer to use vacuum tube technology.

These additions allowed the ABC to achieve computational speeds that were previously thought impossible for a mechanical computer. However, the machine was limited in that it could only solve systems of linear equations, and its punch card system of storage was deemed unreliable. Work on the machine also stopped when its inventor, John Vincent Atanasoff, was called away to assist in wartime defense assignments. Nevertheless, the machine remains an important milestone in the development of modern computers.
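Since solving simultaneous linear equations was the ABC’s one job, a small example shows the kind of problem it mechanized. The machine worked in binary arithmetic on sizable systems; the snippet below just applies an off-the-shelf solver to a small, arbitrary one.

```python
import numpy as np

# The ABC solved simultaneous linear equations; numpy's solver does the same
# job by elimination. The coefficients below are an arbitrary example.
A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
b = np.array([8.0, -11.0, -3.0])

x = np.linalg.solve(A, b)
print(x)   # -> [ 2.  3. -1.]
```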

Colossus:
There’s something to be said about war being the engine of innovation, and the Colossus – the machine used to break German codes in the Second World War – is certainly no stranger to this rule. Due to the secrecy surrounding it, it would not have much of an influence on computing and would not be rediscovered until the 1990s. Still, it represents a step in the development of computing, as it relied on vacuum tube technology and punched tape in order to perform calculations, and it proved most adept at solving complex mathematical computations.

Originally conceived by Max Newman, the British mathematician who was chiefly responsible for breaking German codes at Bletchley Park during the war, the machine was a proposed means of combatting the German Lorenz machine, which the Nazis used to encode their high-level wireless transmissions. With the first model built in 1943, ten variants of the machine were produced for the Allies before war’s end, and they were instrumental in bringing down the Nazi war machine.

Harvard Mark I:
Also known as the “IBM Automatic Sequence Controlled Calculator (ASCC)”, the Mark I was an electro-mechanical computer that was devised by Howard H. Aiken, built by IBM, and officially presented to Harvard University in 1944. Due to its success at performing long, complex calculations, it inspired several successors, most of which were used by the US Navy and Air Force for the purpose of running computations.

According to IBM’s own archives, the Mark I was the first computer that could execute long computations automatically. Built within a steel frame 51 feet (16 m) long and eight feet high, and using 500 miles (800 km) of wire with three million connections, it was the industry’s largest electromechanical calculator and the largest computer of its day.

Manchester SSEM:
Nicknamed “Baby”, the Manchester Small-Scale Experimental Machine (SSEM) was developed in 1948 and was the world’s first computer to incorporate stored-program architecture. Whereas previous computers relied on punch tape or cards to store calculations and results, “Baby” was able to do this electronically.

Although its abilities were still modest – with a 32-bit word length, a memory of just 32 words, and the ability to perform only subtraction and negation without additional software – it was still revolutionary for its time. In addition, the SSEM is closely associated with the work of Alan Turing – another British cryptographer, whose theories on the “Turing Machine” and development of the algorithm would form the basis of modern computer technology.
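A stored-program machine with only subtraction and negation sounds hopelessly limited, but it is enough to compute with. The toy interpreter below is loosely inspired by (and not faithful to) the SSEM’s instruction set; it keeps program and data in one shared memory and still manages to add two numbers, since a + b = -((-a) - b).

```python
# Toy stored-program machine, loosely inspired by (not faithful to) the SSEM.
def run(memory):
    acc, pc = 0, 0
    while True:
        op, addr = memory[pc]
        pc += 1
        if op == "LDN":    acc = -memory[addr]   # load the negated value at addr
        elif op == "SUB":  acc -= memory[addr]   # subtract the value at addr
        elif op == "STO":  memory[addr] = acc    # store the accumulator at addr
        elif op == "STP":  return memory         # stop

memory = [
    ("LDN", 6),   # acc = -a
    ("SUB", 7),   # acc = -a - b = -(a + b)
    ("STO", 8),   # scratch = -(a + b)
    ("LDN", 8),   # acc = a + b
    ("STO", 8),   # result = a + b
    ("STP", 0),   # halt
    12,           # address 6: a
    30,           # address 7: b
    0,            # address 8: scratch / result
]
print(run(memory)[8])   # -> 42
```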

The Nuclear Age to the Digital Age:
With the end of World War II and the birth of the Nuclear Age, technology once again took several explosive leaps forward. This could be seen in the realm of computer technology as well, where wartime developments and commercial applications grew by leaps and bounds. In addition to processor speeds and stored memory multiplying exponentially every few years, the overall size of computers got smaller and smaller. This, some theorized, would lead to the development of computers that were perfectly portable and smart enough to pass the “Turing Test”. Imagine!

IBM 7090:
The 7090 model, which was released in 1959, is often referred to as a second-generation computer because, unlike its predecessors, which were either electromechanical or used vacuum tubes, this machine relied on transistors to conduct its computations. In addition, it was an improvement on earlier models in that it used a 36-bit word length and could store up to 32K (32,768) words – a modest increase in word size over the SSEM, but a thousand-fold increase in storage capacity.

And of course, these improvements were mirrored in the fact that the 7090 series was also significantly smaller than previous versions, being about the size of a desk rather than an entire room. The machines were also cheaper and were quite popular with NASA, Caltech and MIT.

PDP-8:
In keeping with the trend towards miniaturization, 1965 saw the development of the first commercial minicomputer by the Digital Equipment Corporation (DEC). Though large by modern standards (about the size of a minibar), the PDP-8, also known as the “Straight-8”, was a major improvement over previous models, and therefore a commercial success.

In addition, later models also incorporated advanced concepts like the Real-Time Operating System and preemptive multitasking. Unfortunately, early models still relied on paper tape in order to process information. It was not until later that the computer was upgraded to take advantage of programming languages such as FORTRAN, BASIC, and DIBOL.

Intel 4004:
Founded in California in 1968, the Intel Corporation quickly moved to the forefront of computational hardware development with the creation of the 4004 – the world’s first single-chip central processing unit (microprocessor) – in 1971. Continuing the trend towards smaller computers, the development of this internal processor paved the way for personal computers, desktops, and laptops.

Incorporating the then-new silicon gate technology, Intel was able to create a processor that allowed for a higher number of transistors, and therefore a faster processing speed, than was ever possible before. On top of all that, they were able to pack it into a much smaller frame, which ensured that computers built with the new CPU would be smaller, cheaper and more ergonomic. Thereafter, Intel would be a leading designer of integrated circuits and processors, supplanting even giants like IBM.

Apple I:
The ’60s and ’70s seemed to be a time for the birthing of future giants. Less than a decade after the first CPU was created, another upstart came along with an equally significant development. Founded by three men in 1976 – Steve Jobs, Steve Wozniak, and Ronald Wayne – Apple’s first marketed product was a “personal computer” (PC) which Wozniak built himself.

One of the most distinctive features of the Apple I was the fact that it had a built-in keyboard. Competing models of the day, such as the Altair 8800, required a hardware extension to allow connection to a computer terminal or a teletypewriter machine. The company quickly took off and began introducing an upgraded version (the Apple II) just a year later. As a result, Apple I units remain a scarce commodity and very valuable collector’s items.

The Future:
The last two decades of the 20th century also saw far more than their fair share of developments. From the CPU and the PC came desktop computers, laptop computers, PDAs, tablet PCs, and networked computers. This last creation – aka the Internet – was the greatest leap by far, allowing computers from all over the world to be networked together and share information. And with the exponential increase in information sharing that occurred as a result, many believe that it’s only a matter of time before wearable computers, fully portable computers, and artificial intelligences are possible. Ah, which brings me to the last entry in this list…

The Google Neural Network:
From mechanical dials to vacuum tubes, from CPUs to PCs and laptops, computers have come a hell of a long way since the days of Ancient Greece. Hell, even within the last century, the growth in this one area of technology has been explosive, leading some to conclude that it was just a matter of time before we created a machine that was capable of thinking all on its own.

Well, my friends, that day appears to have dawned. Nicola and I have already blogged about this development, so I shan’t waste time going over it again. Suffice it to say, this new program, which has thus far taught itself to identify pictures of cats, contains the necessary neural capacity to achieve 1/1000th of what the human brain is capable of. Sounds small, but given the exponential growth in computing, it won’t be long before that gap is narrowed substantially.
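Out of curiosity, that “won’t be long” can be made concrete under an assumed growth rate. The 1/1000th figure is the one quoted above; the 18-month doubling time is a Moore’s-law-style assumption, not a measurement.

```python
import math

# How long would it take to close a 1000x gap if capacity doubled every 18 months?
gap = 1000               # the 1/1000th-of-a-brain figure quoted above
doubling_months = 18     # assumed, Moore's-law-style doubling period
months_needed = math.log2(gap) * doubling_months
print(f"~{months_needed / 12:.0f} years under these assumptions")   # ~15 years
```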

Who knows what else the future will hold? Optical computers that use not electrons but photons to move information about? Quantum computers, capable of connecting machines not only across space, but also time? Biocomputers that can be encoded directly into our bodies through our mitochondrial DNA? Oh, the possibilities…

Creating machines in the likeness of the human mind. Oh Brave New World that hath such machinery in it. Cool… yet scary!