The Future of Computing: Brain-Like Computers

It’s no secret that computer scientists and engineers are looking to the human brain as a means of achieving the next great leap in computer evolution. Already, machines are being developed that rely on “machine blood,” can continue working despite being damaged, and can recognize images and speech. And soon, a computer chip that is capable of learning from its mistakes will also be available.

The new computing approach, already in use by some large technology companies, is based on the biological nervous system – specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.

The first commercial version of the new kind of computer chip is scheduled to be released in 2014, the result of a collaborative effort between I.B.M. and Qualcomm, as well as a Stanford research team. This “neuromorphic processor” can not only automate tasks that once required painstaking programming, but can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.

In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That could have enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.

For example, computer vision systems only “recognize” objects that can be identified by the statistics-oriented algorithms programmed into them. An algorithm is like a recipe, a set of step-by-step instructions to perform a calculation. But last year, Google researchers were able to get a machine-learning algorithm, known as a “Google Neural Network”, to perform an identification task (involving cats) without supervision.
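To make the “recipe” idea concrete, here is a minimal sketch of a statistics-oriented recognizer of the kind described: a nearest-centroid classifier. The feature names and numbers are invented for illustration; the point is that such a program can only label inputs that resemble the statistics it was given.

```python
# A minimal statistics-oriented recognizer: a nearest-centroid classifier.
# The "recipe" is fixed in advance -- the program can only label inputs that
# resemble its training statistics. Features are invented for illustration.

def centroid(samples):
    """Average each feature across the training samples for one class."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def classify(features, centroids):
    """Assign the label whose centroid is closest (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(features, centroids[label]))

# Toy training data: [ear_pointiness, whisker_count] -- purely illustrative.
training = {
    "cat": [[0.9, 24], [0.8, 22], [0.95, 26]],
    "dog": [[0.4, 10], [0.3, 12], [0.5, 8]],
}
centroids = {label: centroid(samples) for label, samples in training.items()}

print(classify([0.85, 23], centroids))  # prints "cat"
```

Anything outside those learned statistics simply gets forced into the nearest known category, which is exactly the limitation the neuromorphic approach aims to escape.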

And this past June, the company said it had used those neural network techniques to develop a new search service to help customers find specific photos more accurately. And this past November, researchers at Stanford University came up with a new algorithm that could give computers the power to more reliably interpret language. It’s known as the Neural Analysis of Sentiment (NaSent).

A similar concept known as Deep Learning is also looking to endow software with a measure of common sense. Google is using this technique with its voice recognition technology to aid in performing searches. In addition, the social media giant Facebook is looking to use deep learning to improve Graph Search, an engine that allows users to search activity on their network.

Until now, the design of computers was dictated by ideas originated by the mathematician John von Neumann about 65 years ago. Microprocessors perform operations at lightning speed, following instructions programmed using long strings of binary code (0s and 1s). The information is stored separately in what is known as memory, either in the processor itself, in adjacent storage chips or in higher capacity magnetic disk drives.

By contrast, the new processors consist of electronic components that can be connected by wires that mimic biological synapses. Because they are based on large groups of neuron-like elements, they are known as neuromorphic processors, a term credited to the California Institute of Technology physicist Carver Mead, who pioneered the concept in the late 1980s.

These processors are not “programmed”, in the conventional sense. Instead, the connections between the circuits are “weighted” according to correlations in data that the processor has already “learned.” Those weights are then altered as data flows into the chip, causing them to change their values and to “spike.” This, in turn, strengthens some connections and weakens others, reacting much the same way the human brain does.
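As a rough illustration of this weighting scheme (a toy model, not the actual chip design), here is a Hebbian-style update in which connections that help trigger a “spike” are strengthened and the rest slowly decay:

```python
# Toy Hebbian learning rule (illustrative only, not IBM's design): weights are
# nudged up when an input and the output "spike" together, and decay otherwise.

def step(inputs, weights, threshold=1.0, lr=0.1, decay=0.01):
    """One time step: integrate weighted inputs, spike if over threshold,
    then strengthen co-active connections and weaken the rest."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    spiked = activation >= threshold
    new_weights = []
    for x, w in zip(inputs, weights):
        if spiked and x > 0:
            w += lr * x          # strengthen connections that drove the spike
        else:
            w -= decay * w       # otherwise let the weight slowly decay
        new_weights.append(w)
    return spiked, new_weights

weights = [0.5, 0.5, 0.5]
for _ in range(5):                      # repeatedly present the same pattern
    spiked, weights = step([1, 1, 0], weights)

print([round(w, 3) for w in weights])   # first two weights grew, third decayed
```

No program ever told the model which connections mattered; the correlations in the incoming data did.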

In the words of Dharmendra Modha, an I.B.M. computer scientist who leads the company’s cognitive computing research effort:

Instead of bringing data to computation as we do today, we can now bring computation to data. Sensors become the computer, and it opens up a new way to use computer chips that can be everywhere.

One great advantage of the new approach is its ability to tolerate glitches, whereas traditional computers cannot work around the failure of even a single transistor. With the biological designs, the algorithms are ever changing, allowing the system to continuously adapt and work around failures to complete tasks. Another benefit is energy efficiency, an inspiration likewise drawn from the human brain.

The new computers, which are still based on silicon chips, will not replace today’s computers, but augment them, at least for the foreseeable future. Many computer designers see them as coprocessors, meaning they can work in tandem with other circuits that can be embedded in smartphones and the centralized computers that run computing clouds.

However, the new approach is still limited, thanks to the fact that scientists still do not fully understand how the human brain functions. As Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, put it:

We have no clue. I’m an engineer, and I build things. There are these highfalutin theories, but give me one that will let me build something.

Luckily, there are efforts underway designed to remedy this, with the specific intention of directing that knowledge towards the creation of better computers and AIs. One such effort is the National Science Foundation-financed Center for Brains, Minds and Machines, a new research center based at the Massachusetts Institute of Technology in collaboration with Harvard and Cornell.

Another is the California Institute for Telecommunications and Information Technology (aka Calit2), a center dedicated to innovation in nanotechnology, life sciences, information technology, and telecommunications. As Larry Smarr, an astrophysicist and director of the Institute, put it:

We’re moving from engineering computing systems to something that has many of the characteristics of biological computing.

And last, but certainly not least, is the Human Brain Project, an international group of 200 scientists from 80 different research institutions, based in Lausanne, Switzerland. Having secured the $1.6 billion they need to fund their efforts, these researchers will spend the next ten years conducting research that cuts across multiple disciplines.

This initiative, which has been compared to the Large Hadron Collider, will attempt to reconstruct the human brain piece-by-piece and gradually bring these cognitive components into an overarching supercomputer. The expected result of this research will be new platforms for “neuromorphic computing” and “neurorobotics,” allowing for the creation of computing and robotic architectures that mimic the functions of the human brain.

When future generations look back on this decade, no doubt they will refer to it as the birth of the neuromorphic computing revolution. Or maybe just the Neuromorphic Revolution for short, though that depends on the outcome. With so many technological revolutions well underway, it is difficult to imagine how the future will look back and characterize this time.

Perhaps, as Charles Stross suggests, it will simply be known as “the teens”, that time in pre-Singularity history when it was all starting to come together, but had yet to explode and violently change everything we know. I for one am looking forward to being around to witness it all!

Sources: nytimes.com, technologyreview.com, calit2.net, humanbrainproject.eu

The Future is… Worms: Life Extension and Computer-Simulations

Post-mortality is considered by most to be an intrinsic part of the so-called Technological Singularity. For centuries, improvements in medicine, nutrition and health have led to improved life expectancy. And in an age where so much more is possible – thanks to cybernetics, bio, nano, and medical advances – it stands to reason that people will alter their physiques in order to slow the onset of age and extend their lives even more.

And as research continues, new and exciting finds are being made that would seem to indicate that this future may be just around the corner. And at the heart of it may be a series of experiments involving worms. At the Buck Institute for Research on Aging in California, researchers have been tweaking longevity-related genes in nematode worms in order to amplify their lifespans.

And the latest results caught even the researchers by surprise. By triggering mutations in two pathways known for lifespan extension – mutations that inhibit key molecules involved in insulin signaling (IIS) and the nutrient signaling pathway Target of Rapamycin (TOR) – they created an unexpected feedback effect that amplified the lifespan of the worms by a factor of five.

Ordinarily, a tweak to the TOR pathway results in a 30% lifespan extension in C. elegans worms, while mutations in IIS (Daf-2) result in a doubling of lifespan. By combining the mutations, the researchers were expecting something around a 130% extension to lifespan. Instead, the worms lived the equivalent of about 400 to 500 human years.
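The arithmetic behind that expectation can be spelled out: naively adding the two single-mutation effects predicts a 130% extension, while the observed result was a five-fold lifespan.

```python
# The arithmetic behind the researchers' expectation: TOR mutation alone gives
# ~30% extension, the IIS (Daf-2) mutation alone doubles lifespan (+100%).
# Adding the effects predicts +130%; the reported result was five-fold instead.

baseline = 1.0              # normal C. elegans lifespan (normalized)
tor_gain = 0.30             # +30% from the TOR mutation
iis_gain = 1.00             # +100% (doubling) from the IIS mutation

additive_expectation = baseline * (1 + tor_gain + iis_gain)
observed = baseline * 5     # the synergistic result reported

print(additive_expectation)             # 2.3, i.e. a 130% extension
print(observed / additive_expectation)  # synergy beat the prediction ~2.2x
```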

As Doctor Pankaj Kapahi said in an official statement:

Instead, what we have here is a synergistic five-fold increase in lifespan. The two mutations set off a positive feedback loop in specific tissues that amplified lifespan. These results now show that combining mutants can lead to radical lifespan extension — at least in simple organisms like the nematode worm.

The positive feedback loop, say the researchers, originates in the germline tissue of worms – a sequence of reproductive cells that may be passed onto successive generations. This may be where the interactions between the two mutations are integrated; and if correct, might apply to the pathways of more complex organisms. Towards that end, Kapahi and his team are looking to perform similar experiments in mice.

But long-term, Kapahi says that a similar technique could be used to produce therapies for aging in humans. It’s unlikely that it would result in the dramatic increase to lifespan seen in worms, but it could be significant nonetheless. For example, the research could help explain why scientists are having a difficult time identifying single genes responsible for the long lives experienced by human centenarians:

In the early years, cancer researchers focused on mutations in single genes, but then it became apparent that different mutations in a class of genes were driving the disease process. The same thing is likely happening in aging. It’s quite probable that interactions between genes are critical in those fortunate enough to live very long, healthy lives.

A second worm-related story comes from the OpenWorm project, an international open-source project dedicated to the creation of a bottom-up computer model of a millimeter-sized nematode. As one of the simplest known multicellular life forms on Earth, it is considered a natural starting point for creating computer-simulated models of organic beings.

In an important step forward, OpenWorm researchers have completed the simulation of the nematode’s 959 cells, including its 302 neurons and 95 muscle cells, and their worm is wriggling around in fine form. However, despite this basic simplicity, the nematode is not without its share of complex behaviors, such as feeding, reproducing, and avoiding being eaten.

To model the complex behavior of this organism, the OpenWorm collaboration (which began in May 2013) is developing a bottom-up description. This involves making models of the individual worm cells and their interactions, based on their observed functionality in the real-world nematodes. Their hope is that realistic behavior will emerge if the individual cells act on each other as they do in the real organism.
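A drastically simplified sketch of this bottom-up philosophy (illustrative only, not OpenWorm code): give each “cell” a purely local rule, and a body-length wave of contraction, crude wriggling, emerges from the interactions alone.

```python
# Bottom-up toy model: no cell knows about "wriggling". Each cell just copies
# the state its neighbor had on the previous step, so a single contracted
# segment travels along the body -- behavior emerging from local interactions.

def tick(states):
    """Advance one step: each cell takes its neighbor's previous state."""
    return [states[-1]] + states[:-1]

cells = [1, 0, 0, 0, 0, 0]   # one contracted segment, five relaxed
for _ in range(3):
    cells = tick(cells)
    print(cells)             # the contraction wave moves along the body
```

OpenWorm’s cell models are of course vastly richer, but the hope described above is the same: realistic whole-organism behavior falling out of faithful cell-level rules.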

Fortunately, we know a lot about these nematodes. The complete cellular structure is known, as well as rather comprehensive information concerning the worm’s behavior in reaction to its environment. Included in our knowledge is the complete connectome, a comprehensive map of the neural connections (synapses) in the worm’s nervous system.

The big question is, assuming that the behavior of the simulated worms continues to agree with the real thing, at what stage might it be reasonable to call it a living organism? The usual definition of living organisms is behavioral, that they extract usable energy from their environment, maintain homeostasis, possess a capacity to grow, respond to stimuli, reproduce, and adapt to their environment in successive generations.

If the simulation exhibits these behaviors, combined with realistic responses to its external environment, should we consider it to be alive? And just as importantly, what tests could confirm such a hypothesis? One possibility is an altered version of the Turing test – Alan Turing’s proposed method for testing whether or not a computer could be called sentient.

In the Turing test, a computer is considered sentient and sapient if it can simulate the responses of a conscious sentient being so that an auditor can’t tell the difference. A modified Turing test might say that a simulated organism is alive if a skeptical biologist cannot, after thorough study of the simulation, identify a behavior that argues against the organism being alive.

And of course, this raises even larger questions. For one, is humanity on the verge of creating “artificial life”? And what, if anything, does that really look like? Could it just as easily be in the form of computer simulations as anthropomorphic robots and biomachinery? And if the answer to any of these questions is yes, then what exactly does that say about our preconceived notions of what life is?

If humanity is indeed moving into an age of “artificial life”, and from several different directions, it is probably time that we figure out what differentiates the living from the nonliving. Structure? Behavior? DNA? Local reduction of entropy? The good news is that we don’t have to answer that question right away. Chances are, we wouldn’t be able to at any rate.

And though it might not seem apparent, there is a connection between the former and latter story here. In addition to being able to prolong life through genetic engineering, the ability to simulate consciousness through computer-generated constructs might just prove a way to cheat death in the future. If complex life forms and connectomes (like that involved in the human brain) can be simulated, then people may be able to transfer their neural patterns before death and live on in simulated form indefinitely.

So… anti-aging, artificial life forms, and the potential for living indefinitely. And to think that it all begins with the simplest multicellular life form on Earth – the nematode worm. But then again, all life – nay, all of existence – depends upon the most simple of interactions, which in turn give rise to more complex behaviors and organisms. Where else would we expect the next leap in biotechnological evolution to come from?

And in the meantime, be sure to enjoy this video of OpenWorm’s simulated nematode in action.


Sources:
IO9, cell.com, gizmag, openworm

Birth of an Idea: Seedlings

Hey all! Hope this holiday season finds you warm, cozy, and surrounded by loved ones. And I thought I might take this opportunity to talk about an idea I’ve been working on. While I’m still searching for a proper title, the one I’ve got right now is Seedlings. This represents an idea which has been germinating in my mind for some time, ever since I saw a comprehensive map of the Solar System and learned just how many potentially habitable worlds there are out there.

Whenever we talk of colonization, planting the seed (you see where the title comes from now, yes?) of humanity on distant worlds, we tend to think of exoplanets. In other words, we generally predict that humanity will live on worlds beyond our Solar System, if and when such things ever become reality. Sure, allowances are made for Mars, and maybe Ganymede, in these scenarios, but we don’t seem to think of all the other moons we have in our Solar System.

For instance, did you know that in addition to our system’s 11 planets and planetoids, there are 166 moons in our Solar System, the largest share of which (66) orbit Jupiter? And granted, while many are tiny little balls of rock that few people would ever want to live on, by my count, that still leaves 12 candidates for living. Especially when you consider that most have their own sources of water, even if it is in solid form.

And that’s where I began with the premise for Seedlings. The way I see it, in the distant future, humanity would expand to fill every corner of the Solar System before moving on to other stars. And in true human fashion, we would become divided along various geographic and ideological lines. In my story, it’s people’s attitudes towards technology that are central to this divide, with people falling into either the Seedling or Chartrist category.

The Seedlings inhabit the Inner Solar System and are dedicated to embracing the accelerating nature of technology. As experts in nanotech and biotech, they establish new colonies by planting Seeds, tiny cultures of microscopic, programmed bacteria that convert the landscape into whatever they wish. Having converted Venus, Mars, and the Jovian satellites into livable worlds, they now enjoy an extremely advanced and high standard of living.

The Chartrists, on the other hand, are people committed to limiting the invasive and prescriptive influence technology has over our lives. They were formed at some point in the 21st century, when the Technological Singularity loomed, and signed a Charter whereby they swore not to embrace augmentation and nanotechnology beyond a certain point. While still technically advanced, they are limited compared to their Seedling cousins.

With life on Earth, Mars and Venus (colonized at this time) becoming increasingly complicated, the Chartrists began colonizing the outer Solar System. Though they colonized around Jupiter, the Jovians eventually became Seedling territory, leaving just the Saturnian and Uranian moons for the Chartrists to colonize, with a small string of neutral planets lying in between.

While no open conflicts have ever taken place between the two sides, a sort of detente has settled in after many generations. The Solar System is now glutted with humans, and new frontiers are needed for expansion. Whereas the Seedlings have been sending missions to all suns within 20 light-years of Sol, many are looking to the Outer Solar System as a possible venue for expansion.

At the same time, the Chartrists see the Seedling expansion as a terrible threat to their ongoing way of life, and some are planning for an eventual conflict. How will this all play out? Well, I can tell you it will involve a lot of action and some serious social commentary! Anyway, here is the breakdown of the Solar Colonies, who owns them, and what they are dedicated to:

Inner Solar Colonies:
The home of the Seedlings, the most advanced and heavily populated worlds in the Solar System. Life here is characterized by rapid progress and augmentation through nanotechnology and biotechnology. Socially, they are ruled by a system of distributed power, or democratic anarchy, where all citizens are merged into the decision making process through neural networking.

Mercury: source of energy for the entire inner solar system
Venus: major agricultural center, leader in biomaterial construction
Earth: birthplace of humanity, administrative center
Mars: major population center, transit hub between inner colonies and Middle worlds

Middle Worlds:
A loose organization of worlds beyond Mars, including the Jovian and Saturnian satellites. Those closest to the Sun are affiliated with the Seedlings, the outer ones with the Chartrists, with some undeclared worlds in the middle. Life on these worlds is mixed, with the Jovian satellites boasting advanced technology, augmentation, and major industries supplying the Inner Colonies. The Saturnian worlds are divided, with the neutral planets boasting a high level of technical advancement and servicing people on all sides. The two Chartrist moons are characterized by more traditional settlements, with thriving industry and a commitment to simpler living.

Ceres: commercial nexus of the Asteroid Belt, source of materials for solar system (S)
Europa: oceanic planet, major resort and luxury living locale (S)
Ganymede: terraforming operation, agricultural world (S)
Io: major source of energy for the Middle Worlds (N)
Callisto: mining operations, ice, water, minerals (N)
Titan: major population center, transit point to inner colonies (N)
Tethys: oceanic world, shallow seas, major tourist destination (N)
Dione: major mining colony to outer colonies (C)
Rhea: agricultural center for outer colonies (C)

Outer Solar Colonies:
The Uranian and Neptunian moons of the outer Solar System are exclusively populated by Chartrists, people committed to a simpler way of life and dedicated to ensuring that augmentation and rapid progress are limited. Settlements on these worlds boast a fair degree of technical advancement, but are significantly outmatched by the Seedlings. They also boast a fair degree of industry and remain tied to the Inner and Middle Worlds through the export of raw materials and the import of technical devices.

Miranda: small ice planet, source of water (C)
Ariel: agricultural world, small biomaterial industry and carbon manufacturing (C)
Umbriel: agricultural world, small biomaterial industry and carbon manufacturing (C)
Titania: agricultural world, small biomaterial industry and carbon manufacturing (C)
Oberon: agricultural world, small biomaterial industry and carbon manufacturing (C)
Triton: source of elemental nitrogen, water, chaotic landscape (C)

The Future is Here: 4-D Printing

3-D printing has already triggered a revolution in manufacturing by allowing people to determine the length, width and depth of an object that they want to create. But thanks to research being conducted at the University of Colorado, Boulder, a fourth dimension can now be included – time. This might sound like science fiction, until you realize that the new manufacturing process will make it possible to print objects that change their shape at a given time.

Led by Prof. H. Jerry Qi, the scientific team has developed a “4D printing” process in which shape-memory polymer fibers are deposited in key areas of a composite material item as it’s being printed. By carefully controlling factors such as the location and orientation of the fibers, those areas of the item will fold, stretch, curl or twist in a predictable fashion when exposed to a stimulus such as water, heat or mechanical pressure.

The concept was proposed earlier this year by MIT’s Skylar Tibbits, who used his own 4D printing process to create a variety of small self-assembling objects. Martin L. Dunn of the Singapore University of Technology and Design, who collaborated with Qi on the latest research, explained the process:

We advanced this concept by creating composite materials that can morph into several different, complicated shapes based on a different physical mechanism.

This means that one 4D-printed object could change shape in different ways, depending on the type of stimulus to which it was exposed. That functionality could make it possible to print a photovoltaic panel in a flat shape, expose it to water to cause it to fold up for shipping, and then expose it to heat to make it fold out to yet another shape that’s optimal for catching sunlight.
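That stimulus-dependent behavior can be thought of as a small state machine baked into the printed material. The sketch below mirrors the photovoltaic-panel example; the state names and transitions are invented for illustration.

```python
# Toy state machine for a 4D-printed part: the fiber layout "programs" which
# shape each (current shape, stimulus) pair produces. Names are illustrative.

class PrintedPanel:
    """Tracks which programmed shape a 4D-printed part has assumed."""

    TRANSITIONS = {
        ("flat", "water"): "folded_for_shipping",
        ("folded_for_shipping", "heat"): "deployed_for_sunlight",
    }

    def __init__(self):
        self.shape = "flat"   # as-printed configuration

    def expose(self, stimulus):
        # Only programmed (shape, stimulus) pairs change the shape;
        # any other stimulus leaves the part as it is.
        self.shape = self.TRANSITIONS.get((self.shape, stimulus), self.shape)
        return self.shape

panel = PrintedPanel()
print(panel.expose("water"))  # folded_for_shipping
print(panel.expose("heat"))   # deployed_for_sunlight
```

The physical part needs no controller at all; the “program” is the placement and orientation of the fibers themselves.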

This principle may sound familiar, as it is the basis of such sci-fi concepts as polymorphic alloys or objects. It’s also the idea behind the Milli-Motein, the shape-shifting machine invented by MIT’s Media Lab late last year. But ultimately, it all comes back to organic biology, using structural biochemistry and the protein as a blueprint to create machinery made of “smart” materials.

Proteins, the building blocks of all life, can assume an untold number of shapes to fulfill an organism’s various functions, serving as the universal workforce of life. By combining that concept with the world of robotics and manufactured products, we could be embarking upon an era of matter and products that can assume different shapes as needed and on command.

And if these materials can be scaled to the microscopic level and equipped with tiny computers, the range of functions they will be able to perform will truly stagger the mind. Imagine furniture made from materials that can automatically respond to changes in pressure and weight distribution. Or paper that is capable of absorbing your pencil scratches and storing them in its memory, or calling up image displays like a laptop computer.

And let’s not forget how intrinsic this is to the field of nanotechnology. Smarter, more independent materials that can change shape and respond to changes in their environment in order to handle different tasks are all part of the Fabrication Revolution that is expected to explode this century. Here’s hoping I’m alive to see it all. Sheldon Cooper isn’t the only one waiting on the Technological Singularity!

Source: gizmag.com

Nanotech News: Smart Sponges, Nanoparticles and Neural Dust!

Nanotechnology has long been the dream of researchers, scientists and futurists alike, and for obvious reasons. If machinery were small enough so as to be microscopic, or so small that it could only be measured on the atomic level, just about anything would be possible. This includes constructing buildings and products from the atomic level up, which would revolutionize manufacturing as we know it.

In addition, microscopic computers, smart cells and materials, and electronics so infinitesimally small that they could be merged with living tissues would all be within our grasp. And it seems that at least once a month, universities, research labs, and even independent skunkworks are unveiling new and exciting steps that are bringing us ever closer to this goal.

Close-up of a smart sponge

One such breakthrough comes from the University of North Carolina at Chapel Hill, where biomedical scientists and engineers have joined forces to create the “smart sponge”. Microscopic spherical objects just 250 micrometers across – and potentially as small as 0.1 micrometers – these new sponges are similar to nanoparticles in that they are intended to be the next generation of delivery vehicles for medication.

Each sponge is mainly composed of a polymer called chitosan, something which is not naturally occurring, but can be produced easily from the chitin in crustacean shells. The long polysaccharide chains of chitosan form a matrix in which tiny porous nanocapsules are embedded, and which can be designed to respond to the presence of some external compound – be it an enzyme, blood sugar, or a chemical trigger.

So far, the researchers have tested the smart sponges with insulin, so the nanocapsules in this case contained glucose oxidase. As the level of glucose in a diabetic patient’s blood increases, it would trigger the nanocapsules in the smart sponge to begin releasing hydrogen ions, which impart a positive charge to the chitosan strands. This in turn causes them to spread apart and begin to slowly release insulin into the blood.

The process is also self-limiting: as glucose levels in the blood come down after the release of insulin, the nanocapsules deactivate and the positive charge dissipates. Without all those hydrogen ions in the way, the chitosan can come back together to keep the remaining insulin inside. The chitosan is eventually degraded and absorbed by the body, so there are no long-term health effects.
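That self-limiting feedback loop is easy to sketch in simulation. All constants below are invented for illustration; this models the described behavior, not real insulin kinetics.

```python
# Toy model of the smart sponge's self-limiting loop: high glucose activates
# the capsules and insulin is released; as glucose falls back to the trigger
# threshold, the capsules deactivate and release stops. Constants are invented.

def simulate(glucose, threshold=120.0, release_rate=2.0,
             insulin_effect=5.0, steps=40):
    """Return the glucose trajectory under smart-sponge regulation."""
    trajectory = [glucose]
    insulin_stored = 20.0                            # insulin held in the sponge
    for _ in range(steps):
        if glucose > threshold and insulin_stored > 0:
            dose = min(release_rate, insulin_stored)  # capsules active
            insulin_stored -= dose
            glucose -= dose * insulin_effect          # insulin lowers glucose
        trajectory.append(glucose)                    # below threshold: inactive
    return trajectory

levels = simulate(glucose=180.0)
print(levels[0], levels[-1])   # starts high, settles at the threshold
```

The key property the text describes falls out directly: release is driven by the glucose signal itself, so it switches off on its own once the signal subsides.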

One of the chief benefits of this kind of system, much like with nanoparticles, is that it delivers medication when it’s needed, where it’s needed, and in amounts appropriate to the patient’s needs. So far, the team has had success treating diabetes in rats, but plans to expand the treatment to humans and to branch out to other types of disease.

Cancer is a prime candidate, and the University team believes it can be treated without an activation system of any kind. Tumors are naturally highly acidic environments, which means a lot of free hydrogen ions. And since that’s what the diabetic smart sponge produces as a trigger anyway, it can be filled with small amounts of chemotherapy drugs that would automatically be released in areas with cancer cells.

Another exciting breakthrough comes from the University of California at Berkeley, where medical researchers are working towards tiny, implantable sensors. As all medical researchers know, the key to understanding and treating neurological problems is to gather real-time and in-depth information on the subject’s brain. Unfortunately, things like MRIs and positron emission tomography (PET) aren’t exactly portable and are expensive to run.

Implantable devices are fast becoming a solution to this problem, offering real-time data that comes directly from the source and can be accessed wirelessly at any time. So far, this has taken the form of temporary medical tattoos or tiny sensors intended to be implanted in the bloodstream. However, the researchers at Berkeley are proposing something much more radical.

In a recent research paper, they proposed a design for a new kind of implantable sensor – an intelligent dust that can infiltrate the brain, record data, and communicate with the outside world. The preliminary design was undertaken by Berkeley’s Dongjin Seo and colleagues, who described a network of tiny sensors – each package no more than 100 micrometers in diameter. Hence the term they used: “neural dust”.

The smart particles would all contain a very small CMOS sensor capable of measuring electrical activity in nearby neurons. The researchers also envision a system where each particle is powered by a piezoelectric material rather than tiny batteries. The particles would communicate data to an external device via ultrasound waves, and the entire package would also be coated in a polymer, thus making it bio-neutral.

But of course, the dust would need to be complemented by some other implantable devices. These would likely include a larger subdural transceiver that would send the ultrasound waves to the dust and pick up the return signal. The internal transceiver would also be wirelessly connected to an external device on the scalp that contains data processing hardware, a long-range transmitter, storage, and a battery.
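Structurally, the proposal amounts to a tiered pipeline: motes sense local activity, a subdural transceiver polls them over ultrasound, and an external unit aggregates the readings. A schematic sketch follows; the class names and the simple polling scheme are illustrative, not taken from the Berkeley paper.

```python
# Schematic sketch of the proposed neural-dust data path. The names and the
# polling scheme are illustrative assumptions, not the actual Berkeley design.

class DustMote:
    """One dust particle: a stand-in for the CMOS sensor plus piezo link."""
    def __init__(self, mote_id, read_fn):
        self.mote_id = mote_id
        self.read_fn = read_fn          # stands in for sensing nearby neurons

    def backscatter(self):
        """Respond to an ultrasound interrogation with one reading."""
        return (self.mote_id, self.read_fn())

class SubduralTransceiver:
    """Polls every mote over 'ultrasound' and relays readings outward."""
    def __init__(self, motes):
        self.motes = motes

    def poll(self):
        return dict(mote.backscatter() for mote in self.motes)

# Three motes returning fake local-field voltages for demonstration.
motes = [DustMote(i, lambda i=i: 0.1 * i) for i in range(3)]
readings = SubduralTransceiver(motes).poll()
print(readings)
```

The point of the tiering is that the motes themselves stay passive and tiny; all the power-hungry processing, storage, and long-range radio live in the external unit.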

The benefits of this kind of system are again obvious. In addition to acting like an MRI running in your brain all the time, it would allow for real-time monitoring of neurological activity for the purposes of research and medical monitoring. The researchers also see this technology as a way to enable brain-machine interfaces, something which would go far beyond current methods. Who knows? It might even enable a form of machine-based telepathy in time.

Sounds like science fiction, and it still is. Many issues need to be worked out before something of this nature would be possible or commercially available. For one, more powerful antennae would need to be designed on the microscopic scale in order for the smart dust particles to be able to send and receive ultrasound waves.

Increasing the efficiency of transceivers and piezoelectric materials will also be a necessity to provide the dust with power, otherwise they could cause a build-up of excess heat in the user’s neurons, with dire effects! But most importantly of all, researchers need to find a safe and effective way to deliver the tiny sensors to the brain.

And last, but certainly not least, nanotechnology might be offering improvements in the field of prosthetics as well. In recent years, scientists have made enormous breakthroughs in the field of robotic and bionic limbs, restoring ambulatory mobility to accident victims, the disabled, and combat veterans. But even more impressive are the current efforts to restore sensation as well.

One method, which is being explored by the Technion-Israel Institute of Technology in Israel, involves incorporating gold nanoparticles and a substrate made of polyethylene terephthalate (PET) – the plastic used in bottles of soft drinks. Between these two materials, they were able to make an ultra-sensitive film that would be capable of transmitting electrical signals to the user, simulating the sensation of touch.

Basically, the gold-polyester nanomaterial experiences changes in conductivity as it is bent, providing an extremely sensitive measure of physical force. Tests conducted on the material showed that it was able to sense pressures ranging from tens of milligrams to tens of grams, which is ten times more sensitive than any sensors being built today.

Even better, the film maintained its sensory resolution after many “bending cycles”, meaning it showed consistent results and would give users a long service life. And unlike many useful materials that can only really be used under laboratory conditions, this film operates at very low voltages, meaning it could be manufactured cheaply and actually be useful in real-world situations.
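To get a feel for how a reading from a film like this might become a touch signal, here is a minimal sketch in Python. The sensor model and calibration constants are entirely hypothetical – they are not taken from the Technion paper – but they illustrate the basic idea of converting a bend-induced resistance change into an estimated load:

```python
# Illustrative sketch only: converts a resistance change from a flexible
# piezoresistive film into an estimated load. The linear sensor model and
# the calibration constant are hypothetical, not from the Technion work.

def estimate_load_mg(r_baseline_ohm, r_measured_ohm, sensitivity_ohm_per_mg=0.5):
    """Estimate the applied load in milligrams from a resistance shift.

    Assumes the film's resistance rises roughly linearly with load over
    its working range (tens of milligrams to tens of grams).
    """
    delta_r = r_measured_ohm - r_baseline_ohm
    if delta_r < 0:
        raise ValueError("measured resistance below baseline; recalibrate")
    return delta_r / sensitivity_ohm_per_mg

# Example: a 25-ohm rise over a 1 kilo-ohm baseline maps to a 50 mg touch.
load = estimate_load_mg(1000.0, 1025.0)
print(f"estimated load: {load:.0f} mg")
```

In a real prosthetic, a reading like this would then be encoded as an electrical signal for the user's nerves; the calibration step described below is what ties the two together.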

In their research paper, lead researcher Hossam Haick described the sensors as “flowers, where the center of the flower is the gold or metal nanoparticle and the petals are the monolayer of organic ligands that generally protect it.” The paper also states that in addition to providing pressure information (touch), the sensors in their prototype were also able to sense temperature and humidity.

But of course, a great deal of calibration of the technology is still needed, so that each user’s brain is able to interpret the electronic signals being received from the artificial skin correctly. But this is standard procedure with next-generation prosthetic devices, ones which rely on two-way electronic signals to provide control signals and feedback.

And these are just some examples of how nanotechnology is seeking to improve and enhance our world. When it comes to the senses and mobility, it offers solutions not only to remedy health problems or limitations, but also to enhance natural abilities. But the long-term possibilities go beyond this by many orders of magnitude.

As a cornerstone to the post-singularity world being envisioned by futurists, nanotech offers solutions to everything from health and manufacturing to space exploration and clinical immortality. And as part of an ongoing trend in miniaturization, it presents the possibility of building devices and products that are even tinier and more sophisticated than we can currently imagine.

It’s always interesting how science works by scale, isn’t it? In addition to dreaming large – looking to build structures that are bigger, taller, and more elaborate – we are also looking inward, hoping to grab matter at its most basic level. In this way, we will not only be able to plant our feet anywhere in the universe, but manipulate it on the tiniest of levels.

As always, the future is a paradox, filling people with both awe and fear at the same time.

Sources: extremetech.com, (2), (3)

The Future is Here: The Telescopic Contact Lens

When it comes to enhancement technology, DARPA has its hands in many programs designed to augment a soldier’s senses. Their latest invention, the telescopic contact lens, is just one of many, but it may be the most impressive to date. Not only is it capable of giving soldiers the ability to spot and focus in on faraway objects, it may also have numerous civilian applications.

The lens is the result of collaboration between researchers from the University of California San Diego, Ecole Polytechnique Federale de Lausanne in Switzerland, and the Pacific Science & Engineering Group, with the financial assistance of DARPA. Led by Joseph Ford of UCSD and Eric Tremblay of EPFL, the development of the lens was announced in a recent article entitled “Switchable telescopic contact lens” that appeared in the Optics Express journal.


In addition to being just over a millimeter thick, the lens works by using a series of tiny mirrors to magnify light, and can be switched between normal and telescopic vision thanks to its two distinct regions. The center of the lens allows light to pass straight through, providing normal vision. The outside edge, however, acts as a telescope capable of magnifying your sight by close to a factor of three.

Above all, the main breakthrough here is that this telescopic contact lens is just 1.17mm thick, allowing it to be comfortably worn. Other attempts at granting telescopic vision have included a 4.4mm-thick contact lens (too thick for real-world use), telescopic spectacles (cumbersome and ugly), and most recently a telescopic lens implanted into the eye itself. The latter is the best option currently available, but it requires surgery and the image quality isn’t excellent.

To accomplish this feat of micro-engineering, the researchers had to be rather creative. The light that will be magnified enters the edge of the contact lens, is bounced around four times inside the lens using patterned aluminum mirrors, and then beamed to the edge of the retina at the back of your eyeball. Or as the research team put it in their article:

The magnified optical path incorporates a telescopic arrangement of positive and negative annular concentric reflectors to achieve 2.8x magnification on the eye, while light passing through a central clear aperture provides unmagnified vision.

To switch between normal and telescopic vision, the central, unmagnified region of the contact lens has a polarizing filter in front of it — which works in tandem with a pair of 3D TV spectacles. By switching the polarizing state of the spectacles – a pair of active, liquid crystal Samsung 3D specs in this case – the user can choose between normal and magnified vision.
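The principle behind that switch is worth a quick sketch. With two ideal linear polarizers, Malus's law says the transmitted light intensity is I = I₀·cos²θ, where θ is the angle between their polarization axes; rotating the external polarizer's state between aligned and crossed opens or blocks the central path. Real liquid-crystal 3D glasses are more complicated than this, so treat the code below purely as an illustration of the underlying optics:

```python
import math

# Malus's law for two ideal linear polarizers: the fraction of light
# transmitted is cos^2 of the angle between their polarization axes.
# This is only the principle behind the lens's switching, not a model
# of the actual liquid-crystal glasses used in the prototype.

def transmitted_fraction(theta_degrees):
    """Fraction of polarized light passing a second polarizer at angle theta."""
    return math.cos(math.radians(theta_degrees)) ** 2

print(transmitted_fraction(0))   # axes aligned: central path fully open
print(transmitted_fraction(90))  # axes crossed: central path blocked (near zero)
```

Switching the glasses thus effectively toggles which region of the lens – unmagnified center or magnifying annulus – dominates what the eye sees.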

Though the project is being funded by DARPA for military use, the research team also indicated that the real long-term benefits of a device like this one come in the form of civilian and commercial applications. For those people suffering from age-related macular degeneration (AMD) – a leading cause of blindness for older adults – this lens could be used to correct for vision loss.

As always, enhancement technology is a two-edged sword. Devices and systems that are created to address disabilities and limitations have the added benefit of augmenting people who are otherwise healthy and ambulatory. The reverse is also true, with specialized machines that can make a person stronger, faster, and more aware providing amputees and physically challenged people the ability to overcome these imposed limitations.

However, before anyone starts thinking that all they need to do is slip on a pair of these to get superhero-like vision, there are certain limitations. As already stated, the lens doesn’t work on its own but needs to be paired with a modified set of 3D television glasses. Simply placing it on the eye and expecting magnified vision is not yet an option.

Also, though the device has been tested using computer modeling and by attaching a prototype lens to an optomechanical model eye, it has not been tested on a set of human eyes just yet. As always, there is still a lot of work to do in refining the technology and improving the image quality, but it’s clear at this early juncture that the work holds a lot of promise.

It’s the age of bionic enhancements, people, and we find ourselves at the forefront of it. As time goes on, we can expect such devices to become a regular feature of our society.

Sources: news.cnet.com, extremetech.com

Judgement Day Update: Geminoid Robotic Clones

We all know it’s coming: the day when machines will be indistinguishable from human beings. And with a robot that is capable of imitating human body language and facial expressions, it seems we are that much closer to realizing it. It’s known as the Geminoid HI-2, a robotic clone of its maker, famed Japanese roboticist Hiroshi Ishiguro.

Ishiguro unveiled his latest creation at this year’s Global Future 2045 conference, an annual get-together for all sorts of cybernetics enthusiasts, life extension researchers, and singularity proponents. As one of the world’s top experts on human-mimicking robots, Ishiguro wants his creations to be as close to human as possible.

Alas, this has been difficult, since human beings tend to fidget and experience involuntary tics and movements. But that’s precisely what his latest bot excels at. Though it still requires a remote operator, the Ishiguro clone has all his idiosyncrasies hard-wired into its frame, and can even give you dirty looks.

This is not the first robot Ishiguro has built, as his female androids Repliee Q1Expo and Geminoid F will attest. But above all, Ishiguro loves to make robotic versions of himself, since one of his chief aims with robotics is to make human proxies. As he said during his talk, “Thanks to my android, when I have two meetings I can be in two places simultaneously.” I honestly think he was only half-joking!

During the presentation, Ishiguro’s robotic clone was on stage with him, realistically fidgeting as he pontificated and joked with the audience. The Geminoid was controlled from off-stage, where an unseen technician guided it as it fidgeted, yawned, and made annoyed facial expressions. At the end of the talk, Ishiguro’s clone suddenly jumped to life and told a joke that startled the crowd.

In Ishiguro’s eyes, robotic clones can outperform humans at basic human behaviors thanks to modern engineering. And though they are not yet to the point where the term “android” can be applied, he believes it is only a matter of time before they can rival and surpass the real thing. Roboticists and futurists refer to this as the “uncanny valley” – that strange, off-putting feeling people get when robots begin to increasingly resemble humans. If said valley was a physical place, I think we can all agree that Ishiguro would be its damn mayor!

And judging by these latest creations, the time when robots are indistinguishable from humans may be coming sooner than we think. As you can see from the photos, there seems to be very little difference in appearance between his robots and their human counterparts. And those who viewed them live have attested to them being surprisingly life-like. And once they are able to control themselves and have an artificial neural net that can rival a human one in terms of complexity, we can expect them to mimic many of our other idiosyncrasies as well.

As usual, there are those who will respond to this news with anticipation and those who respond with trepidation. Where do you fall? Maybe these videos from the conference of Ishiguro’s inventions in action will help you make up your mind:

Ishiguro Clone:


Geminoid F:

Sources: fastcoexist.com, geminoid.jp

Judgement Day Update: The Robotic Bartender and DARPA’s Latest Hand

Robots have come a long way in recent years, haven’t they? From their humble beginnings, servicing human beings with menial tasks and replacing humans on the assembly line, they now appear poised to take over other, more complex tasks as well. Between private companies and DARPA-developed concepts, it seems like just a matter of time before a fully-functioning machine is capable of performing all our work for us.

One such task-mastering robot was featured at this year’s Milan Design Week, an event where fashion takes center stage. It’s known as the Makr Shakr, a set of robotic arms capable of mixing drinks, slicing fruit, and making millions of different recipes. The result of a collaborative effort between the MIT SENSEable City Lab and Carlo Ratti Associati, an Italian architecture firm, this robot is apparently able to match wits with any human bartender.

While at the Milan Design Week, the three robotic arms put on quite the show, demonstrating their abilities to a crowd of wowed spectators. According to the website, this technology is not just a bar aid, but part of a larger movement in robotics:

Makr Shakr aims to show the ‘Third Industrial Revolution’ paradigm through the simple process design-make-enjoy, and in just the time needed to prepare a new cocktail.

In a press release, the company described the process. It begins with the user downloading an app to their smartphone to create their order, as well as to peruse the recipes that other users have come up with. They then communicate the order to the Makr Shakr and “[the] cocktail is then crafted by three robotic arms, whose movements reproduce every action of a barman–from the shaking of a Martini to the muddling of a Mojito, and even the thin slicing of a lemon garnish.”

Inspired by the ballet dancer Roberto Bolle, whose “movements were filmed and used as input for the programming of the Makr Shakr robots”, the arms appear most graceful as they do their work. In addition, the system monitors exactly how much booze each patron is consuming, which, in theory, could let the robot-bartenders know when it’s time to cut off designers who have thrown back a few too many.

Check out the video of the Makr Shakr in action:


Another major breakthrough comes, yet again, from DARPA. For years now, they have been working with numerous companies and design and research firms in order to create truly ambulatory and dexterous robot limbs. In some cases, as with the Legged Squad Support System (LS3), this involves creating a machine that can carry supplies and keep up with troops. In others, this involves the creation of robotic hands and limbs to help wounded veterans recover and lead normal lives again.

And you may recall earlier this year when DARPA unveiled a cheap design for a robotic hand that was able to use tools and perform complex tasks (like changing a tire). More recently, it showcased a design for a three-fingered robot, designed in conjunction with the firm iRobot – the makers of the Roomba robotic vacuum – and with support from Harvard and Yale, that is capable of unlocking and opening doors. Kind of scary really…

DARPA_robot

The arm is the latest to come out of the Autonomous Robotic Manipulation (ARM) program, a program designed to create robots that are no longer expensive, cumbersome, and dependent on human operators. Using a Kinect to zero in on the object’s location before moving in to grab the item, the arm is capable of picking up thin objects lying flat, like a laminated card or key. In addition, the hand’s three-finger configuration is versatile, strong, and therefore capable of handling objects of varying size and complexity.

When put to the test (as shown in the video below), the hand was able to pick up a metal key, insert it into a lock, and open a door without any assistance. Naturally, a human operator is still required at this stage, but the use of a Kinect sensor to identify objects shows a degree of autonomous capability, and the software driving it is still in the early stages of development.
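The sense-then-act loop described above – locate the object with a depth sensor, move the gripper over it, grasp – can be sketched in a few lines. To be clear, none of this is DARPA's or iRobot's actual software; the classes and steps below are invented purely to illustrate the shape of such a pipeline:

```python
# Toy sketch of a perception-then-grasp pipeline: find the nearest point
# in a depth frame, then emit a grasp sequence. Purely illustrative;
# the real ARM-program software is far more sophisticated.

from dataclasses import dataclass

@dataclass
class Target:
    x: float
    y: float
    z: float

def locate_target(depth_frame):
    """Stand-in for Kinect-style perception: pick the nearest point."""
    x, y, z = min(depth_frame, key=lambda p: p[2])
    return Target(x, y, z)

def pick_up(target, gripper_fingers=3):
    """Stand-in for motion planning and a three-finger grasp."""
    return [
        f"move arm above ({target.x:.2f}, {target.y:.2f})",
        f"descend to z={target.z:.2f}",
        f"close {gripper_fingers} fingers",
        "lift",
    ]

frame = [(0.4, 0.1, 0.9), (0.2, 0.3, 0.5), (0.7, 0.2, 1.2)]  # fake depth points
for step in pick_up(locate_target(frame)):
    print(step)
```

The point of the sketch is the division of labor: perception supplies a target, and a largely pre-programmed grasp routine does the rest, which is roughly why a human operator is still in the loop for everything else.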

And while the hand isn’t exactly cheap by everyday standards, the production cost has been dramatically reduced. Hands fabricated in batches of 1,000 or more can be produced for $3,000 per unit, which is substantially less than the current cost of $50,000 per unit for similar technology. And as usual, DARPA has its eye on future development, creating hands that would be used in hazardous situations – such as diffusing IEDs on the battlefield – as well as civilian and post-combat applications (i.e. prosthetics).

And of course, there’s a video for the ARM in action as well. Check it out, and then decide for yourself if you need to be scared yet:


Sources: fastcoexist.com, singularityhub.com, makrshakr.com

The Future is Here: Web-Based “Brain” for Robots

My gratitude once again to Nicola Higgins for beating me to the punch! I hope she doesn’t mind that I’m posting a separate article, but something like this is just too good to reblog! In what is sure to excite Singularitarians and Futurists and scare the holy bejeezus out of technophobes and those fearing the Robopocalypse, a new web-based artificial brain went online recently, allowing robots to share information and seek help whenever they need it.

It’s called Rapyuta (or the RoboEarth Cloud Engine), part of the European RoboEarth project that began in 2011 with the hope of standardizing the way robots perceive the human world. Basically, it is an online database that robots can consult in order to get information about their world and help them make sense of their experiences, post-activation.

The name Rapyuta is taken from the Hayao Miyazaki film Castle in the Sky, and refers to a place where all the robots live. The project, which involves researchers at five separate European research labs, has produced the database as well as software that robot owners can upload to their machines so that they can connect to the system at any time.

You might say the “brain” is an expression of sympathy for robots, who are no doubt likely to find the world intimidating and confusing once they come online. Now, instead of every robot building up their own idiosyncratic catalog of how to deal with the objects and situations it encounters, Rapyuta would be the place they ask for help when confronted with a novel situation, place or thing.

In addition, the web-based service is able to do complicated computation on behalf of a robot. For example, if it needs to work out how to navigate a room, fold an item of clothing or understand human speech, it can simply do an online consultation rather than try to figure it out on its own. In addition, it is believed that robots will be cheaper thanks to this system since it will mean they won’t need to carry all their processing power on board.
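The offloading idea can be captured in a toy sketch: a robot with limited onboard resources asks a shared cloud "brain" for answers it cannot compute locally, and contributes back what it learns. The class and method names below are invented for illustration and bear no relation to the real Rapyuta/RoboEarth interfaces:

```python
# Toy sketch of cloud-offloaded robot knowledge. The names here
# (CloudBrain, RobotClient) are hypothetical, not the Rapyuta API.

class CloudBrain:
    """Shared knowledge base consulted by many robots."""
    def __init__(self):
        self.knowledge = {"mug": "grasp by the handle"}

    def query(self, obj):
        return self.knowledge.get(obj)

    def contribute(self, obj, lesson):
        self.knowledge[obj] = lesson  # one robot's lesson helps all others

class RobotClient:
    def __init__(self, brain):
        self.brain = brain

    def handle(self, obj):
        plan = self.brain.query(obj)
        if plan is None:
            plan = "explore cautiously"       # fall back to local behavior
            self.brain.contribute(obj, plan)  # share what was learned
        return plan

brain = CloudBrain()
robot_a = RobotClient(brain)
robot_b = RobotClient(brain)
print(robot_a.handle("mug"))     # known object: the cloud supplies a plan
print(robot_b.handle("teapot"))  # novel object: learned once, shared with all
```

The design point is that the expensive part (the catalog of objects and behaviors) lives once in the cloud, which is exactly why proponents expect individual robots to become cheaper.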

Looking ahead, Mohanarajah Gajamohan, technical head of the project at the Swiss Federal Institute of Technology in Zurich, says that the designers believe the system could be particularly useful for drones, self-driving cars or other mobile robots who have to do a lot of number crunching just to get around.

Dr Heico Sandee, RoboEarth program manager at Eindhoven University of Technology in the Netherlands, also highlighted the economic benefits of this new concept. “On-board computation reduces mobility and increases cost,” he said, adding that as wireless data speeds increase, more and more robotic thinking could be offloaded to the web.

But above all, the aim here is about integration. As robots become more and more common and we human beings are forced to live with them amongst us, there could be difficulties. Without access to such a database, those involved in the project and roboticists at large fear that machines will remain on production lines and never live easily alongside humans.

As for those who support and await the Technological Singularity, this could be one means through which it is achieved. The idea of machines that are capable of networking and constantly upgrading their software is a step in the direction of machines that can assemble, evolve and upgrade themselves, which would result in a rate of progress that we cannot currently predict.

But on the other side of the debate, there are those who say this smacks of a Skynet-like supercomputer that could provide machines with the means to network, grow smarter, and think of ways of overthrowing their human masters. While I don’t consider myself the technophobic sort, I can certainly see how this invention could be perceived that way.

Creating a means for robots to communicate and contribute to a growing body of knowledge, effectively letting them take ownership of their own world, does seem kinda like the first step in creating a world where robots no longer need human handlers. Then again, if we’re going to be creating AI, we might want to consider treating them like sentient, dignified beings beforehand, and avoid any “controversy” when they begin to demand such treatment later.

Gotta admit, when it comes to technophobes and paranoiacs, this kind of stuff is certainly fertile territory! For more information on the Rapyuta Engine, simply click here. And may God help us all!

Source: bbc.co.uk

The Singularity: The End of Sci-Fi?

The coming Singularity… the threshold where we will essentially surpass all our current restrictions and embark on an uncertain future. For many, it’s something to be feared, while for others, it’s something regularly fantasized about. On the one hand, it could mean a future where things like shortages, scarcity, disease, hunger and even death are obsolete. But on the other, it could also mean the end of humanity as we know it.

As a friend of mine recently said, in reference to some of the recent technological breakthroughs: “Cell phones, prosthetics, artificial tissue…you sci-fi writers are going to run out of things to write about soon.” I had to admit he had a point. If and when we reach an age where all the scientific breakthroughs that were once the province of speculative writing exist, what will be left to speculate about?

To break it down, simply because I love to do so whenever possible, the concept borrows from physics, where the center of a black hole is described as a “gravitational singularity”. It is at this point that all known physical laws, including time and space themselves, break down, turning all matter and energy into some kind of quantum soup. Nothing beyond the surrounding veil (known as the Event Horizon) can be seen, for no means exist to detect anything.

The same principle holds true in this case, at least in theory. Originally coined by mathematician John von Neumann in the mid-1950s, the term served as a description for a phenomenon of technological acceleration causing an eventual unpredictable outcome in society. In describing it, he spoke of the “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”

The term was then popularized by science fiction writer Vernor Vinge (A Fire Upon the Deep, A Deepness in the Sky, Rainbows End) who argued that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity. In more recent times, the same theme has been picked up by futurist Ray Kurzweil, the man who points to the accelerating rate of change throughout history, with special emphasis on the latter half of the 20th century.

In what Kurzweil described as the “Law of Accelerating Returns”, every major technological breakthrough was preceded by a period of exponential growth. In his writings, he claimed that whenever technology approaches a barrier, new technologies come along to surmount it. He also predicted paradigm shifts will become increasingly common, leading to “technological change so rapid and profound it represents a rupture in the fabric of human history”.
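A quick back-of-the-envelope calculation shows why exponential curves of this kind feel so disruptive. Assuming, purely for illustration, a capability that doubles every two years (the doubling period is hypothetical, not one of Kurzweil's specific figures):

```python
# Illustration of exponential growth of the sort the "Law of Accelerating
# Returns" describes. The two-year doubling period is an assumption made
# for the sake of the example, not a figure from Kurzweil's writings.

def capability_after(years, doubling_period_years=2.0):
    """Growth multiple after the given number of years of steady doubling."""
    return 2 ** (years / doubling_period_years)

for years in (10, 20, 40):
    print(f"after {years} years: x{capability_after(years):,.0f}")
```

Ten years gives a 32-fold gain, but forty years gives over a million-fold, which is the intuition behind claims that change eventually outruns our ability to predict it.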

Looking into the deep past, one can see indications of what Kurzweil and others mean. Beginning in the Paleolithic Era, some 70,000 years ago, humanity began to spread out from a small pocket in Africa and adopt the conventions we now associate with modern Homo sapiens – including language, music, tools, myths and rituals.

By the time of the “Paleolithic Revolution” – circa 50,000–40,000 years ago – we had spread to all corners of the Old World and left evidence of continuous habitation through tools, cave paintings and burials. In addition, all other existing forms of hominids – such as Homo neanderthalensis and the Denisovans – became extinct around the same time, leading many anthropologists to wonder if the presence of Homo sapiens wasn’t the deciding factor in their disappearance.

And then came another revolution, this one known as the “Neolithic”, which occurred roughly 12,000 years ago. By this time, humanity had hunted countless species to extinction, had spread to the New World, and began turning to agriculture to maintain its population levels. Thanks to the cultivation of grains and the domestication of animals, civilization emerged in three parts of the world – the Fertile Crescent, China and the Andes – independently and simultaneously.

All of this gave rise to more habits we take for granted in our modern world, namely written language, metal working, philosophy, astronomy, fine art, architecture, science, mining, slavery, conquest and warfare. Empires that spanned entire continents rose, epics were written, inventions and ideas forged that have stood the test of time. Henceforth, humanity would continue to grow, albeit with some minor setbacks along the way.

And then by the 1500s, something truly immense happened. The hemispheres collided as Europeans, first in small droves, but then en masse, began to cross the ocean and made it home to tell others what they found. What followed was an unprecedented period of expansion, conquest, genocide and slavery. But out of that, a global age was also born, with empires and trade networks spanning the entire planet.

Hold onto your hats, because this is where things really start to pick up. Thanks to the collision of hemispheres, all the corn, tomatoes, avocados, beans, potatoes, gold, silver, chocolate, and vanilla led to a period of unprecedented growth in Europe, leading to the Renaissance, Scientific Revolution, and the Enlightenment. And of course, these revolutions in thought and culture were followed by political revolutions shortly thereafter.

By the 1700s, another revolution began, this one involving industry and the creation of a capitalist economy. Much like the two that preceded it, it was to have a profound and permanent effect on human history. Coal and steam technology gave rise to modern transportation, cities grew, international travel became as extensive as international trade, and every aspect of society became “rationalized”.

By the 20th century, the size and shape of the future really began to emerge, and many were scared. Humanity, once a tiny speck of organic matter in Africa, now covered the entire Earth and numbered over one and a half billion. And as the century rolled on, the unprecedented growth continued to accelerate. Within 100 years, humanity went from coal and diesel fuel to electrical power and nuclear reactors. We went from crossing the sea in steam ships to going to the moon in rockets.

And then, by the end of the 20th century, humanity once again experienced a revolution in the form of digital technology. By the time the “Information Revolution” had arrived, humanity had reached 6 billion people, was building handheld devices that were faster than computers that once occupied entire rooms, and was exchanging more information in a single day than most societies once did in an entire century.

And now, we’ve reached an age where all the things we once fantasized about – colonizing the Solar System and beyond, telepathy, implants, nanomachines, quantum computing, cybernetics, artificial intelligence, and bionics – seem to be becoming more true every day. As such, futurists predictions, like how humans will one day merge their intelligence with machines or live forever in bionic bodies, don’t seem so farfetched. If anything, they seem kind of scary!

There’s no telling where it will go, and it seems like even the near future has become completely unpredictable. The Singularity looms! So really, if the future has become so opaque that accurate predictions are pretty much impossible to make, why bother? What’s more, will predictions come true even as the writer is writing them? Won’t that remove all incentive to write about it?

And really, if the future is to become so unbelievably weird and/or awesome that fact will take the place of fiction, will fantasy become effectively obsolete? Perhaps. So again, why bother? Well, I can think of one reason: because it’s fun! And because as long as I can, I will continue to! I can’t predict what course the future will take, but knowing that it’s uncertain and impending makes it extremely cool to think about. And since I’m never happy keeping my thoughts to myself, I shall try to write about it!

So here’s to the future! It’s always there, like the horizon. No one can tell what it will bring, but we do know that it will always be there. So let’s embrace it and enter into it together! We knew what we were in for the moment we first woke up and embraced this thing known as humanity.

And for a lovely and detailed breakdown of the Singularity, as well as when and how it will come in the future, go to futuretimeline.net. And be prepared for a little light reading 😉