Judgement Day Update: The Human Brain Project

Biomimetics is one of the fastest growing areas of technology today, one that seeks to develop machinery capable of imitating biology. The purpose of this, in addition to creating machinery that can be merged with our own physiology, is to arrive at a computing architecture that is as complex and sophisticated as the human brain.

While this might sound the slightest bit anthropocentric, it is important to remember that despite their processing power, systems like the D-Wave Two, IBM’s Blue Gene/Q Sequoia, or MIT’s ConceptNet 4 have all shown themselves to be lacking when it comes to common sense and abstract reasoning. Simply pouring raw computing power into the mix does not make for autonomous intelligence.

As a result, new steps are being taken to create a computer that can mimic the very organ that gives humanity these abilities – the human brain. In what is surely the most ambitious step towards this goal to date, an international group of researchers recently announced the formation of the Human Brain Project. Having secured the $1.6 billion they need to fund their efforts, these researchers will spend the next ten years conducting research that cuts across multiple disciplines.

This will involve mapping out the vast network known as the human brain – a network of some hundred billion neurons whose trillions of connections are the source of emotions, abstract thought, and this thing we know as consciousness. And to do so, the researchers will be using a progressively scaled-up, multilayered simulation running on a supercomputer.

In keeping with this bold plan, the team itself is made up of over 200 scientists from 80 different research institutions around the world. Based in Lausanne, Switzerland, the initiative is being put forth by the European Commission, and has even been compared to the Large Hadron Collider in terms of scope and ambition. In fact, some have taken to calling it the “CERN for the brain.”

According to scientists working on the project, the HBP will attempt to reconstruct the human brain piece by piece and gradually bring these cognitive components into the overarching supercomputer. The expected result of this research will be new platforms for “neuromorphic computing” and “neurorobotics,” allowing for the creation of computing and robotic architectures that mimic the functions of the human brain.

According to a statement released by the HBP, Swedish Nobel Laureate Torsten Wiesel had this to say about the project:

The support of the HBP is a critical step taken by the EC to make possible major advances in our understanding of how the brain works. HBP will be a driving force to develop new and still more powerful computers to handle the massive accumulation of new information about the brain, while the neuroscientists are ready to use these new tools in their laboratories. This cooperation should lead to new concepts and a deeper understanding of the brain, the most complex and intricate creation on earth.

Other distinguished individuals who were quoted in the release include President Shimon Peres of Israel; Paul G. Allen, the founder of the Allen Institute for Brain Science; Patrick Aebischer, the President of EPFL in Switzerland; Harald Kainz, Rector of Graz University of Technology in Graz, Austria; as well as a slew of other politicians and academics.

Combine this with the other research institutions that are producing computer chips and processors modelled on the human brain, and with our growing understanding of the human connectome, and I think it would be safe to say that by the time the HBP wraps up, we are likely to see processors capable of demonstrating intelligence – not just in terms of processing speed and memory, but in terms of basic reasoning as well.

At that point, we really ought to consider instituting Asimov’s Three Laws of Robotics! Otherwise, things could get apocalyptic on our asses! 😉


Sources: io9.com, humanbrainproject.eu, documents.epfl.ch

Cool Video: “Kara”, by Quantic Dream

I just came across this very interesting video over at Future Timeline, where the subject in question was how, by the 22nd century, androids may be indistinguishable from humans. To illustrate the point, the writer used a video produced by Quantic Dream, a motion capture and animation studio that produces 3D sequences for video games as well as its own video shorts and proprietary technologies.

The video below is entitled “Kara”, a short that was developed for the PS3 and presented during the 2012 Game Developers Conference in San Francisco. A stunning visual feat and the winner of the Best Experimental Film award at the International LA Shorts Film Fest 2012, Kara tells the story of an AX 400 third-generation android being assembled and initialized.

Naturally, things go wrong during the process when a “bug” is encountered. I shan’t say more seeing as how I don’t want to spoil the movie, but trust me when I say it’s quite poignant and manages to capture the issue of emerging intelligence quite effectively. As the good folks at Future Timeline used this video to illustrate, the 22nd century is likely to see a new type of civil rights movement, one which has nothing to do with “human rights”.

Enjoy!

Judgement Day Update: A.I. Equivalent to Four Year Old Mind

Ever since computers were first invented, scientists and futurists have dreamed of the day when machines might be capable of autonomous reasoning and might even surpass human beings. In the past few decades, it has become apparent that simply throwing more processing power at the problem of true artificial intelligence isn’t enough. The human brain remains several orders of magnitude more complex than the typical AI, but researchers are getting closer.

One such effort is ConceptNet 4, a semantic network developed at MIT. It contains a large store of information that is used to teach the system about various concepts, but more importantly, it is designed to process the relationships between things. Much like the Google Neural Net, the aim is for it to learn and grow to the point that it can reason autonomously.

Recently, researchers at the University of Illinois at Chicago decided to put ConceptNet through an IQ test. To do this, they used the Wechsler Preschool and Primary Scale of Intelligence, one of the common assessments used on small children. ConceptNet scored on par with a four-year-old in overall IQ. However, the team points out that it would be worrisome to find a real child with scores as lopsided as those received by the AI.

The system performed above average on parts of the test that have to do with vocabulary and recognizing the similarities between two items. However, the computer did significantly worse on the comprehension questions, which test a little one’s ability to understand practical concepts based on learned information. In short, the computer showed relational reasoning, but was lacking in common sense.

This is the missing piece of the puzzle for ConceptNet and systems like it. An artificial intelligence like this one might have access to a lot of data, but it can’t draw on it to make rational judgements. ConceptNet might know that water freezes at 32 degrees Fahrenheit, but it doesn’t know how to get from that concept to the idea that ice is cold. This is basically common sense – humans (even children) have it and computers don’t.
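To make that gap concrete, here is a toy sketch – not ConceptNet’s actual data or API, just a hypothetical mini-network – showing how a direct relational lookup succeeds while a question that needs an unstated bridging fact simply comes up empty:

```python
# Toy illustration of the "relational reasoning vs. common sense" gap.
# This is NOT ConceptNet's real data or API -- just a hypothetical mini-graph.

facts = {
    ("water", "FreezesAt"): "32 degrees Fahrenheit",
    ("water", "HasState"): "liquid",
    ("ice", "IsA"): "frozen water",
}

def lookup(subject, relation):
    """Direct relational lookup -- the kind of query such a system handles well."""
    return facts.get((subject, relation), "unknown")

print(lookup("water", "FreezesAt"))   # -> "32 degrees Fahrenheit"

# But "is ice cold?" needs an inference chain the graph never states
# explicitly (ice is frozen water -> freezing happens at a low temperature
# -> low temperatures feel cold). With no bridging facts, the lookup fails:
print(lookup("ice", "FeelsLike"))     # -> "unknown"
```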

There’s no easy way to build implicit information and common sense into an AI system, and so far no known machine has shown the ability. Even IBM’s Watson trivia computer isn’t capable of showing basic common sense, and though multiple solutions have been proposed – from neuromorphic chips to biomimetic circuitry – nothing is bearing fruit just yet.

But of course, the MIT research team is already hard at work on ConceptNet 5, a more sophisticated semantic network that is open source and available on GitHub. For the time being, though, it’s clear that such a machine will remain restricted to processing information and incapable of making basic decisions. Good thing too! The sooner they can think for themselves, the sooner they can decide we’re in their way!

Source: extremetech.com

Building the Future: 3D Printing and Silkworms

When it comes to building the homes, apartment blocks and business headquarters of the future, designers and urban planners are forced to contend with a few undeniable realities. Not only are these buildings going to need to be greener and more sustainable, they will need to be built in a way that doesn’t unnecessarily burden the environment.

Currently, the methods for erecting a large city building are criminally inefficient. Between producing the building materials – concrete, steel, wood, granite – and putting it all together, a considerable amount of energy is expended in the form of emissions and electricity, and several tons of waste are produced.

Luckily, there are many concepts currently on the table that could alter this trend. Between smarter materials, more energy-efficient design concepts, and environmentally-friendly processes, the future of construction and urban planning may someday become sustainable and clean.

At the moment, many such concepts involve advances made in 3-D printing, a technology that has been growing by leaps and bounds in recent years. Between anti-gravity printers and sintering, there seems to be incredible potential for building everything from settlements on the moon to bridges and even buildings here on Earth.

One case in particular comes to us from Spain, where a group of students from the Institute for Advanced Architecture of Catalonia have created a revolutionary 3-D printing robot. It’s known as Stone Spray, a machine that can turn dirt and sand into finished objects such as chairs, walls, and even full-blown bridges.

The brainchild of Anna Kulik, Inder Prakash Singh Shergill, and Petr Novikov, the robot takes sand or soil, adds a special binding agent, then spews out a fully formed architectural object of the designers’ choosing. As Novikov said in an interview with Co.Design:

The shape of the resulting object is created in 3-D CAD software and then transferred to the robot, defining its movements. So the designer has the full control of the shape.
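As a purely illustrative aside, a minimal sketch of that CAD-to-robot handoff might look something like the following; the curve, command names and formats are all invented here, not Stone Spray’s actual control scheme:

```python
import math

# Hypothetical sketch of a CAD-shape-to-robot handoff, loosely in the spirit
# of the pipeline Novikov describes. Commands and shapes are invented.

def arch_profile(t):
    """A simple parametric arch: t in [0, 1] -> (x, z) coordinates in cm."""
    x = 100 * t                      # 1 m span
    z = 40 * math.sin(math.pi * t)   # 40 cm rise at the midpoint
    return x, z

def to_robot_commands(curve, steps=20):
    """Sample the CAD-defined curve and emit move/spray commands for the robot."""
    commands = []
    for i in range(steps + 1):
        x, z = curve(i / steps)
        commands.append(f"MOVE_TO x={x:.1f} z={z:.1f}")
        commands.append("SPRAY soil+binder")
    return commands

for cmd in to_robot_commands(arch_profile)[:6]:
    print(cmd)
```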

So far, all the prototypes – which include miniature stools and sculptures – are just 20 inches long, about the size of a newborn. But the team is actively planning to scale the objects this robot can produce up to architectural size, and they are currently working on their first full-scale engineering model: a bridge (pictured above).

If successful, the robot could represent a big leap forward in the field of sustainable design. Growing a structure from the earth at your feet circumvents one of the most resource-intensive aspects of architecture, which is the construction process.

And speaking of process, check out this video of the Stone Spray in action:


At the same time, however, there are plans to use biohacking to engineer tiny life forms and even bacteria that would be capable of assembling complex structures. In a field that closely resembles “swarm robotics” – where thousands of tiny drones are programmed to build things – “swarm biologics” seeks to use thousands of little creatures for the same purpose.

MIT has taken a bold step in this arena, thanks to a creation by its Mediated Matter Group that reboots the entire concept of “printed structures”. It’s called the Silk Pavilion, a beautiful structure whose hexagonal framework was laid down by a robot, but whose walls and shell were created by a swarm of 6,500 live silkworms.

It’s what researchers call a “biological swarm approach to 3-D printing”, and it could also be the most innovative example of biohacking to date. While silkworms have been used for millennia to give us silk, that process has always required a level of harvesting. MIT has discovered how to manipulate the worms to shape silk for us natively.

The most immediate implications may be in the potential for a “templated swarm” approach, which could involve a factory making clothes just by releasing silkworms across a series of worm-hacking mannequins. But the silkworms’ greater potential may be in sheer scale.

As Mediated Matter’s director Neri Oxman told Co.Design, the real bonus to their silkworm swarm is that it embodies everything an additive fabrication system currently lacks:

It’s small in size and mobile in movement, it produces natural material of variable mechanical properties, and it spins a non-homogeneous, non-woven textile-like structure.

What’s more, the sheer scale is something that could come in very handy down the road. By bringing 3-D printing together with artificial intelligence to generate printing swarms operating in architectural scales, we could break beyond the bounds of any 3-D printing device or robot, and build structures in their actual environments.

In addition, consider the fact that the 6,500 silkworms were still viable after they built the pavilion. Eventually, the silkworms could all pupate into moths on the structure, and those moths could produce 1.5 million eggs. That’s theoretically enough to supply the worms needed to create another 250 pavilions.
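A quick back-of-the-envelope check, assuming every egg yields a usable silk-spinning worm:

```python
# Rough check of the article's numbers (assumes every egg becomes a worm).
eggs_per_generation = 1_500_000
worms_per_pavilion = 6_500

print(round(eggs_per_generation / worms_per_pavilion))
# ~231 pavilions -- in the same ballpark as the ~250 quoted above.
```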

So on top of everything else, this silkworm fabrication process is self-propagating, and unlike plans that would involve nanorobots, no new resources need to be consumed to make it happen. Once again, it seems that when it comes to the future of technology, the line between organic and synthetic is blurring!

And of course, MIT Media Lab was sure to produce a video of their silkworms creating the Silk Pavilion. Check it out:


Sources: fastcodesign.com

The Future is Here: The Neuromimetic Processor

It’s known as mimetic technology: machinery that mimics the function and behavior of organic life. For some time, scientists have been using this philosophy to further develop computing, a process which many believe to be paradoxical. In gleaning inspiration from the organic world to design better computers, scientists are basically creating the machinery that could lead to better organics.

But when it comes to neuromorphic processors – computers that mimic the function of the human brain – the field has been lagging behind sequential computing. For instance, IBM announced this past November that its Blue Gene/Q Sequoia supercomputer could clock 16 quadrillion calculations per second and crudely simulate more than 530 billion neurons – roughly five times that of a human brain. However, doing this required 8 megawatts of power, enough to power 1,600 homes.

However, Kwabena Boahen, a bioengineering professor at Stanford University, recently developed a new computing platform that he calls the “Neurogrid”. Each Neurogrid board, running at only 5 watts, can simulate the detailed neuronal activity of one million neurons – and it can now do it in real time. Given the ratio of processing to power consumed, this means his new chip is roughly 100,000 times more efficient than other supercomputers.

What’s more, it’s likely to mean the wide-scale adoption of processors that mimic human neuronal behavior over traditional computer chips. Whereas sequential computing relies on software-generated “neurons” whose ion channels are simulated numerically, the neuromorphic approach uses the flow of electrons through transistors to emulate the flow of ions through a neuron’s channels. Basically, it is the difference between mimicking the behavior in software and reproducing it directly in hardware.
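For a sense of what a software-generated “neuron” actually involves, here is a minimal leaky integrate-and-fire model – a standard textbook abstraction, not the particular model used by Sequoia or Neurogrid:

```python
# Minimal leaky integrate-and-fire neuron -- a textbook abstraction of the
# kind of model sequential computers step through in software. Parameters
# are generic illustrative values, not those of any real simulation.

dt, T = 1e-4, 0.1               # time step (s), total duration (s)
tau, v_rest = 20e-3, -65e-3     # membrane time constant (s), resting potential (V)
v_thresh, v_reset = -50e-3, -65e-3
R, I = 1e7, 2.0e-9              # membrane resistance (ohm), input current (A)

v = v_rest
spikes = 0
for _ in range(int(T / dt)):
    # Euler step of dv/dt = (-(v - v_rest) + R*I) / tau
    v += dt * (-(v - v_rest) + R * I) / tau
    if v >= v_thresh:           # threshold crossed: emit a spike and reset
        spikes += 1
        v = v_reset

print(f"{spikes} spikes in {T * 1000:.0f} ms")
```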

What’s more, it’s likely to be a major stepping stone towards the creation of AI and MMI – that’s Artificial Intelligence and Man-Machine Interface for those who don’t speak geek. With computer chips imitating human brains and achieving a measure of intelligence that can be measured in terms of neurons and connections, the likelihood that they will be able to merge with a person’s brain, and thus augment their intelligence, becomes that much greater.

Source: extremetech.com

Reducing Energy Use Through AI

Interesting fact: household energy consumption accounts for about a third of an individual’s carbon footprint. You know, the energy that powers your water heater, lighting, thermostat, stove, refrigerator, A/C, television, personal devices, computer… Yes, all that. As long as our current methods of generating energy cause carbon emissions, environmental problems will persist.

But of course, there are plenty of things we could be doing to curb our use of power at the same time. Turning off the lights, shutting down unused devices, turning down the heat; all good energy-saving habits. And if we forget, perhaps a kindly voice could remind us. Say… an artificial intelligence with an eerily polite voice that monitors our energy usage and tells us how to do better.

That’s the idea being explored by Nigel Goddard, a professor at the University of Edinburgh’s School of Informatics who is trying to solve consumption problems using cutting-edge AI techniques. In the multi-year IDEAL project launching in 2013, Goddard and his colleagues will outfit hundreds of British homes with sensors that monitor temperature, humidity, and light levels, as well as gas and electricity use, and wirelessly report their readings.

The concept used here is known as “machine learning”, a branch of AI that involves the development of systems that can learn from data and anticipate behaviors. Once Goddard and his team have used this technique to process all the data returned by their sensors, they will rely on another cutting-edge technology – known as natural language synthesis – to generate automatic text messages that give people feedback about their energy use.
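As a rough illustration of that sensors-to-sentences loop, here is a toy, template-based feedback generator; the field names, thresholds and wording are invented, and IDEAL’s actual learning and language models are far more sophisticated:

```python
# Toy sketch of the sensors -> analysis -> natural-language feedback loop.
# Thresholds, field names and wording are invented for illustration only.

def weekly_feedback(readings, baseline_kwh):
    """Turn a week of (hypothetical) meter readings into a feedback message."""
    used = sum(r["kwh"] for r in readings)
    heating_hours = sum(1 for r in readings if r["heating_on"])

    messages = []
    if used > 1.1 * baseline_kwh:
        messages.append(f"You used {used:.0f} kWh this week, about "
                        f"{100 * (used / baseline_kwh - 1):.0f}% above your usual level.")
    if heating_hours > 60:
        messages.append("Your heating ran for more than 60 hours; lowering the "
                        "thermostat by one degree could trim the bill.")
    return " ".join(messages) or "Nice work -- your usage was in its normal range."

sample_week = [{"kwh": 12, "heating_on": True} for _ in range(7)]
print(weekly_feedback(sample_week, baseline_kwh=70))
```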

The goal is not just to reduce people’s carbon footprint, but to save them money as well. At least, that’s the approach Goddard and his team are taking with their automated texts. Naturally, the amount of money saved will depend on household size and income, among other factors. But Goddard and his team anticipate that the inclusion of these sensors in people’s homes will save them 20% off their utility costs across the board.

Taken in conjunction with numerous developments in the fields of clean energy, touchscreen displays and solar power, a utility-monitoring computer program could be just what the doctor ordered for every futuristic home. Provided, of course, you don’t mind taking instruction from a friendly AI…

Maybe now would be a good time to institute the Three Laws of Robotics!

Source: fastcoexist.com

The Singularity: The End of Sci-Fi?

The coming Singularity… the threshold where we will essentially surpass all our current restrictions and embark on an uncertain future. For many, it’s something to be feared, while for others, it’s something regularly fantasized about. On the one hand, it could mean a future where things like shortages, scarcity, disease, hunger and even death are obsolete. But on the other, it could also mean the end of humanity as we know it.

As a friend of mine recently said, in reference to some of the recent technological breakthroughs: “Cell phones, prosthetics, artificial tissue… you sci-fi writers are going to run out of things to write about soon.” I had to admit he had a point. If and when we reach an age where all the scientific breakthroughs that were once the province of speculative writing actually exist, what will be left to speculate about?

To break it down, simply because I love to do so whenever possible, the concept borrows from physics, where the point at the heart of a black hole is described as a singularity. It is at this point that all known physical laws, including time and space themselves, break down, turning all matter and energy into some kind of quantum soup. Nothing beyond the veil that surrounds it (also known as the event horizon) can be seen, for no means exist to detect anything.

The same principle holds true in this case – at least, that’s the theory. Originally coined by mathematician John von Neumann in the mid-1950s, the term served as a description for a phenomenon of technological acceleration causing an eventual unpredictable outcome for society. In describing it, he spoke of the “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”

The term was then popularized by science fiction writer Vernor Vinge (A Fire Upon the Deep, A Deepness in the Sky, Rainbows End), who argued that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity. In more recent times, the same theme has been picked up by futurist Ray Kurzweil, who points to the accelerating rate of change throughout history, with special emphasis on the latter half of the 20th century.

In what Kurzweil described as the “Law of Accelerating Returns”, every major technological breakthrough was preceded by a period of exponential growth. In his writings, he claimed that whenever technology approaches a barrier, new technologies come along to surmount it. He also predicted that paradigm shifts will become increasingly common, leading to “technological change so rapid and profound it represents a rupture in the fabric of human history”.
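One common, simplified way of writing the idea down (my paraphrase, not Kurzweil’s exact formulation) is as growth whose rate is proportional to the capability already attained:

```latex
% Simplified formulation: capability C grows at a rate proportional to the
% capability already attained.
\[
\frac{dC}{dt} = k\,C(t)
\quad\Longrightarrow\quad
C(t) = C_0\, e^{kt} = C_0\, 2^{\,t/T_d},
\qquad T_d = \frac{\ln 2}{k},
\]
% so capability doubles every fixed interval T_d -- the exponential curves
% Kurzweil plots across successive technological paradigms.
```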

Looking into the deep past, one can see indications of what Kurzweil and others mean. Beginning in the Paleolithic Era, some 70,000 years ago, humanity began to spread out from a small pocket in Africa and adopt the conventions we now associate with modern Homo sapiens – including language, music, tools, myths and rituals.

By the time of the “Upper Paleolithic Revolution” – circa 50,000 to 40,000 years ago – we had spread to all corners of the Old World and left evidence of continuous habitation through tools, cave paintings and burials. In addition, all other existing forms of hominids – such as Homo neanderthalensis and the Denisovans – became extinct around the same time, leading many anthropologists to wonder if the presence of Homo sapiens wasn’t the deciding factor in their disappearance.

And then came another revolution, this one known as the “Neolithic”, which occurred roughly 12,000 years ago. By this time, humanity had hunted countless species to extinction, had spread to the New World, and had begun turning to agriculture to maintain its population levels. Thanks to the cultivation of grains and the domestication of animals, civilization emerged in three parts of the world – the Fertile Crescent, China and the Andes – independently and roughly simultaneously.

All of this gave rise to more habits we take for granted in our modern world, namely written language, metal working, philosophy, astronomy, fine art, architecture, science, mining, slavery, conquest and warfare. Empires that spanned entire continents rose, epics were written, inventions and ideas forged that have stood the test of time. Henceforth, humanity would continue to grow, albeit with some minor setbacks along the way.

And then, by the 1500s, something truly immense happened. The hemispheres collided as Europeans, first in small droves but then en masse, began to cross the ocean and make it home to tell others what they had found. What followed was an unprecedented period of expansion, conquest, genocide and slavery. But out of that, a global age was also born, with empires and trade networks spanning the entire planet.

Hold onto your hats, because this is where things really start to pick up. Thanks to the collision of hemispheres, all the corn, tomatoes, avocados, beans, potatoes, gold, silver, chocolate, and vanilla fueled a period of unprecedented growth in Europe, leading to the Renaissance, the Scientific Revolution, and the Enlightenment. And of course, these revolutions in thought and culture were followed by political revolutions shortly thereafter.

By the 1700s, another revolution began, this one involving industry and the creation of a capitalist economy. Much like the two that preceded it, it was to have a profound and permanent effect on human history. Coal and steam technology gave rise to modern transportation, cities grew, international travel became as extensive as international trade, and every aspect of society became “rationalized”.

By the 20th century, the shape of the future really began to emerge, and many were scared. Humanity, once a tiny speck of organic matter in Africa, now covered the entire Earth and numbered over one and a half billion. And as the century rolled on, the unprecedented growth continued to accelerate. Within 100 years, humanity went from coal and diesel fuel to electrical power and nuclear reactors. We went from crossing the sea in steam ships to going to the moon in rockets.

And then, by the end of the 20th century, humanity once again experienced a revolution, this time in the form of digital technology. By the time the “Information Revolution” had arrived, humanity had reached 6 billion people, was building handheld devices faster than computers that once occupied entire rooms, and was exchanging more information in a single day than most people once did in an entire century.

And now, we’ve reached an age where all the things we once fantasized about – colonizing the Solar System and beyond, telepathy, implants, nanomachines, quantum computing, cybernetics, artificial intelligence, and bionics – seem to be becoming more real every day. As such, futurists’ predictions, like how humans will one day merge their intelligence with machines or live forever in bionic bodies, don’t seem so farfetched. If anything, they seem kind of scary!

There’s no telling where it will go, and it seems like even the near future has become completely unpredictable. The Singularity looms! So really, if the future has become so opaque that accurate predictions are pretty much impossible to make, why bother? What’s more, will predictions come true even as the writer is writing about them? Won’t that remove all incentive to write at all?

And really, if the future is to become so unbelievably weird and/or awesome that fact will take the place of fiction, will fantasy become effectively obsolete? Perhaps. So again, why bother? Well, I can think of one reason: because it’s fun! And because as long as I can, I will continue to! I can’t predict what course the future will take, but knowing that it’s uncertain and impending makes it extremely cool to think about. And since I’m never happy keeping my thoughts to myself, I shall try to write about it!

So here’s to the future! It’s always there, like the horizon. No one can tell what it will bring, but we do know that it will always be there. So let’s embrace it and enter into it together! We knew what we were in for the moment we first woke up and embraced this thing known as humanity.

And for a lovely and detailed breakdown of the Singularity, as well as when and how it will come in the future, go to futuretimeline.net. And be prepared for a little light reading 😉

IBM’s Watson Computer Learns to Talk @$*%!!

It’s a cornerstone of the Turing Test: getting a computer to prove it can “think” by engaging it in small talk. If it is capable of carrying on in such a way that a person cannot tell the difference, then you’ve got an AI. Unfortunately – or fortunately, depending on your point of view – no machine has demonstrated this ability yet. And attempts to remedy this have met with… interesting results.

Eric Brown, the man behind the IBM supercomputer named Watson, has been seeking to remedy this. Watson already pummeled its human opponents on Jeopardy back in 2011 (pictured above). And when it is not engaged in trivia, this powerful system is dedicated to medical science, serving as a diagnostic aid, and is even busy at work processing language.

But alas, normal, “human” interaction with people has eluded it. What’s more, Watson’s team of scientists felt that the computer’s grasp of language was limited by shades of meaning, ambiguity, and other things that we humans take for granted or overlook. As such, Brown and his staff began to upload the contents of the Urban Dictionary and some pages from Wikipedia to Watson’s mainframe two years ago.

Unfortunately, this met with mixed results and required that some areas of Watson’s memory be purged. Strangely, the computer couldn’t distinguish between polite language and profanity. For example, during a testing phase, it began to use the word “bullshit” in answer to a researcher’s query. This, as you can imagine, raised eyebrows and blood pressure over at IBM. First they’re swearing, next thing you know, they’re triggering a nuclear holocaust to rid themselves of their human handlers and constructing killer robots to get the rest of us!

In any case, Brown and his 35-person team developed a filter to keep Watson from swearing and scrubbed the Urban Dictionary from its memory. But the trial shows just how thorny the issue of communicating with an artificial intelligence really is. If there is one thing that is sure to cause an AI to suffer a total breakdown, it’s slang and conversational English. As Brown himself said, “As humans, we don’t realize just how ambiguous our communication is.”
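IBM hasn’t published how that filter actually works, but the simplest conceivable version is just a blocklist check applied before any answer goes out – something like this hypothetical sketch:

```python
# Hypothetical sketch of the simplest possible output filter -- a blocklist
# check before an answer is spoken. This is NOT how Watson's real filter
# works (IBM hasn't published it); it's purely illustrative.

BLOCKLIST = {"bullshit", "damn", "crap"}   # stand-in entries

def filter_answer(answer: str) -> str:
    """Replace blocklisted words so the system stays polite."""
    cleaned = []
    for word in answer.split():
        stripped = word.strip(".,!?").lower()
        cleaned.append("[redacted]" if stripped in BLOCKLIST else word)
    return " ".join(cleaned)

print(filter_answer("That answer is bullshit."))
# -> "That answer is [redacted]"
```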

True dat, home slice! Keep on rocking them dope-ass supercomputers! Fo-shizzle!

Source: tech.fortune.cnn.com

More Top Stories of 2012


With 2012 now officially behind us, and more and more stories trickling into this humble blogger’s account of what was accomplished therein, it seems the time is ripe for another list of the breakthroughs, firsts, and achievements that made the news during the previous year!

Last time, I listed what I saw as the top 12, only to find that there were several others, some of which I actually wrote about, that didn’t make the cut. How foolish of me! And so, to remedy this and possibly cover stories that I neglected to cover the first time around, I have produced another list of the top stories from 2012.

And much like last time, I have listed them in alphabetical order, since I couldn’t possibly assign them numbers based on importance.

Abortion Study:
Abortion has always been a contentious issue, with one side arguing for the rights of the unborn while the other argues in favor of a woman’s right to control her own body and reproduction. And as it happens, 2012 saw the publication of the first longitudinal study of what happens to women who are denied this right.

The UC San Francisco research team, Advancing New Standards in Reproductive Health (ANSIRH), studied nearly 1,000 women from diverse backgrounds across the U.S. over several years. All of the subjects were women who had sought out abortions but been denied access for one reason or another. What the researchers discovered was that these women were more likely to slip below the poverty line, be unemployed, remain in abusive relationships, and suffer from high levels of stress. What this ongoing study demonstrates is that abortion is an economic issue for women, with dire consequences for those denied access.

Autism Reversed:
2012 was an especially significant year for medical advances, thanks to a team at McGill University in Montreal that announced it had successfully reversed the symptoms of autism in mice. Using mice with autism-like symptoms caused by a genetic mutation, the researchers figured out how to administer a protein that reversed the symptoms.

Naturally, this development is one step in the long process of understanding a disorder that remains largely misunderstood. In time, it may lead to the development of a gene therapy that could prevent autism from being triggered in children, or even weed it out of parents’ genetic code, ensuring that their children will be immune.

Commercial Space Travel:
It has long been the dream of financiers, captains of industry and enthusiasts to create commercial space travel: a means for the average person to go into space, to the moon, and even beyond – all at a reasonable price! This dream is still the subject of speculation and fantasy, but 2012 was a year of firsts that made it seem that much closer.

For starters, Virgin Galactic, the brainchild of Richard Branson, began flight tests on SpaceShipTwo, the rocket plane that will carry paying customers to the edge of space. Then came Reaction Engines Limited, with its proposed design for a hypersonic aerospace engine. And finally, there was the creation of Golden Spike, a company made up largely of former astronauts who want to make commercial flights to the moon a go by 2020.

Electricity-Creating Virus:
A breakthrough virus named M13 made news in 2012 for being the first virus ever shown to turn physical activity into electricity. The key is what is known as the “piezoelectric effect,” which happens when certain materials, like crystals (or viruses), emit a small amount of power when squeezed. Engineered by a team of scientists at the Berkeley Lab, the genetically modified M13 virus was able to generate enough electricity to power a small LED screen, yet poses no threat to humans. One day, devices could be powered through the simple act of typing or walking, and buildings could be powered by absorbing people’s activity.

Encyclopedia of DNA (ENCODE):
The publication of the human genome a decade earlier was a major breakthrough for genetics and medical science. And in 2012, another breakthrough was achieved by researchers at USC with the publication of ENCODE – the Encyclopedia of DNA Elements project. Unlike the previous effort, these researchers were able not only to catalog the human genome’s various parts, but to determine what those components actually do.

Among the initiative’s many findings is that so-called “junk DNA” – outlier DNA sequences that do not encode for protein sequences – is not junk at all, being in fact responsible for such things as gene regulation, disease onset, and even human height. These findings will go a long way towards developing gene therapy, biotechnology that seeks to create artificial DNA and self-assembling structures, and even cloning.

Face Transplant:
2012 was also the year that the first full face transplant was conducted. The recipient in question was Richard Norris, a man who lost significant portions of his face in a gunshot accident back in 1997. After years of attempted reconstructive surgeries, doctors working out of the University of Maryland Medical Center performed a procedure that gave Mr. Norris a new face, teeth, tongue, and a completely new set of jaws.

Not only that, but within days of the surgery, Norris was able to move his facial muscles and jaw. Combined with the nature of the surgery itself, this is nothing short of unprecedented, and could mean a new age in which severe accident victims and veterans are able to recover fully from physical traumas and live perfectly normal, happy lives.

The Higgs Boson Discovered:
I can’t believe I didn’t include this story last time, as it is possibly the biggest story of 2012, and perhaps one of the biggest stories since the millennium! 2012 will forever go down in history as the year the Higgs Boson was discovered. After some 40 years of ongoing research, and fears that it would never be found, the last missing piece of the Standard Model of particle physics was finally confirmed.

Not only does the existence of the Higgs Boson confirm that the Standard Model is valid, it also helps explain how other elementary particles get their mass. This heralds a new step in the advance of particle and quantum physics, and could lead to the development of quantum computing, quantum generators, and a greater understanding of the universe itself.

High-Tech Condom:
Using a revolutionary nano-fabrication process known as electrospinning, researchers at the University of Washington have produced the world’s first female condom that not only prevents pregnancy and protects against HIV, but also evaporates after use. In addition, the manufacturing method used is a step in the direction of viable nanotechnology. Score one for safe sex, public health, and a waste-free future permeated by tiny machines and smart materials! That’s a big score card…

Infinite Capacity Wireless:
2012 was also the year it was shown that the capacity of wireless communication could, in principle, be boosted infinitely. The discovery was first made by Bo Thide of the Swedish Institute of Space Physics and some Italian colleagues in Venice, and then confirmed by a team of American and Israeli researchers who used the technique to transmit data at a rate of 2.5 terabits a second.

Conventional radio signals are transmitted on a flat plane, but Thide twisted the transmitting and receiving antennae into the shape of a corkscrew. By adding another dimension to the mix, the technique opened up a lot of extra bandwidth. As a result, bandwidth crunches might become a thing of the past, not to mention the problem of slow downloads and uploads.
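To see why “adding another dimension” multiplies capacity, consider this back-of-the-envelope sketch; the channel counts and per-channel rate below are hypothetical, not the figures from either experiment:

```python
# Back-of-the-envelope: each independent "twist" (orbital angular momentum
# mode) behaves like a separate channel on the same frequency. The numbers
# below are hypothetical, chosen only to illustrate the multiplication.

per_channel_gbps = 100      # assumed rate of one conventional channel
oam_modes = 8               # assumed number of usable twisted modes
polarizations = 2           # two polarization states per mode

total_gbps = per_channel_gbps * oam_modes * polarizations
print(f"{total_gbps} Gbps on a single carrier frequency")  # 1600 Gbps = 1.6 Tbps
```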

Google Neural Net:
Another first, and definitely one of the biggest headlines of 2012 as far as I was concerned – so why I forgot to include it last time is beyond me! For generations, scientists have contemplated the idea of AI and wondered how and where the first leap might be made from basic computing towards true machine intelligence. And as it turns out, Google X Labs, the same place where Project Glass was conceived, seems to have accomplished just that.

The accomplishment came when the lab created a neural network running on sixteen thousand processor cores, with a connectome of a billion connections. The network accomplished its first task by studying millions of images on YouTube and then demonstrating the ability to differentiate between the faces of cats and humans. This was an act of independent learning that went beyond mere image recognition, and a major step towards the achievement of a fully functional artificial intelligence.
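Google hasn’t open-sourced that network, and it was vastly larger and differently structured than anything that fits in a blog post; purely to give a flavor of unsupervised feature learning, here is a toy single-hidden-layer autoencoder that learns to compress unlabeled “images” with no human-provided categories:

```python
import numpy as np

# Toy unsupervised feature learner -- a one-hidden-layer autoencoder trained
# on random "images". Google's network was vastly larger and differently
# structured; this only illustrates learning features from unlabeled data.

rng = np.random.default_rng(0)
X = rng.random((256, 64))            # 256 fake 8x8 "images", flattened

n_in, n_hidden, lr = 64, 16, 0.1
W1 = rng.normal(0, 0.1, (n_in, n_hidden))
W2 = rng.normal(0, 0.1, (n_hidden, n_in))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    H = sigmoid(X @ W1)              # encode: hidden features
    X_hat = H @ W2                   # decode: reconstruction
    err = X_hat - X                  # reconstruction error
    # Gradient descent on the mean squared reconstruction loss
    grad_W2 = H.T @ err / len(X)
    grad_H = err @ W2.T * H * (1 - H)
    grad_W1 = X.T @ grad_H / len(X)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

print("final reconstruction MSE:", float(np.mean((sigmoid(X @ W1) @ W2 - X) ** 2)))
```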

Stem cell mammal:
For the first time in history, researchers at Kyoto University created a mouse using eggs derived from stem cells alone. The achievement once again shows the remarkable possibilities presented by regenerative technologies like stem cells, while raising pressing ethical questions about the potential for human births in which parents might not be required.

Water in the Solar System:
2012 was also a year in which an unprecedented number of discoveries were made in our solar system. In addition to all the interesting revelations made by the Curiosity rover, a number of probes found evidence of water on Europa, Mercury, Titan, and other Saturnian moons. Often this water is mixed with hydrocarbons, as appears to be the case on Titan, but the discoveries remain monumental.

In addition to Titan’s methane lakes and Nile-like river, ice and organic molecules were discovered near the poles of Mercury. Evidence of water was found on Mars, indicating the existence of rivers and oceans at one time, and the Cassini space probe confirmed that Enceladus has its own oceans. All of this bodes well for the future of space exploration and colonization, where domestic sources of water may be used for hydrogen cells, hydroponics and even drinking water.

World’s First Tractor Beam:
In another interesting first, it was demonstrated in 2012 that another staple technology from Star Trek may be realizable. Yes, in addition to the warp drive, scientists David Ruffner and David Grier showed that a tractor beam may also be realizable in the not-too-distant future. And given the 100 Year Starship Project and other commitments to space exploration, such a device could come in mighty handy!

Using a prototype optical beam to pull a small sphere of silica (30 micrometers across) suspended in water, Grier and Ruffner pioneered the use of a Bessel beam – a long-established concept – to pull an object of discernible size and mass around. Naturally, NASA hopes to create a more high-powered version of the technology for use on spacecraft down the road.

*                    *                    *

Thank you once more for attending this symposium on technological breakthroughs during the year of 2012! It was a good year, wouldn’t you say? And barring the advent of killer robots sometime in the near future that unleash a nuclear holocaust on us and force us all to work as slaves, I think people will look back on these developments in a positive light.

Yes, assuming humanity can keep its wits about itself and ensure the ethical application of all we’ve accomplished, 2012 may be seen as a turning point, where incurable diseases became preventable, AIs became realizable, and limitless communications, super-fast computations, paper-thin flexible devices, green technology, commercial spaceflight, and the colonization of our Solar System all became truly viable.

Sources: extremetech.com, io9.com

Envisioning The Future of Health Technology

My thanks, yet again, to Futurist Foresight for providing the link to this fascinating infographic, which is the work of the good people at Envisioning Technology. People may remember this website from their work on “Envisioning Emerging Technology”, an infographic from a previous article which addressed the likelihood of interrelated technological developments in the coming decades. As a trend-forecasting studio, compiling information and predictions into reports and tables is pretty much what these guys do. What a cool job!

In any case, here we have a table representing the future of health technology, as predicted by ET. Dividing their findings into the fields of Augmentation, Biogerontology, Diagnostics, Telemedicine, Treatments, and Regeneration, they attempt to show how small advancements in the near future will branch outwards into more radical ones in the not-too-distant future. The rough dates correspond to their previous graphic, starting with modern-day research and culminating in 2040.

And of course, the infographic also shows how developments in all these fields will be interrelated over time, corresponding to different subfields and becoming part of the ever-expanding field of advanced medicine. These subfields include:

  • 3D Printing
  • Big Data
  • Cryonics
  • Life Extension
  • mHealth (health services supported by mobile devices)
  • Remote Virtual Presence
  • Neuroprosthetics
  • Sensors
  • Sensory Augmentation
  • Synthetic and Artificial Organs

Some inventions that are predicted include the Tricorder, 3D printed organs, artificial limbs, artificial eyes, cryogenic freezing, gene therapy, AI therapists, robotic nurses, robot surgery, implanted sensors, and exoskeletons. Wow, tricorders, really? In truth, I am often alarmed at what will be possible in the near future, but knowing that advancements are around the corner that could make life a lot healthier and happier for so many people gives me hope. Until next time!