The Future of Computing: Brain-Like Computers

It’s no secret that computer scientists and engineers are looking to the human brain as a means of achieving the next great leap in computer evolution. Already, machines are being developed that rely on a kind of electronic “blood”, can continue working despite being damaged, and can recognize images and speech. And soon, a computer chip that is capable of learning from its mistakes will also be available.

The new computing approach, already in use by some large technology companies, is based on the biological nervous system – specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.

The first commercial version of the new kind of computer chip is scheduled for release in 2014, and is the result of collaborative efforts by I.B.M. and Qualcomm, as well as a Stanford research team. This “neuromorphic processor” can not only automate tasks that once required painstaking programming, but can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.

In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That can hold enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.

For example, computer vision systems only “recognize” objects that can be identified by the statistics-oriented algorithms programmed into them. An algorithm is like a recipe, a set of step-by-step instructions to perform a calculation. But last year, Google researchers were able to get a machine-learning algorithm, known as the “Google Neural Network”, to perform an identification task (involving cats) without supervision.
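To make the “recipe” point concrete, here’s a toy sketch (in Python, and purely illustrative – this is nothing like Google’s actual system) of the difference between a hand-written rule and a machine that finds groupings in data without ever being told what to look for:

```python
# A toy contrast between a programmer-supplied rule and unsupervised
# learning. The "features" are made-up stand-ins for image statistics.

def handwritten_recipe(feature):
    """A fixed, step-by-step rule written in advance by a programmer."""
    return "cat" if feature > 0.5 else "not cat"

def unsupervised_grouping(features, iterations=10):
    """Crude 1-D k-means: the machine finds two groups on its own,
    without ever being told which group (if either) means 'cat'."""
    lo, hi = min(features), max(features)          # initial guesses
    for _ in range(iterations):
        group_a = [f for f in features if abs(f - lo) <= abs(f - hi)]
        group_b = [f for f in features if abs(f - lo) > abs(f - hi)]
        lo, hi = sum(group_a) / len(group_a), sum(group_b) / len(group_b)
    return lo, hi

features = [0.1, 0.15, 0.2, 0.8, 0.85, 0.9]        # pretend image statistics
print(handwritten_recipe(0.8))                      # -> cat
print(unsupervised_grouping(features))              # -> two cluster centres
```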

And this past June, the company said it had used those neural network techniques to develop a new search service to help customers find specific photos more accurately. Then, this past November, researchers at Stanford University came up with a new algorithm that could give computers the power to more reliably interpret language. It’s known as the Neural Analysis of Sentiment (NaSent).

A similar concept known as Deep Learning also looks to endow software with a measure of common sense. Google is using this technique with its voice recognition technology to aid in performing searches. In addition, the social media giant Facebook is looking to use deep learning to improve Graph Search, an engine that allows users to search activity on their network.

Until now, the design of computers has been dictated by ideas originated by the mathematician John von Neumann about 65 years ago. Microprocessors perform operations at lightning speed, following instructions programmed using long strings of binary code (0s and 1s). The information is stored separately in what is known as memory, either in the processor itself, in adjacent storage chips or in higher-capacity magnetic disk drives.
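For anyone who hasn’t run into the von Neumann design before, here’s a minimal sketch of the idea – a single memory holding both program and data, with the processor stepping through it one instruction at a time. The tiny instruction set is invented purely for illustration:

```python
# A minimal sketch of the von Neumann idea: one memory holds both the
# program and the data, and a processor steps through it one
# instruction at a time. The instruction set here is made up.

memory = {
    0: ("LOAD", 100),    # program: copy memory[100] into the accumulator
    1: ("ADD", 101),     #          add memory[101] to it
    2: ("STORE", 102),   #          write the result back to memory[102]
    3: ("HALT", None),
    100: 2, 101: 3, 102: 0,   # data lives in the same memory
}

def run(memory):
    pc, acc = 0, 0                        # program counter and accumulator
    while True:
        op, addr = memory[pc]             # fetch
        pc += 1
        if op == "LOAD":    acc = memory[addr]
        elif op == "ADD":   acc += memory[addr]
        elif op == "STORE": memory[addr] = acc
        elif op == "HALT":  return memory

print(run(memory)[102])   # -> 5
```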

By contrast, the new processors consist of electronic components that can be connected by wires that mimic biological synapses. Because they are based on large groups of neuron-like elements, they are known as neuromorphic processors, a term credited to the California Institute of Technology physicist Carver Mead, who pioneered the concept in the late 1980s.

These processors are not “programmed” in the conventional sense. Instead, the connections between the circuits are “weighted” according to correlations in data that the processor has already “learned.” Those weights are then altered as data flows into the chip, causing them to change their values and to “spike.” This, in turn, strengthens some connections and weakens others, reacting much the same way the human brain does.
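As a rough sketch of what “weighting” and “spiking” mean in practice – a toy model, not a description of I.B.M.’s actual chip – picture a single artificial neuron that accumulates weighted inputs, fires when it crosses a threshold, and nudges up the weights of whichever inputs helped it fire:

```python
# A toy "leaky" neuron with a crude Hebbian learning rule. All numbers
# are illustrative, not drawn from any real neuromorphic chip.

weights = [0.2, 0.5, 0.1]     # one weight per input "synapse"
potential, threshold, leak = 0.0, 1.0, 0.9
learning_rate = 0.05

def step(inputs):
    """Process one time step of binary inputs (0 or 1 per synapse)."""
    global potential, weights
    potential = potential * leak + sum(w * x for w, x in zip(weights, inputs))
    if potential >= threshold:            # the neuron "spikes"
        potential = 0.0
        # strengthen the synapses that contributed to the spike
        weights = [w + learning_rate if x else w for w, x in zip(weights, inputs)]
        return True
    return False

for t in range(5):
    spiked = step([1, 1, 0])
    print(t, spiked, [round(w, 2) for w in weights])
```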

In the words of Dharmendra Modha, an I.B.M. computer scientist who leads the company’s cognitive computing research effort:

Instead of bringing data to computation as we do today, we can now bring computation to data. Sensors become the computer, and it opens up a new way to use computer chips that can be everywhere.

One great advantage of the new approach is its ability to tolerate glitches, whereas traditional computers cannot work around the failure of even a single transistor. With the biological designs, the algorithms are ever changing, allowing the system to continuously adapt and work around failures to complete tasks. Another benefit is energy efficiency, an inspiration likewise drawn from the human brain.

The new computers, which are still based on silicon chips, will not replace today’s computers, but augment them, at least for the foreseeable future. Many computer designers see them as coprocessors, meaning they can work in tandem with other circuits that can be embedded in smartphones and the centralized computers that run computing clouds.

However, the new approach is still limited, thanks to the fact that scientists still do not fully understand how the human brain functions. As Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, put it:

We have no clue. I’m an engineer, and I build things. There are these highfalutin theories, but give me one that will let me build something.

Luckily, there are efforts underway designed to remedy this, with the specific intention of directing that knowledge towards the creation of better computers and AIs. One such effort is the National Science Foundation-financed Center for Brains, Minds and Machines, a new research center based at the Massachusetts Institute of Technology, in partnership with Harvard and Cornell.

Another is the California Institute for Telecommunications and Information Technology (aka. Calit2) – a center dedicated to innovation in nanotechnology, life sciences, information technology, and telecommunications. As Larry Smarr, an astrophysicist and director of the Institute, put it:

We’re moving from engineering computing systems to something that has many of the characteristics of biological computing.

And last, but certainly not least, is the Human Brain Project, an international group of 200 scientists from 80 different research institutions, based in Lausanne, Switzerland. Having secured the $1.6 billion they need to fund their efforts, these researchers will spend the next ten years conducting research that cuts across multiple disciplines.

This initiative, which has been compared to the Large Hadron Collider, will attempt to reconstruct the human brain piece-by-piece and gradually bring these cognitive components into an overarching supercomputer. The expected result of this research will be new platforms for “neuromorphic computing” and “neurorobotics,” allowing for the creation of computing and robotic architectures that mimic the functions of the human brain.

When future generations look back on this decade, no doubt they will refer to it as the birth of the neuromorphic computing revolution. Or maybe just the Neuromorphic Revolution for short, but that sort of depends on the outcome. With so many technological revolutions well underway, it is difficult to imagine how the future will look back on and characterize this time.

Perhaps, as Charles Stross suggests, it will simply be known as “the teens”, that time in pre-Singularity history when it was all starting to come together, but had yet to explode and violently change everything we know. I for one am looking forward to being around to witness it all!

Sources: nytimes.com, technologyreview.com, calit2.net, humanbrainproject.eu

Judgement Day Update: Google Robot Army Expanding

Last week, Google announced that it will be expanding its menagerie of robots, thanks to a recent acquisition. The announcement came on Dec. 13th, when the tech giant confirmed that it had bought out the engineering company known as Boston Dynamics. This company, which has had several lucrative contracts with DARPA and the Pentagon, has been making headlines in the past few years thanks to its advanced robot designs.

Based in Waltham, Massachusetts, Boston Dynamics has gained an international reputation for machines that walk with an uncanny sense of balance, can navigate tough terrain on four feet, and can even run faster than the fastest humans. The names BigDog, Cheetah, WildCat, Atlas and the Legged Squad Support System (LS3) have all become synonymous with the next generation of robotics, an era when machines can handle tasks too dangerous or too dirty for most humans to do.

More impressive is the fact that this is the eighth robotics company that Google has acquired in the past six months. Thus far, the company has been tight-lipped about what it intends to do with this expanding robot-making arsenal. But Boston Dynamics and its machines bring significant cachet to Google’s robotic efforts, which are being led by Andy Rubin, the Google executive who spearheaded the development of Android.

The deal is also the clearest indication yet that Google is intent on building a new class of autonomous systems that might do anything from warehouse work to package delivery and even elder care. And considering the many areas of scientific and technological advancement Google is involved in – everything from AI and IT to smartphones and space travel – it is not surprising to see them branching out in this way.

Boston Dynamics was founded in 1992 by Marc Raibert, a former professor at the Massachusetts Institute of Technology. And while it has not sold robots commercially, it has pushed the limits of mobile and off-road robotics technology, thanks to its ongoing relationship with, and funding from, DARPA. Early on, the company also did consulting work for Sony on consumer robots like the Aibo robotic dog.

Speaking on the subject of the recent acquisition, Raibert had nothing but nice things to say about Google and the man leading the charge:

I am excited by Andy and Google’s ability to think very, very big, with the resources to make it happen.

Videos featuring the robots of Boston Dynamics have been extremely popular on YouTube in recent years. For example, the video of their four-legged, gas-powered BigDog walker has been viewed 15 million times since it was posted in 2008. In the comments, many people expressed dismay over how such robots could eventually become autonomous killing machines with the potential to murder us.

In response, Dr. Raibert has emphasized repeatedly that he does not consider his company to be a military contractor – it is merely trying to advance robotics technology. Google executives said the company would honor existing military contracts, but that it did not plan to move toward becoming a military contractor on its own. In many respects, this acquisition is likely just an attempt to acquire more talent and resources as part of a larger push.

Google’s other robotics acquisitions include companies in the United States and Japan that have pioneered a range of technologies, including software for advanced robot arms, grasping technology and computer vision. Mr. Rubin has also said that he is interested in advancing sensor technology, and has called his robotics effort a “moonshot,” though he has declined to describe specific products that might come from the project.

He has, however, also said that he expects initial product development to go on for some time, indicating that commercial Google robots of some nature will not be available for several more years. Google declined to say how much it paid for its newest robotics acquisition and said that it did not plan to release financial information on any of the other companies it has recently bought.

Considering the growing power and influence Google is having over technological research – be it in computing, robotics, neural nets or space exploration – it might not be premature to assume that they are destined to one day create the supercomputer that will try to kill us all. In short, Google will play Cyberdyne to Skynet and unleash the Terminators. Consider yourself warned, people! 😉

Source: nytimes.com

Judgement Day Update: A.I. Equivalent to Four Year Old Mind

Ever since computers were first invented, scientists and futurists have dreamed of the day when computers might be capable of autonomous reasoning and be able to surpass human beings. In the past few decades, it has become apparent that simply throwing more processing power at the problem of true artificial intelligence isn’t enough. The human brain remains several orders of magnitude more complex than the typical AI, but researchers are getting closer.

One such effort is ConceptNet 4, a semantic network being developed by MIT. This AI system contains a large store of information that is used to teach the system about various concepts. But more importantly, it is designed to process the relationship between things. Much like the Google Neural Net, it is designed to learn and grow to the point that it will be able to reason autonomously.

Recently, researchers at the University of Illinois at Chicago decided to put ConceptNet through an IQ test. To do this, they used the Wechsler Preschool and Primary Scale of Intelligence Test, which is one of the common assessments used on small children. ConceptNet passed the test, scoring on par with a four-year-old in overall IQ. However, the team points out that it would be worrisome to find a real child with lopsided scores like those received by the AI.

The system performed above average on parts of the test that have to do with vocabulary and recognizing the similarities between two items. However, the computer did significantly worse on the comprehension questions, which test a little one’s ability to understand practical concepts based on learned information. In short, the computer showed relational reasoning, but was lacking in common sense.

This is the missing piece of the puzzle for ConceptNet and those like it. An artificial intelligence like this one might have access to a lot of data, but it can’t draw on it to make rational judgements. ConceptNet might know that water freezes at 32 degrees Fahrenheit, but it doesn’t know how to get from that concept to the idea that ice is cold. This is basically common sense – humans (even children) have it and computers don’t.
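Here’s a toy illustration of the gap (in the spirit of, though vastly simpler than, the real ConceptNet): a semantic network can only hand back the facts someone stored in it, so any conclusion that requires chaining those facts together with everyday experience simply isn’t there:

```python
# A toy semantic network: knowledge is stored as (subject, relation,
# object) triples. Stored facts can be looked up; conclusions that
# require chaining facts with common sense are simply absent.

facts = {
    ("water", "freezes_at", "32 degrees Fahrenheit"),
    ("ice", "is_a", "frozen water"),
}

def knows(subject, relation):
    """Return the stored object for a (subject, relation) pair, if any."""
    for s, r, o in facts:
        if s == subject and r == relation:
            return o
    return None

print(knows("water", "freezes_at"))   # -> 32 degrees Fahrenheit
print(knows("ice", "feels"))          # -> None: "ice is cold" was never
                                      #    stored, and nothing infers it
```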

There’s no easy way to fabricate implicit information and common sense into an AI system and so far, no known machine has shown the ability. Even IBM’s Watson trivia computer isn’t capable of showing basic common sense, and though multiple solutions have been proposed – from neuromorphic chips to biomimetic circuitry – nothing is bearing fruit just yet.

But of course, the MIT research team is already hard at work on ConceptNet 5, a more sophisticated semantic network that is open source and available on GitHub. For the time being, though, it’s clear that a machine will be restricted to processing information and incapable of making basic decisions. Good thing too! The sooner they can think for themselves, the sooner they can decide we’re in their way!

Source: extremetech.com

The Future is Here: The Kenshiro Muscle-bot

It may seem like someone at Tokyo University drank their breakfast. I mean really, a robot without a head? How is it supposed to mimic our facial expressions and creep us out with its glowing red eyes? But when you consider the purpose behind the Kenshiro muscle-bot, you begin to see the rather important method behind the design.

In recent years, various robotics companies have been able to create machines that mimic the animal kingdom – from hummingbirds, to turtles, and even squirrels. However, few have managed to tackle the realm of human movement and show truly positive results. Hence the purpose of Kenshiro, a human-like musculoskeletal robot that was revealed at the Humanoids conference back in December.

For years, the University has been toying with the design for a bio-inspired robot, adding more muscles and more motors with each new design. Standing at 158 centimeters and weighing in at 50 kilograms, Kenshiro basically mimics the body of the average Japanese 12-year-old male. And with 160 pulley-like “muscles” – 50 in the legs, 76 in the trunk, 12 in the shoulder, and 22 in the neck – the robot mirrors almost all the major muscles in a human and has more muscles than any other bio-inspired humanoid out there.

And with all the progress being made in developing a fully-functional autonomous machine mind (see Google Neural Net), not to mention a face that can mimic human expressions (see the FACE), it may just be a matter of time before we need to start thinking about applying Asimov’s Three Laws of Robotics. Don’t want a Robopocalypse on our hands!


Source: spectrum.ieee.org

More Top Stories of 2012

With 2012 now officially behind us, and more and more stories trickling into this humble blogger’s account about what was accomplished therein, it seems that the time is ripe for another list of breakthroughs, firsts, and achievements that made the news during the previous year!

Last time, I listed what I saw as the top 12, only to find that there were several others, some of which I actually wrote about, that didn’t make the cut. How foolish of me! And so, to remedy this and possibly cover stories that I neglected to cover the first time around, I have produced another list of the top stories from 2012.

And much like last time, I have listed them in alphabetical order, since I couldn’t possibly assign them numbers based on importance.

Abortion Study:
Abortion has always been a contentious issue, with one side arguing for the rights of the unborn while the other argues in favor of a woman’s right to control her own body and reproduction. And as it happens, 2012 saw the publication of the first longitudinal study of what happens to women who are denied this right.

The UC San Francisco research team, Advancing New Standards in Reproductive Health (ANSIRH), studied nearly 1,000 women from diverse backgrounds across the U.S. over several years. All of these subjects were women who had sought out abortions but been denied access for one reason or another. What they discovered was that these women were more likely to slip below the poverty line, be unemployed, remain in abusive relationships, and suffer from severe stress. What this ongoing study demonstrates is that abortion is an economic issue for women, with dire consequences for those denied it.

Autism Reversed:
2012 was an especially significant year in medical advances, thanks to a team at McGill University in Montreal that announced it had successfully reversed the symptoms of autism in mice. Using mice with autism-like symptoms caused by a genetic mutation, the researchers figured out how to administer a protein that reversed the symptoms.

Naturally, this development is but a step in the long process of understanding a disorder which remains largely misunderstood. In addition, it may, in time, lead to the development of a gene therapy that will prevent autism from being triggered in children, and perhaps even weed it out of parents’ genetic code, ensuring that their children will be immune.

Commercial Space Travel:
It has long been the dream of financiers, captains of industry and enthusiasts to create commercial space travel: a means for the average person to go into space, to the Moon, and even beyond. And all at a reasonable price! This dream is still the subject of speculation and fantasy, but 2012 was a year of firsts that made it seem that much closer.

For starters, Virgin Galactic, the brain-child of Richard Branson, began flight tests on SpaceShipTwo, the rocket plane that will take paying passengers on suborbital trips to the edge of space. Then came Reaction Engines Limited with its proposed design for a hypersonic aerospace engine. And finally, there was the creation of Golden Spike, a company made up largely of former astronauts, who want to make commercial flights to the Moon a reality by 2020.

Electricity-Creating Virus:
A breakthrough virus named M13 made news in 2012 for being the first virus ever shown to turn physical activity into electricity. The key is what is known as the “piezoelectric effect,” which happens when certain materials like crystals (or viruses) emit a small amount of power when squeezed. Created by a team of scientists at the Berkeley Lab, these genetically engineered M13 viruses were able to emit enough electricity to power a small LED screen, yet pose no threat to humans. One day, all devices could be powered through the simple act of typing or walking, and buildings could be powered by absorbing people’s activity.
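For a rough sense of scale, here’s a back-of-the-envelope sketch of the direct piezoelectric effect, where squeezing a material with a force F yields a charge of roughly Q = d × F. The coefficient and forces below are illustrative placeholders, not measured values for the engineered virus:

```python
# Back-of-the-envelope piezoelectric estimate. Both numbers below are
# assumptions for illustration only.

d = 10e-12          # piezoelectric coefficient, coulombs per newton (illustrative)
force = 1.0         # newtons, roughly a firm keypress
charge = d * force  # coulombs generated per squeeze

presses_per_second = 8
current = charge * presses_per_second   # amperes, time-averaged while typing
print(f"{charge:.1e} C per press, ~{current:.1e} A while typing")
```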

Encyclopedia of DNA (ENCODE):
The publication of the human genome in the early 2000s was a major breakthrough for genetics and medical science. And in 2012, another breakthrough was achieved by researchers at USC with the publication of ENCODE – the Encyclopedia of DNA Elements Project. Unlike the previous project, these researchers were able not only to catalog the human genome’s various parts, but also to determine what those components actually do.

Among the initiative’s many findings was that so-called “junk DNA” – outlier DNA sequences that do not encode for protein sequences – is not junk at all, and is in fact responsible for such things as gene regulation, disease onset, and even human height. These findings will go a long way towards developing gene therapy, biotechnology that seeks to create artificial DNA and self-assembling structures, and even cloning.

Face Transplant:
2012 was also the year that the first full-face transplant was conducted. The recipient was Richard Norris, a man who lost significant portions of his face in a gunshot accident back in 1997. And after years of attempted reconstructive surgeries, doctors working out of the University of Maryland Medical Center performed a procedure that gave Mr. Norris a new face, teeth, tongue, and a completely new set of jaws.

Not only that, but within days of the surgery, Norris was able to move his facial muscles and jaw. Combined with the nature of the surgery itself, this is nothing short of unprecedented, and could mean a new age in which severe accident victims and veterans are able to recover fully from physical traumas and live perfectly normal, happy lives.

The Higgs Boson Discovered:
I can’t believe I didn’t include this story last time, as it is possibly the biggest story of 2012, and perhaps one of the biggest stories since the millennium! 2012 will forever go down in history as the year that the Higgs Boson was discovered. After some 40 years of ongoing research, and fears that it would never be found, the last missing piece of the Standard Model of particle physics was confirmed at last.

Not only does the existence of the Higgs Boson confirm that the Standard Model is valid, it also helps explain how other elementary particles get their mass. This heralds a new step in the advance of particle and quantum physics, and could lead to the development of quantum computing, quantum generators, and a greater understanding of the universe itself.

High-Tech Condom:
Using a revolutionary nano-fabrication process known as electrospinning, researchers at the University of Washington have produced the world’s first female condom that not only prevents pregnancy and protects against HIV, but also dissolves after use. In addition, the manufacturing method used is a step in the direction of viable nanotechnology. Score one for safe sex, public health, and a waste-free future permeated by tiny machines and smart materials! That’s a big score card…

Infinite Capacity Wireless:
2012 was also the year in which it was shown that it might be possible to boost the capacity of wireless communication almost without limit. The discovery was first made by Bo Thide of the Swedish Institute of Space Physics and some Italian colleagues in Venice, and then confirmed by a team of American and Israeli researchers who used the technique to transmit data at a rate of 2.5 terabits a second.

Conventional radio signals are transmitted on a flat plane, but Thide twisted the transmitting and receiving antennae into the shape of a corkscrew. By adding another dimension to the mix, the technique opened up a great deal of extra bandwidth. As a result, the problem of bandwidth crunches might become a thing of the past, not to mention slow download and upload speeds.
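The intuition can be put in rough numbers: each distinguishable “twist” (orbital angular momentum mode) behaves like an extra, independent channel on the same frequency, so capacity scales with the number of modes you can keep separated. The figures below are assumptions for illustration only, not the parameters of the Venice or follow-up experiments:

```python
# Rough capacity scaling with twisted (orbital angular momentum) modes.
# All figures are illustrative assumptions.

per_channel_gbps = 100          # assumed rate of one conventional channel
oam_modes = 8                   # assumed number of distinguishable twists
polarizations = 2               # each mode can also carry two polarizations

total_gbps = per_channel_gbps * oam_modes * polarizations
print(f"{total_gbps} Gbit/s (~{total_gbps / 1000:.1f} Tbit/s)")
```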

Google Neural Net:
Another first, and definitely one of the biggest headlines of 2012 as far as I was concerned. So why I forgot to include it last time is beyond me! For generations, scientists have contemplated the idea of AI and wondered how and where the first leap might be made from basic computing towards true machine intelligence. And as it turns out, Google X Labs, the same place where Project Glass was conceived, seems to have accomplished just that.

The accomplishment came when the lab created a neural network running on 16,000 processor cores, with a connectome of a billion connections. The network accomplished its first task by studying millions of images on YouTube and then demonstrating the ability to differentiate between the faces of cats and humans. This was an act of independent learning that went beyond mere image recognition, and it marks a major step towards the achievement of a fully-functional artificial intelligence.

Stem cell mammal:
For the first time in history, researchers at Kyoto University created a mouse by using eggs derived from stem cells alone. The achievement once again shows the remarkable possibilities presented by regenerative technologies like stem cells, while raising pressing ethical questions about the potential for human births in which parents might not be required.

Water in the Solar System:
2012 was also the year that an unprecedented number of discoveries were made in our solar system. In addition to all the interesting revelations made by the Curiosity rover, a number of probes discovered water on Europa, Mercury, Titan, and other Saturnian moons. Often this comes in the form of liquid rich in hydrocarbons rather than pure water, as was evident on Titan, but the discoveries remain monumental.

In addition to Titan’s methane lakes and Nile-like river, ice and organic molecules were discovered near the poles of Mercury. Evidence of water was found on Mars, indicating the existence of rivers and oceans at one time, and the Cassini space probe confirmed that Enceladus has its own subsurface ocean. All of this bodes well for the future of space exploration and colonization, where local sources of water may be used for hydrogen fuel cells, hydroponics and even drinking water.

World’s First Tractor Beam:
In another interesting first, NASA-backed scientists demonstrated in 2012 that another staple technology from Star Trek may be realizable. Yes, in addition to the warp drive, physicists David Ruffner and David Grier demonstrated that a tractor beam may also be realizable in the not-too-distant future. And given the 100 Year Starship Project and other desires to commit to space exploration, such a device could come in mighty handy!

Using a prototype optical beam to pull a small sphere of silica (30 micrometers) suspended in water, Grier and Ruffner pioneered the use of a Bessel beam, a long-established concept, to pull an object of discernible size and mass around. Naturally, NASA hopes to create a more high-powered version of the technology for use on spacecraft down the road.

*                    *                    *

Thank you once more for attending this symposium on technological breakthroughs during the year of 2012! It was a good year, wouldn’t you say? And barring the advent of killer robots sometime in the near future that unleash a nuclear holocaust on us and force us all to work as slaves, I think people will look back on these developments in a positive light.

Yes, assuming humanity can keep its wits about itself and ensure the ethical application of all we’ve accomplished, 2012 may be seen as a turning point, where incurable diseases became preventable, AIs became realizable, and limitless communications, super-fast computations, paper-thin flexible devices, green technology, commercial spaceflight, and the colonization of the Solar System all became truly viable.

Source: extremetech.com, IO9.com

Planning For Judgement Day…

Some very interesting things have been taking place in the last month, all of them concerning the possibility that humanity may someday face extinction at the hands of killer AIs. The first took place on November 19th, when Human Rights Watch and Harvard University teamed up to release a report calling for a ban on “killer robots”, a preemptive move to ensure that we as a species never develop machines that could one day turn against us.

The second came roughly a week later, when the Pentagon announced that measures were being taken to ensure that whenever robots do kill – as with drones, remote killer bots, and cruise missiles – the controller will always be a human being. Yes, while Americans were preparing for Thanksgiving, Deputy Defense Secretary Ashton Carter signed a series of instructions to “minimize the probability and consequences of failures that could lead to unintended engagements,” starting at the design stage.

X-47A Drone, the latest “hunter-killer”

And then, most recently, and perhaps in response to Harvard’s and HRW’s declaration, the University of Cambridge announced the creation of the Centre for the Study of Existential Risk (CSER). This new body, which is headed up by such luminaries as Huw Price, Martin Rees, and Skype co-founder Jaan Tallinn, will investigate whether recent advances in AI, biotechnology, and nanotechnology might eventually trigger some kind of extinction-level event. The Centre will also look at anthropogenic (human-caused) climate change, as it might not be robots that eventually kill us, but a swelteringly hot climate instead.

All of these developments stem from the same thing: ongoing developments in the fields of computer science, remotes, and AIs. Thanks in part to the creation of the Google Neural Net, increasingly sophisticated killing machines, and predictions that it is only a matter of time before they are capable of making decisions on their own, there is some worry that machines programmed to kill will be able to do so without human oversight. By creating bodies that can make recommendations on the application of technologies, it is hoped that ethical conundrums and threats can be nipped in the bud. And by legislating that human agency be the deciding factor, it is further hoped that such will never be the case.

The question is, is all this overkill, or does it make perfect sense given the direction military technology and the development of AI are taking? Or, as a third possibility, might it not go far enough? Given the possibility of a “Judgement Day”-type scenario, might it be best to ban all AIs and autonomous robots altogether? Hard to say. All I know is, it’s exciting to live in a time when such things are being seriously contemplated, and are not merely restricted to the realm of science fiction.

Immortality Is On The Way!

William Gibson must get a kick out of news items like these. According to a recent article over at IO9, it seems that an entrepreneur named Dmitry Itskov and a team of Russian scientists are developing a project that could render humans immortal by the year 2045, after a fashion. According to the plan, which is called the 2045 Initiative, they hope to create a fully functional, holographic avatar of a human being.

At the core of this avatar will be an artificial brain containing all the thoughts, memories, and emotions of the person being simulated. Given the advancements in the field of computer technology, which includes the Google Neural Net, the team estimates that it won’t be long before a construct can be made which can store the sum total of a human’s mind.

If this concept sounds familiar, then chances are you’ve been reading either from Gibson’s Sprawl Trilogy or Ray Kurzweil’s wishlist. Intrinsic to the former’s cyberpunk novels and the latter’s futurist predictions is the concept of people being able to merge their intelligence with machines for the sake of preserving their very essence for all time. Men like Kurzweil want this technology because it will ensure them the ability to live forever, while novelists like Gibson predicted that this would be something the mega-rich alone would have access to.

Which brings me to another aspect of this project. It seems that Itskov has gone to great lengths to secure investment capital to realize this dream. This included an open letter to the world’s 1,226 wealthiest citizens – everybody on Forbes Magazine’s list of the world’s richest people – offering them a chance to invest and make their mark on history. If any of them have already chosen to invest, it’s pretty obvious why. Being so rich and powerful, they can’t be too crazy about the idea of dying. In addition, the process isn’t likely to come cheap. Hence, if and when the technology is realized, the world’s richest people will be the first to create avatars of themselves.

There’s no indication of when the technology will be commercially viable for, say, the rest of us. But the team has provided a helpful infographic of when the project’s various steps will be realized. The dates are a little flexible, but they anticipate that they will be able to create a robotic copy of a human body (i.e. an android) within three to eight years. In eight to thirteen, they would be able to build a robotic body capable of housing a brain. By eighteen to twenty-three, a robotic humanoid with a mechanical brain that can house human memories will be realizable. And last, and most impressive, will be a holographic program that is capable of preserving a person’s memories and neural patterns (aka. their personality) indefinitely.

You have to admit, this kind of technology raises an awful lot of questions. For one, there are the inevitable social consequences. If the wealthiest citizens in the world are never going to die, what becomes of their spoiled children? Do they no longer inherit their parents’ wealth, or do they simply live on forever as well? And won’t it cramp their style, knowing that mommy and daddy are living forever in the box next to theirs?

What’s more, if there’s no generational turn-over, won’t this affect the whole nature and culture of wealth? It is, by its very nature, something which is passed on from generation to generation, ensuring the creation of elites and their influence over society. In this scenario, the same people are likely to exert influence generation after generation, wielding a sort of power which is virtually godlike.

And let’s not forget the immense spiritual and existential implications! Does technology like this disprove the concept of the immortal soul, or its very transcendent nature? If the human personality can be reduced to a connectome, which can in turn be digitized and stored, then what room is left for the soul? Or, alternately, if the soul really does exist, won’t people who partake in this experiment be committing the ultimate sin?

All stuff to ponder as the project either approaches realization or falls flat on its face, leaving such matters for future generations to ponder. In the meantime, we shouldn’t worry too much. As this century progresses and technology grows, we will have plenty of other chances to desecrate the soul. And given the advance of overpopulation and climate change, odds are we’ll be dying off before any of those plans reach fruition. Always look on the bright side, as they say 😉

Of Mechanical Minds

A few weeks back, a friend of mine, Nicola Higgins, directed me to an article about Google’s new neural net. Not only did she provide me with a damn interesting read, she also challenged me to write an article about the different types of robot brains. Well, Nicola, as Barney Stinson would say: “Challenge Accepted!” And I’ve got to say, it was a fun topic to get into.

After much research and plugging away at the lovely thing known as the internet (which was predicted by Vannevar Bush with his proposed “memory index” system, aka. the Memex, nearly 70 years ago, btw), I managed to compile a list of the most historically relevant examples of mechanical minds, culminating in the development of Google’s Neural Net. Here we go…

Earliest Examples:
Even in ancient times, the concept of automata and arithmetic machinery can be found in certain cultures. In the Near East, the Arab World, and as far East as China, historians have found examples of primitive machinery that was designed to perform one task or another. And even though few specimens survive, there are even examples of machines that could perform complex mathematical calculations…

Antikythera mechanism:
Invented in ancient Greece, and recovered in 1901 from a shipwreck off the island that gives it its name, the Antikythera mechanism is the world’s oldest known analog calculator, invented to calculate the positions of the heavens for ancient astronomers. However, it was not until a century later that its true complexity and significance would be fully understood. Having been built in the 1st century BCE, it would not be until the 14th century CE that machines of comparable complexity would be built again.

Although it is widely theorized that this “clock of the heavens” must have had several predecessors during the Hellenistic Period, it remains the oldest surviving analog computer in existence. After collecting all the surviving pieces, scientists were able to reconstruct the design, which essentially amounted to a large box of interconnecting gears.

Pascaline:
Otherwise known as the Arithmetic Machine and Pascal’s Calculator, this device was invented by French mathematician Blaise Pascal in 1642 and is the first known example of a mechanized mathematical calculator. Apparently, Pascal invented this device to help his father reorganize the tax revenues of the French province of Haute-Normandie, and went on to create 50 prototypes before he was satisfied.

Of those 50, nine survive and are currently on display in various European museums. In addition to giving his father a helping hand, its introduction launched the development of mechanical calculators all over Europe and then the world. Its invention is also directly linked to the development of the microprocessing circuit roughly three centuries later, which in turn is what led to the development of PCs and embedded systems.

The Industrial Revolution:
With the rise of machine production, computational technology would see a number of developments. Key to all of this was the emergence of the concept of automation and the rationalization of society. Between the 18th and late 19th centuries, as every aspect of western society came to be organized and regimented based on the idea of regular production, machines needed to be developed that could handle this task of crunching numbers and storing the results.

Jacquard Loom:
Invented by Joseph Marie Jacquard, a French weaver and merchant, in 1801, the loom that bears his name is the first programmable machine in history, relying on punch cards to input orders and turn out textiles of various patterns. Though it was based on earlier inventions by Basile Bouchon (1725), Jean Baptiste Falcon (1728) and Jacques Vaucanson (1740), it remains the most well-known example of a programmable loom and the earliest machine that was controlled through punch cards.

Though the loom did not perform computations, its design was nevertheless an important step in the development of computer hardware. Charles Babbage would use many of its features to design his Analytical Engine (see next example), and the use of punch cards would remain a staple of the computing industry well into the 20th century, until the development of the microprocessor.

Analytical Engine:
Conceived as the successor to his earlier Difference Engine, this concept was originally proposed by the English mathematician Charles Babbage. Beginning in 1822, Babbage had been contemplating designs for a machine capable of automating the process of creating error-free tables, a need which arose out of difficulties encountered by teams of mathematicians who were attempting to do it by hand.

Though he was never able to complete construction of a finished product, due to apparent difficulties with his chief engineer and funding shortages, his proposed engine incorporated an arithmetical unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first Turing-complete design for a general-purpose computer. His various trial models are currently on display in the Science Museum in London, England.

The Birth of Modern Computing:
The early 20th century saw the rise of several new developments, many of which would play a key role in the development of modern computers. The use of electricity for industrial applications was foremost, with all computers from this point forward being powered by Alternating and/or Direct Current and even using it to store information. At the same time, older ideas would remain in use but become refined, most notably the use of punch cards and tape to read instructions and store results.

Tabulating Machine:
The next development in computation came roughly 70 years later, when Herman Hollerith, an American statistician, developed a “tabulator” to help him process information from the 1890 US Census. In addition to being the first electromechanical computational device designed to assist in summarizing information (and later, accounting), it also went on to spawn the entire data processing industry.

Six years after the 1890 Census, Hollerith formed his own company, known as the Tabulating Machine Company, which was responsible for creating machines that could tabulate information based on punch cards. In 1924, after several mergers and consolidations, Hollerith’s company was renamed International Business Machines (IBM), which would go on to build the first “supercomputer” for Columbia University in 1931.

Atanasoff–Berry Computer:
Next, we have the ABC, the first electronic digital computing device in the world. Conceived in 1937, the ABC shares several characteristics with its predecessors, not the least of which is the fact that it is electrically powered and relied on punch cards to store data. However, unlike its predecessors, it was the first machine to use digital symbols to compute and the first computer to use vacuum tube technology.

These additions allowed the ABC to achieve computational speeds that were previously thought impossible for a mechanical computer. However, the machine was limited in that it could only solve systems of linear equations, and its punch card system of storage was deemed unreliable. Work on the machine also stopped when its inventor, John Vincent Atanasoff, was called away to assist in World War II cryptographic assignments. Nevertheless, the machine remains an important milestone in the development of modern computers.

Colossus:
There’s something to be said about war being the engine of innovation, and the Colossus – the machine used to break German codes in the Second World War – is certainly no exception to this rule. Due to the secrecy surrounding it, it would not have much of an influence on computing and would not be rediscovered until the 1990’s. Still, it represents a step in the development of computing, as it relied on vacuum tube technology and punch tape in order to perform calculations, and proved most adept at solving complex mathematical computations.

Originally conceived by Max Newman, the British mathematician who was chiefly responsible for breaking German codes at Bletchley Park during the war, the machine was a proposed means of combating the German Lorenz machine, which the Nazis used to encode all of their wireless transmissions. With the first model built in 1943, ten variants of the machine were produced for the Allies before war’s end, and they were instrumental in bringing down the Nazi war machine.

Harvard Mark I:
Also known as the “IBM Automatic Sequence Controlled Calculator (ASCC)”, the Mark I was an electro-mechanical computer that was devised by Howard H. Aiken, built by IBM, and officially presented to Harvard University in 1944. Due to its success at performing long, complex calculations, it inspired several successors, most of which were used by the US Navy and Air Force for the purpose of running computations.

According to IBM’s own archives, the Mark I was the first computer that could execute long computations automatically. Built within a steel frame 51 feet (16 m) long and eight feet high, and using 500 miles (800 km) of wire with three million connections, it was the industry’s largest electromechanical calculator and the largest computer of its day.

Manchester SSEM:
Nicknamed “Baby”, the Manchester Small-Scale Experimental Machine (SSEM) was developed in 1948 and was the world’s first computer to incorporate stored-program architecture. Whereas previous computers relied on punch tape or cards to store calculations and results, “Baby” was able to do this electronically.

Although its abilities were still modest – with a 32-bit word length, a memory of 32 words, and only capable of performing subtraction and negation without additional software – it was still revolutionary for its time. In addition, the SSEM is closely associated with the work of Alan Turing – another British cryptographer, whose theories on the “Turing machine” and the development of the algorithm would form the basis of modern computer technology.
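Subtraction and negation may sound crippling, but they’re enough to build the rest of arithmetic in software – which is what the phrase “without additional software” hints at. A trivial sketch of the trick:

```python
# Addition synthesized from the two operations the SSEM's hardware
# actually provided: a + b is just a - (0 - b).

def sub(a, b):
    return a - b          # the only arithmetic operation in hardware

def neg(a):
    return sub(0, a)      # negation is subtraction from zero

def add(a, b):
    return sub(a, neg(b)) # a - (-b) == a + b

print(add(19, 23))        # -> 42
```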

The Nuclear Age to the Digital Age:
With the end of World War II and the birth of the Nuclear Age, technology once again took several explosive leaps forward. This could be seen in the realm of computer technology as well, where wartime developments and commercial applications grew by leaps and bounds. In addition to processor speeds and stored memory multiplying exponentially every few years, the overall size of computers got smaller and smaller. This, some theorized, would lead to the development of computers that were perfectly portable and smart enough to pass the “Turing Test”. Imagine!

IBM 7090:
The 7090 model, which was released in 1959, is often referred to as a second-generation computer because, unlike its predecessors, which were either electromechanical or used vacuum tubes, this machine relied on transistors to conduct its computations. In addition, it was an improvement on earlier models in that it used a 36-bit word length and could store up to 32K (32,768) words – a modest increase in processing over the SSEM, but roughly a thousand-fold increase in terms of storage capacity.

And of course, these improvements were mirrored in the fact that the 7090 series was also significantly smaller than previous versions, being about the size of a desk rather than an entire room. The machines were also cheaper, and were quite popular with NASA, Caltech and MIT.

PDP-8:
In keeping with the trend towards miniaturization, 1965 saw the development of the first commercial minicomputer by the Digital Equipment Corporation (DEC). Though large by modern standards (about the size of a minibar), the PDP-8, also known as the “Straight-8”, was a major improvement over previous models, and therefore a commercial success.

In addition, later models also incorporated advanced concepts like the Real-Time Operating System and preemptive multitasking. Unfortunately, early models still relied on paper tape in order to process information. It was not until later that the computer was upgraded to take advantage of programming languages such as FORTRAN, BASIC, and DIBOL.

Intel 4004:
Founded in California in 1968, the Intel Corporation quickly moved to the forefront of computational hardware development with the creation of the 4004 – the world’s first commercially available microprocessor, a complete central processing unit on a single chip – in 1971. Continuing the trend towards smaller computers, the development of this processor paved the way for personal computers, desktops, and laptops.

Incorporating the then-new silicon gate technology, Intel was able to create a processor that allowed for a higher number of transistors and therefore a faster processing speed than ever possible before. On top of all that, they were able to pack it into a much smaller frame, which ensured that computers built with the new CPU would be smaller, cheaper and more ergonomic. Thereafter, Intel would be a leading designer of integrated circuits and processors, supplanting even giants like IBM.

Apple I:
The 60’s and 70’s seemed to be a time for the birthing of future giants. Less than a decade after the first CPU was created, another upstart came along with an equally significant development. Named Apple and started in 1976 by three men – Steve Jobs, Steve Wozniak, and Ronald Wayne – the company’s first marketed product was a “personal computer” (PC) which Wozniak built himself.

One of the most distinctive features of the Apple I was its built-in terminal circuitry, which meant that all it needed was a keyboard and an ordinary television set. Competing models of the day, such as the Altair 8800, required a hardware extension to allow connection to a computer terminal or a teletypewriter machine. The company quickly took off and began introducing an upgraded version (the Apple II) just a year later. As a result, the Apple I remains a scarce commodity and a very valuable collector’s item.

The Future:
The last two decades of the 20th century also saw far more than their fair share of developments. From the CPU and the PC came desktop computers, laptop computers, PDAs, tablet PCs, and networked computers. This last creation, aka. the Internet, was the greatest leap by far, allowing computers from all over the world to be networked together and share information. And with the exponential increase in information sharing that occurred as a result, many believe that it’s only a matter of time before wearable computers, fully portable computers, and artificial intelligences are possible. Ah, which brings me to the last entry in this list…

The Google Neural Network:
From mechanical dials to vacuum tubes, from CPUs to PCs and laptops, computers have come a hell of a long way since the days of Ancient Greece. Hell, even within the last century, the growth in this one area of technology has been explosive, leading some to conclude that it was just a matter of time before we created a machine that was capable of thinking all on its own.

Well, my friends, that day appears to have dawned. Already, Nicola and I have both blogged about this development, so I shan’t waste time going over it again. Suffice it to say, this new program, which thus far has been able to pick out pictures of cats on its own, contains the necessary neural capacity to achieve 1/1000th of what the human brain is capable of. Sounds small, but given the exponential growth in computing, it won’t be long before that gap is narrowed substantially.
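As a rough back-of-the-envelope – assuming, and it’s only an assumption, a Moore’s-law-style doubling of capacity every couple of years – closing a 1,000-fold gap takes surprisingly few doublings:

```python
# Years to close a 1,000x gap under an assumed doubling period.
import math

gap = 1000                  # the network is ~1/1000th of the brain (per the article)
years_per_doubling = 2      # assumed doubling period, not a prediction

doublings_needed = math.log2(gap)
years = doublings_needed * years_per_doubling
print(f"about {doublings_needed:.1f} doublings, or roughly {years:.0f} years")
```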

Who knows what else the future will hold?  Optical computers that use not electrons but photons to move information about? Quantum computers, capable of connecting machines not only across space, but also time? Biocomputers that can be encoded directly into our bodies through our mitochondrial DNA? Oh, the possibilities…

Creating machines in the likeness of the human mind. Oh Brave New World that hath such machinery in it. Cool… yet scary!