Breaking Moore’s Law: Graphene Nanoribbons

Ask a technician or a computer science major, and they will likely tell you that the next great leap in computing will only come once Moore’s Law is overcome. This law, which observes that the number of transistors on a single chip doubles every 18 months to two years, is headed towards a bottleneck. For decades, CPUs and computer chips have been getting smaller, but they are fast approaching their physical limits.

One of the central problems arising from the Moore’s Law bottleneck has to do with the materials we use to create microchips. Short of continued miniaturization, there is simply no way to keep placing more and more components on a microchip. And copper wires can only be miniaturized so much before they lose the ability to conduct electricity effectively.

This has led scientists and engineers to propose that new materials be used, and graphene appears to be the current favorite. Researchers at the University of California at Berkeley are busy working on a form of so-called nanoribbon graphene that could increase the density of transistors on a computer chip by as much as 10,000 times.

Graphene, for those who don’t know, is a miracle material that is basically a sheet of carbon only one layer of atoms thick. This two-dimensional physical configuration gives it some incredible properties, like extreme electrical conductivity at room temperature. Researchers have been working on producing high quality sheets of the material, but nanoribbons ask more of science than it can currently deliver.

Work on nanoribbons over the past decade has revolved around using lasers to carefully sculpt ribbons 10 or 20 atoms wide from larger sheets of graphene. On the scale of billionths of an inch, that calls for incredible precision. If the makers are even a few carbon atoms off, it can completely alter the properties of the ribbon, preventing it from working as a semiconductor at room temperature.

Luckily, Berkeley chemist Felix Fischer thinks he might have found a solution. Rather than carving ribbons out of larger sheets like a sculptor, Fischer has begun building nanoribbons up from individual carbon atoms using a chemical process. Basically, he’s working on a new way to produce graphene that happens to already be in the right configuration for nanoribbons.

He begins by synthesizing rings of carbon atoms similar in structure to benzene, then heats the molecules to encourage them to form a long chain. A second heating step strips away most of the hydrogen atoms, freeing up the carbon to form bonds in a honeycomb-like graphene structure. This process allows Fischer and his colleagues to control where each atom of carbon goes in the final nanoribbon.

At the scale Fischer is producing them, graphene nanoribbons could transport electrons thousands of times faster than a traditional copper conductor. They could also be packed very close together, since a single ribbon is about 1/10,000th the thickness of a human hair. Thus, if the process is perfected and scaled up, everything from CPUs to storage technology could become much faster and smaller.
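Just to sanity-check that hair comparison, here is a quick back-of-the-envelope calculation in Python. The hair diameter and carbon spacing below are typical textbook values I am assuming for illustration; they do not come from the article:

```python
# Rough scale check on the "1/10,000th the thickness of a human hair" claim.
# Assumes a typical hair diameter of ~80 micrometres (an illustrative value).
hair_nm = 80_000                                     # hair thickness in nanometres
ribbon_nm = hair_nm / 10_000
print(f"Implied ribbon width: {ribbon_nm:.0f} nm")   # ~8 nm

# A ribbon 10-20 carbon atoms across (carbon-carbon spacing ~0.142 nm)
# works out to a few nanometres wide, so the comparison is in the right
# ballpark rather than an exact figure.
```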

Source: extremetech.com

Cyberwars: NSA Building Quantum Computer

As documents illustrating the NSA’s clandestine behavior continue to be leaked, the extent to which the agency has gone to gain supremacy over cyberspace is becoming ever more clear. Thanks to a new series of documents released by Snowden, it now seems that these efforts included two programs whose purpose was to create a “useful quantum computer” capable of breaking all known forms of classical encryption.

According to the documents, which were published by The Washington Post earlier this month, there are at least two programs that deal with quantum computers and their use in breaking classical encryption — “Penetrating Hard Targets” and “Owning the Net.” The first program is funded to the tune of $79.7 million and includes efforts to build “a cryptologically useful quantum computer” that can:

sustain and enhance research operations at NSA/CSS Washington locations, including the Laboratory for Physical Sciences facility in College Park, MD.

The second program, Owning the Net, deals with developing new methods of intercepting communications, including the use of quantum computers to break encryption. Given that quantum machinery is considered the next great leap in computer science, offering unprecedented speed and the ability to conduct operations many times more efficiently than normal computers, this should not come as a surprise.

Such a computer would give the NSA unprecedented access to encrypted files and communications, enabling it to break any protective cipher, access anyone’s data with ease, and mount cyber attacks with impunity. But a working model would also be vital for defensive purposes. Much in the same way that the Cold War involved an ongoing escalation in nuclear armament production, cybersecurity wars are also subject to constant one-upmanship.

In short, if China, Russia, or some other potentially hostile power were to obtain a quantum computer before the US, all of its encrypted information would be laid bare. Under the circumstances, and given its mandate to protect the US’s infrastructure, data and people from harm, the NSA would much rather the US come into possession of one first. Hence the attention dedicated to the issue: whoever builds the world’s first quantum computer will enjoy full-court dominance for a time.

The mathematical, cryptographic, and quantum mechanical communities have long known that quantum computing should be able to crack classical encryption very easily. To crack RSA, the world’s prevailing cryptosystem, you need to be able to factor enormous numbers into their prime components, a task that is very difficult with a normal, classical-physics CPU, but might be very easy for a quantum computer. But of course, the emphasis is still very much on the word might, as no one has built a fully functioning multi-qubit quantum computer yet.
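To see why factoring is the whole ballgame for RSA, here is a toy sketch in Python using deliberately tiny primes. Real keys use primes hundreds of digits long, and none of the numbers below reflect an actual deployment; the point is simply that whoever can factor the public modulus can reconstruct the private key:

```python
# Toy RSA with tiny primes, purely to illustrate why factoring breaks it.
# Real keys use primes hundreds of digits long, which classical computers
# cannot factor in any reasonable time; Shor's algorithm on a large enough
# quantum computer could.
p, q = 61, 53                # secret primes (tiny, for illustration only)
n = p * q                    # public modulus
phi = (p - 1) * (q - 1)
e = 17                       # public exponent
d = pow(e, -1, phi)          # private exponent (modular inverse, Python 3.8+)

message = 42
ciphertext = pow(message, e, n)        # encrypt with the public key (n, e)
print(pow(ciphertext, d, n))           # decrypt with d -> 42

# An attacker who can factor n recovers p and q, and with them d:
for candidate in range(2, n):
    if n % candidate == 0:             # trial division: fine at this size,
        p_found = candidate            # hopeless for 2048-bit moduli
        break
q_found = n // p_found
d_cracked = pow(e, -1, (p_found - 1) * (q_found - 1))
print(pow(ciphertext, d_cracked, n))   # also 42 -- encryption broken
```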

As for when a working quantum computer might appear, no one can say for sure. But the smart money is apparently on soon: researchers are getting to the point where coherence at the single-qubit level is feasible, allowing them to move on to the trickier business of stringing multiple fully-entangled qubits together, along with the error correction and fault tolerance measures that multi-qubit setups require.

But from what it has published so far, the Laboratory for Physical Sciences – which is carrying out the NSA’s quantum computing work under contract – doesn’t seem to be leading the pack. In this respect, it’s IBM, with its superconducting waveguide-cavity qubits, that appears to be closer to realizing a quantum computer, with other major IT firms and their own quantum computing efforts not far behind.

Despite what this recent set of leaks might suggest, then, the public should take comfort in knowing that the NSA is not ahead of the rest of the industry. In reality, something like a working quantum computer would be so hugely significant that it would be impossible for the NSA to develop it internally and keep it a secret. And by the time the NSA does have a working quantum computer with which to intercept all of our encrypted data, it won’t be the only one, which would deny it dominance in the field.

So really, these latest leaks ought not to worry people too much; instead, they should put the NSA’s ongoing struggle to control cyberspace in perspective. One might go so far as to say that the NSA is trying to remain relevant in an age where it is becoming increasingly outmatched. With billions of terabytes traversing the globe on any given day and trillions of devices and sensors creating a “second skin” of information over the planet, no one organization is capable of controlling or monitoring it all.

So to those in the habit of dredging up 1984 every time they hear about the latest NSA and domestic surveillance scandal, I say: Suck on it, Big Brother!

Source: wired.com

The Future of Computing: Graphene Chips and Transistors

The basic law of computer evolution, known as Moore’s Law, holds that roughly every two years, the number of transistors on a computer chip will double. What this means is that every couple of years, computer speeds will double, effectively making the previous technology obsolete. Recently, analysts have refined this period to about 18 months or less, as the rate of increase itself seems to be increasing.
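To see what that doubling actually adds up to, here is a quick sketch in Python. The starting transistor count and the ten-year window are placeholder assumptions chosen purely for illustration:

```python
# Project transistor counts under Moore's Law for two doubling periods.
# Starting count and time span are illustrative assumptions only.
start_count = 1_000_000_000    # assume a 1-billion-transistor chip today
years = 10

for label, period_months in [("24-month doubling", 24), ("18-month doubling", 18)]:
    doublings = (years * 12) / period_months
    projected = start_count * 2 ** doublings
    print(f"{label}: ~{projected:.2e} transistors after {years} years")

# 24-month doubling -> ~32x in a decade; 18-month doubling -> ~100x.
```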

This explosion in computing power is due to ongoing improvements in the field of miniaturization. As the component pieces get smaller and smaller, engineers are able to cram more and more of them onto chips of the same size. However, it does make one wonder just how far it will all go. Certainly there is a limit to how small things can get before they cease working.

According to the International Technology Roadmap for Semiconductors (ITRS), a standard established by the industry’s top experts, that limit will be reached in 2015. By then, engineers will have reached the threshold of 22 nanometers, below which the copper wiring that currently connects the billions of transistors in a modern CPU or GPU becomes unworkable due to resistance and other mechanical issues.

However, recent revelations about the material known as graphene show that it is not hampered by the same mechanical restrictions. As such, it could theoretically be scaled down to the point where it is just a few nanometers across, allowing for the creation of computer chips that are orders of magnitude more dense and powerful, while consuming less energy.

Back in 2011, IBM built what it called the first graphene integrated circuit, but in truth, only some of the transistors and inductors were made of graphene, while other standard components (like copper wiring) were still employed. But now, a team at the University of California Santa Barbara (UCSB) has proposed the first all-graphene chip, where the transistors and interconnects are monolithically patterned on a single sheet of graphene.

In their research paper, “Proposal for all-graphene monolithic logic circuits,” the UCSB researchers say that:

[D]evices and interconnects can be built using the ‘same starting material’ — graphene… all-graphene circuits can surpass the static performances of the 22nm complementary metal-oxide-semiconductor devices.

To build an all-graphene IC (pictured here), the researchers propose exploiting one of graphene’s more interesting qualities: that depending on its width, it behaves in different ways. Narrow ribbons of graphene are semiconducting, ideal for making transistors, while wider ribbons are metallic, ideal for gates and interconnects.

For now, the UCSB team’s design is simply a computer model that should technically work, but which hasn’t been built yet. In theory, though, with the worldwide efforts to improve high-quality graphene production and patterning, it should only be a few years before an all-graphene integrated circuit is built. As for full-scale commercial production, that is likely to take a decade or so.

When that happens, though, another explosive period of growth in computing speed, coupled with lower power consumption, is to be expected. From there, subsequent leaps are likely to involve carbon nanotube components, true quantum computing, and perhaps even biotechnological circuits. Oh, the places it will all go!

Source: extremetech.com

The Birth of an Idea: The Computer Coat!

I’ve been thinking… which is not something novel for me; it just so happens that my thoughts have been a bit more focused lately. Specifically, I have an idea for an invention: something futuristic and practical that could very well be part of our collective computing future. With all the developments in the field of personal computing lately, and my ongoing efforts to keep track of them, I hoped I might eventually come up with an idea of my own.

Consider the growth in smartphones and personal digital assistants. In the last few years, we’ve seen companies produce working prototypes for paper-thin, flexible, and durable electronics. Then consider the growth in projection touchscreens, portable computing, and augmented reality. Could it be that there’s some middle ground here for something that incorporates all of the above?

Ever since I saw Pranav Mistry’s demonstration of a wearable computer that could interface with others, project its screen onto any surface, and be operated through simple gestures from the user, I’ve been looking for a way to work this into fiction. But in the years since Mistry gave his talk for TED.com and showed off his “Sixth Sense Technology”, the possibilities have grown and been refined.

And then something happened. While at school, I noticed one of the kids wearing a jacket that had a hole near the lapel with a headphones icon above it. The little tunnel worked into the coat was designed to keep the cord of your iPod or phone safe and tucked away, and it got me thinking! Wires running through a coat, inset electrical gear, all the advancements made in the last few years. Who thinks about this kind of stuff, anyway? Who cares, it was the birth of an idea!

For example, it’s no longer necessary to carry computer components that are big and bulky on your person. With thin, flexible electronics, much like the new Papertab, all the components one would need could be thin enough and flexible enough to be worked into the inlay of a coat. These could include the CPU, a wireless router, and a hard drive.

Paper-thin zinc batteries, also under development, could be worked into the coat as well, with a power cord connected to them so they could be jacked into a socket and recharged. And since they too are paper-thin, they could be expected to move and shift with the coat, along with all the other electronics, without fear of breakage or malfunction.

And of course, there would be the screen itself: a small camera and projector in the collar could cast the display onto any flat surface, where it could be interfaced with. Or, forget the projector entirely and just connect the whole thing to a set of glasses. Google’s doing a good job on those, as is DARPA with their development of AR contact lenses. Either one would do in a pinch, and could be connected to the coat itself wirelessly or by wire.

Addendum: Shortly after publishing this, I realized that a power cord is totally unnecessary! Thanks to two key technologies, it could be possible to recharge the batteries using a combination of flexible graphene solar panels and some M13 piezoelectric virus packs. The former could be attached to the back, where they would be wired into the coat’s power system, and the M13 packs could be placed in the arms, where the user’s movement would be harnessed to generate electricity. Total self-sufficiency, baby!

And then how about a wrist segment housing some basic controls, such as the power switch and a little screen? This little screen could act as a prompt, telling you when you have emails, texts, tweets, and updates available for download. Oh, and let’s not forget a USB port, where you can plug in an external hard drive, a flash drive, or just hook up to another computer.

So that’s my idea, in a nutshell. I plan to work it into my fiction at the first available opportunity, as I consider it an idea that hasn’t been proposed yet – at least not without freaky nanotech being involved! Look for it, and in the meantime, check out the video of Pranav Mistry’s TED talk from 2010, when he first proposed Sixth Sense Tech. Oh, and just in case, you heard about the Computer Coat here first, patent pending!

IBM Creates First Photonic Microchip

For many years, optical computing has been a subject of great interest for engineers and researchers. As opposed to the current crop of computers which rely on the movement of electrons in and out of transistors to do logic, an optical computer relies on the movement of photons. Such a computer would confer obvious advantages, mainly in the realm of computing speed since photons travel much faster than electrical current.

While the concept and technology are relatively straightforward, no one had been able to develop photonic components that were commercially viable. All that changed this past December, as IBM became the first company to integrate electrical and optical components on the same chip. As expected, when tested, this new chip was able to transmit data significantly faster than current state-of-the-art copper and optical networks.

But what was surprising was just how big the difference really is. Whereas current interconnects are generally measured in gigabits per second, IBM’s new chip is already capable of shuttling data around at terabits per second – in other words, over a thousand times faster than what we’re currently used to. And since it will be no big task or expense to replace the current generation of electrical components with photonic ones, we could be seeing this chip taking the place of our standard CPUs really soon!
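To put the gigabit-to-terabit jump in perspective, here is a quick transfer-time comparison in Python. The 1 TB file size and the two link speeds are round numbers I am assuming for illustration, not IBM’s published figures:

```python
# Compare transfer times at gigabit-class vs terabit-class link speeds.
# File size and link speeds are illustrative round numbers.
file_bits = 1e12 * 8            # a 1 TB file, in bits

for label, bits_per_second in [("10 Gb/s copper link", 10e9),
                               ("1 Tb/s photonic link", 1e12)]:
    seconds = file_bits / bits_per_second
    print(f"{label}: {seconds:.0f} s to move 1 TB")

# ~800 s (about 13 minutes) vs ~8 s -- a hundredfold gap at these numbers,
# and a thousandfold if you compare 1 Gb/s against 1 Tb/s.
```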

This comes after a decade of research and an announcement made back in 2010, specifically that IBM Research was tackling the concept of silicon nanophotonics. And since they’ve proven they can create the chips commercially, they could be on the market within just a couple of years. This is certainly big news for supercomputing and the cloud, where limited bandwidth between servers is a major bottleneck for those with a need for speed!

Cool as this is, there are actually two key breakthroughs to boast about here. First, IBM has managed to build a monolithic silicon chip that integrates both electrical (transistors, capacitors, resistors) and optical (modulators, photodetectors, waveguides) components. Monolithic means that the entire chip is fabricated from a single crystal of silicon on a single production line, and the optical and electrical components are mixed up together to form an integrated circuit.

Second, and perhaps more importantly, IBM was able to manufacture these chips using the same process it uses to produce the CPUs for the Xbox 360, PS3, and Wii. This was not easy, according to internal sources, but it means the new chip can be made on a standard production line, which will not only save money in the long run but also make the conversion process that much cheaper and easier. From all outward indications, it seems that IBM spent most of the last two years trying to ensure that this aspect of the process would work.

Excited yet? Or perhaps concerned that this boost in speed will mean even more competition and the need to constantly upgrade? Well, given the history of computing and technological progress, both of these sentiments would be right on the money. On the one hand, this development may herald all kinds of changes and possibilities for research and development, with breakthroughs coming within days and weeks instead of years.

At the same time, it could mean that the rest of us will be even more hard pressed to keep our software and hardware current, which can be frustrating as hell. As it stands, Moore’s Law holds that it takes between 18 months and two years for CPUs to double in speed. Now imagine that dwindling to just a few weeks, and you’ve got a whole new ballgame!

Source: extremetech.com

Of Mechanical Minds

A few weeks back, a friend of mine, Nicola Higgins, directed me to an article about Google’s new neural net. Not only did she provide me with a damn interesting read, she also challenged me to write an article about the different types of robot brains. Well, Nicola, as Barney Stinson would say: “Challenge Accepted!” And I’ve got to say, it was a fun topic to get into.

After much research and plugging away at the lovely thing known as the internet (which was predicted by Vannevar Bush with his proposed Memory-Index system, aka the Memex, nearly 70 years ago, btw) I managed to compile a list of the most historically relevant examples of mechanical minds, culminating in the development of Google’s Neural Net. Here we go…

Earliest Examples:
Even in ancient times, the concept of automata and arithmetic machinery can be found in certain cultures. In the Near East, the Arab world, and as far east as China, historians have found examples of primitive machinery that was designed to perform one task or another. And even though few specimens survive, there are examples of machines that could perform complex mathematical calculations…

Antikythera mechanism:
Invented in ancient Greece, and recovered in 1901 from a shipwreck off the island that bears the same name, the Antikythera mechanism is the world’s oldest known analog calculator, built to calculate the positions of the heavens for ancient astronomers. However, it was not until a century later that its true complexity and significance would be fully understood. Having been built in the 1st century BCE, it would not be until the 14th century CE that machines of comparable complexity would be built again.

Although it is widely theorized that this “clock of the heavens” must have had several predecessors during the Hellenistic Period, it remains the oldest surviving analog computer in existence. After collecting all the surviving pieces, scientists were able to reconstruct the design (pictured at right), which essentially amounted to a large box of interconnecting gears.

Pascaline:
Otherwise known as the Arithmetic Machine and Pascal’s Calculator, this device was invented by the French mathematician Blaise Pascal in 1642 and is the first known example of a mechanized mathematical calculator. Apparently, Pascal invented this device to help his father reorganize the tax revenues of the French province of Haute-Normandie, and went on to create 50 prototypes before he was satisfied.

Of those 50, nine survive and are currently on display in various European museums. In addition to giving his father a helping hand, its introduction launched the development of mechanical calculators all over Europe and then the world. Its invention is also directly linked to the development of the microprocessor roughly three centuries later, which in turn is what led to the development of PCs and embedded systems.

The Industrial Revolution:
With the rise of machine production, computational technology would see a number of developments. Key to all of this was the emergence of the concept of automation and the rationalization of society. Between the 18th and late 19th centuries, as every aspect of western society came to be organized and regimented based on the idea of regular production, machines needed to be developed that could handle this task of crunching numbers and storing the results.

Jacquard Loom:
Invented by Joseph Marie Jacquard, a French weaver and merchant, in 1801, the Loom that bears his name is the first programmable machine in history, relying on punch cards to input instructions and turn out textiles of various patterns. Though it was based on earlier inventions by Basile Bouchon (1725), Jean Baptiste Falcon (1728) and Jacques Vaucanson (1740), it remains the most well-known example of a programmable loom and the earliest machine that was controlled through punch cards.

Though the Loom did not perform computations, the design was nevertheless an important step in the development of computer hardware. Charles Babbage would use many of its features to design his Analytical Engine (see next example), and the use of punch cards would remain a staple of the computing industry well into the 20th century, until the development of the microprocessor.

Analytical Engine:
Not to be confused with his earlier “Difference Engine”, this concept was originally proposed by the English mathematician Charles Babbage. Beginning in 1822, Babbage contemplated designs for a machine that would be capable of automating the creation of error-free mathematical tables, a project that arose out of the difficulties encountered by teams of mathematicians attempting to produce them by hand; the far more ambitious Analytical Engine grew out of that work.

Though he was never able to complete construction of a finished product, due to apparent difficulties with the chief engineer and funding shortages, his proposed engine incorporated an arithmetical unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first Turing-complete design for a general-purpose computer. His various trial models (like that featured at left) are currently on display in the Science Museum in London, England.

The Birth of Modern Computing:
The early 20th century saw the rise of several new developments, many of which would play a key role in the development of modern computers. The use of electricity for industrial applications was foremost, with all computers from this point forward being powered by alternating and/or direct current, and some even using it to store information. At the same time, older ideas would remain in use but become refined, most notably the use of punch cards and tape to read instructions and store results.

Tabulating Machine:
The next development in computation came roughly 70 years later, when Herman Hollerith, an American statistician, developed a “tabulator” to help him process information from the 1890 US Census. In addition to being the first electromechanical computational device designed to assist in summarizing information (and later, accounting), it also went on to spawn the entire data processing industry.

Six years after the 1890 Census, Hollerith formed his own company, known as the Tabulating Machine Company, which was responsible for creating machines that could tabulate information based on punch cards. In 1924, after several mergers and consolidations, Hollerith’s company was renamed International Business Machines (IBM), which would go on to build the first “supercomputer” for Columbia University in 1931.

Atanasoff–Berry Computer:
Next, we have the ABC, the first electronic digital computing device in the world. Conceived in 1937, the ABC shared several characteristics with its predecessors, not the least of which was the fact that it was electrically powered and relied on punch cards to store data. However, unlike its predecessors, it was the first machine to use digital symbols to compute and the first computer to use vacuum tube technology.

These additions allowed the ABC to achieve computational speeds that were previously thought impossible for a mechanical computer. However, the machine was limited in that it could only solve systems of linear equations, and its punch card system of storage was deemed unreliable. Work on the machine also stopped when its inventor, John Vincent Atanasoff, was called away to assist with World War II cryptographic assignments. Nevertheless, the machine remains an important milestone in the development of modern computers.

Colossus:
There’s something to be said about war being the engine of innovation, and the Colossus – the machine used to break German codes in the Second World War – is certainly no exception to that rule. Due to the secrecy surrounding it, it would not have much of an influence on computing and would not be rediscovered until the 1990s. Still, it represents a step in the development of computing, as it relied on vacuum tube technology and punched tape to perform calculations, and proved most adept at solving complex mathematical computations.

Originally conceived by Max Newman, the British mathematician who was chiefly responsible for breaking German codes at Bletchley Park during the war, the machine was proposed as a means of defeating the German Lorenz machine, which the Nazis used to encode all of their wireless transmissions. With the first model built in 1943, ten variants of the machine were produced for the Allies before war’s end, and they proved instrumental in bringing down the Nazi war machine.

Harvard Mark I:
Also known as the “IBM Automatic Sequence Controlled Calculator (ASCC)”, the Mark I was an electro-mechanical computer that was devised by Howard H. Aiken, built by IBM, and officially presented to Harvard University in 1944. Due to its success at performing long, complex calculations, it inspired several successors, most of which were used by the US Navy and Air Force for the purpose of running computations.

According to IBM’s own archives, the Mark I was the first computer that could execute long computations automatically. Built within a steel frame 51 feet (16 m) long and eight feet high, and using 500 miles (800 km) of wire with three million connections, it was the industry’s largest electromechanical calculator and the largest computer of its day.

Manchester SSEM:
Nicknamed “Baby”, the Manchester Small-Scale Experimental Machine (SSEM) was developed in 1948 and was the world’s first computer to incorporate stored-program architecture. Whereas previous computers relied on punch tape or cards to store calculations and results, “Baby” was able to do this electronically.

Although its abilities were still modest – with a 32-bit word length, a memory of 32 words, and only capable of performing subtraction and negation without additional software – it was still revolutionary for its time. In addition, the SSEM also had the distinction of being closely associated with the work of Alan Turing – another British cryptographer, whose theories on the “Turing Machine” and development of the algorithm would form the basis of modern computer technology.

The Nuclear Age to the Digital Age:
With the end of World War II and the birth of the Nuclear Age, technology once again took several explosive leaps forward. This could be seen in the realm of computer technology as well, where wartime developments and commercial applications grew by leaps and bounds. In addition to processor speeds and stored memory multiplying exponentially every few years, the overall size of computers got smaller and smaller. This, some theorized, would lead to the development of computers that were perfectly portable and smart enough to pass the “Turing Test”. Imagine!

IBM 7090:
The 7090 model, which was released in 1959, is often referred to as a second-generation computer because, unlike its predecessors, which were either electromechanical or used vacuum tubes, this machine relied on transistors to conduct its computations. In addition, it was an improvement on earlier models in that it used a 36-bit word length and could store up to 32K (32,768) words, a modest increase in word size over the SSEM, but a roughly thousand-fold increase in terms of storage capacity.

And of course, these improvements were mirrored in the fact that the 7090 series was also significantly smaller than previous versions, being about the size of a desk rather than an entire room. The machines were also cheaper and were quite popular with NASA, Caltech and MIT.

PDP-8:
In keeping with the trend towards miniaturization, 1965 saw the development of the first commercial minicomputer by the Digital Equipment Corporation (DEC). Though large by modern standards (about the size of a minibar), the PDP-8, also known as the “Straight-8”, was a major improvement over previous models, and therefore a commercial success.

In addition, later models also incorporated advanced concepts like real-time operating systems and preemptive multitasking. Unfortunately, early models still relied on paper tape in order to process information. It was not until later that the computer was upgraded to take advantage of programming languages such as FORTRAN, BASIC, and DIBOL.

Intel 4004:
Founded in California in 1968, the Intel Corporation quickly moved to the forefront of computational hardware development with the creation of the 4004, the world’s first single-chip microprocessor (CPU), in 1971. Continuing the trend towards smaller computers, the development of this processor paved the way for personal computers, desktops, and laptops.

Incorporating the then-new silicon gate technology, Intel was able to create a processor that allowed for a higher number of transistors, and therefore a faster processing speed, than had ever been possible before. On top of all that, they were able to pack it into a much smaller frame, which ensured that computers built with the new CPU would be smaller, cheaper and more ergonomic. Thereafter, Intel would be a leading designer of integrated circuits and processors, supplanting even giants like IBM.

Apple I:
The ’60s and ’70s seemed to be a time for the birthing of future giants. Less than a decade after the first CPU was created, another upstart came along with an equally significant development. Founded by three men in 1976 – Steve Jobs, Steve Wozniak, and Ronald Wayne – Apple’s first marketed product was a “personal computer” (PC) which Wozniak built himself.

One of the most distinctive features of the Apple I was the fact that it had a built-in keyboard. Competing models of the day, such as the Altair 8800, required a hardware extension to allow connection to a computer terminal or a teletypewriter machine. The company quickly took off and introduced an upgraded version (the Apple II) just a year later. As a result, the Apple I remains a scarce commodity and a very valuable collector’s item.

The Future:
The last two decades of the 20th century also saw far more than their fair share of developments. From the CPU and the PC came desktop computers, laptop computers, PDAs, tablet PCs, and networked computers. This last creation, aka the Internet, was the greatest leap by far, allowing computers from all over the world to be networked together and share information. And with the exponential increase in information sharing that occurred as a result, many believe that it’s only a matter of time before wearable computers, fully portable computers, and artificial intelligences become possible. Ah, which brings me to the last entry in this list…

The Google Neural Network:
From mechanical dials to vacuum tubes, from CPUs to PCs and laptops, computers have come a hell of a long way since the days of Ancient Greece. Hell, even within the last century, the growth in this one area of technology has been explosive, leading some to conclude that it was just a matter of time before we created a machine that was capable of thinking all on its own.

Well, my friends, that day appears to have dawned. Nicola and I have already blogged about this development, so I shan’t waste time going over it again. Suffice it to say, this new program, which has so far been able to pick out pictures of cats on its own, contains the necessary neural capacity to achieve 1/1000th of what the human brain is capable of. Sounds small, but given the exponential growth in computing, it won’t be long before that gap is narrowed substantially.

Who knows what else the future will hold?  Optical computers that use not electrons but photons to move information about? Quantum computers, capable of connecting machines not only across space, but also time? Biocomputers that can be encoded directly into our bodies through our mitochondrial DNA? Oh, the possibilities…

Creating machines in the likeness of the human mind. Oh Brave New World that hath such machinery in it. Cool… yet scary!

The Future is Here: The Google Neural Net!

I came across a recent story at BBC News, one which makes me both hopeful and fearful. It seems that a team of researchers working for Google has completed work on an artificial neural net that is capable of recognizing pictures of cats. Designed and built to mimic the human brain, this may very well be the first instance of a computer exercising the faculty of autonomous reasoning – the very thing that we humans are so proud (and jealous) of!

The revolutionary new system was a collaborative effort between Google’s X Labs division and Professor Andrew Ng of the AI Lab at Stanford University, California. As opposed to image recognition software, which tells computers to look for specific features in a target picture before being presented with it, the Google machine knew nothing about the images in advance. Instead, it relied on its 16,000 processing cores to run software that simulated the workings of a biological neural network with about one billion connections.
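For anyone curious what “learning without being told what to look for” means in practice, here is a minimal sketch of unsupervised feature learning: a tiny autoencoder written in Python with NumPy. This is a toy running on random data, not Google’s actual system (which used deep sparse autoencoders spread across those 16,000 cores), and every size and parameter below is an illustrative assumption:

```python
import numpy as np

# Toy autoencoder: learns to compress and reconstruct its inputs without
# any labels, so the hidden units end up encoding recurring patterns
# (features) in the data. Same flavour of unsupervised learning as the
# Google system, at a vastly smaller scale and on fake data.
rng = np.random.default_rng(0)
X = rng.random((200, 64))          # 200 fake "image patches", 64 pixels each

n_hidden = 16                       # illustrative sizes, not Google's
W1 = rng.normal(0, 0.1, (64, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, 64))
b2 = np.zeros(64)
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(500):
    # Forward pass: encode to 16 hidden features, then reconstruct 64 pixels.
    H = sigmoid(X @ W1 + b1)
    X_hat = sigmoid(H @ W2 + b2)

    # Squared reconstruction error and its gradients (backpropagation).
    err = X_hat - X
    dZ2 = err * X_hat * (1 - X_hat)
    dW2 = H.T @ dZ2 / len(X)
    db2 = dZ2.mean(axis=0)
    dH = dZ2 @ W2.T
    dZ1 = dH * H * (1 - H)
    dW1 = X.T @ dZ1 / len(X)
    db1 = dZ1.mean(axis=0)

    # Gradient descent step on all weights and biases.
    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2
    b2 -= lr * db2

print("final reconstruction error:", np.mean(err ** 2))
# Each column of W1 is a learned "feature detector" -- in Google's case,
# one such unit famously ended up responding to cat faces.
```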

Now, according to various estimates, the human cerebral cortex contains at least 10^10 neurons linked by 10^14 synaptic connections – or in lay terms, about 10 billion neurons with roughly 100 trillion connections. That means this artificial brain has less than a thousandth of the complexity of the organic, human one. Not quite as complex, but it’s a start… A BIG start really!

For decades – hell, even centuries and millennia – human beings have contemplated what it would take to make an autonomous automaton. Even with all the growth in computers’ processing speed and storage, the question of how to make the leap from a smart machine to a truly “intelligent” one has remained a tricky one. Judging from all the speculation and representations in fiction, everyone seemed to surmise that some sort of artificial neural net would be involved, something that could mimic the process of forming connections, encoding experiences into a physical (i.e. digital) form, and expanding based on ongoing learning.

Naturally, Google has plans for applications of this new system. Apparently, the company is hoping that it will help with its indexing systems and with language translation. Giving the new guy the boring jobs, huh? I wonder what’s going to happen when the newer, smarter models start coming out? Yeah, I can foresee new generations emerging over time, much as new generations of iPods with larger and larger storage capacities have come out every year for the past decade, or faster and faster CPUs over the past three decades. Yes, this could very well represent the next great technological race, as foreseen by such men as Eliezer Yudkowsky, Nick Bostrom, and Ray Kurzweil.

In short, Futurists will rejoice, Alarmists will be afraid, and science fiction writers will exploit it for all it’s worth! Until next time, keep your eyes peeled for any red-eyed robots. That seems to be the first warning sign of an impending robocalypse!