Breaking Moore’s Law: Graphene Nanoribbons

Ask a technician or a computer science major, and they will likely tell you that the next great leap in computing will only come once Moore’s Law is overcome. This law, which states that the number of transistors on a single chip doubles every 18 months to two years, is headed towards a bottleneck. For decades, CPUs and computer chips have been getting smaller, but they are fast approaching their physical limitations.
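
To make that doubling concrete, here is a quick projection of transistor counts under a 24-month doubling period. The starting count and the period are illustrative assumptions, not industry figures:

```python
# Project transistor counts under Moore's Law-style doubling.
transistors = 1_000_000_000   # assumed starting count: a 1-billion-transistor chip
doubling_months = 24          # assumed doubling period (the "two years" end of the range)

for year in range(0, 11, 2):  # project a decade ahead
    count = transistors * 2 ** (year * 12 / doubling_months)
    print(f"year {year:>2}: ~{count:,.0f} transistors")
```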

One of the central problems arising from the Moore’s Law bottleneck has to do with the materials we use to create microchips. Short of continued miniaturization, there is simply no way to keep placing more and more components on a microchip. And copper wires can only be miniaturized so much before they lose the ability to conduct electricity effectively.

This has led scientists and engineers to propose that new materials be used, and graphene appears to be the current favorite. Researchers at the University of California, Berkeley are busy working on a form of so-called nanoribbon graphene that could increase the density of transistors on a computer chip by as much as 10,000 times.

Graphene, for those who don’t know, is a miracle material that is basically a sheet of carbon only one layer of atoms thick. This two-dimensional physical configuration gives it some incredible properties, like extreme electrical conductivity at room temperature. Researchers have been working on producing high-quality sheets of the material, but nanoribbons demand a precision that current fabrication techniques struggle to deliver.

Work on nanoribbons over the past decade has revolved around using lasers to carefully sculpt ribbons 10 or 20 atoms wide from larger sheets of graphene. On the scale of billionths of an inch, that calls for incredible precision. If the cut is off by even a few carbon atoms, it can completely alter the properties of the ribbon, preventing it from working as a semiconductor at room temperature.

Luckily, Berkeley chemist Felix Fischer thinks he might have found a solution. Rather than carving ribbons out of larger sheets like a sculptor, Fischer has begun creating nanoribbons from carbon atoms using a chemical process. Basically, he’s working on a new way to produce graphene that happens to already be in the right configuration for nanoribbons.

He begins by synthesizing rings of carbon atoms similar in structure to benzene, then heats the molecules to encourage them to form a long chain. A second heating step strips away most of the hydrogen atoms, freeing up the carbon to form bonds in a honeycomb-like graphene structure. This process allows Fischer and his colleagues to control where each atom of carbon goes in the final nanoribbon.

At the scale Fischer is making them, graphene nanoribbons could be capable of transporting electrons thousands of times faster than a traditional copper conductor. They could also be packed very close together, since a single ribbon is 1/10,000th the thickness of a human hair. Thus, if the process is perfected and scaled up, everything from CPUs to storage technology could be much faster and smaller.
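
As a quick back-of-envelope check on that scale claim (assuming a typical human hair is roughly 100 micrometers across, a figure not given in the article):

```python
# Rough sanity check of the "1/10,000th of a human hair" claim.
hair_nm = 100_000              # assumed hair diameter: ~100 micrometers, in nanometers
ribbon_nm = hair_nm / 10_000   # the article's claimed ratio
print(f"ribbon width: ~{ribbon_nm:.0f} nm")  # -> ~10 nm, a few dozen atoms across
```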

Sources: extremetech.com

The Future of Computing: Brain-Like Computers

It’s no secret that computer scientists and engineers are looking to the human brain as a means of achieving the next great leap in computer evolution. Already, machines are being developed that rely on “machine blood”, can continue working despite being damaged, and can recognize images and speech. And soon, a computer chip that is capable of learning from its mistakes will also be available.

The new computing approach, already in use by some large technology companies, is based on the biological nervous system – specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.

The first commercial version of the new kind of computer chip is scheduled to be released in 2014, and is the result of a collaborative effort between I.B.M. and Qualcomm, as well as a Stanford research team. This “neuromorphic processor” can not only automate tasks that once required painstaking programming, but can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.

In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That could have enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.

For example, computer vision systems only “recognize” objects that can be identified by the statistics-oriented algorithms programmed into them. An algorithm is like a recipe, a set of step-by-step instructions to perform a calculation. But last year, Google researchers were able to get a machine-learning algorithm, known as a “Google Neural Network”, to perform an identification task (involving cats) without supervision.
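
Google’s actual system was a large-scale deep neural network, but a far simpler algorithm can illustrate the same idea of finding structure in data without labels. Here is a minimal k-means clustering sketch on made-up 2-D points:

```python
# Unsupervised learning in miniature: k-means finds two groups in
# unlabeled data. The points and parameters are invented for illustration.
import random

random.seed(1)
points = ([(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(50)]
          + [(random.gauss(5, 1), random.gauss(5, 1)) for _ in range(50)])

centers = random.sample(points, 2)  # start from two random points

for _ in range(10):  # alternate assignment and update steps
    clusters = [[] for _ in centers]
    for p in points:  # assign each point to its nearest center
        dists = [(p[0] - cx) ** 2 + (p[1] - cy) ** 2 for cx, cy in centers]
        clusters[dists.index(min(dists))].append(p)
    new_centers = []
    for c, cl in zip(centers, clusters):
        if cl:  # move each center to the mean of its assigned points
            new_centers.append((sum(x for x, _ in cl) / len(cl),
                                sum(y for _, y in cl) / len(cl)))
        else:   # keep the old center if nothing was assigned to it
            new_centers.append(c)
    centers = new_centers

print("discovered cluster centers:",
      [(round(x, 1), round(y, 1)) for x, y in centers])
```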

This past June, the company said it had used those neural network techniques to develop a new search service to help customers find specific photos more accurately. And this past November, researchers at Stanford University came up with a new algorithm that could give computers the power to more reliably interpret language. It’s known as the Neural Analysis of Sentiment (NaSent).

A similar concept known as Deep Learning is also looking to endow software with a measure of common sense. Google is using this technique with their voice recognition technology to aid in performing searches. In addition, the social media giant Facebook is looking to use deep learning to help them improve Graph Search, an engine that allows users to search activity on their network.

Until now, the design of computers has been dictated by ideas originated by the mathematician John von Neumann about 65 years ago. Microprocessors perform operations at lightning speed, following instructions programmed using long strings of binary code (0s and 1s). The information is stored separately in what is known as memory, either in the processor itself, in adjacent storage chips or in higher-capacity magnetic disk drives.
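
A minimal sketch of that von Neumann pattern, with a toy instruction set invented purely for illustration: a processor loop fetches instructions from a separate memory, decodes them, and executes them one at a time.

```python
# Toy von Neumann machine: program and data live in memory,
# and the processor runs a fetch-decode-execute loop over them.
memory = [
    ("LOAD", 7),     # put the constant 7 in the accumulator
    ("ADD", 5),      # add the constant 5
    ("STORE", 0),    # write the accumulator to data[0]
    ("HALT", None),
]
data = [0]
accumulator = 0
pc = 0  # program counter

while True:
    op, arg = memory[pc]  # fetch the next instruction
    pc += 1
    if op == "LOAD":      # decode and execute
        accumulator = arg
    elif op == "ADD":
        accumulator += arg
    elif op == "STORE":
        data[arg] = accumulator
    elif op == "HALT":
        break

print("data:", data)  # -> [12]
```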

By contrast, the new processors consist of electronic components that can be connected by wires that mimic biological synapses. Because they are based on large groups of neuron-like elements, they are known as neuromorphic processors, a term credited to the California Institute of Technology physicist Carver Mead, who pioneered the concept in the late 1980s.

These processors are not “programmed” in the conventional sense. Instead, the connections between the circuits are “weighted” according to correlations in data that the processor has already “learned.” Those weights are then altered as data flows into the chip, causing the neuron-like elements to change their values and to “spike.” This, in turn, strengthens some connections and weakens others, reacting much the same way the human brain does.
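
A rough sketch of that learning rule, and emphatically not IBM’s actual design: a single leaky integrate-and-fire neuron whose synaptic weights are strengthened or weakened by the spikes flowing through it, rather than set by an explicit program. All constants here are made up for illustration.

```python
# Minimal leaky integrate-and-fire neuron with a Hebbian-style weight update.
import numpy as np

rng = np.random.default_rng(0)

n_inputs = 4
weights = rng.uniform(0.1, 0.5, n_inputs)  # synaptic weights: learned, not programmed
threshold = 1.0                            # membrane potential needed to spike
potential = 0.0                            # integrate-and-fire state
leak = 0.9                                 # decay per time step (assumed value)
lr = 0.05                                  # learning rate (assumed value)

for step in range(100):
    inputs = (rng.random(n_inputs) < 0.3).astype(float)  # random input spikes
    potential = leak * potential + weights @ inputs      # integrate weighted input
    if potential >= threshold:                           # the neuron "spikes"
        potential = 0.0
        # Strengthen synapses whose inputs were active at the spike, weaken the rest.
        weights += lr * (inputs - 0.5)
        weights = np.clip(weights, 0.0, 1.0)

print("learned weights:", weights.round(2))
```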

In the words of Dharmendra Modha, an I.B.M. computer scientist who leads the company’s cognitive computing research effort:

Instead of bringing data to computation as we do today, we can now bring computation to data. Sensors become the computer, and it opens up a new way to use computer chips that can be everywhere.

One great advantage of the new approach is its ability to tolerate glitches, whereas traditional computers cannot work around the failure of even a single transistor. With the biological designs, the algorithms are ever-changing, allowing the system to continuously adapt and work around failures to complete tasks. Another benefit is energy efficiency, another inspiration drawn from the human brain.

The new computers, which are still based on silicon chips, will not replace today’s computers, but augment them, at least for the foreseeable future. Many computer designers see them as coprocessors, meaning they can work in tandem with other circuits that can be embedded in smartphones and the centralized computers that run computing clouds.

However, the new approach is still limited, thanks to the fact that scientists still do not fully understand how the human brain functions. As Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, put it:

We have no clue. I’m an engineer, and I build things. There are these highfalutin theories, but give me one that will let me build something.

Luckily, there are efforts underway designed to remedy this, with the specific intention of directing that knowledge towards the creation of better computers and AIs. One such effort is the National Science Foundation-financed Center for Brains, Minds and Machines, a new research center based at the Massachusetts Institute of Technology, with Harvard and Cornell.

Another is the California Institute for Telecommunications and Information Technology (aka. Calit2), a center dedicated to innovation in nanotechnology, life sciences, information technology, and telecommunications. As Larry Smarr, an astrophysicist and director of the Institute, put it:

We’re moving from engineering computing systems to something that has many of the characteristics of biological computing.

And last, but certainly not least, is the Human Brain Project, an international group of 200 scientists from 80 different research institutions, based in Lausanne, Switzerland. Having secured the $1.6 billion they need to fund their efforts, these researchers will spend the next ten years conducting research that cuts across multiple disciplines.

This initiative, which has been compared to the Large Hadron Collider, will attempt to reconstruct the human brain piece-by-piece and gradually bring these cognitive components into an overarching supercomputer. The expected result of this research will be new platforms for “neuromorphic computing” and “neurorobotics,” allowing for the creation of computing and robotic architectures that mimic the functions of the human brain.

When future generations look back on this decade, no doubt they will refer to it as the birth of the neuromorphic computing revolution. Or maybe just the Neuromorphic Revolution for short, but that sort of depends on the outcome. With so many technological revolutions well underway, it is difficult to imagine how the future will look back and characterize this time.

Perhaps, as Charles Stross suggests, it will simply be known as “the teens”, that time in pre-Singularity history when it was all starting to come together, but had yet to explode and violently change everything we know. I for one am looking forward to being around to witness it all!

Sources: nytimes.com, technologyreview.com, calit2.net, humanbrainproject.eu

Year-End Tech News: Stanene and Nanoparticle Ink

The year 2013 was also a boon for the high-tech industry, especially where electronics and additive manufacturing were concerned. In fact, several key developments took place last year that may help scientists and researchers to move beyond Moore’s Law, as well as ring in a new era of manufacturing and production.

In terms of computing, developers have long feared that Moore’s Law – which states that the number of transistors on integrated circuits doubles approximately every two years – could be reaching a bottleneck. While the law (really it’s more of an observation) has certainly held true for the past forty years, it has been understood for some time that the use of silicon and copper wiring would eventually impose limits.

Basically, one can only miniaturize circuits made from these materials so much before resistance rises sharply and the components become too fragile to be effective. Because of this, researchers have been looking for replacement materials to substitute for the silicon that makes up the 1 billion transistors, and the one hundred or so kilometers of copper wire, that currently make up an integrated circuit.

Various materials have been proposed, such as graphene, carbyne, and even carbon nanotubes. But now, a group of researchers from Stanford University and the SLAC National Accelerator Laboratory in California are proposing another material. It’s known as stanene: a theorized material fabricated from a single layer of tin atoms that could conduct electricity with extreme efficiency, even at high temperatures.

Compared to graphene, which is stupendously conductive, the researchers at Stanford and the SLAC claim that stanene should be a topological insulator. Topological insulators, due to their arrangement of electrons/nuclei, are insulators on their interior, but conductive along their edge and/or surface. Being only a single atom in thickness along its edges, this topological insulator could, in theory, conduct electricity with 100% efficiency.

The Stanford and SLAC researchers also say that stanene would not only have 100%-efficiency edges at room temperature, but with a bit of fluorine, would also have 100% efficiency at temperatures of up to 100 degrees Celsius (212 Fahrenheit). This is very important if stanene is ever to be used in computer chips, which have operational temps of between 40 and 90 C (104 and 194 F).

Though the claim of perfect efficiency seems outlandish to some, others admit that near-perfect efficiency is possible. And while no stanene has been fabricated yet, fashioning some on a small scale should not prove especially difficult, as the necessary technology already exists. However, it will likely be a very, very long time until stanene is used in the production of computer chips.

In the realm of additive manufacturing (aka. 3-D printing), several major developments were made during the year of 2013. This one came from Harvard University, where a materials scientist named Jennifer Lewis – using current technology – has developed new “inks” that can be used to print batteries and other electronic components.

3-D printing is already at work in the field of consumer electronics with casings and some smaller components being made on industrial 3D printers. However, the need for traditionally produced circuit boards and batteries limits the usefulness of 3D printing. If the work being done by Lewis proves fruitful, it could make fabrication of a finished product considerably faster and easier.

The Harvard team is calling the material “ink,” but in fact, it’s a suspension of nanoparticles in a dense liquid medium. In the case of the battery printing ink, the team starts with a vial of deionized water and ethylene glycol and adds nanoparticles of lithium titanium oxide. The mixture is homogenized, then centrifuged to separate out any larger particles, and the battery ink is formed.

This process is possible because of the unique properties of the nanoparticle suspension. It is mostly solid as it sits in the printer ready to be applied, then begins to flow like a liquid when pressure is increased. Once it leaves the custom printer nozzle, it returns to a solid state. From this, Lewis’ team was able to lay down multiple layers of this ink with an accuracy of 100 nanometers.
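
The behavior described here, solid at rest but flowing under pressure, is known as shear thinning, and a simple power-law fluid model captures the idea. The constants below are illustrative, not measured values for the Harvard team’s ink:

```python
# Power-law (shear-thinning) fluid model: viscosity = K * shear_rate^(n - 1).
# With n < 1, the material thins as shear (printing pressure) increases.
# K and n are assumed values chosen only to illustrate the trend.

def apparent_viscosity(shear_rate, K=50.0, n=0.3):
    return K * shear_rate ** (n - 1)

for rate in (0.01, 1.0, 100.0):  # at rest -> in the nozzle -> extruding
    print(f"shear rate {rate:>8.2f} 1/s -> "
          f"viscosity {apparent_viscosity(rate):10.2f} Pa*s")
```

At a near-zero shear rate the model’s viscosity is enormous (the ink sits like a solid); under high shear it drops by orders of magnitude (the ink flows through the nozzle).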

The tiny batteries being printed are about 1 mm square, and could pack even higher energy density than conventional cells thanks to their intricate construction. This approach is much more realistic than other metal printing technologies because it happens at room temperature, with no need for microwaves, lasers or high temperatures at all.

More importantly, it works with existing industrial 3D printers that were built to work with plastics. Because of this, battery production can be done cheaply using printers that cost on the order of a few hundred dollars, and not industrial-sized ones that can cost upwards of $1 million.

Smaller computers, and smaller, more efficient batteries. It seems that miniaturization, which some feared would be plateauing this decade, is safe for the foreseeable future! So I guess we can keep counting on our electronics getting smaller, harder to use, and easier to lose for the next few years. Yay for us!

Sources: extremetech.com, (2)

The Future is Here: Carbon Nanotube Computers

Silicon Valley is undergoing a major shift, one which may require it to rethink its name. This is thanks in no small part to the efforts of a team based at Stanford that is seeking to create the first basic computer built around carbon nanotubes rather than silicon chips. In addition to changing how computers are built, this is likely to improve their efficiency and performance.

What’s more, this change may deal a serious blow to the law of computing known as Moore’s Law. For decades now, the exponential acceleration of technology – which has taken us from room-size computers run by punched paper cards to handheld devices with far more computing power – has depended on the ability to place more and more transistors onto an individual chip.

The result of this ongoing trend in miniaturization has been devices that are becoming smaller, more powerful, and cheaper. The law used to describe this – though “basic rule” would be a more apt description – states that the number of transistors on a chip has been doubling every 18 months or so since the dawn of the information age. This is what is known as “Moore’s Law.”

However, this trend could be coming to an end, mainly because it’s becoming increasingly difficult, expensive and inefficient to keep jamming more tiny transistors on a chip. In addition, there are the inevitable physical limitations involved, as miniaturization can only go on for so long before it becomes unfeasible.

Carbon nanotubes, which are long chains of carbon atoms thousands of times thinner than a human hair, have the potential to be more energy-efficient and outperform computers made with silicon components. Using a technique that involved “burning” off imperfect nanotubes and weeding out misaligned ones from the nanotube matrix with an algorithm, the team built a very basic computer with 178 transistors that can do tasks like counting and number sorting.

In a recent release from the university, Stanford professor Subhasish Mitra said:

People have been talking about a new era of carbon nanotube electronics moving beyond silicon. But there have been few demonstrations of complete digital systems using this exciting technology. Here is the proof.

Naturally, this computer is more of a proof of concept than a working prototype. There are still a number of problems with the idea, such as the fact that nanotubes don’t always grow in straight lines and cannot always “switch off” like a regular transistor. The Stanford team’s computer also has limited power due to the limited facilities they had to work with, which did not have access to industrial fabrication tools.

All told, their computer is only about as powerful as an Intel 4004, the first single-chip silicon microprocessor, which was released in 1971. But given time, we can expect more sophisticated designs to emerge, especially if design teams have access to top-of-the-line facilities to build prototypes.

And this research team is hardly alone in this regard. Last year, computing giant IBM managed to create their own transistors using carbon nanotubes and also found that they outperformed transistors made of silicon. What’s more, these transistors measured less than ten nanometers across, and were able to operate using very low voltage.

Similarly, a research team from Northwestern University in Evanston, Illinois managed to create something very similar. In their case, this consisted of a logic gate – the fundamental circuit that all integrated circuits are based on – using carbon nanotubes to create transistors that operate in a CMOS-like architecture. And much like IBM’s and the Stanford team’s transistors, it functioned at very low power levels.
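
To see what a logic gate actually computes, here is a toy software model (not the Northwestern team’s device) of a CMOS NAND gate: a pull-up network of p-type switches and a complementary pull-down network of n-type switches, exactly one of which conducts for any input.

```python
# Toy CMOS NAND gate. In CMOS, p-type devices conduct when their input
# is 0 and n-type devices conduct when their input is 1; the two networks
# are complementary, so no static current flows between supply and ground.

def cmos_nand(a: int, b: int) -> int:
    pull_up = (a == 0) or (b == 0)     # parallel p-type network to VDD
    pull_down = (a == 1) and (b == 1)  # series n-type network to ground
    assert pull_up != pull_down        # exactly one network conducts
    return 1 if pull_up else 0

for a in (0, 1):
    for b in (0, 1):
        print(f"NAND({a}, {b}) = {cmos_nand(a, b)}")
```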

What this demonstrated is that carbon nanotube transistors and other computer components are not only feasible, but are able to outperform transistors many times their size while using a fraction of the power. Hence, it is probably only a matter of time before a fully-functional computer is built – using carbon nanotube components – that will supersede silicon systems and throw Moore’s Law out the window.

Sources: news.cnet.com, (2), fastcolabs.com

IBM Creates First Photonic Microchip

For many years, optical computing has been a subject of great interest for engineers and researchers. As opposed to the current crop of computers which rely on the movement of electrons in and out of transistors to do logic, an optical computer relies on the movement of photons. Such a computer would confer obvious advantages, mainly in the realm of computing speed since photons travel much faster than electrical current.

While the concept and the technology are relatively straightforward, no one had been able to develop photonic components that were commercially viable. All that changed this past December, as IBM became the first company to integrate electrical and optical components on the same chip. As expected, when tested, this new chip was able to transmit data significantly faster than current state-of-the-art copper and optical networks.

But what was surprising was just how large the difference really was. Whereas current interconnects are generally measured in gigabits per second, IBM’s new chip is already capable of shuttling data around at terabits per second. In other words, over a thousand times faster than what we’re currently used to. And since it will be no big task or expense to replace the current generation of electrical components with photonic ones, we could be seeing this chip taking the place of our standard CPUs really soon!
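
A quick back-of-envelope check of that claim, using round figures of 1 Tb/s for the photonic link and 1 Gb/s for a typical electrical one (illustrative numbers, not IBM’s published specs):

```python
# Compare link speeds and the time to move a 100 GB dataset over each.
photonic_bps = 1e12    # assumed photonic link: 1 terabit per second
electrical_bps = 1e9   # assumed electrical link: 1 gigabit per second

print(f"speedup: {photonic_bps / electrical_bps:.0f}x")

dataset_bits = 100e9 * 8  # 100 gigabytes, in bits
print(f"electrical: {dataset_bits / electrical_bps:.0f} s")  # -> 800 s
print(f"photonic:   {dataset_bits / photonic_bps:.1f} s")    # -> 0.8 s
```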

This comes after a decade of research and an announcement made back in 2010, specifically that IBM Research was tackling the concept of silicon nanophotonics. And since they’ve proven they can create the chips commercially, they could be on the market within just a couple of years. This is certainly big news for supercomputing and the cloud, where limited bandwidth between servers is a major bottleneck for those with a need for speed!

Cool as this is, there are actually two key breakthroughs to boast about here. First, IBM has managed to build a monolithic silicon chip that integrates both electrical (transistors, capacitors, resistors) and optical (modulators, photodetectors, waveguides) components. Monolithic means that the entire chip is fabricated from a single crystal of silicon on a single production line, and the optical and electrical components are mixed up together to form an integrated circuit.

Second, and perhaps more importantly, IBM was able to manufacture these chips using the same process they use to produce the CPUs for the Xbox 360, PS3, and Wii. This was not easy, according to internal sources, but it means they can produce this new chip using their standard manufacturing process, which will not only save them money in the long run, but make the conversion process that much cheaper and easier. From all outward indications, it seems that IBM spent most of the last two years trying to ensure that this aspect of the process would work.

Excited yet? Or perhaps concerned that this boost in speed will mean even more competition and the need to constantly upgrade? Well, given the history of computing and technological progress, both of these sentiments would be right on the money. On the one hand, this development may herald all kinds of changes and possibilities for research and development, with breakthroughs coming within days and weeks instead of years.

At the same time, it could mean that the rest of us will be even more hard-pressed to keep our software and hardware current, which can be frustrating as hell. As it stands, Moore’s Law states that it takes between 18 months and two years for CPUs to double in speed. Now imagine that dwindling to just a few weeks, and you’ve got a whole new ballgame!

Source: extremetech.com