Powered by the Sun: Mirrored Solar Dishes

In the race to develop alternative energy sources, solar power is the undeniable top contender. In addition to being effectively infinite and renewable, so much sunlight hits the Earth each day that the world's entire electricity needs could be met by harvesting only 2% of the solar energy in the Sahara Desert. Of course, this goal has remained elusive due to the problem of cost, both in the manufacture of solar panels and in their installation.

But researchers at IBM think they're one step closer to making solar universally accessible with a low-cost system that can concentrate sunlight 2,000 times. The system uses a dish covered in mirrors to focus sunlight onto a small area, and it follows the sun throughout the day to catch the most light. Other concentrated solar power systems do the same thing, but where a typical system converts only around 20% of the incoming light into usable energy, this one can convert 80%.

This not only ensures a much larger yield, but also makes the energy it harvests cheap. Bruno Michel, the manager for advanced thermal packaging at IBM Research, believes the design could be three times cheaper than "comparable" systems. Officially, his estimate is that the cost will work out to less than 10 cents per kilowatt-hour, with a per-watt cost significantly cheaper than the $0.74 per watt of standard solar.

But as he explains, using simple materials also helps:

The reflective material we use for the mirror facets are similar to that of potato chip bags. The reinforced concrete is also similar to what is being used to build bridges around the world. So outside of the receiver, which contains the photovoltaic chips, we are using standard materials.

A few small high-tech parts will be built in Switzerland (where the prototype is currently being produced), but the main parts of the equipment could easily be built locally, wherever it's being used. It's especially well-suited to sunny areas that happen to be dry. As the system runs, it can use excess heat that would normally be wasted to desalinate water. Hence, a large installation could provide not only abundant electricity, but clean drinking water for an entire town.

A combined system of this kind could be an incredible boon to economies in parts of the world that are surrounded by deserts, such as North Africa or Mongolia. But given the increasing risk of worldwide droughts caused by climate change, it may also become a necessity in the developed world. Here, such dishes could not only provide clean energy that would reduce our carbon footprint, but also process water for agricultural use, thus combating the problem on two fronts.

IBM researchers are currently working with partners at Airlight Energy, ETH-Zurich, and Interstate University of Applied Sciences Buchs NTB to finish building a large prototype, which they anticipate will be ready by the end of this summer. After testing, they hope to start production at scale within 18 months. Combined with many, many other plans to make panels cheaper and more effective, we can expect to be seeing countless options for solar appearing in the near future.

And if recent years are any indication, we can expect solar usage to double before the year is out.

Sources: fastcoexist.com

Cyberwars: NSA Building Quantum Computer

As documents that illustrate the NSA's clandestine behavior continue to be leaked, the extent to which the agency has been going to gain supremacy over cyberspace is becoming ever more clear. Thanks to a new series of documents released by Snowden, it now seems that these efforts included two programs whose purpose was to create a "useful quantum computer" capable of breaking all known forms of classical encryption.

According to the documents, which were published by The Washington Post earlier this month, there are at least two programs that deal with quantum computers and their use in breaking classical encryption — “Penetrating Hard Targets” and “Owning the Net.” The first program is funded to the tune of $79.7 million and includes efforts to build “a cryptologically useful quantum computer” that can:

sustain and enhance research operations at NSA/CSS Washington locations, including the Laboratory for Physical Sciences facility in College Park, MD.

The second program, Owning the Net, deals with developing new methods of intercepting communications, including the use of quantum computers to break encryption. Given that quantum machinery is considered the next great leap in computer science, offering unprecedented speed and the ability to conduct operations many times more efficiently than normal computers, this should not come as a surprise.

Such a computer would give the NSA unprecedented access to encrypted files and communications, enabling it to break any protective cipher, access anyone's data with ease, and mount cyber attacks with impunity. But a working model would also be vital for defensive purposes. Much in the same way that the Cold War involved an ongoing escalation in nuclear armament, cybersecurity wars are subject to constant one-upmanship.

In short, if China, Russia, or some other potentially hostile power were to obtain a quantum computer before the US, all of its encrypted information would be laid bare. Under the circumstances, and given its mandate to protect the US's infrastructure, data and people from harm, the NSA would much rather come into possession of one first. Hence why so much attention is dedicated to the issue, since whoever builds the world's first quantum computer will enjoy full-court dominance for a time.

The mathematical, cryptographic, and quantum mechanical communities have long known that quantum computing should be able to crack classical encryption very easily. To crack RSA, the world's prevailing cryptosystem, you need to be able to factor very large numbers into their prime factors — a task that is intractable with a normal, classical-physics CPU, but might be very easy for a quantum computer. Of course, the emphasis is still very much on the word might, as no one has built a fully functioning multi-qubit quantum computer yet.
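
To make the link between factoring and RSA concrete, here is a minimal, purely illustrative Python sketch (a textbook toy key, far too small to be secure): the public modulus is the product of two primes, and anyone who can factor that product can immediately reconstruct the private key, which is exactly the step a quantum computer running Shor's algorithm is expected to make easy.

```python
# Toy illustration (NOT real cryptography): RSA's security rests on the
# difficulty of factoring n = p * q. Factor n, and the private key falls out.
from math import gcd

# Key generation with deliberately tiny textbook primes
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)        # Euler's totient of n
e = 17                         # public exponent
assert gcd(e, phi) == 1        # e must be coprime with phi
d = pow(e, -1, phi)            # private exponent (needs Python 3.8+)

# Encrypt a small message with the public key (n, e)
message = 42
cipher = pow(message, e, n)

# An attacker who can factor n (trivial here, infeasible classically for
# 2048-bit keys) recovers p and q, recomputes phi, and rebuilds d.
def factor(n):
    for candidate in range(2, int(n ** 0.5) + 1):
        if n % candidate == 0:
            return candidate, n // candidate

p2, q2 = factor(n)
d_recovered = pow(e, -1, (p2 - 1) * (q2 - 1))

print(pow(cipher, d_recovered, n))   # 42: the "secret" message is exposed
```

With a real 2048-bit modulus, the brute-force factor() loop above would run effectively forever on classical hardware; Shor's algorithm on a large enough quantum machine would make that step tractable, which is the whole point of the agency's interest.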

As for when that might be, no one can say for sure. But the smart money is apparently anticipating one soon: researchers are getting to the point where coherence at the single-qubit level is becoming feasible, allowing them to move on to the trickier business of stringing multiple fully-entangled qubits together, along with the error-checking and fault-tolerance measures that go with multi-qubit setups.

But from what it has published so far, the Laboratory for Physical Sciences – which is carrying out the NSA's quantum computing work under contract – doesn't seem to be leading the pack. In this respect, it is IBM, with its superconducting waveguide-cavity qubits, that appears closest to realizing a quantum computer, with other major IT firms and their own quantum computing efforts not far behind.

Despite what this recent set of leaks might suggest, then, the public should take comfort in knowing that the NSA is not ahead of the rest of the industry. In reality, something like a working quantum computer would be so hugely significant that it would be impossible for the NSA to develop it internally and keep it a secret. And by the time the NSA does have a working quantum computer to intercept all of our encrypted data, it won't be the only one, which would ensure it lacked dominance in this field.

So really, these latest leaks ought not to worry people too much, and should instead put the NSA's ongoing struggle to control cyberspace in perspective. One might go so far as to say that the NSA is trying to remain relevant in an age where it is becoming increasingly outmatched. With billions of terabytes traversing the globe on any given day and trillions of devices and sensors creating a "second skin" of information over the planet, no one organization is capable of controlling or monitoring it all.

So to those in the habit of dredging up 1984 every time they hear about the latest NSA and domestic surveillance scandal, I say: Suck on it, Big Brother!

Source: wired.com

The Future of Computing: Brain-Like Computers

It's no secret that computer scientists and engineers are looking to the human brain as a means of achieving the next great leap in computer evolution. Already, machines are being developed that rely on electronic blood, can continue working despite being damaged, and can recognize images and speech. And soon, a computer chip that is capable of learning from its mistakes will also be available.

The new computing approach, already in use by some large technology companies, is based on the biological nervous system – specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.

The first commercial version of the new kind of computer chip is scheduled to be released in 2014, and is the result of a collaborative effort between I.B.M. and Qualcomm, as well as a Stanford research team. This "neuromorphic processor" can not only automate tasks that once required painstaking programming, but can also sidestep and even tolerate errors, potentially making the term "computer crash" obsolete.

In coming years, the approach will make possible a new generation of artificial intelligence systems that can perform some functions humans do with ease: see, speak, listen, navigate, manipulate and control. That could have enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in their elementary stages and rely heavily on human programming.

For example, computer vision systems only "recognize" objects that can be identified by the statistics-oriented algorithms programmed into them. An algorithm is like a recipe, a set of step-by-step instructions to perform a calculation. But last year, Google researchers were able to get a machine-learning algorithm, known as a "Google Neural Network", to perform an identification task (involving cats) without supervision.
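
As a rough illustration of both points, the definition of an algorithm as a recipe, and the idea of a program finding structure in data without being told what to look for, here is a deliberately simple sketch (plain k-means clustering in Python, not Google's actual neural network):

```python
# Minimal k-means clustering: a step-by-step "recipe" that nonetheless finds
# structure in data without any labels. Purely an illustration of the idea of
# unsupervised learning, not Google's neural network.
import random

def kmeans(points, k, iterations=20):
    centers = random.sample(points, k)           # step 1: guess k centers
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:                         # step 2: assign each point
            nearest = min(range(k), key=lambda c: (p - centers[c]) ** 2)
            clusters[nearest].append(p)
        for c, members in enumerate(clusters):   # step 3: move each center
            if members:                          #         to its cluster's mean
                centers[c] = sum(members) / len(members)
    return sorted(centers)

# Two obvious groups of 1-D measurements; the program is never told that.
data = [1.0, 1.2, 0.8, 1.1, 9.7, 10.2, 9.9, 10.4]
print(kmeans(data, k=2))   # two centers, one near 1.0 and one near 10.0
```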

And this past June, the company said it had used those neural network techniques to develop a new search service to help customers find specific photos more accurately. This past November, researchers at Stanford University came up with a new algorithm that could give computers the power to more reliably interpret language. It's known as the Neural Analysis of Sentiment (NaSent).

A similar concept known as Deep Learning is also looking to endow software with a measure of common sense. Google is using this technique with its voice recognition technology to aid in performing searches. In addition, the social media giant Facebook is looking to use deep learning to help improve Graph Search, an engine that allows users to search activity on their network.

Until now, the design of computers was dictated by ideas originated by the mathematician John von Neumann about 65 years ago. Microprocessors perform operations at lightning speed, following instructions programmed using long strings of binary code (0s and 1s). The information is stored separately in what is known as memory, either in the processor itself, in adjacent storage chips or in higher capacity magnetic disk drives.

By contrast, the new processors consist of electronic components that can be connected by wires that mimic biological synapses. Because they are based on large groups of neuron-like elements, they are known as neuromorphic processors, a term credited to the California Institute of Technology physicist Carver Mead, who pioneered the concept in the late 1980s.

These processors are not "programmed" in the conventional sense. Instead, the connections between the circuits are "weighted" according to correlations in data that the processor has already "learned." Those weights are then altered as data flows into the chip, causing them to change their values and to "spike." This, in turn, strengthens some connections and weakens others, reacting much the same way the human brain does.
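
A crude software caricature of that weighting-and-spiking behavior may help make the description concrete. Real neuromorphic chips do this in analog hardware rather than in a Python loop, and every constant below is arbitrary, but the pattern is the same: weighted inputs accumulate, the unit spikes once a threshold is crossed, and the connections that contributed to the spike are strengthened while the rest slowly decay.

```python
# Crude software caricature of a neuromorphic element: weighted inputs
# accumulate, the unit "spikes" past a threshold, and the weights of
# connections that helped cause the spike are strengthened while the rest
# decay (a Hebbian-style update). Real chips do this in analog hardware;
# this loop only illustrates the idea, and every constant here is arbitrary.
import random

firing_rates = (0.9, 0.5, 0.1)   # input 0 is active often, input 2 rarely
weights = [0.5, 0.5, 0.5]        # connection strengths ("synapses")
threshold = 1.0
potential = 0.0                  # accumulated input ("membrane potential")

for step in range(200):
    inputs = [1 if random.random() < rate else 0 for rate in firing_rates]
    potential = potential * 0.9 + sum(w * x for w, x in zip(weights, inputs))

    if potential >= threshold:   # the unit spikes and resets
        potential = 0.0
        # strengthen connections that were active during the spike,
        # let the inactive ones decay slightly
        weights = [min(w + 0.05, 1.0) if x else max(w - 0.02, 0.0)
                   for w, x in zip(weights, inputs)]

print(weights)   # the busiest input ends up with the strongest connection
```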

In the words of Dharmendra Modha, an I.B.M. computer scientist who leads the company's cognitive computing research effort:

Instead of bringing data to computation as we do today, we can now bring computation to data. Sensors become the computer, and it opens up a new way to use computer chips that can be everywhere.

One great advantage of the new approach is its ability to tolerate glitches, whereas traditional computers cannot work around the failure of even a single transistor. With the biological designs, the algorithms are ever changing, allowing the system to continuously adapt and work around failures to complete tasks. Another benefit is energy efficiency, another inspiration drawn from the human brain.

The new computers, which are still based on silicon chips, will not replace today's computers, but augment them, at least for the foreseeable future. Many computer designers see them as coprocessors, meaning they can work in tandem with other circuits that can be embedded in smartphones and in the centralized computers that run computing clouds.

However, the new approach is still limited, because scientists do not yet fully understand how the human brain functions. As Kwabena Boahen, a computer scientist who leads Stanford's Brains in Silicon research program, put it:

We have no clue. I’m an engineer, and I build things. There are these highfalutin theories, but give me one that will let me build something.

Luckily, there are efforts underway designed to remedy this, with the specific intention of directing that knowledge towards the creation of better computers and AIs. One such effort comes from the National Science Foundation-financed Center for Brains, Minds and Machines, a new research center based at the Massachusetts Institute of Technology, with Harvard and Cornell.

Another is the California Institute for Telecommunications and Information Technology (aka Calit2), a center dedicated to innovation in nanotechnology, life sciences, information technology, and telecommunications. As Larry Smarr, an astrophysicist and director of the Institute, put it:

We’re moving from engineering computing systems to something that has many of the characteristics of biological computing.

And last, but certainly not least, is the Human Brain Project, an international group of 200 scientists from 80 different research institutions, based in Lausanne, Switzerland. Having secured the $1.6 billion they need to fund their efforts, these researchers will spend the next ten years conducting research that cuts across multiple disciplines.

This initiative, which has been compared to the Large Hadron Collider, will attempt to reconstruct the human brain piece-by-piece and gradually bring these cognitive components into an overarching supercomputer. The expected result of this research will be new platforms for “neuromorphic computing” and “neurorobotics,” allowing for the creation of computing and robotic architectures that mimic the functions of the human brain.

When future generations look back on this decade, no doubt they will refer to it as the birth of the neuromorphic computing revolution. Or maybe just the Neuromorphic Revolution for short, but that sort of depends on the outcome. With so many technological revolutions well underway, it is difficult to imagine how the future will look back and characterize this time.

Perhaps, as Charles Stross suggests, it will simply be known as "the teens", that time in pre-Singularity history when it was all starting to come together, but had yet to explode and violently change everything we know. I for one am looking forward to being around to witness it all!

Sources: nytimes.com, technologyreview.com, calit2.net, humanbrainproject.eu

Judgement Day Update: Bionic Computing!

IBM has always been at the forefront of cutting-edge technology. Whether it was the development of computers that could guide ICBMs and rockets into space during the Cold War, or its contributions to the growth of the Internet during the early 90s, the company has managed to stay on the vanguard by constantly looking ahead. So it comes as no surprise that it had plenty to say last month on the subject of the next big leap.

During a media tour of their Zurich lab in late October, IBM presented some of the company's latest concepts. According to the company, the key to creating supermachines that are 10,000 times faster and more efficient is to build bionic computers cooled and powered by electronic blood. The end result of this plan is what is known as "Big Blue", a proposed biocomputer that they anticipate will take 10 years to develop.

Intrinsic to the design is the merger of computing and biological forms, specifically the human brain. In terms of computing, IBM is relying on the human brain as its template. Through this, they hope to enable processing power that's densely packed into 3D volumes rather than spread out across flat 2D circuit boards with slow communication links.

On the biological side of things, IBM is supplying computing equipment to the Human Brain Project (HBP) – a $1.3 billion European effort that uses computers to simulate the actual workings of an entire brain. Beginning with mice, but then working their way up to human beings, their simulations examine the inner workings of the mind all the way down to the biochemical level of the neuron.

It's all part of what IBM calls "the cognitive systems era", a future where computers aren't just programmed, but also perceive what's going on, make judgments, communicate with natural language, and learn from experience. As the description would suggest, it is closely related to artificial intelligence, and may very well prove to be the curtain raiser of the AI era.

One of the key challenges behind this work is matching the brain's power consumption. The ability to process the subtleties of human language helped IBM's Watson supercomputer win at "Jeopardy." That was a high-profile step on the road to cognitive computing, but from a practical perspective, it also showed how much farther computing has to go. Whereas Watson uses 85 kilowatts of power, the human brain uses only 20 watts.

Already, a shift has been occurring in computing, which is evident in the way engineers and technicians are now measuring computer progress. For the past few decades, the method of choice for gauging performance was operations per second, or the rate at which a machine could perform mathematical calculations.

But as computers began to require prohibitive amounts of power to perform various functions and generated far too much waste heat, a new measurement was called for: operations per joule of energy consumed. In short, progress has come to be measured in terms of a computer's energy efficiency.

But now, IBM is contemplating another method for measuring progress, known as "operations per liter". In accordance with this new paradigm, the success of a computer will be judged by how much data-processing can be squeezed into a given volume of space. This is where the brain really serves as a source of inspiration, being the most efficient computer in terms of performance per cubic centimeter.
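
A quick back-of-the-envelope sketch of the three yardsticks mentioned here, using the 85-kilowatt and 20-watt figures from the article plus placeholder values for operation rates and machine volume (those placeholders are assumptions for illustration, not IBM's numbers):

```python
# Three ways of scoring a computer, per the article: raw operations per
# second, operations per joule (energy efficiency), and operations per
# liter (density). Watson's 85 kW and the brain's 20 W come from the text;
# the operation rates and volumes are rough placeholders for illustration,
# not IBM's figures.

def scorecard(name, ops_per_second, watts, liters):
    print(f"{name:>12}: {ops_per_second:.1e} ops/s | "
          f"{ops_per_second / watts:.1e} ops per joule | "
          f"{ops_per_second / liters:.1e} ops/s per liter")

scorecard("Watson-like", 8e13, 85_000, 10_000)  # ~80 teraops, 85 kW, assumed ~10 m^3
scorecard("Human brain", 1e15, 20, 1.2)         # commonly cited rough estimates only
```

Whatever the exact numbers, the point of the metric is visible immediately: the brain wins by a few orders of magnitude on energy, and by far more on volume.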

As it stands, today’s computers consist of transistors and circuits laid out on flat boards that ensure plenty of contact with air that cools the chips. But as Bruno Michel – a biophysics professor and researcher in advanced thermal packaging for IBM Research – explains, this is a terribly inefficient use of space:

In a computer, processors occupy one-millionth of the volume. In a brain, it’s 40 percent. Our brain is a volumetric, dense, object.

In short, communication links between processing elements can't keep up with data-transfer demands, and they consume too much power as well. The proposed solution is to stack and link chips into dense 3D configurations, something that is impossible today because stacking even two chips means crippling overheating problems. That's where the "electronic blood" comes in, at least as far as cooling is concerned.

This process is demonstrated with the company’s prototype system called Aquasar. By branching chips into a network of liquid cooling channels that funnel fluid into ever-smaller tubes, the chips can be stacked together in large configurations without overheating. The liquid passes not next to the chip, but through it, drawing away heat in the thousandth of a second it takes to make the trip.

In addition, IBM is also developing a system called a redox flow battery that uses liquid to distribute power instead of wires. Two types of electrolyte fluid, each with oppositely charged electrical ions, circulate through the system to distribute power, much in the same way that blood provides oxygen, nutrients and cooling to the human brain.

The electrolytes travel through ever-smaller tubes that are about 100 microns wide at their smallest – the width of a human hair – before handing off their power to conventional electrical wires. Flow batteries can produce between 0.5 and 3 volts, and that in turn means IBM can use the technology today to supply 1 watt of power for every square centimeter of a computer’s circuit board.

Already, the IBM Blue Gene supercomputer has been used for brain research by the Blue Brain Project at the Ecole Polytechnique Federale de Lausanne (EPFL) in Lausanne, Switzerland. Working with the HBP, their next step will be to augment a Blue Gene/Q with additional flash memory at the Swiss National Supercomputing Center.

After that, they will begin simulating the inner workings of the mouse brain, which consists of 70 million neurons. By the time they are conducting human brain simulations, they plan to be using an "exascale" machine – one that performs 1 exaflops, or a quintillion floating-point operations per second. This will take place at the Juelich Supercomputing Center in northern Germany.

This is no easy challenge, mainly because the brain is so complex. In addition to 100 billion neurons and 100 trillion synapses, there are 55 different varieties of neuron, and 3,000 ways they can interconnect. That complexity is multiplied by differences that appear with 600 different diseases, genetic variation from one person to the next, and changes that go along with a person's age and sex.

As Henry Markram, the co-director at EPFL who has worked on the Blue Brain Project for years, explains:

If you can’t experimentally map the brain, you have to predict it — the numbers of neurons, the types, where the proteins are located, how they’ll interact. We have to develop an entirely new science where we predict most of the stuff that cannot be measured.

With the Human Brain Project, researchers will use supercomputers to reproduce how brains form in a virtual vat. Then, they will see how they respond to input signals from simulated senses and a simulated nervous system. If it works, actual brain behavior should emerge from the fundamental framework inside the computer, and where it doesn't work, scientists will know where their knowledge falls short.

The end result of all this will also be computers that are “neuromorphic” – capable of imitating human brains, thereby ushering in an age when machines will be able to truly think, reason, and make autonomous decisions. No more supercomputers that are tall on knowledge but short on understanding. The age of artificial intelligence will be upon us. And I think we all know what will follow, don’t we?

Yep, that's what! And may God help us all!

Sources: news.cnet.com, extremetech.com

The Future of Computing: Graphene Chips and Transistors

The basic law of computer evolution, known as Moore's Law, holds that roughly every two years, the number of transistors on a computer chip will double. What this means is that every couple of years, computer speeds will double, effectively making the previous technology obsolete. More recently, analysts have refined this period to about 18 months or less, as the rate of increase itself seems to be increasing.
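
Since Moore's Law is just a compounding formula, it is easy to see how much the assumed doubling period matters; here is a short sketch with an arbitrary starting count, purely for illustration:

```python
# Moore's Law as a compounding formula: the transistor count doubles every
# "period" months. The starting count and time horizon are arbitrary, chosen
# only to show how much the assumed doubling period matters.

def projected_transistors(start_count, months_elapsed, period_months):
    return start_count * 2 ** (months_elapsed / period_months)

start = 1_000_000_000            # a notional 1-billion-transistor chip today
for period in (24, 18):          # the two commonly quoted doubling periods
    count = projected_transistors(start, months_elapsed=120, period_months=period)
    print(f"Doubling every {period} months: {count:.1e} transistors after 10 years")
```

Over a single decade, the difference between a 24-month and an 18-month doubling period is roughly a factor of three in projected transistor count, which is why the exact figure gets argued over so much.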

This explosion in computing power is due to ongoing improvements in the field of miniaturization. As the component pieces get smaller and smaller, engineers are able to cram more and more of them onto chips of the same size. However, it does make one wonder just how far it will all go. Certainly there is a limit to how small things can get before they cease working.

According to the International Technology Roadmap for Semiconductors (ITRS), a standard established by the industry's top experts, that limit will be reached in 2015. By then, engineers will have reached the 22-nanometer threshold, beyond which the copper wiring that currently connects the billions of transistors in a modern CPU or GPU becomes unworkable due to resistance and other mechanical issues.

However, recent revelations about the material known as graphene show that it is not hampered by the same mechanical restrictions. As such, it could theoretically be scaled down to the point where it is just a few nanometers, allowing for the creation of computer chips that are orders of magnitude more dense and powerful, while consuming less energy.

Back in 2011, IBM built what it called the first graphene integrated circuit, but in truth, only some of the transistors and inductors were made of graphene, while other standard components (like copper wiring) were still employed. But now, a team at the University of California, Santa Barbara (UCSB) has proposed the first all-graphene chip, in which the transistors and interconnects are monolithically patterned on a single sheet of graphene.

In their research paper, “Proposal for all-graphene monolithic logic circuits,” the UCSB researchers say that:

[D]evices and interconnects can be built using the ‘same starting material’ — graphene… all-graphene circuits can surpass the static performances of the 22nm complementary metal-oxide-semiconductor devices.

To build an all-graphene IC, the researchers propose exploiting one of graphene's interesting qualities: depending on its width, it behaves in different ways. Narrow ribbons of graphene are semiconducting, ideal for making transistors, while wider ribbons are metallic, ideal for gates and interconnects.

For now, the UCSB team’s design is simply a computer model that should technically work, but which hasn’t been built yet. In theory, though, with the worldwide efforts to improve high-quality graphene production and patterning, it should only be a few years before an all-graphene integrated circuit is built. As for full-scale commercial production, that is likely to take a decade or so.

When that happens, though, another explosive period of growth in computing speed, coupled with lower power consumption, is to be expected. From there, subsequent leaps are likely to involve carbon nanotube components, true quantum computing, and perhaps even biotechnological circuits. Oh, the places it will all go!

Source: extremetech.com

The Future is Here: Carbon Nanotube Computers

Silicon Valley is undergoing a major shift, one which may require it to rethink its name. This is thanks in no small part to the efforts of a team based at Stanford that is seeking to create the first basic computer built around carbon nanotubes rather than silicon chips. In addition to changing how computers are built, this is likely to improve their efficiency and performance.

What's more, this change may deal a serious blow to the law of computing known as Moore's Law. For decades now, the exponential acceleration of technology – which has taken us from room-sized computers run by punched paper cards to handheld devices with far more computing power – has depended on the ability to place more and more transistors onto an individual chip.

The result of this ongoing trend in miniaturization has been devices that are becoming smaller, more powerful, and cheaper. The law used to describe this – though "basic rule" would be a more apt description – states that the number of transistors on a chip has been doubling every 18 months or so since the dawn of the information age. This is what is known as "Moore's Law."

However, this trend could be coming to an end, mainly because it's becoming increasingly difficult, expensive and inefficient to keep jamming more tiny transistors onto a chip. In addition, there are the inevitable physical limitations involved, as miniaturization can only go on for so long before it becomes unfeasible.

Carbon nanotubes, which are long chains of carbon atoms thousands of times thinner than a human hair, have the potential to be more energy-efficient and to outperform computers made with silicon components. Using a technique that involved "burning off" imperfect nanotubes and using an algorithm to work around the remaining flaws in the nanotube matrix, the team built a very basic computer with 178 transistors that can do tasks like counting and number sorting.

In a recent release from the university, Stanford professor Subhasish Mitra said:

People have been talking about a new era of carbon nanotube electronics moving beyond silicon. But there have been few demonstrations of complete digital systems using this exciting technology. Here is the proof.

Naturally, this computer is more of a proof of concept than a working prototype. There are still a number of problems with the approach, such as the fact that nanotubes don't always grow in straight lines and cannot always "switch off" like a regular transistor. The Stanford team's computer also has limited power because of the limited facilities they had to work with, which did not include access to industrial fabrication tools.

All told, their computer is only about as powerful as an Intel 4004, the first single-chip silicon microprocessor, released in 1971. But given time, we can expect more sophisticated designs to emerge, especially if design teams gain access to top-of-the-line facilities to build prototypes.

And this research team is hardly alone in this regard. Last year, computing giant IBM managed to create its own transistors using carbon nanotubes and found that they outperformed transistors made of silicon. What's more, these transistors measured less than ten nanometers across and were able to operate at very low voltage.

Similarly, a research team from Northwestern University in Evanston, Illinois managed to create something very similar. In their case, this consisted of a logic gate – the fundamental circuit that all integrated circuits are based on – using carbon nanotubes to create transistors that operate in a CMOS-like architecture. And much like IBM's and the Stanford team's transistors, it functioned at very low power levels.

What this demonstrated is that carbon nanotube transistors and other computer components are not only feasible, but are able to outperform transistors many times their size while using a fraction of the power. Hence, it is probably only a matter of time before a fully-functional computer is built – using carbon nanotube components – that will supersede silicon systems and throw Moore’s Law out the window.

Sources: news.cnet.com, (2), fastcolabs.com

The Future is Here: The Neuromimetic Processor

It's known as mimetic technology: machinery that mimics the function and behavior of organic life. For some time, scientists have been using this philosophy to further develop computing, a process which many believe to be paradoxical. In gleaning inspiration from the organic world to design better computers, scientists are basically creating the machinery that could lead to better organics.

But when it comes to neuromorphic processors – computers that mimic the function of the human brain – scientists have been lagging behind sequential computing. For instance, IBM announced this past November that its Blue Gene/Q Sequoia supercomputer could clock 16 quadrillion calculations per second and crudely simulate more than 530 billion neurons – roughly five times that of a human brain. However, doing this required 8 megawatts of power, enough to power 1,600 homes.

However, Kwabena Boahen, a bioengineering professor at Stanford University, recently developed a new computing platform that he calls the "Neurogrid". Each Neurogrid board, running at only 5 watts, can simulate the detailed neuronal activity of one million neurons – and it can now do it in real time. Given the processing-to-power ratio, this means that his new chip is roughly 100,000 times more efficient than a conventional supercomputer.

What's more, it's likely to mean the wide-scale adoption of processors that mimic human neuronal behavior over traditional computer chips. Whereas sequential computing relies on simulated ion channels to create software-generated "neurons", the neuromorphic approach lets the flow of electrons through transistors emulate the flow of ions through neuronal channels. Basically, the difference is between software that mimics the behavior and hardware that embodies it.

What's more, it's likely to be a major stepping stone towards the creation of AI and MMI – that's Artificial Intelligence and Man-Machine Interface, for those who don't speak geek. With computer chips imitating human brains and achieving a measure of intelligence that can be measured in terms of neurons and connections, the likelihood that they will be able to merge with a person's brain, and thus augment their intelligence, becomes that much greater.

Source: Extremetech.com

IBM Creates First Photonic Microchip

For many years, optical computing has been a subject of great interest for engineers and researchers. As opposed to the current crop of computers, which rely on the movement of electrons in and out of transistors to do logic, an optical computer relies on the movement of photons. Such a computer would confer obvious advantages, mainly in the realm of computing speed, since photons travel much faster than electrical current.

While the concept and technology is relatively straightforward, no one has been able to develop photonic components that were commercially viable. All that changed this past December as IBM became the first company to integrate electrical and optical components on the same chip. As expected, when tested, this new chip was able to transmit data significantly faster than current state-of-the-art copper and optical networks.

But what was surprising was just how big the difference really was. Whereas current interconnects are generally measured in gigabits per second, IBM's new chip is already capable of shuttling data around at terabits per second. In other words, over a thousand times faster than what we're currently used to. And since it will be no big task or expense to replace the current generation of electrical components with photonic ones, we could be seeing this chip take the place of our standard CPUs really soon!

This comes after a decade of research and an announcement made back in 2010, specifically that IBM Research was tackling the concept of silicon nanophotonics. And since they’ve proven they can create the chips commercially, they could be on the market within just a couple of years. This is certainly big news for supercomputing and the cloud, where limited bandwidth between servers is a major bottleneck for those with a need for speed!

Cool as this is, there are actually two key breakthroughs to boast about here. First, IBM has managed to build a monolithic silicon chip that integrates both electrical (transistors, capacitors, resistors) and optical (modulators, photodetectors, waveguides) components. Monolithic means that the entire chip is fabricated from a single crystal of silicon on a single production line, and the optical and electrical components are mixed up together to form an integrated circuit.

Second, and perhaps more importantly, IBM was able to manufacture these chips using the same process they use to produce the CPUs for the Xbox 360, PS3, and Wii. This was not easy, according to internal sources, but in so doing, they can produce this new chip using their standard manufacturing process, which will not only save them money in the long run, but make the conversion process that much cheaper and easier. From all outward indications, it seems that IBM spent most of the last two years trying to ensure that this aspect of the process would work.

Excited yet? Or perhaps concerned that this boost in speed will mean even more competition and the need to constantly upgrade? Well, given the history of computing and technological progress, both of these sentiments would be right on the money. On the one hand, this development may herald all kinds of changes and possibilities for research and development, with breakthroughs coming within days and weeks instead of years.

At the same time, it could mean that the rest of us will be even more hard pressed to keep our software and hardware current, which can be frustrating as hell. As it stands, Moore's Law states that it takes between 18 months and two years for CPUs to double in speed. Now imagine that dwindling to just a few weeks, and you've got a whole new ballgame!

Source: Extremetech.com

Big News in Quantum Science!

Welcome all to my 800th post! Woot woot! I couldn't possibly think of anything special to write about to mark the occasion, as I seem to acknowledge far too many of these occasions. So instead I thought I'd wait for a much bigger milestone which is on the way and simply do a regular article. Hope you enjoy it, it is the 800th one I've written 😉

*                    *                    *

2012 saw quite a few technical developments and firsts being made; so many, in fact, that I had to dedicate two full posts to them! However, one story which didn't make many news cycles, but may prove to be no less significant, was the advances made in the field of quantum science. In fact, the strides made in this field during the past year were the first indication that a global quantum internet might actually be possible.

For some time now, scientists and researchers have been toying with the concept of machinery that relies on quantum mechanics. Basically, the idea revolves around "quantum teleportation", a process whereby quantum states of matter, rather than matter itself, are beamed from one location to another. Currently, this involves using a high-powered laser to fire entangled photons from one location to the next. When the photons at the receiving end take on the properties of the photon sent, a quantum teleportation has occurred. It is the quantum state, not matter, that makes the trip, though because a classical signal is still needed to complete the transfer, no usable information actually moves faster than light.
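
For readers who want to see the protocol itself, below is a hedged sketch that simulates textbook single-qubit teleportation with nothing but NumPy state vectors. It is not a model of the record-setting experiments described below, just the underlying recipe, and it makes the caveat above explicit: Bob's qubit only becomes a copy of the original state after he receives two ordinary classical bits from Alice.

```python
# State-vector simulation of textbook quantum teleportation (NumPy only).
# This is the protocol, not a model of any particular experiment. Note the
# essential step near the end: Bob's qubit only matches Alice's original
# state after he applies corrections chosen by TWO CLASSICAL BITS she sends
# him, which is why the scheme carries no usable information faster than light.
import numpy as np

rng = np.random.default_rng()

# Single-qubit gates
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

def apply(gate, qubit, state, n=3):
    """Apply a single-qubit gate to one qubit of an n-qubit state vector."""
    full = np.array([[1.0]])
    for q in range(n):
        full = np.kron(full, gate if q == qubit else I)
    return full @ state

def cnot(control, target, state, n=3):
    """Apply a CNOT by flipping the target bit wherever the control bit is 1."""
    new = np.zeros_like(state)
    for idx, amp in enumerate(state):
        bits = [(idx >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        new[sum(b << (n - 1 - q) for q, b in enumerate(bits))] += amp
    return new

# Qubit 0: the unknown state Alice wants to teleport
alpha, beta = 0.6, 0.8
state = np.kron([alpha, beta],                        # qubit 0
                np.array([1, 0, 0, 1]) / np.sqrt(2))  # qubits 1, 2: shared Bell pair

# Alice's Bell-basis measurement, expressed as CNOT + Hadamard + readout
state = cnot(0, 1, state)
state = apply(H, 0, state)

# Probabilities of Alice's four possible two-bit outcomes (summed over Bob's bit)
p_alice = [abs(state[(a << 2) | (b << 1)]) ** 2 + abs(state[(a << 2) | (b << 1) | 1]) ** 2
           for a in (0, 1) for b in (0, 1)]
m0, m1 = divmod(int(rng.choice(4, p=p_alice)), 2)

# Bob's qubit, collapsed by Alice's measurement and renormalized
bob = np.array([state[(m0 << 2) | (m1 << 1) | b] for b in (0, 1)])
bob = bob / np.linalg.norm(bob)

# Bob's corrections, selected by the two classical bits Alice sends him
if m1:
    bob = X @ bob
if m0:
    bob = Z @ bob

print(np.round(bob, 3))   # [0.6, 0.8]: Alice's state has been reconstructed
```

Whichever of the four measurement outcomes occurs, the printed amplitudes match the original [0.6, 0.8], but only once the classical correction bits have been applied.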

Two years ago, scientists set the record for the longest teleportation by beaming a photon's state some 16 km. However, last year, a team of international researchers was able to beam the properties of a photon from their lab in La Palma to another lab in Tenerife, some 143 km away. Not only was this a new record, it was significant because 143 km happens to be just far enough to reach low Earth orbit satellites, thus proving that a world-spanning quantum network could be built.

Shortly thereafter, China struck back with its own advance, conducting the first teleportation of quantum states between two ensembles of rubidium atoms. Naturally, such ensembles are several orders of magnitude larger than a single qubit, which qualifies them as "macroscopic objects". This in turn has led many to believe that large quantities of information could be teleported from one location to the next using this technique in the near future.

And then came another breakthrough from England, where researchers managed to transmit qubits and binary data down the same piece of optic fiber. This laid the groundwork for a quantum-secured internet that runs over conventional optic cable rather than satellites, protected using quantum cryptography, a means of information transfer that remains (in theory) unbreakable.

And finally, IBM and the University of Southern California (USC) both reported big advances in the field of quantum computing during 2012. The year began with IBM announcing that it had created a 3-qubit computer chip capable of performing controlled logic functions. USC could only manage a 2-qubit chip, but it was fashioned out of diamond. Both advances strongly point to a future where your PC could be either completely quantum-based, or where you have a few quantum chips to aid with specific tasks.

As it stands, quantum computing, networking, and cryptography remain in the research and development phase. IBM's current estimates place the completion of a fully-working quantum computer at roughly ten to fifteen years away. And for now, the machinery needed to conduct any of these processes remains large, bulky and very expensive. But miniaturization and a drop in prices are two things you can always count on in the tech world!

So really, we may be looking at a worldwide quantum internet by 2025 or 2030. We're talking about a world in which information is exchanged at unheard-of rates, all connections are secure, and computing happens at unprecedented speeds. Sounds impressive, but the real effect of this "quantum revolution" will be the exponential rate at which progress increases. With worldwide information sharing and computing happening so much faster, we can expect advances in every field to take less time, and breakthroughs to happen on a regular basis.

Yes, this technology could very well be the harbinger of what John von Neumann called the “Technological Singularity”. I know some of you might be feeling nervous at the moment, but somewhere, Ray Kurzweil is doing a happy dance! Just a few more decades before he and others like him can start downloading their brains or getting those long-awaited cybernetic enhancements!

Source: extremetech.com