Breaking Moore’s Law: Graphene Nanoribbons

Ask a technician or a computer science major, and they will likely tell you that the next great leap in computing will only come once Moore’s Law is overcome. This law, which states that the number of transistors on a single chip doubles every 18 months to two years, is heading towards a bottleneck. For decades, CPUs and computer chips have been getting smaller, but they are fast approaching their physical limits.
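
For those who like to see the numbers, here’s a quick back-of-the-envelope sketch (my own illustration in Python, assuming an arbitrary starting count of one billion transistors) of what those doubling periods imply:

```python
# Toy projection of Moore's Law (my own illustration, not from the article).
# Assumes an arbitrary starting point of 1 billion transistors per chip.
def transistors(initial_count, years, doubling_period_years):
    """Projected transistor count after `years` of steady doubling."""
    return initial_count * 2 ** (years / doubling_period_years)

for period in (1.5, 2.0):  # 18 months vs. two years
    print(f"Doubling every {period} yrs: {transistors(1e9, 10, period):.1e} transistors in a decade")
```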

One of the central problems arising from the Moore’s Law bottleneck has to do with the materials we use to create microchips. Short of continued miniaturization, there is simply no way to keep placing more and more components on a microchip. And copper wires can only be miniaturized so much before they lose the ability to conduct electricity effectively.

This has led scientists and engineers to propose that new materials be used, and graphene appears to be the current favorite. Researchers at the University of California at Berkeley are busy working on a form of so-called graphene nanoribbons that could increase the density of transistors on a computer chip by as much as 10,000 times.

Graphene, for those who don’t know, is a miracle material that is basically a sheet of carbon only one layer of atoms thick. This two-dimensional physical configuration gives it some incredible properties, like extreme electrical conductivity at room temperature. Researchers have been working on producing high quality sheets of the material, but nanoribbons ask more of science than it can currently deliver.

Work on nanoribbons over the past decade has revolved around using lasers to carefully sculpt ribbons 10 or 20 atoms wide from larger sheets of graphene. On a scale of billionths of a meter, that calls for incredible precision. If the makers are even a few carbon atoms off, it can completely alter the properties of the ribbon, preventing it from working as a semiconductor at room temperature.

Fortunately, Berkeley chemist Felix Fischer thinks he might have found a solution. Rather than carving ribbons out of larger sheets like a sculptor, Fischer has begun building nanoribbons up from individual carbon atoms using a chemical process. Basically, he’s working on a new way to produce graphene that is already in the right configuration for nanoribbons.

He begins by synthesizing rings of carbon atoms similar in structure to benzene, then heats the molecules to encourage them to form a long chain. A second heating step strips away most of the hydrogen atoms, freeing up the carbon to form bonds in a honeycomb-like graphene structure. This process allows Fischer and his colleagues to control where each carbon atom goes in the final nanoribbon.

On the scale Fischer is making them, graphene nanoribbons could be capable of transporting electrons thousands of times faster than a traditional copper conductor. They could also be packed very close together since a single ribbon is 1/10,000th the thickness of a human hair. Thus, if the process is perfected and scaled up, everything from CPUs to storage technology could be much faster and smaller.

Sources: extremetech.com

Year-End Tech News: Stanene and Nanoparticle Ink

The year 2013 was also a boon for the high-tech industry, especially where electronics and additive manufacturing were concerned. In fact, several key developments took place last year that may help scientists and researchers move beyond Moore’s Law, as well as ring in a new era of manufacturing and production.

In terms of computing, developers have long feared that Moore’s Law – which states that the number of transistors on integrated circuits doubles approximately every two years – could be reaching a bottleneck. While the law (really it’s more of an observation) has certainly held true for the past forty years, it has been understood for some time that the use of silicon and copper wiring would eventually impose limits.

Basically, one can only miniaturize circuits made from these materials so much before resistance rises and the components become too fragile to be effective. Because of this, researchers have been looking for materials to replace the silicon that makes up the roughly 1 billion transistors, and the hundred or so kilometers of copper wire, that currently make up an integrated circuit.

Various materials have been proposed, such as graphene, carbyne, and even carbon nanotubes. But now, a group of researchers from Stanford University and the SLAC National Accelerator Laboratory in California is proposing another material. It’s known as stanene: a theorized material fabricated from a single layer of tin atoms that could be an extremely efficient conductor, even at high temperatures.

Compared to graphene, which is stupendously conductive, the researchers at Stanford and the SLAC claim that stanene should be a topological insulator. Topological insulators, due to their arrangement of electrons/nuclei, are insulators in their interior but conductive along their edges and/or surface. Being only a single atom in thickness, this topological insulator could conduct electricity along its edges with 100% efficiency.

The Stanford and SLAC researchers also say that stanene would not only have 100%-efficiency edges at room temperature, but with a bit of fluorine, would also have 100% efficiency at temperatures of up to 100 degrees Celsius (212 Fahrenheit). This is very important if stanene is ever to be used in computer chips, which have operational temps of between 40 and 90 C (104 and 194 F).

Though the claim of perfect efficiency seems outlandish to some, others admit that near-perfect efficiency is possible. And while no stanene has been fabricated yet, fashioning some on a small scale should not be difficult, as the technology currently exists. However, it will likely be a very, very long time before stanene is used in the production of computer chips.

In the realm of additive manufacturing (aka. 3-D printing), several major developments were made during 2013. This one came from Harvard University, where a materials scientist named Jennifer Lewis – using current technology – has developed new “inks” that can be used to print batteries and other electronic components.

3-D printing is already at work in the field of consumer electronics with casings and some smaller components being made on industrial 3D printers. However, the need for traditionally produced circuit boards and batteries limits the usefulness of 3D printing. If the work being done by Lewis proves fruitful, it could make fabrication of a finished product considerably faster and easier.

The Harvard team calls the material “ink,” but in fact, it’s a suspension of nanoparticles in a dense liquid medium. In the case of the battery-printing ink, the team starts with a vial of deionized water and ethylene glycol and adds nanoparticles of lithium titanium oxide. The mixture is homogenized, then centrifuged to separate out any larger particles, and the battery ink is formed.

This process is possible because of the unique properties of the nanoparticle suspension. It is mostly solid as it sits in the printer ready to be applied, begins to flow like a liquid when pressure is increased, and returns to a solid state once it leaves the custom printer nozzle. This allowed Lewis’ team to lay down multiple layers of the ink with 100-nanometer precision.
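
To make that behaviour concrete, here’s a minimal sketch assuming the standard power-law model of a shear-thinning fluid; the constants are illustrative placeholders, not the Harvard team’s measured ink parameters:

```python
# Minimal sketch of shear-thinning behaviour using a power-law fluid model.
# The constants k and n are illustrative placeholders, not measured values.
def apparent_viscosity(shear_rate, k=100.0, n=0.3):
    """Power-law fluid: viscosity = k * shear_rate**(n - 1); n < 1 means shear-thinning."""
    return k * shear_rate ** (n - 1)

for rate in (0.01, 1.0, 100.0):  # sitting in the barrel -> forced through the nozzle
    print(f"shear rate {rate:>6}: apparent viscosity ~ {apparent_viscosity(rate):,.1f}")
```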

The tiny batteries being printed are about 1 mm square, and could pack an even higher energy density than conventional cells thanks to their intricate construction. This approach is much more realistic than other metal-printing technologies because it happens at room temperature, with no need for microwaves, lasers, or high temperatures at all.

More importantly, it works with existing industrial 3D printers that were built to work with plastics. Because of this, battery production can be done cheaply using printers that cost on the order of a few hundred dollars, and not industrial-sized ones that can cost upwards of $1 million.

Smaller computers, and smaller, more efficient batteries. It seems that miniaturization, which some feared would be plateauing this decade, is safe for the foreseeable future! So I guess we can keep counting on our electronics getting smaller, harder to use, and easier to lose for the next few years. Yay for us!

Sources: extremetech.com, (2)

The Future is Bright: Positive Trends to Look For in 2014

With all of the world’s current problems – poverty, underdevelopment, terrorism, civil war, and environmental degradation – it’s easy to overlook how things are getting better around the world. Not only do we no longer live in a world where superpowers aim nuclear missiles at each other and two-thirds of the human race lives beneath totalitarian regimes; in terms of health, mortality, and income, life is getting better too.

So, in honor of the New Year and all our hopes for a better world, here’s a gander at how life is improving and is likely to continue…

1. Poverty is decreasing:
The share of the world’s population whose income or consumption is below the poverty line – subsisting on less than $1.25 a day – is steadily dropping. In fact, the overall economic growth of the past 50 years has been proportionately greater than that experienced in the previous 500. Much of this is due not only to the growth taking place in China and India, but also in Brazil, Russia, and Sub-Saharan Africa. Indeed, while developed nations complain about debt crises and ongoing recession, the world’s poorest areas continue to grow.

2. Health is improving:
The overall caloric consumption of people around the world is increasing, meaning that world hunger is on the wane. Infant mortality – a major issue arising from poverty and underdevelopment, and closely related to overpopulation – is also dropping. And while rates of cancer continue to rise, the rate of cancer mortality continues to decrease. And perhaps biggest of all, the world will be entering 2014 with several promising vaccines and even potential cures for HIV (of which I’ve made many posts).

3. Education is on the rise:
More children worldwide (especially girls) have educational opportunities, with enrollment increasing in both primary and secondary schools. Literacy is also on the rise, with the global rate reaching as high as 84% by 2012. Global rates of literacy have more than doubled since 1970, and the connections between literacy, economic development, and life expectancy are all well established.

4. The Internet and computing are getting faster:
Ever since the internet revolution began, connection speeds and bandwidth have been increasing significantly year after year. In fact, the global average connection speed for the first quarter of 2012 hit 2.6 Mbps, a 25 percent year-over-year gain and a 14 percent gain over the fourth quarter of 2011. And by the second quarter of 2013, the overall global average peak connection speed reached 18.9 Mbps, a 17 percent gain over 2012.

And while computing appears to be reaching a bottleneck, overall computing speed has increased by a factor of 260,000 in the past forty years, and storage capacity by a factor of 10,000 in the last twenty. And in terms of breaking the current limitations imposed by chip size and materials, developments in graphene, carbon nanotubes, and biochips are promising solutions.
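
As a quick sanity check on that figure (my own arithmetic, not the article’s), a 260,000-fold gain over forty years implies a doubling time remarkably close to the classic Moore’s Law figure:

```python
# Sanity check: what doubling time does a 260,000x gain over 40 years imply?
import math

doublings = math.log2(260_000)                # about 18 doublings
months_per_doubling = 40 * 12 / doublings
print(f"{doublings:.1f} doublings -> one every {months_per_doubling:.0f} months")
```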

5. Unintended pregnancies are down:
While it remains high in the developing regions of the world, the global rate of unintended pregnancies has fallen dramatically in recent years. In fact, between 1995 and 2008, of 208 million pregnancies surveyed in a total of 80 nations, 41 percent were unintended. However, this represents a drop of 29 percent in the developed regions surveyed and a 20 percent drop in developing regions.

The consequences of unintended pregnancies for women and their families are well established, and any drop presents opportunities for greater health, safety, and freedom for women. What’s more, a drop in the rate of unwanted pregnancies is a surefire sign of socioeconomic development and increasing opportunities for women and girls worldwide.

6. Population growth is slowing:
On this blog of mine, I’m always ranting about how overpopulation is bad and going to get worse in the near future. But in truth, that is only part of the story. The upside is that while the numbers keep going up, the rate of increase is going down. While global population is expected to rise to 9.3 billion by 2050 and 10.1 billion by 2100, this represents a serious slowing of growth.

If one were to compare these growth projections to what happened in the 20th century, when population rose from roughly 1.6 billion to just over 6 billion, they would see that the rate of growth has more than halved. What’s more, rates of population growth are expected to begin falling in Asia (one of the biggest contributors to world population growth in the 20th century) by 2060, in Europe by 2055, and in the Caribbean by 2065.
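
As a rough check on that comparison (my own back-of-the-envelope arithmetic, assuming a 20th-century rise from about 1.6 to 6.1 billion and a current population of about 7 billion):

```python
# Rough comparison of annualized population growth rates (my own check).
def cagr(start, end, years):
    """Compound annual growth rate between two population figures."""
    return (end / start) ** (1 / years) - 1

c20 = cagr(1.6e9, 6.1e9, 100)  # 20th century: ~1.6 billion -> ~6.1 billion
c21 = cagr(7.0e9, 10.1e9, 88)  # ~7 billion (2012) -> 10.1 billion (2100)
print(f"20th century: {c20:.2%}/yr vs. projected 21st century: {c21:.2%}/yr")
```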

In fact, the only region where exponential population growth is expected to continue is Africa, where the current population of over 1 billion is expected to reach 4 billion by the end of the 21st century. And given the current rate of economic growth, this could represent a positive development for the continent, which could see itself becoming the next powerhouse economy by the 2050s.

7. Clean energy is getting cheaper:
While the price of fossil fuels is going up around the world, forcing companies to turn to dirty means of oil and natural gas extraction, the price of solar energy has been dropping exponentially. In fact, the cost per watt of this renewable source of energy has dropped from a high of $80 in 1977 to $0.74 this past year. This represents a 108-fold decrease in the space of 36 years.
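
The arithmetic behind that figure is easy to verify:

```python
# Verifying the fold-decrease quoted above: $80/W (1977) down to $0.74/W (2013).
cost_1977, cost_2013 = 80.00, 0.74
print(f"{cost_1977 / cost_2013:.0f}-fold decrease over 36 years")  # -> 108-fold
```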

And while solar currently comprises only a quarter of a percent of the planet’s electricity supply, its total share grew by 86% last year. In addition, wind farms already provide 2% of the world’s electricity, and their capacity is doubling every three years. At this rate of increase, solar, wind and other renewables are likely to completely offset coal, oil and gas in the near future.

Summary:
In short, things are looking up, even if they do have a long way to go. And a lot of what is expected to make the world a better place is likely to happen this year. Who knows which diseases we will find cures for? Who knows what inspirational leaders will come forward? And who knows what new and exciting inventions will be created, ones which offer creative and innovative solutions to our current problems?

Who knows? All I can say is that I am eager to find out!

Additional Reading: unstats.un.org, humanprogress.org, mdgs.un.org

The Future of Computing: Graphene Chips and Transistors

The basic law of computer evolution, known as Moore’s Law, holds that every two years, the number of transistors on a computer chip will double. What this means is that every couple of years, computer speeds will double, effectively making the previous technology obsolete. Recently, analysts have refined this period to about 18 months or less, as the rate of increase itself seems to be increasing.

This explosion in computing power is due to ongoing improvements in the field of miniaturization. As the component pieces get smaller and smaller, engineers are able to cram more and more of them onto chips of the same size. However, it does make one wonder just how far it will all go. Certainly there is a limit to how small things can get before they cease working.

According to the International Technology Roadmap for Semiconductors (ITRS), a standard established by the industry’s top experts, that limit will be reached in 2015. By then, engineers will have reached the 22-nanometer threshold, beyond which the copper wiring that currently connects the billions of transistors in a modern CPU or GPU becomes unworkable due to resistance and other mechanical issues.

However, recent revelations about the material known as graphene show that it is not hampered by the same mechanical restrictions. As such, it could theoretically be scaled down to just a few nanometers, allowing for the creation of computer chips that are orders of magnitude more dense and powerful, while consuming less energy.

Back in 2011, IBM built what it called the first graphene integrated circuit, but in truth, only some of the transistors and inductors were made of graphene, while other standard components (like copper wiring) were still employed. But now, a team at the University of California Santa Barbara (UCSB) has proposed the first all-graphene chip, where the transistors and interconnects are monolithically patterned on a single sheet of graphene.

In their research paper, “Proposal for all-graphene monolithic logic circuits,” the UCSB researchers say that:

[D]evices and interconnects can be built using the ‘same starting material’ — graphene… all-graphene circuits can surpass the static performances of the 22nm complementary metal-oxide-semiconductor devices.

To build an all-graphene IC, the researchers propose exploiting one of graphene’s interesting qualities: depending on its width, it behaves in different ways. Narrow ribbons of graphene are semiconducting, ideal for making transistors, while wider ribbons are metallic, ideal for gates and interconnects.
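
As a toy illustration of that design rule (the 10 nm cutoff below is a placeholder of my own, not a figure from the UCSB paper):

```python
# Toy encoding of the width-based design rule above; the 10 nm cutoff is
# my own placeholder, not a figure taken from the UCSB paper.
def ribbon_role(width_nm, cutoff_nm=10.0):
    """Narrow ribbons -> semiconducting (transistors); wide -> metallic."""
    return "semiconducting (transistor)" if width_nm < cutoff_nm else "metallic (interconnect)"

for width in (2, 5, 20, 50):
    print(f"{width:>2} nm ribbon: {ribbon_role(width)}")
```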

For now, the UCSB team’s design is simply a computer model that should technically work, but which hasn’t been built yet. In theory, though, with the worldwide efforts to improve high-quality graphene production and patterning, it should only be a few years before an all-graphene integrated circuit is built. As for full-scale commercial production, that is likely to take a decade or so.

When that happens, though, another explosive period of growth in computing speed, coupled with lower power consumption, is to be expected. From there, subsequent leaps are likely to involve carbon nanotube components, true quantum computing, and perhaps even biotechnological circuits. Oh the places it will all go!

Source: extremetech.com

The Future is Here: Carbon Nanotube Computers

Silicon Valley is undergoing a major shift, one which may require it to rethink its name. This is thanks in no small part to the efforts of a team based at Stanford that is seeking to create the first basic computer built around carbon nanotubes rather than silicon chips. In addition to changing how computers are built, this is likely to improve their efficiency and performance.

What’s more, this change may deal a serious blow to the law of computing known as Moore’s Law. For decades now, the exponential acceleration of technology – which has taken us from room-size computers run by punched paper cards to handheld devices with far more computing power – has depended on the ability to place more and more transistors onto an individual chip.

The result of this ongoing trend in miniaturization has been devices that are becoming smaller, more powerful, and cheaper. The law used to describe this – though “basic rule” would be a more apt description – states that the number of transistors on a chip has been doubling every 18 months or so since the dawn of the information age. This is what is known as “Moore’s Law.”

However, this trend could be coming to an end, mainly because it’s becoming increasingly difficult, expensive, and inefficient to keep jamming more tiny transistors onto a chip. In addition, there are the inevitable physical limitations involved, as miniaturization can only go on for so long before it becomes unfeasible.

Carbon nanotubes, which are long chains of carbon atoms thousands of times thinner than a human hair, have the potential to be more energy-efficient and to outperform computers made with silicon components. Using a technique that involved “burning” off imperfections and weeding out misaligned nanotubes from the matrix with an algorithm, the team built a very basic computer with 178 transistors that can do tasks like counting and number sorting.

In a recent release from the university, Stanford professor Subhasish Mitra said:

People have been talking about a new era of carbon nanotube electronics moving beyond silicon. But there have been few demonstrations of complete digital systems using this exciting technology. Here is the proof.

Naturally, this computer is more of a proof of concept than a working prototype. There are still a number of problems with the idea, such as the fact that nanotubes don’t always grow in straight lines and cannot always “switch off” like a regular transistor. The Stanford team’s computer also has limited power due to the limited facilities they had to work with, which did not include access to industrial fabrication tools.

All told, their computer is only about as powerful as an Intel 4004, the first single-chip silicon microprocessor, which was released in 1971. But given time, we can expect more sophisticated designs to emerge, especially if design teams have access to top-of-the-line facilities to build prototypes.

And this research team is hardly alone in this regard. Last year, Silicon Valley giant IBM managed to create their own transistors using carbon nanotubes and also found that they outperformed transistors made of silicon. What’s more, these transistors measured less than ten nanometers across and were able to operate using very low voltage.

Similarly, a research team from Northwestern University in Evanston, Illinois managed to create something very similar. In their case, this consisted of a logic gate – the fundamental circuit that all integrated circuits are based on – using carbon nanotubes to create transistors that operate in a CMOS-like architecture. And much like IBM’s and the Stanford team’s transistors, it functioned at very low power levels.

What this demonstrated is that carbon nanotube transistors and other computer components are not only feasible, but are able to outperform transistors many times their size while using a fraction of the power. Hence, it is probably only a matter of time before a fully-functional computer is built – using carbon nanotube components – that will supersede silicon systems and throw Moore’s Law out the window.

Sources: news.cnet.com, (2), fastcolabs.com

Powered By The Sun: Visualizing Swanson’s Law

For decades, solar power has been dogged by two undeniable problems that have prevented it from replacing fossil fuels as our primary means of energy. The first has to do with the cost of producing and installing solar cells, which until recently remained punitively high. The second has to do with efficiency, in that conventional photovoltaic cells have remained inefficient as far as most cost-per-watt analyses go. But thanks to a series of developments, solar power has been beating the odds on both fronts and coming down in price.

However, to most people, it was unclear exactly how far it had come down in price. Thanks to a story recently published in The Economist, which comes complete with a helpful infographic, we are now able to see firsthand the progress that’s been made. To call it astounding would be an understatement; and for the keen observer, a familiar pattern is discernible.

It’s known as the “Swanson Effect” (or Swanson’s Law), a theory which suggests that the cost of the photovoltaic cells needed to generate solar power falls by 20% with each doubling of global manufacturing capacity. Named after Richard Swanson, founder of the American solar-cell manufacturer SunPower, this law is basically an echo of Moore’s Law, which states that every 18 months or so, the size of transistors (and also their cost) halves.
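
Stated as a formula, the law looks like this – a minimal sketch of the relationship as described above, using illustrative numbers rather than real capacity data:

```python
# Minimal sketch of Swanson's Law as stated above: cost per watt falls ~20%
# with each doubling of cumulative manufacturing capacity. Numbers are illustrative.
import math

def swanson_cost(base_cost, capacity_ratio, learning_rate=0.20):
    """Cost per watt after cumulative capacity grows by `capacity_ratio` times."""
    return base_cost * (1 - learning_rate) ** math.log2(capacity_ratio)

doublings_needed = math.log(0.74 / 80.0) / math.log(1 - 0.20)
print(f"Starting at $80/W, {doublings_needed:.0f} doublings of capacity "
      f"bring the cost to ${swanson_cost(80.0, 2 ** round(doublings_needed)):.2f}/W")
```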

What this means, in effect, is that in solar-rich areas of the world, solar power can now compete with gas and coal without the need for clean-energy subsidies. As it stands, solar energy still accounts for only a quarter of a percent of the planet’s electricity needs. But when you consider that this represents an 86% increase over last year, and that prices shall continue to drop, you begin to see a very real trend in the making.

What this really means is that within a few decades’ time, alternative energy won’t be so alternative anymore. Alongside the growth made in wind power, tidal harnesses, piezoelectric bacteria and kinetic energy generators, fossil fuels, natural gas and coal will soon be the “alternatives” to cheap, abundant and renewable energy. Combined with advances being made in carbon capture and electric/hydrogen fuel cell technology, perhaps it will all arrive in time to stave off environmental collapse!

Check out the infographic below and let the good news of the “Swanson Effect” inspire you!:

Source: theeconomist.com

IBM Creates First Photonic Microchip

For many years, optical computing has been a subject of great interest for engineers and researchers. As opposed to the current crop of computers, which rely on the movement of electrons in and out of transistors to do logic, an optical computer relies on the movement of photons. Such a computer would confer obvious advantages, mainly in the realm of computing speed, since photons travel much faster than electrical current.

While the concept and technology is relatively straightforward, no one has been able to develop photonic components that were commercially viable. All that changed this past December as IBM became the first company to integrate electrical and optical components on the same chip. As expected, when tested, this new chip was able to transmit data significantly faster than current state-of-the-art copper and optical networks.

But what was surprising was just how big the difference really is. Whereas current interconnects are generally measured in gigabits per second, IBM’s new chip is already capable of shuttling data around at terabits per second. In other words, over a thousand times faster than what we’re currently used to. And since it will be no big task or expense to replace the current generation of electrical components with photonic ones, we could be seeing this chip taking the place of our standard CPUs really soon!

This comes after a decade of research and an announcement made back in 2010, specifically that IBM Research was tackling the concept of silicon nanophotonics. And since they’ve proven they can create the chips commercially, they could be on the market within just a couple of years. This is certainly big news for supercomputing and the cloud, where limited bandwidth between servers is a major bottleneck for those with a need for speed!

Cool as this is, there are actually two key breakthroughs to boast about here. First, IBM has managed to build a monolithic silicon chip that integrates both electrical (transistors, capacitors, resistors) and optical (modulators, photodetectors, waveguides) components. Monolithic means that the entire chip is fabricated from a single crystal of silicon on a single production line, with the optical and electrical components mixed together to form an integrated circuit.

Second, and perhaps more importantly, IBM was able to manufacture these chips using the same process they use to produce the CPUs for the Xbox 360, PS3, and Wii. This was not easy, according to internal sources, but having done so, IBM can now produce this new chip on its standard manufacturing line, which will not only save money in the long run, but make the conversion process that much cheaper and easier. From all outward indications, it seems that IBM spent most of the last two years trying to ensure that this aspect of the process would work.

Excited yet? Or perhaps concerned that this boost in speed will mean even more competition and the need to constantly upgrade? Well, given the history of computing and technological progress, both of these sentiments would be right on the money. On the one hand, this development may herald all kinds of changes and possibilities for research and development, with breakthroughs coming within days and weeks instead of years.

At the same time, it could mean that the rest of us will be even more hard-pressed to keep our software and hardware current, which can be frustrating as hell. As it stands, Moore’s Law states that it takes between 18 months and two years for CPUs to double in speed. Now imagine that dwindling to just a few weeks, and you’ve got a whole new ballgame!

Source: Extremetech.com

Transhumanism… The Shape of Things to Come?

“Your mind is software. Program it. Your body is a shell. Change it. Death is a disease. Cure it. Extinction is approaching. Fight it.”

-Eclipse Phase

A lot of terms are thrown around these days that allude to the possible shape of our future. Words like Technological Singularity, extropianism, postmortal, posthuman, and Transhuman. What do these words mean? What kind of future do they point to? Though they remain part of a school of thought that is still very much theoretical and speculative, this future appears to be becoming more likely every day.

Ultimately, the concept is pretty simple, in a complex, mind-bending sort of way. The theory has it that at some point in this or the next century, humanity will overcome death, scarcity, and all other limitations imposed on us by nature. The means vary, but it is believed that progress in any one or more of the following areas will make such a leap inevitable:

Artificial Intelligence:
The gradual evolution of computers, from punch cards to integrated circuits to networking, shows an exponential trend upwards. With the concomitant growth of memory capacity and processing speed, it is believed that it is only a matter of time before computers are capable of independent reasoning. Progress is already being made in this domain, with the Google X Labs neural net that has a connectome of a billion connections.

As such, it is seen as inevitable that a machine will one day exist that is capable of surpassing a human being. This sort of machinery could even be merged with a human’s own mind, enhancing their natural thought patterns and memory, and augmenting their intelligence to the point where it is immeasurable by modern standards.

Just think of the things we could think up once that’s possible. Well… we can’t exactly, but we can certainly postulate. For starters, such things as the Grand Unified Theory, the nature of time and space, quantum mechanics, and other mind-bendingly complex fields could suddenly make sense to us. What’s more, this would make further technological leaps that much easier.

Biology:
Here we have an area of development which can fall into one of three categories. On the one hand, advancements in medical science could very well lead to the elimination of disease and the creation of mind-altering pharmaceuticals. On the other, there’s the eventual development of things like biotechnology, machinery that is grown rather than built, composed of DNA strands or other “programmable” material.

Lastly, there is the potential for cybernetics: a man-machine interface where the organic is merged with the artificial, whether in the form of implants, prosthetic limbs, or artificial organs. All of these, alone or in combination, would enhance a human being’s strength and mental capacity, and prolong their life.

This is the meaning behind the word postmortal. If human beings could live to the point where life could be considered indefinite (at least by current standards), the amount we could accomplish in a single lifetime could very well be immeasurable.

Nanotechnology:
The concept of machines so small that anything will be accessible, even the smallest components of matter, has been around for over half a century. However, it was not until the development of microcircuits and miniaturization that the concept graduated from pure speculation and became a scientific possibility.

Here again, the concept is simple, assuming you can wrap your head around the staggering technical aspects and implications. For starters, we are talking about machines that are measurable only on the nanoscale, meaning one to one hundred billionths of a meter (1 x 10^-9 m). At this size, these machines would be capable of manipulating matter at the cellular or even atomic level. This is where the staggering implications come in, when you realize that this kind of machinery could make just about anything possible.

For starters, all forms of disease would be conquerable, precious metals could be synthesized, and seamless, self-regenerating structures could be made, with any and all consumer products created out of base matter. We’d be living in a world in which scarcity would be a thing of the past and our current system of values and exchange would become meaningless. Buildings could build themselves, out of raw matter (like dirt and pure scrap) no less; societies would become garbage-free; pollution could be eliminated; and manufactured goods could be made of materials that are both extra-light and near-indestructible.

Summary:
All of this progress, either alone or in combination, will add up to a future that we can’t even begin to fathom. This is where the concept of the Technological Singularity comes in. If human beings were truly postmortal (evolved beyond death), if society were postscarce (meaning food, water, fuel and other necessities would never be in short supply), and if machines were capable of handling all our basic needs, then the pace and direction of change would be impossible to predict from where we now stand.

For Futurists and self-professed Singularitarians, this trend is as desirable as it is inevitable. Citing such things as Moore’s Law (which measures the rate of computing progress) or Kurzweil’s Law of Accelerating Returns – which postulates that the rate of progress increases exponentially with each development – these voices claim that it is humanity’s destiny to conquer death and its inherent limitations. If one looks at the full range of human history – from the Neolithic Revolution to the Digital – the trend seems clear and obvious.

For others, this prospect is both frightening and something to be avoided. When it comes right down to it, transhumanity means leaving behind all the things that make us human. And whereas some people think the Singularity will solve all human problems, others see it as merely an extension of a trend whereby our lives become increasingly complicated and dependent on machinery. And supposing that we do cross some kind of existential barrier, will we ever be able to turn back?

And of course, the more dystopian predictions warn against the cataclysmic possibilities of entrusting so much of our lives to automata, or worse, intelligent machines. Virtually every apocalyptic and dystopian scenario devised in the last sixty years has predicted that doom will result from the development of AI, cybernetics and other advanced technology. The most technophobic claim that the machinery will turn on humanity, while the more moderate warn against increased dependency, since we will be all the more vulnerable if and when the technology fails.

Naturally, there are many who fall somewhere in between and question both outlooks. In recent decades, scientists and speculative fiction writers have emerged who challenge the idea that technological progress will automatically lead to the rise of dystopia. Citing the undeniable trend towards greater and greater levels of material prosperity brought on by the industrial revolution and the post-war era – something which is often ignored by people who choose to emphasize the downsides – these voices believe that the future will be neither utopian nor dystopian. It will simply be…

Where do you fall?