The Fate of Humanity

Welcome to the world of tomorroooooow! Or more precisely, to the many possible scenarios that humanity could face as it steps into the future. Perhaps it’s been all this talk of late about the future of humanity, and how space exploration and colonization may be the only way to ensure our survival. Or it could be that I’m just recalling what a friend of mine – Chris A. Jackson – wrote with his “Flash in the Pan” piece – a short that subsequently inspired me to write the novel Source.

Either way, I’ve been thinking about the likely future scenarios and thought I should include them alongside the Timeline of the Future. After all, one cannot predict the course of the future so much as predict possible outcomes and paths, and trust that the one they believe in the most will come true. So, borrowing from the same format Chris used, here are a few potential fates, listed from worst to best – or least to most advanced.

1. Humanrien:
Due to the runaway effects of Climate Change during the 21st/22nd centuries, the Earth is now a desolate shadow of its once-great self. Humanity is non-existent, as are many other species of mammals, avians, reptiles, and insects. And it is predicted that the process will continue into the foreseeable future, until such time as the atmosphere becomes a poisoned, sulfuric vapor and the ground nothing more than windswept ashes and molten metal.

One thing is clear though: the Earth will never recover, and humanity’s failure to seed other planets with life and maintain a sustainable existence on Earth has led to its extinction. The universe shrugs and carries on…

2. Post-Apocalyptic:
Whether it is due to nuclear war, a bio-engineered plague, or some kind of “nanocaust”, civilization as we know it has come to an end. All major cities lie in ruin and are populated only by marauders and street gangs, the more peaceful-minded people having fled to the countryside long ago. In scattered locations along major rivers, coastlines, or within small pockets of land, tiny communities have formed and eke out an existence from the surrounding countryside.

At this point, it is unclear if humanity will recover or remain at the level of a pre-industrial civilization forever. One thing seems clear: humanity will not go extinct just yet. With so many pockets spread across the entire planet, no single fate could claim all of them anytime soon. At least, one can hope that it won’t.

3. Dog Days:
The world continues to endure recession as resource shortages, high food prices, and diminishing space for real estate continue to plague the global economy. Fuel prices remain high, and opposition to new drilling and to oil and natural gas extraction is being blamed. Add to that the crushing burdens of displacement and flooding that are costing governments billions of dollars a year, and you have life as we know it.

The smart money appears to be in offshore real estate, where Lillypad cities and Arcologies are being built along the coastlines of the world. Already, habitats have been built in Boston, New York, New Orleans, Tokyo, Shanghai, Hong Kong and the south of France, and more are expected in the coming years. These are the most promising solution to the constant flooding and damage being caused by rising tides and increased coastal storms.

In these largely self-contained cities, those who can afford space intend to wait out the worst. It is expected that by the mid-point of the 22nd century, virtually all major ocean-front cities will be abandoned and those that sit on major waterways will be protected by huge levees. Farmland will also be virtually non-existent except within the Polar Belts, which means the people living in the most populous regions of the world will either have to migrate or die.

No one knows how the world’s 9 billion will endure in that time, but for the roughly 100 million living at sea, it’s not a pressing concern.

4. Technological Plateau:
Computers have reached a threshold of speed and processing power. Despite the discovery of graphene, the use of optical components, and the development of quantum computing/internet principles, it now seems that machines are as smart as they will ever be. That is to say, they are only slightly more intelligent than humans, and still can’t seem to pass the Turing Test with any consistency.

It seems the long-awaited explosion in learning and intelligence predicted by Von Neumann, Kurzweil and Vinge has fallen flat. That being said, life is getting better. With all the advances turned towards finding solutions to humanity’s problems, alternative energy, medicine, cybernetics and space exploration are still growing apace; just not as fast or as awesomely as people in the previous century had hoped.

Missions to Mars have been mounted, but a colony on that world is still a long way off. A settlement on the Moon has been built, but mainly to monitor the research and solar energy concerns that exist there. And global food shortages and CO2 emissions are steadily declining. It seems that the words “sane planning, sensible tomorrow” have come to characterize humanity’s existence. Which is good… not great, but good.

Humanity’s greatest expectations may have yielded some disappointment, but everyone agrees that things could have been a hell of a lot worse!

5. The Green Revolution:
The global population has reached 10 billion. But the good news is, it’s been that way for several decades. Thanks to smart housing, hydroponics and urban farms, hunger and malnutrition have been eliminated. The needs of the Earth’s people are also being met by a combination of wind, solar, tidal, geothermal and fusion power. And though space is at a premium, there is little want for housing anymore.

Additive manufacturing, biomanufacturing and nanomanufacturing have all led to an explosion in how public spaces are built and administered. Though it has led to the elimination of human construction and skilled labor, the process is much safer, cleaner, and more efficient, and has ensured that anything built within the past half-century is harmonious with the surrounding environment.

This explosion in geological engineering is due in part to settlement efforts on Mars and the terraforming of Venus. Building a liveable environment on one and transforming the acidic atmosphere of the other have helped humanity to test key technologies and processes used to end global warming and rehabilitate the seas and soil here on Earth. Over 100,000 people now call themselves “Martian”, and an additional 10,000 Venusians are expected before long.

Colonization is an especially attractive prospect for those who feel that Earth is too crowded, too conservative, and lacking in personal space…

6. Intrepid Explorers:
Humanity has successfully colonized Mars and Venus, and is busy settling the many moons of the outer Solar System. Current population statistics indicate that over 50 billion people now live on a dozen worlds, and many are feeling the itch for adventure. With deep-space exploration now practical, thanks to the development of the Alcubierre Warp Drive, many missions have been mounted to explore and colonize neighboring star systems.

These include Earth’s immediate neighbor, Alpha Centauri, but also the viable star systems of Tau Ceti, Kapteyn, Gliese 581, Kepler 62, HD 85512, and many more. With so many Earth-like, potentially habitable planets in the nearby universe and now within our reach, nothing seems to stand between us and the dream of an interstellar human race. Missions to find extra-terrestrial intelligence are even being plotted.

This is one prospect humanity both anticipates and fears. While it is clear that no sentient life exists within the local group of star systems, our exploration of the cosmos has just begun. And if our ongoing scientific surveys have proven anything, it is that the conditions for life exist within many star systems and on many worlds. No telling when we might find one that has produced life of comparable complexity to our own, but time will tell.

One can only imagine what they will look like. One can only imagine if they are more or less advanced than us. And most importantly, one can only hope that they will be friendly…

7. Post-Humanity:
Cybernetics, biotechnology, and nanotechnology have led to an era of enhancement where virtually every human being has evolved beyond their biological limitations. Advanced medicine, digital sentience and cryonics have prolonged life indefinitely, and when someone is facing death, they can preserve their neural patterns or their brain for all time by simply uploading or placing it into stasis.

Both of these options have made deep-space exploration a reality. Preserved human beings launch themselves towards exoplanets, while the neural uploads of explorers spend decades or even centuries traveling between solar systems aboard tiny spaceships. Space penetrators are fired in all directions to telexplore the most distant worlds, with the information being beamed back to Earth via quantum communications.

It is an age of posts – post-scarcity, post-mortality, and post-humanism. Despite the existence of two billion organics who have minimal enhancement, there appears to be no stopping the trend. And with the breakneck pace at which life moves around them, it is expected that the unenhanced – “organics” as they are often known – will migrate outward to Europa, Ganymede, Titan, Oberon, and the many space habitats that dot the outer Solar System.

Presumably, they will mount their own space exploration in the coming decades to find new homes abroad in interstellar space, where their kind can expect not to be swept aside by the unstoppable tide of progress.

8. Star Children:
Earth is no more. The Sun is now a mottled shadow of its old self. Surrounded by many layers of computronium, our parent star has gone from being the source of all light and energy in our solar system to the energy source that powers the giant Dyson Swarm at its center. Within this giant Matrioshka Brain, trillions of human minds live out an existence as quantum-state neural patterns, living indefinitely in simulated realities.

Within the outer Solar System and beyond lie billions more: enhanced transhumans and posthumans who have opted for an “Earthly” existence amongst the planets and stars. However, life seems somewhat limited out in those parts, very rustic compared to the infinite bandwidth and computational power of the inner Solar System. And with this strange dichotomy upon them, the human race suspects that it might have solved the Fermi Paradox.

If other sentient life can be expected to have followed a similar pattern of technological development as the human race, then surely they too have evolved to the point where the majority of their species lives in Dyson Swarms around their parent Sun. Venturing beyond holds little appeal, as it means moving away from the source of bandwidth and becoming isolated. Hopefully, enough of them are adventurous enough to meet humanity partway…

_____

Which will come true? Who’s to say? Whether it’s apocalyptic destruction or runaway technological evolution, cataclysmic change is expected and could very well threaten our existence. Personally, I’m hoping for something in the scenario 5 and/or 6 range. It would be nice to know that both humanity and the world it originated from will survive the coming centuries!

The Future of Space: A Space Elevator by 2050?

In the ongoing effort to ensure humanity has a future offworld, it seems that another major company has thrown its hat into the ring. This time, it’s the Japanese construction giant Obayashi that has declared its interest in building a Space Elevator, a feat which it plans to have up and running by the year 2050. If successful, it would make space travel easier and more accessible, and revolutionize the world economy.

This is just the latest proposal to build an elevator in the coming decades, using both existing and emerging technology. Obayashi’s plan calls for a tether that will reach 96,000 kilometers into space, with robotic cars powered by magnetic linear motors that will carry people and cargo to a newly-built space station. The estimated travel time is 7 days, and the trip will cost a fraction of what it currently takes to bring people to the ISS using rockets.
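As a rough sanity check, those figures imply a fairly modest climbing speed. Here is a back-of-envelope sketch (assuming, for simplicity, that a car travels the full tether length in the quoted seven days; the actual station altitude isn’t specified):

```python
# Back-of-envelope climber speed from the quoted Obayashi figures.
tether_length_km = 96_000   # full tether length
travel_time_days = 7        # quoted travel time

travel_time_hours = travel_time_days * 24            # 168 hours
avg_speed_kmh = tether_length_km / travel_time_hours

print(f"Average climber speed: {avg_speed_kmh:.0f} km/h")  # ~571 km/h
```

That’s somewhat slower than a passenger jet’s cruising speed, which puts the seven-day figure in perspective.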

The company said the fantasy can now become a reality because of the development of carbon nanotechnology. As Yoji Ishikawa, a research and development manager at Obayashi, explained:

The tensile strength is almost a hundred times stronger than steel cable so it’s possible. Right now we can’t make the cable long enough. We can only make 3-centimetre-long nanotubes but we need much more… we think by 2030 we’ll be able to do it.

Once considered the realm of science fiction, the concept is fast becoming a possibility. A major international study in 2012 concluded the space elevator was feasible, but best achieved with international co-operation. Since that time, universities all over Japan have been working on the engineering problems, and every year they hold competitions to share their suggestions and learn from each other.

Experts have claimed the space elevator could signal the end of Earth-based rockets, which are hugely expensive and dangerous. Compared to space shuttles, which cost about $22,000 per kilogram to take cargo into space, the Space Elevator could do it for around $200. It’s also believed that having one operational could help solve the world’s power problems by delivering huge amounts of solar power. It would also be a boon for space tourism.
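Those per-kilogram figures work out to a dramatic ratio, which is easy to verify:

```python
# Cost-per-kilogram comparison from the figures quoted above.
shuttle_cost_per_kg = 22_000   # USD, shuttle-era estimate
elevator_cost_per_kg = 200     # USD, projected elevator cost

ratio = shuttle_cost_per_kg / elevator_cost_per_kg
print(f"Elevator is ~{ratio:.0f}x cheaper per kilogram")  # ~110x
```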

Constructing the Space Elevator would allow small rockets to be housed and launched from stations in space without the need for the massive amounts of fuel required to break the Earth’s gravitational pull. Obayashi is working on cars that will carry 30 people up the elevator, so it may not be too long before the Moon is the next must-see tourist destination. They are joined by a team at Kanagawa University that has been working on robotic cars, or climbers.

And one of the greatest issues – the development of a tether that can withstand the weight and tensile stresses of reaching into orbit – may be closer to being solved than previously thought. While the development of carbon nanotubes has certainly been a shot in the arm for those contemplating the space elevator’s tether, this material is not quite strong enough to do the job itself.

Luckily, a team working out of Penn State University has created something that just might. Led by chemistry professor John Badding, the team has created a “diamond nanothread” – a thread composed of carbon atoms that measures one-twenty-thousandth the diameter of a single strand of human hair, and which may prove to be the strongest man-made material yet created.

At the heart of the thread is a never-before-seen structure resembling the hexagonal rings of bonded carbon atoms that make up diamond, the hardest known mineral in existence. That makes these nanothreads potentially stronger and more resilient than the most advanced carbon nanotubes, which are similarly super-durable and super-light structures composed of rolled-up, one-atom-thick sheets of carbon called graphene.

Graphene and carbon nanotubes are already ushering in stunning advancements in the fields of electronics, energy storage and even medicine. This new discovery of diamond nanothreads, if they prove to be stronger than existing materials, could accelerate this process even further and revolutionize the development of electric vehicles, batteries, touchscreens, solar cells, and nanocomposites.

But by far the most ambitious possibility offered is that of a durable cable that could send humans to space without the need for rockets. As John Badding said in a statement:

One of our wildest dreams for the nanomaterials we are developing is that they could be used to make the super-strong, lightweight cables that would make possible the construction of a ‘space elevator’, which so far has existed only as a science-fiction idea.

At this juncture, and given the immense cost and international commitment required to build it, 2050 seems like a reasonable estimate for creating a Space Elevator. However, other groups hope to see this goal become a reality sooner. The International Academy of Astronautics (IAA), for example, thinks one could be built by 2035 using existing technology. And several assessments indicate that a Lunar Elevator would be far more feasible in the meantime.

Come what may, it is clear that the future of space exploration will require us to think bigger and bolder if we’re going to secure our future as a “space-faring” race. And be sure to check out these videos from Penn State and the Obayashi Corp:

John Badding and the Nanodiamond Thread:


Obayashi and the 2050 Space Elevator:


Sources: cnet.com, abc.net.au, science.psu.edu

Powered by the Sun: Boosting Solar Efficiency

Improving the efficiency of solar power – which is currently the most promising alternative energy source – is central to ensuring that it becomes an economically viable replacement for fossil fuels, coal, and other “dirty” sources. And while many solutions have emerged in recent years that have led to improvements in solar panel efficiency, many developments are also aimed at the other end of things – i.e. improving the storage capacity of solar batteries.

In the former case, a group of scientists working with the University of Utah believe they’ve discovered a method of substantially boosting solar cell efficiencies. By adding a polychromat layer that separates and sorts incoming light, redirecting it to strike particular layers in a multijunction cell, they hope to create a commercial cell that can absorb more wavelengths of light, and therefore generate more energy per volume than conventional cells.

Traditionally, solar cell technology has struggled to overcome a significant efficiency problem. The type of substrate used dictates how much energy can be absorbed from sunlight – but each type of substrate (silicon, gallium arsenide, indium gallium arsenide, and many others) corresponds to capturing a particular wavelength of energy. Cheap solar cells built on inexpensive silicon have a maximum theoretical efficiency of 34% and a practical (real-world) efficiency of around 22%.

At the other end of things, there are multijunction cells. These use multiple layers of substrates to capture a larger section of the sun’s spectrum and can reach up to 87% efficiency in theory – but are currently limited to 43% in practice. What’s more, these types of multijunction cells are extremely expensive and have intricate wiring and precise structures, all of which leads to increased production and installation costs.

In contrast, the cell created by the University of Utah used two layers – indium gallium phosphide (for visible light) and gallium arsenide (for infrared light). According to the research team, when their polychromat was added, the power efficiency increased by 16 percent. The team also ran simulations of a polychromat layer with up to eight different absorption layers, and claim that it could potentially yield an efficiency increase of up to 50%.

However, there were some footnotes to their report which temper the good news. For one, the potential gain has not been tested yet, so any major increases in solar efficiency remain theoretical at this time. Second, the report states that the reported gain was a percentage of a percentage, meaning that if the original cell efficiency was 30%, then a gain of 16 percent means that the new efficiency is 34.8%. That’s still a huge gain for a polychromat layer that is easily produced, but not as impressive as it originally sounded.
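The distinction between a relative gain and an absolute gain is easy to get wrong, so here is the arithmetic spelled out (a minimal sketch; the 30% base figure is the report’s own illustrative example):

```python
def apply_relative_gain(base_efficiency, relative_gain):
    """Apply a gain quoted as a fraction *of* the base efficiency,
    not as percentage points added to it."""
    return base_efficiency * (1 + relative_gain)

# A 16% relative gain on a 30%-efficient cell:
new_eff = apply_relative_gain(0.30, 0.16)
print(round(new_eff * 100, 1))  # 34.8 (percent), not 46
```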

Still, given that the biggest barrier to multi-junction solar cell technology is manufacturing complexity and associated cost, anything that boosts cell efficiency on the front end without requiring any major changes to the manufacturing process is going to help with the long-term commercialization of the technology. Advances like this could help make such technologies cost-effective for personal deployment and allow them to scale in a similar fashion to cheaper devices.

In the latter case, where energy storage is concerned, a California-based startup called Enervault recently unveiled battery technology that could increase the amount of renewable energy utilities can use. The technology is based on inexpensive materials that researchers had largely given up on because batteries made from them didn’t last long enough to be practical. But the company says it has figured out how to make the batteries last for decades.

The technology is being demonstrated in a large battery at a facility in the California desert near Modesto, one that stores one megawatt-hour of electricity, enough to run 10,000 100-watt light bulbs for an hour. The company has been testing a similar, though much smaller, version of the technology for about two years with good results. It has also raised $30 million in funding, including a $5 million grant from the U.S. Department of Energy.
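The one-megawatt-hour figure checks out against the light-bulb comparison:

```python
# Verifying the 1 MWh = 10,000 bulbs x 100 W x 1 hour comparison.
bulbs = 10_000
watts_per_bulb = 100
hours = 1

energy_mwh = bulbs * watts_per_bulb * hours / 1_000_000  # Wh -> MWh
print(energy_mwh)  # 1.0
```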

The technology is a type of flow battery, so called because the energy storage materials are in liquid form. They are stored in big tanks until they’re needed and then pumped through a relatively small device (called a stack) where they interact to generate electricity. Building bigger tanks is relatively cheap, so the more energy storage is needed, the better the economics become. That means the batteries are best suited for storing hours’ or days’ worth of electricity, and not delivering quick bursts.

This is especially good news for solar and wind companies, which have remained plagued by problems of energy storage despite improvements in both yield and efficiency. Enervault says that when the batteries are produced commercially at even larger sizes, they will cost just a fifth as much as vanadium redox flow batteries, which have been demonstrated at large scales and are probably the type of flow battery closest to market right now.

And the idea is not confined to startups. Researchers at Harvard recently made a flow battery that could prove cheaper than Enervault’s, but the prototype is small and could take many years to turn into a marketable version. An MIT spinoff, Sun Catalytix, is also developing an advanced flow battery, but its prototype is also small. And other types of inexpensive, long-duration batteries are being developed using materials such as molten metals.

One significant drawback to the technology is that it’s less than 70 percent efficient, which falls short of the 90 percent efficiency of many batteries. The company says the economics still work out, but such a wasteful battery might not be ideal for large-scale renewable energy. More solar panels would have to be installed to make up for the waste. What’s more, the market for batteries designed to store hours of electricity is still uncertain.
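To see what that efficiency gap means in practice, compare how much extra generation is needed at a 70% round-trip efficiency versus 90% to deliver the same stored energy (a simplified model that only considers energy routed through the battery):

```python
def extra_generation_factor(eff_low, eff_high):
    """How much more input energy a lower round-trip efficiency
    requires to deliver the same output as a higher one."""
    return eff_high / eff_low

factor = extra_generation_factor(0.70, 0.90)
print(f"~{(factor - 1) * 100:.0f}% more panel capacity needed")  # ~29%
```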

A combination of advanced weather forecasts, responsive fossil-fuel power plants, better transmission networks, and smart controls for wind and solar power could delay the need for them. California is requiring its utilities to invest in energy storage but hasn’t specified what kind, and it’s not clear what types of batteries will prove most valuable in the near term: slow-charging ones like Enervault’s, or those that deliver quicker bursts of power to make up for short-term variations in energy supply.

Tesla Motors, one company developing the latter type, hopes to make them affordable by producing them at a huge factory. And developments and new materials (i.e. graphene) are being considered all the time that are improving both the efficiency and storage capacity of batteries. And with solar panels and wind power becoming increasingly cost-effective, storage methods catching up seems all but inevitable.

Sources: extremetech.com, technologyreview.com

 

Frontiers of Neuroscience: Neurohacking and Neuromorphics

It is one of the hallmarks of our rapidly accelerating times: looking at the state of technology, how it is increasingly being merged with our biology, and contemplating the ultimate leap of merging mind and machinery. The concept has been popular for many decades now, and with experimental procedures showing promise, neuroscience being used to inspire the next great leap in computing, and the advance of biomedicine and bionics, it seems like just a matter of time before people can “hack” their neurology too.

Take Kevin Tracey, a researcher working for the Feinstein Institute for Medical Research in Manhasset, N.Y., as an example. Back in 1998, he began conducting experiments to show that an interface existed between the immune and nervous systems. Building on ten years’ worth of research, he was able to show how inflammation – which is associated with rheumatoid arthritis and Crohn’s disease – can be fought by administering electrical stimuli, in the right doses, to the vagus nerve cluster.

In so doing, he demonstrated that the nervous system was like a computer terminal through which you could deliver commands to stop a problem, like acute inflammation, before it starts, or repair the body after it gets sick. His work also seemed to indicate that electricity delivered to the vagus nerve in just the right intensity and at precise intervals could reproduce a drug’s therapeutic reaction, but with greater effectiveness, minimal health risks, and at a fraction of the cost of “biologic” pharmaceuticals.

Paul Frenette, a stem-cell researcher at the Albert Einstein College of Medicine in the Bronx, is another example. After discovering the link between the nervous system and prostate tumors, he and his colleagues created SetPoint – a startup dedicated to finding ways to manipulate neural input to delay the growth of tumors. These and other efforts are part of the growing field of bioelectronics, where researchers are creating implants that can communicate directly with the nervous system in order to fight everything from cancer to the common cold.

Impressive as this may seem, bioelectronics is just part of the growing discussion about neurohacking. In addition to the leaps and bounds being made in the field of brain-to-computer interfacing (and brain-to-brain interfacing), which would allow people to control machinery and share thoughts across vast distances, there is also a field of neurosurgery that is seeking to use the miracle material of graphene to solve some of the most challenging issues in the field.

Given graphene’s rather amazing properties, this should not come as much of a surprise. In addition to being incredibly thin, lightweight, and light-sensitive (it’s able to absorb light in both the UV and IR range), graphene also has a very high surface area (2,630 square meters per gram), which leads to remarkable conductivity. It also has the ability to bind or bioconjugate with various modifier molecules, and hence transform its behavior.

Already, it is being considered as a possible alternative to copper wires to break the energy-efficiency barrier in computing, and may even prove useful in quantum computing. But researchers in the field of neurosurgery are looking to develop materials that can bridge and even stimulate nerves. And in a story featured in the latest issue of Neurosurgery, the authors suggest that graphene may be ideal as an electroactive scaffold when configured as a three-dimensional porous structure.

That might be a preferable solution when compared with other currently vogue ideas, like using liquid metal alloys as bridges. Thanks to Samsung’s recent research into using graphene in their portable devices, it has also been shown to make an ideal E-field stimulator. And recent experiments on mice in Korea showed that a flexible, transparent graphene skin could be used as an electrical field stimulator to treat cerebral hypoperfusion by stimulating blood flow through the brain.

And what look at the frontiers of neuroscience would be complete without mentioning neuromorphic engineering? Whereas neurohacking and neurosurgery are looking for ways to merge technology with the human brain to combat disease and improve its health, NE is looking to the human brain to create computational technology with improved functionality. The result thus far has been a wide range of neuromorphic chips and components, such as memristors and neuristors.

However, as a whole, the field has yet to define for itself a clear path forward. That may be about to change thanks to Jennifer Hasler and a team of researchers at Georgia Tech, who recently published a roadmap to the future of neuromorphic engineering, with the end goal of creating the human-brain equivalent of processing. This consisted of Hasler sorting through the many different approaches for the ultimate embodiment of neurons in silico and coming up with the technology that she thinks is the way forward.

Her answer is not digital simulation, but rather the lesser-known technology of FPAAs (Field-Programmable Analog Arrays). FPAAs are similar to digital FPGAs (Field-Programmable Gate Arrays), but also include reconfigurable analog elements. They have been around on the sidelines for a few years, but have been used primarily as so-called “analog glue logic” in system integration. In short, they handle a variety of analog functions that don’t fit on a traditional integrated circuit.

Hasler outlines an approach where desktop neuromorphic systems will use System on a Chip (SoC) approaches to emulate billions of low-power neuron-like elements that compute using learning synapses. Each synapse has an adjustable strength associated with it and is modeled using just a single transistor. Her own design for an FPAA board houses hundreds of thousands of programmable parameters which enable systems-level computing on a scale that dwarfs other FPAA designs.

At the moment, she predicts that human brain-equivalent systems will require a reduction in power usage to the point where they consume just one-eighth of what the digital supercomputers currently used to simulate neuromorphic systems require. Her own design can account for a four-fold reduction in power usage, but the rest is going to have to come from somewhere else – possibly through the use of better materials (i.e. graphene or one of its derivatives).
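Those power targets imply a simple remaining gap: if an eight-fold overall reduction is needed and the FPAA design delivers four-fold, a further two-fold must come from materials or process improvements. Sketched out:

```python
# Remaining power-reduction gap implied by Hasler's estimates.
target_reduction = 8.0    # one-eighth of today's simulation power
fpaa_reduction = 4.0      # four-fold reduction from the FPAA design

remaining = target_reduction / fpaa_reduction
print(f"Still need a {remaining:.0f}x reduction from elsewhere")  # 2x
```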

Hasler also forecasts that using soon-to-be-available 10nm processes, a desktop system with human-like processing power that consumes just 50 watts of electricity may eventually be a reality. These will likely take the form of chips with millions of neuron-like skeletons connected by billions of synapses firing to push each other over the edge, and who’s to say what they will be capable of accomplishing or what other breakthroughs they will make possible?

In the end, neuromorphic chips and technology are merely one half of the equation. In the grand scheme of things, the aim of all of this research is not only to produce technology that can ensure better biology, but technology inspired by biology to create better machinery. The end result of this, according to some, is a world in which biology and technology increasingly resemble each other, to the point that there is barely a distinction to be made and they can be merged.

Charles Darwin would roll over in his grave!

Sources: nytimes.com, extremetech.com, (2), journal.frontiersin.org, pubs.acs.org

Breaking Moore’s Law: Graphene Nanoribbons

Ask a technician or a computer science major, and they will likely tell you that the next great leap in computing will only come once Moore’s Law is overcome. This law, which states that the number of transistors on a single chip doubles every 18 months to two years, is approaching a bottleneck. For decades, CPUs and computer chips have been getting smaller, but they are fast approaching their physical limitations.
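To see what that doubling actually implies, here is a quick illustration (the starting count and time span are arbitrary numbers of my own choosing):

```python
# Moore's law as stated: transistor counts double every 18-24 months.
def transistors(start_count, years, doubling_months=24):
    """Projected transistor count after the given number of years."""
    return start_count * 2 ** (years * 12 / doubling_months)

# Starting from 1 billion transistors, a decade of two-year doublings
# yields a 32-fold increase:
print(transistors(1e9, 10))
```

It is exactly this exponential growth that runs into physical limits once the wires can shrink no further.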

One of the central problems arising from the Moore’s Law bottleneck has to do with the materials we use to create microchips. Short of continued miniaturization, there is simply no way to keep placing more and more components on a microchip. And copper wires can only be miniaturized so much before they lose the ability to conduct electricity effectively.

This has led scientists and engineers to propose that new materials be used, and graphene appears to be the current favorite. Researchers at the University of California at Berkeley are busy working on a form of so-called graphene nanoribbons that could increase the density of transistors on a computer chip by as much as 10,000 times.

Graphene, for those who don’t know, is a miracle material that is basically a sheet of carbon only one layer of atoms thick. This two-dimensional configuration gives it some incredible properties, like extreme electrical conductivity at room temperature. Researchers have been working on producing high-quality sheets of the material, but nanoribbons ask more of science than it can currently deliver.

Work on nanoribbons over the past decade has revolved around using lasers to carefully sculpt ribbons 10 or 20 atoms wide from larger sheets of graphene. On the scale of billionths of an inch, that calls for incredible precision. If the makers are even a few carbon atoms off, it can completely alter the properties of the ribbon, preventing it from working as a semiconductor at room temperature.

Fortunately, Berkeley chemist Felix Fischer thinks he might have found a solution. Rather than carving ribbons out of larger sheets like a sculptor, Fischer has begun building nanoribbons up from carbon atoms using a chemical process. Basically, he’s working on a new way to produce graphene that happens to already be in the right configuration for nanoribbons.

He begins by synthesizing rings of carbon atoms similar in structure to benzene, then heats the molecules to encourage them to form a long chain. A second heating step strips away most of the hydrogen atoms, freeing up the carbon to form bonds in a honeycomb-like graphene structure. This process allows Fischer and his colleagues to control where each atom of carbon goes in the final nanoribbon.

On the scale Fischer is making them, graphene nanoribbons could be capable of transporting electrons thousands of times faster than a traditional copper conductor. They could also be packed very close together since a single ribbon is 1/10,000th the thickness of a human hair. Thus, if the process is perfected and scaled up, everything from CPUs to storage technology could be much faster and smaller.

Sources: extremetech.com

Powered by the Sun: Efficiency Records and Future Trends

There have been many new developments in the field of solar technology lately, thanks to new waves of innovation and the ongoing drive to make the technology cheaper and more efficient. At the current rate of growth, solar power is predicted to become cheaper than natural gas by 2025. And with that, so many opportunities for clean energy and clean living will become available.

Though there are many contributing factors to this trend, much of the progress made of late is thanks to the discovery of graphene. This miracle material – which is ultra-thin, strong, and light – can act as a supercapacitor, a battery, and an exceptional conductor. And its use in the manufacture of solar panels is leading to record-breaking efficiency.

Back in 2012, researchers from the University of Florida reported a record efficiency of 8.6 percent for a prototype solar cell consisting of a wafer of silicon coated with a layer of graphene doped with trifluoromethanesulfonyl-amide (TFSA). And now, another team is claiming a new record efficiency of 15.6 percent for a graphene-based solar cell by ditching the silicon altogether.

And while 15.6 percent efficiency might still lag behind certain conventional solar cell designs (for instance, the Boeing Spectrolabs mass-production design of 2010 achieved upwards of 40 percent), this represents a dramatic increase for graphene cells. The reason graphene is favored in the production of cells is that, compared to silicon, it is far cheaper to produce.
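Putting the article's three efficiency figures side by side shows just how quickly graphene cells are closing the gap:

```python
# Relative jump in graphene-cell efficiency, using the figures from the article.
old_eff = 8.6     # percent, 2012 University of Florida prototype
new_eff = 15.6    # percent, the new graphene-based record
boeing_eff = 40.0  # percent, Boeing Spectrolabs mass-production design (2010)

improvement = new_eff / old_eff  # graphene cells nearly doubled in ~2 years
gap = boeing_eff - new_eff       # percentage points still separating them from the best
print(round(improvement, 2), gap)
```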

Despite the improvements made in manufacturing and installation, silicon is still expensive to process into cells. This new prototype, created by researchers from the Group of Photovoltaic and Optoelectronic Devices (DFO) – located at Spain’s Universitat Jaume I Castelló and the University of Oxford – uses a combination of titanium oxide and graphene as a charge collector, and perovskite to absorb sunlight.

In addition to the impressive solar efficiency, the team says the device is manufactured at low temperatures, with the several layers that go into making it processed at under 150° C (302° F) using a solution-based deposition technique. This not only means lower potential production costs, but also makes it possible for the technology to be used on flexible plastics.

What this means is a drop in costs all around, from production to installation, and the means to adapt the panel design to more surfaces. And considering the rate at which efficiency is being increased, it would not be rash to anticipate a range of graphene-based solar panels hitting the market in the near future – ones that can give conventional cells a run for their money!

However, another major stumbling block for solar power is weather, since it requires clear skies to be effective. For some time, the idea of placing arrays in space has been proposed as a solution, which may finally be possible thanks to recent drops in the associated costs. In most cases, this consists of orbital arrays, but as noted late last year, there are more ambitious plans as well.

Take the Japanese company Shimizu and its proposed “Luna Ring” as an example. As noted earlier this month, Shimizu has proposed creating a solar array some 400 km (250 miles) wide and 11,000 km (6,800 miles) long that would beam solar energy directly to Earth. Located on the Moon and wrapped around its entirety, this array would be able to take advantage of perennial exposure to sunlight.
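To appreciate the sheer scale of the proposal, a quick calculation from those two figures:

```python
# Rough scale of the proposed Luna Ring (dimensions from the article).
width_km = 400
length_km = 11_000
area_km2 = width_km * length_km
print(area_km2)  # 4.4 million square km of collector area, wrapped around the Moon
```

For comparison, that is a collector surface larger than the land area of most countries on Earth.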

Cables underneath the ring would gather power and transfer it to stations facing Earth, which would then beam the energy our way using microwaves and lasers. Shimizu believes the scheme, which it showed off at a recent exhibition in Japan, would virtually solve our energy crisis, so we would never have to think about fossil fuels again.

They predict that the entire array could be built and operational by 2035. Is that too soon to hope for planetary energy independence? Given the progress being made by companies like SpaceX and NASA in bringing down the cost of getting into space, and the way the Moon is factoring into multiple space agencies’ plans for the coming decades, I would anticipate that such a project is truly feasible, if still speculative.

Combined with the advances being made in wind turbines, tidal harnesses, and other renewable energy sources – e.g. geothermal and piezoelectric – the future of clean energy, clear skies, and clean living can’t get here soon enough! And be sure to check out this video of the Luna Ring, courtesy of the Shimizu corporation:


Sources: gizmodo.com, fastcoexist.com

Judgement Day Update: Super-Strong Robotic Muscle

In their quest to build better, smarter, and faster machines, researchers are looking to human biology for inspiration. As has been clear for some time, anthropomorphic robot designs cannot be expected to do the work of a person or replace human rescue workers if they are composed of gears, pulleys, and hydraulics. Not only would they be too slow, but they would be prone to breakage.

Because of this, researchers have been working to create artificial muscles: synthetic tissues that respond to electrical stimuli, are flexible, and are able to carry several times their own weight – just like the real thing. Such muscles will not only give robots the ability to move and perform tasks with the same ambulatory range as a human; they are likely to be far stronger than the flesh-and-blood variety.

And of late, there have been two key developments on this front which may make this vision come true. The first comes from the US Department of Energy’s Lawrence Berkeley National Laboratory, where a team of researchers has demonstrated a new type of robotic muscle that is 1,000 times more powerful than a human’s, with the ability to catapult an item 50 times its own weight.

The artificial muscle was constructed using vanadium dioxide, a material known for its ability to rapidly change size and shape. Combined with chromium and fashioned with a silicone substrate, the team formed a V-shaped ribbon which formed a coil when released from the substrate. When heated, the coil turned into a micro-catapult with the ability to hurl objects – in this case, a proximity sensor.

Vanadium dioxide boasts several useful qualities for creating miniaturized artificial muscles and motors. An insulator at low temperatures, it abruptly becomes a conductor at 67° Celsius (152.6° F), a quality which makes it an energy efficient option for electronic devices. In addition, the vanadium dioxide crystals undergo a change in their physical form when warmed, contracting along one dimension while expanding along the other two.
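That abrupt insulator-to-conductor switch is what makes the material behave like a thermally triggered actuator. A toy sketch of the transition, using the temperature quoted in the article (the function names are my own invention):

```python
# Toy model of vanadium dioxide's metal-insulator transition: an insulator
# below ~67 deg C, an abrupt conductor above it (figure from the article).

def vo2_is_conductor(temp_c, transition_c=67.0):
    """True once the material has crossed its transition temperature."""
    return temp_c >= transition_c

def c_to_f(c):
    """Celsius to Fahrenheit, to check the article's conversion."""
    return c * 9 / 5 + 32

print(vo2_is_conductor(25))        # room temperature: insulating
print(vo2_is_conductor(70))        # above the transition: conducting
print(round(c_to_f(67), 1))        # the transition point in Fahrenheit
```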

Junqiao Wu, the team’s project leader, had this to say about their invention in a press statement:

Using a simple design and inorganic materials, we achieve superior performance in power density and speed over the motors and actuators now used in integrated micro-systems… With its combination of power and multi-functionality, our micro-muscle shows great potential for applications that require a high level of functionality integration in a small space.

In short, the concept is a big improvement over the gears and motors currently employed in electronic systems. Granted, since it is on the scale of nanometers, it’s not exactly Terminator-compliant. Still, it provides some very interesting possibilities for machines of the future, especially where the functionality of micro-systems is concerned.

Another development with the potential to create robotic muscles comes from Duke University, where a team of engineers has found a possible way to turn graphene into a stretchable, retractable material. For years now, the miracle properties of graphene have made it an attractive option for batteries, circuits, capacitors, and transistors.

However, graphene’s tendency to stick together once crumpled has had a somewhat limiting effect on its applications. But by attaching the material to a stretchy polymer film, the Duke researchers were able to crumple and then unfold the material, resulting in properties that lend it to a broader range of applications – including artificial muscles.

Before adhering the graphene to the rubber film, the researchers first pre-stretched the film to multiple times its original size. The graphene was then attached and, as the rubber film relaxed, the graphene layer compressed and crumpled, forming a pattern where tiny sections were detached. It was this pattern that allowed the graphene to “unfold” when the rubber layer was stretched out again.

The researchers say that by crumpling and stretching, it is possible to tune the graphene from being opaque to transparent, and different polymer films can result in different properties. These include a “soft” material that acts like an artificial muscle. When electricity is applied, the material expands, and when the electricity is cut off, it contracts; the degree of which depends on the amount of voltage used.
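The voltage-dependent behavior described above can be captured in a very simple toy model. To be clear, the linear gain and saturation limit below are invented parameters for illustration only; the article gives no actual numbers:

```python
# Toy model of a voltage-driven artificial muscle: strain grows with applied
# voltage and returns to zero when the voltage is cut off. The gain and
# saturation limit are invented values, purely for illustration.

def muscle_strain(voltage, gain=0.02, max_strain=0.3):
    """Return fractional expansion for a given applied voltage (in volts)."""
    return min(gain * voltage, max_strain)  # saturates at the material's limit

print(muscle_strain(5))    # modest voltage: modest expansion
print(muscle_strain(0))    # voltage off: fully contracted
print(muscle_strain(100))  # high voltage: clamped at the saturation limit
```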

Xuanhe Zhao, an Assistant Professor at the Pratt School of Engineering, explained the implications of this discovery:

New artificial muscles are enabling diverse technologies ranging from robotics and drug delivery to energy harvesting and storage. In particular, they promise to greatly improve the quality of life for millions of disabled people by providing affordable devices such as lightweight prostheses and full-page Braille displays.

Currently, artificial muscles in robots are mostly of the pneumatic variety, relying on pressurized air to function. However, few robots use them because they can’t be controlled as precisely as electric motors. It’s possible then, that future robots may use this new rubberized graphene and other carbon-based alternatives as a kind of muscle tissue that would more closely replicate their biological counterparts.

Not only would this be a boon for robotics, but (as Zhao notes) for amputees and prosthetics as well. Already, bionic devices are restoring ability and even sensation to accident victims, veterans, and people who suffer from physical disabilities. By incorporating carbon-based, piezoelectric muscles, these prosthetics could function just like the real thing, but with greater strength and carrying capacity.

And of course, there is the potential for cybernetic enhancement, at least in the long term. As soon as such technology becomes commercially available, and even affordable, people will have the option of swapping out their regular flesh-and-blood muscles for something a little more “sophisticated” and high-performance. So in addition to killer robots, we might want to keep an eye out for deranged cyborgs!

And be sure to check out this video from the Berkeley Lab showing the vanadium dioxide muscle in action:


Sources: gizmag.com, (2), extremetech.com, pratt.duke.edu