Judgement Day Update: Super-Strong Robotic Muscle

In their quest to build better, smarter and faster machines, researchers are looking to human biology for inspiration. As has been clear for some time, anthropomorphic robots cannot be expected to do the work of a person or replace human rescue workers if they are composed of gears, pulleys, and hydraulics. Not only would they be too slow, but they would be prone to breakage.

Because of this, researchers have been working to create artificial muscles: synthetic tissues that respond to electrical stimuli, are flexible, and can carry several times their own weight – just like the real thing. Such muscles will not only give robots the ability to move and perform tasks with the same ambulatory range as a human; they are likely to be far stronger than the flesh-and-blood variety.

And of late, there have been two key developments on this front which may make this vision come true. The first comes from the US Department of Energy’s Lawrence Berkeley National Laboratory, where a team of researchers has demonstrated a new type of robotic muscle that is 1,000 times more powerful than a human’s, and can catapult an item 50 times its own weight.

The artificial muscle was constructed using vanadium dioxide, a material known for its ability to rapidly change size and shape. Combining it with chromium on a silicon substrate, the team fashioned a V-shaped ribbon that curled into a coil when released from the substrate. When heated, the coil acted as a micro-catapult with the ability to hurl objects – in this case, a proximity sensor.

Vanadium dioxide boasts several useful qualities for creating miniaturized artificial muscles and motors. An insulator at low temperatures, it abruptly becomes a conductor at 67° Celsius (152.6° F), a quality which makes it an energy-efficient option for electronic devices. In addition, vanadium dioxide crystals undergo a change in their physical form when warmed, contracting along one dimension while expanding along the other two.

Junqiao Wu, the team’s project leader, had this to say about their invention in a press statement:

Using a simple design and inorganic materials, we achieve superior performance in power density and speed over the motors and actuators now used in integrated micro-systems… With its combination of power and multi-functionality, our micro-muscle shows great potential for applications that require a high level of functionality integration in a small space.

In short, the concept is a big improvement over the gears and motors currently employed in electronic systems. Since it operates at the micro-scale, though, it’s not exactly Terminator-compliant. Still, it presents some very interesting possibilities for machines of the future, especially where the functionality of micro-systems is concerned.

Another development with the potential to create robotic muscles comes from Duke University, where a team of engineers has found a possible way to turn graphene into a stretchable, retractable material. For years now, the miracle properties of graphene have made it an attractive option for batteries, circuits, capacitors, and transistors.

However, graphene’s tendency to stick together once crumpled has had a somewhat limiting effect on its applications. But by attaching the material to a stretchy polymer film, the Duke researchers were able to crumple and then unfold the material, resulting in properties that lend it to a broader range of applications – including artificial muscles.

Before adhering the graphene to the rubber film, the researchers first pre-stretched the film to multiple times its original size. The graphene was then attached and, as the rubber film relaxed, the graphene layer compressed and crumpled, forming a pattern where tiny sections were detached. It was this pattern that allowed the graphene to “unfold” when the rubber layer was stretched out again.

The researchers say that by crumpling and stretching, it is possible to tune the graphene from being opaque to transparent, and different polymer films can result in different properties. These include a “soft” material that acts like an artificial muscle. When electricity is applied, the material expands, and when the electricity is cut off, it contracts; the degree of which depends on the amount of voltage used.

Xuanhe Zhao, an Assistant Professor at the Pratt School of Engineering, explained the implications of this discovery:

New artificial muscles are enabling diverse technologies ranging from robotics and drug delivery to energy harvesting and storage. In particular, they promise to greatly improve the quality of life for millions of disabled people by providing affordable devices such as lightweight prostheses and full-page Braille displays.

Currently, artificial muscles in robots are mostly of the pneumatic variety, relying on pressurized air to function. However, few robots use them because they can’t be controlled as precisely as electric motors. It’s possible then, that future robots may use this new rubberized graphene and other carbon-based alternatives as a kind of muscle tissue that would more closely replicate their biological counterparts.

Not only would this be a boon for robotics, but (as Zhao notes) for amputees and prosthetics as well. Already, bionic devices are restoring ability and even sensation to accident victims, veterans, and people who suffer from physical disabilities. By incorporating carbon-based, piezoelectric muscles, these prosthetics could function just like the real thing, but with greater strength and carrying capacity.

And of course, there is the potential for cybernetic enhancement, at least in the long-term. As soon as such technology becomes commercially available, even affordable, people will have the option of swapping out their regular flesh and blood muscles for something a little more “sophisticated” and high-performance. So in addition to killer robots, we might want to keep an eye out for deranged cyborg people!

And be sure to check out this video from the Berkeley Lab showing the vanadium dioxide muscle in action:


Sources: gizmag.com, (2), extremetech.com, pratt.duke.edu

Tech News: Google Seeking “Conscious Homes”

In Google’s drive for world supremacy, a good number of start-ups and developers have been bought up. Between their acquisition of eight robotics companies in the space of six months back in 2013 and their ongoing buyout of anyone in the business of aerospace, voice and facial recognition, and artificial intelligence, Google seems determined to have a controlling interest in all fields of innovation.

And in what is their second-largest acquisition to date, Google announced earlier this month that they intend to get in on the business of smart homes. The company in question is Nest Labs, a home automation company that was founded by former Apple engineers Tony Fadell and Matt Rogers in 2010, and is behind the creation of the Learning Thermostat and the Protect smoke and carbon monoxide detector.

The Learning Thermostat, the company’s flagship product, works by learning a home’s heating and cooling preferences over time, removing the need for manual adjustments or programming. Wi-Fi networking and a series of apps also let users control and monitor the Nest unit from afar, consistent with one of the biggest tenets of smart home technology: connectivity.
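In rough outline, “learning a home’s preferences” can be as simple as remembering manual adjustments and predicting from them. The sketch below is a minimal illustration under that assumption – it is not Nest’s actual algorithm, and all names and numbers here are invented:

```python
# A toy "learning thermostat": each time the user manually adjusts
# the temperature, record the hour and setting; later, predict the
# preferred setpoint for an hour by averaging what has been seen.
from collections import defaultdict

class LearningThermostat:
    def __init__(self, default=20.0):
        self.default = default
        self.history = defaultdict(list)   # hour of day -> [setpoints]

    def manual_adjust(self, hour, setpoint):
        """User turns the dial: remember it for this hour of day."""
        self.history[hour].append(setpoint)

    def schedule(self, hour):
        """Predicted setpoint: mean of past adjustments, else default."""
        past = self.history[hour]
        return sum(past) / len(past) if past else self.default

t = LearningThermostat()
t.manual_adjust(7, 21.0)    # warmer on waking
t.manual_adjust(7, 22.0)
t.manual_adjust(23, 17.0)   # cooler overnight
print(t.schedule(7), t.schedule(23), t.schedule(12))  # → 21.5 17.0 20.0
```

After a few days of this, the device stops needing the dial at all – which is the whole pitch of the product.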

Similarly, the Nest Protect, a combination smoke and carbon monoxide detector, works by differentiating between burnt toast and real fires. Whenever it detects smoke, one alarm goes off, which can be quieted by simply waving your hand in front of it. But in a real fire, or where deadly carbon monoxide is detected, a much louder alarm sounds to alert its owners.

In addition, the device sends a daily battery status report to the Nest mobile app – the same one that controls the thermostat – and is capable of connecting with other units in the home. And, since Nest is building a platform for all its devices, if a Nest thermostat is installed in the same home, the Protect can automatically shut it down in the event that carbon monoxide is detected.

According to a statement released by co-founder Tony Fadell, Nest will continue to be run in-house, but will be partnered with Google in their drive to create a conscious home. On his blog, Fadell explained his company’s decision to join forces with the tech giant:

Google will help us fully realize our vision of the conscious home and allow us to change the world faster than we ever could if we continued to go it alone. We’ve had great momentum, but this is a rocket ship. Google has the business resources, global scale, and platform reach to accelerate Nest growth across hardware, software, and services for the home globally.

Yes, and I’m guessing that the $3.2 billion price tag added a little push as well! Needless to say, some wondered why Apple didn’t try to snatch up this burgeoning company, seeing as how it’s being run by two of its former employees. But according to Fadell, Google founder Sergey Brin “instantly got what we were doing and so did the rest of the Google team” when they got a Nest demo at the 2011 TED conference.

In a press release, Google CEO Larry Page had this to say about bringing Nest into their fold:

They’re already delivering amazing products you can buy right now – thermostats that save energy and smoke/[carbon monoxide] alarms that can help keep your family safe. We are excited to bring great experiences to more homes in more countries and fulfill their dreams!

But according to some, this latest act by Google goes way beyond wanting to develop devices. Sara Watson at Harvard University’s Berkman Center for Internet and Society is one such person, who believes Google is now a company obsessed with viewing everyday activities as “information problems” to be solved by machine learning and algorithms.

Consider Google’s fleet of self-driving vehicles as an example, not to mention their many forays into smartphone and deep learning technology. The home is no different, and a Google-enabled smart home of the future, using a platform such as the Google Now app – which already gathers data on users’ travel habits – could adapt energy usage to your life in even more sophisticated ways.

Seen in these terms, Google’s long-term plans of being at the forefront of the new technological paradigm – where smart technology knows and anticipates our needs, and everything is at our fingertips – certainly become clearer. I imagine that their next goal will be to facilitate the creation of household AIs: machine minds that monitor everything within our household, provide maintenance, and ensure energy efficiency.

However, another theory has it that this is in keeping with Google’s push into robotics, led by the former head of Android, Andy Rubin. According to Alexis C. Madrigal of the Atlantic, Nest always thought of itself as a robotics company, as evidenced by the fact that their VP of technology is none other than Yoky Matsuoka – a roboticist and artificial intelligence expert from the University of Washington.

During an interview with Madrigal back in 2012, she explained why this was. Apparently, Matsuoka saw Nest as being positioned right in a place where it could help machine and human intelligence work together:

The intersection of neuroscience and robotics is about how the human brain learns to do things and how machine learning comes in to augment that.

In short, Nest is a cryptorobotics company that deals in sensing, automation, and control. It may not make a personable, humanoid robot, but it is producing machine intelligences that can do things in the physical world. Seen in this respect, the acquisition was not so much part of Google’s drive to possess all our personal information, but a mere step along the way towards the creation of a working artificial intelligence.

It’s a Brave New World, and it seems that people like Musk and Page – and a slew of futurists determined to make it happen – are at the center of it.

Sources: news.cnet.com, (2), newscientist.com, nest.com, theatlantic.com

The Future of Computing: Brain-Like Computers

It’s no secret that computer scientists and engineers are looking to the human brain as a means of achieving the next great leap in computer evolution. Already, machines are being developed that rely on “machine blood,” can continue working despite being damaged, and recognize images and speech. And soon, computer chips capable of learning from their mistakes will be available as well.

The new computing approach, already in use by some large technology companies, is based on the biological nervous system – specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.

The first commercial version of the new kind of computer chip is scheduled to be released in 2014, and is the result of a collaborative effort between I.B.M. and Qualcomm, as well as a Stanford research team. This “neuromorphic processor” can not only automate tasks that once required painstaking programming, but can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.

In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That can hold enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.

For example, computer vision systems only “recognize” objects that can be identified by the statistics-oriented algorithms programmed into them. An algorithm is like a recipe, a set of step-by-step instructions to perform a calculation. But last year, Google researchers were able to get a machine-learning algorithm, known as a “Google Neural Network”, to perform an identification task (involving cats) without supervision.
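The “recipe” analogy is easy to make concrete. Here is a deliberately trivial example (purely illustrative, nothing to do with Google’s systems): a conventional algorithm spells out every step in advance, whereas a learned system’s behavior comes from data rather than hand-written rules.

```python
# A conventional algorithm: every step is written out in advance,
# like a recipe -- nothing here is learned from data.
def average(values):
    total = 0.0
    for v in values:              # step 1: add up the values
        total += v
    return total / len(values)    # step 2: divide by the count

print(average([2, 4, 6, 8]))      # → 5.0
```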

And this past June, the company said it had used those neural network techniques to develop a new search service to help customers find specific photos more accurately. Then, this past November, researchers at Stanford University came up with a new algorithm that could give computers the power to more reliably interpret language, known as the Neural Analysis of Sentiment (NaSent).

A similar concept known as Deep Learning is also looking to endow software with a measure of common sense. Google is using this technique with their voice recognition technology to aid in performing searches. In addition, the social media giant Facebook is looking to use deep learning to improve Graph Search, an engine that allows users to search activity on their network.

Until now, the design of computers was dictated by ideas originated by the mathematician John von Neumann about 65 years ago. Microprocessors perform operations at lightning speed, following instructions programmed using long strings of binary code (0s and 1s). The information is stored separately in what is known as memory, either in the processor itself, in adjacent storage chips or in higher capacity magnetic disk drives.

By contrast, the new processors consist of electronic components that can be connected by wires that mimic biological synapses. Because they are based on large groups of neuron-like elements, they are known as neuromorphic processors, a term credited to the California Institute of Technology physicist Carver Mead, who pioneered the concept in the late 1980s.

These processors are not “programmed” in the conventional sense. Instead, the connections between the circuits are “weighted” according to correlations in data that the processor has already “learned.” Those weights are altered as data flows into the chip, causing the elements to change their values and to “spike.” This, in turn, strengthens some connections and weakens others, reacting much the same way the human brain does.
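The weighting-and-spiking behavior described above can be sketched in a few lines. This is a toy illustration only – not the IBM/Qualcomm design – and the threshold, learning rate, and decay values are made up:

```python
# Toy "neuromorphic" update rule: a unit sums its weighted inputs,
# "spikes" when the sum crosses a threshold, and the connections
# that contributed to the spike are strengthened (a Hebbian-style
# rule) while idle connections slowly decay.

def step(weights, inputs, threshold=0.7, learn_rate=0.1, decay=0.01):
    """Advance one time step; return (spiked, updated weights)."""
    activation = sum(w * x for w, x in zip(weights, inputs))
    spiked = activation >= threshold
    new_weights = []
    for w, x in zip(weights, inputs):
        if spiked and x > 0:
            w += learn_rate * x   # strengthen a contributing synapse
        else:
            w -= decay * w        # let unused connections fade
        new_weights.append(w)
    return spiked, new_weights

weights = [0.4, 0.4, 0.2]
pattern = [1, 1, 0]               # the same stimulus, repeated
for _ in range(20):
    spiked, weights = step(weights, pattern)

# Repeated exposure strengthens the synapses carrying the pattern
# and weakens the idle one -- no explicit reprogramming involved.
print(spiked, [round(w, 2) for w in weights])
```

The point of the sketch is the last line: the “program” is the final set of weights, arrived at by exposure to data rather than by instruction.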

In the words of Dharmendra Modha, an I.B.M. computer scientist who leads the company’s cognitive computing research effort:

Instead of bringing data to computation as we do today, we can now bring computation to data. Sensors become the computer, and it opens up a new way to use computer chips that can be everywhere.

One great advantage of the new approach is its ability to tolerate glitches, whereas traditional computers cannot work around the failure of even a single transistor. With the biological designs, the algorithms are ever-changing, allowing the system to continuously adapt and work around failures to complete tasks. Another benefit is energy efficiency, an inspiration likewise drawn from the human brain.

The new computers, which are still based on silicon chips, will not replace today’s computers, but augment them; at least for the foreseeable future. Many computer designers see them as coprocessors, meaning they can work in tandem with other circuits that can be embedded in smartphones and the centralized computers that run computing clouds.

However, the new approach is still limited, thanks to the fact that scientists still do not fully understand how the human brain functions. As Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, put it:

We have no clue. I’m an engineer, and I build things. There are these highfalutin theories, but give me one that will let me build something.

Luckily, there are efforts underway designed to remedy this, with the specific intention of directing that knowledge towards the creation of better computers and AIs. One such effort is the National Science Foundation-financed Center for Brains, Minds and Machines, a new research center based at the Massachusetts Institute of Technology in partnership with Harvard and Cornell.

Another is the California Institute for Telecommunications and Information Technology (aka. Calit2) – a center dedicated to innovation in nanotechnology, life sciences, information technology, and telecommunications. As Larry Smarr, an astrophysicist and director of the Institute, put it:

We’re moving from engineering computing systems to something that has many of the characteristics of biological computing.

And last, but certainly not least, is the Human Brain Project, an international group of 200 scientists from 80 different research institutions, based in Lausanne, Switzerland. Having secured the $1.6 billion they need to fund their efforts, these researchers will spend the next ten years conducting research that cuts across multiple disciplines.

This initiative, which has been compared to the Large Hadron Collider, will attempt to reconstruct the human brain piece-by-piece and gradually bring these cognitive components into an overarching supercomputer. The expected result of this research will be new platforms for “neuromorphic computing” and “neurorobotics,” allowing for the creation of computing and robotic architectures that mimic the functions of the human brain.

When future generations look back on this decade, no doubt they will refer to it as the birth of the neuromorphic computing revolution. Or maybe just the Neuromorphic Revolution for short, but that sort of depends on the outcome. With so many technological revolutions well underway, it is difficult to imagine how the future will look back and characterize this time.

Perhaps, as Charles Stross suggests, it will simply be known as “the teens”: that time in pre-Singularity history when it was all starting to come together, but had yet to explode and violently change everything we know. I for one am looking forward to being around to witness it all!

Sources: nytimes.com, technologyreview.com, calit2.net, humanbrainproject.eu

The Future is Here: 3-D Printed Eye Cells

In the past few years, medical researchers have been able to replicate real, living tissue samples using 3-D printing technology – ranging from replacement ears and printed cartilage to miniature kidneys and even liver cells. Well now, thanks to a team of researchers from the University of Cambridge, eye cells have been added to that list.

Using a standard ink-jet printer to deposit layers of cells, the research team managed to print two types of central nervous system cells from the retinas of adult rats – ganglion cells (which transmit information from the eye to the brain) and glial cells (which provide protection and support for neurons). The resulting cells were able to grow normally and remain healthy in culture.

Ink-jet printing has been used to deposit cells before, but this is the first time cells from an adult animal’s central nervous system have been printed. The research team published its findings in IOP Publishing’s open-access journal Biofabrication, and plans to extend the study to print other cells of the retina, including light-sensitive photoreceptors.

In the report, Keith Martin and Barbara Lorber – the co-authors of the paper who work at the John van Geest Centre for Brain Repair at the University of Cambridge – explained the experiment in detail:

Our study has shown, for the first time, that cells derived from the mature central nervous system, the eye, can be printed using a piezoelectric inkjet printer. Although our results are preliminary and much more work is still required, the aim is to develop this technology for use in retinal repair in the future.

This is especially good news for people with impaired visual acuity, or those who fear losing their sight, as it could lead to new therapies for retinal disorders such as macular degeneration – a leading cause of blindness. Naturally, more tests are needed before human trials can begin. But the research and its conclusions are quite reassuring: eye cells can not only be produced synthetically, but will remain healthy after they are produced.

Clara Eaglen, a spokesperson for the Royal National Institute of Blind People (RNIB), had this to say about the breakthrough:

The key to this research, once the technology has moved on, will be how much useful vision is restored. Even a small bit of sight can make a real difference, for some people it could be the difference between leaving the house on their own or not. It could help boost people’s confidence and in turn their independence.

Combined with bionic eyes that are now approved for distribution in the US, and stem cell treatments that have restored sight in mice, this could be the beginning of the end of blindness. And with all the strides being made in bioprinting and biofabrication, it could also be another step on the long road to replacement organs and print-on-demand body parts.

Sources: news.cnet.com, singularityhub.com, cam.ac.uk, bbc.co.uk

News From Space: Luna Rings and Spidersuits!

Space is becoming a very interesting place, thanks to numerous innovations that are looking ahead to the next great leap in exploration. With the Moon and Mars firmly fixed as the intended targets for future manned missions, everything from proposed settlements to construction projects is being plotted, and the requisite tools are being fashioned.

For instance, the Shimizu Corporation (the designers of the Shimizu Mega-City Pyramid), a Japanese construction firm, has proposed a radical idea for bringing solar energy to the world. Taking the concept of space-based solar power a step further, Shimizu has proposed the creation of a “Luna Ring” – an array of solar cells around the Moon’s 11,000 km (6,800 mile) equator that would harvest solar energy and beam it back to Earth.

The plan involves deriving materials from the lunar soil itself, and using them to build an array that would measure some 400 km (250 miles) wide. Since the Moon’s equator receives steady exposure to the Sun, the photovoltaic ring would be able to generate a continuous supply of electricity, which it would then beam down to Earth from the near side of the Moon.

It’s an ambitious idea that calls for assembling machinery transported from Earth and using tele-operated robots to do the actual construction on the Moon’s surface, once it all arrives. The project would involve multiple phases, to be spread out over a period of about thirty years, and which relies on multiple strategies to make it happen.

For example, the firm claims that water – a necessary prerequisite for construction – could be produced by reducing lunar soil with hydrogen imported from Earth. The company also proposes extracting local regolith to fashion “lunar concrete”, and using solar-heat treatment processes to form bricks, ceramics, and glass fibers.

The remotely-controlled robots would also be responsible for other construction tasks, such as excavating the surrounding landscape, leveling the ground, laying out solar panel-studded concrete, and laying embedded cables that would run from the ring to a series of transmission stations located on the Earth-facing side of the Moon.

Power could be beamed to the Earth through microwave power transmission antennas, about 20 m (65 ft) in diameter, and a series of high density lasers, both of which would be guided by radio beacons. Microwave power receiving antennas on Earth, located offshore or in areas with little cloud cover, could convert the received microwave power into DC electricity and send it to where it was needed.

The company claims that its system could beam up to 13,000 terawatts of power around-the-clock – a rate far beyond humanity’s total average power consumption. With such an array looming in space, and a few satellites circling the planet to pick up the slack, Earth’s energy needs could be met for the foreseeable future, and all without a single drop of oil or brick of coal.
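As a sanity check, here is the arithmetic converting a continuous power output into energy per year; the world-consumption figure in the comment is a rough outside estimate, not something from the article:

```python
# Back-of-the-envelope arithmetic on the Luna Ring claim above:
# converting a continuous power output (terawatts) into annual
# energy (terawatt-hours). For scale, world primary energy use is
# commonly estimated on the order of 1.5e5 TWh/year (rough outside
# figure, assumed here only for comparison).
claimed_power_tw = 13_000            # claimed continuous output, TW
hours_per_year = 365.25 * 24         # ~8,766 hours
energy_twh = claimed_power_tw * hours_per_year
print(f"{energy_twh:.3e} TWh per year")  # → 1.140e+08 TWh per year
```

In other words, a watt is a rate and a watt-hour is an amount – conflating the two is an easy way to get claims like this wrong by orders of magnitude.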

The proposed timeline has actual construction beginning as soon as 2035.

And naturally, when manned missions are again mounted into space, the crews will need the proper equipment to live, thrive and survive. And since much of today’s space suit technology is several decades old, space agencies and private companies are partnering to develop new and innovative gear with which to equip the men and women who will brave the dangers of space and planetary exploration.

Consider the Biosuit, which is a prime example of a next-generation technology designed to tackle the challenges of manned missions to Mars. Created by Dava Newman, an MIT aerospace engineering professor, this Spiderman-like suit is a sleeker, lighter alternative to the standard EVA suits that weigh approximately 135 kilograms (300 pounds).

For over a decade now, Newman has been working on a suit that is specifically designed for Mars exploration. At this year’s TEDWomen event in San Francisco, she showcased her concept and demonstrated how its ergonomic design will allow astronauts to explore the difficult terrain of the Red Planet without tripping over the bulk they carry with the current EVA suits.

The suit is sleek because it is pressurized close to the skin, which is possible thanks to tension lines in the suit. These lines are also what give it its Spiderman-like appearance, contributing to its aesthetic appeal. They are specifically designed to flex as the astronaut bends their arms or knees, replacing hard panels with soft, tensile fabric.

Active materials, such as nickel-titanium shape-memory alloys, allow the nylon-and-spandex suit to be shrink-wrapped around the skin even tighter. This is especially important, as it gets Newman closer to her goal of designing a suit that can provide 30% of atmospheric pressure – the level necessary to keep someone alive in space.

Another benefit of the BioSuit is its resiliency. If it gets punctured, an astronaut can fix it with a new type of space-grade Ace Bandage. And perhaps most importantly, traditional suits can only be fitted to people 5′ 5″ and taller, essentially eliminating short women and men from the astronaut program. The BioSuit, on the other hand, can be built for smaller people, making things more inclusive in the future.

Newman is designing the suit for space, but she also has some Earth-bound uses in mind. Thanks to evidence showcasing the benefits of compression for the muscles and cardiovascular system, the technology behind the Biosuit could be used to increase athletic performance, or even help boost mobility for people with cerebral palsy. As Newman herself put it:

We’ll probably send a dozen or so people to Mars in my lifetime. I hope I see it. But imagine if we could help kids with CP just move around a little bit better.

With proper funding, Newman believes she could complete the suit design in two to three years. It would be a boon to NASA, as it appears to be significantly cheaper to make than traditional spacesuits. Funding isn’t in place yet, but Newman is still hopeful that the BioSuit will be ready for the first human missions to Mars, which are slated for sometime in the 2030s.

In the meantime, enjoy this video of the TEDWomen talk featuring Newman and her Biosuit demonstration:

Sources: gizmag, fastcoexist, blog.ted

Climate News: World’s Most Potent Greenhouse Gas Found

For over a century now, scientists have understood the crucial link between greenhouse gases and the effect known as “Global Warming”. For decades, they have focused on the role played by carbon dioxide and methane gas, the two principal polluters tied to human behavior and the consequences of our activities.

But now, a long-lived greenhouse gas more potent than any other has been discovered in the upper atmosphere by chemists at the University of Toronto. It’s known as Perfluorotributylamine (PFTBA), a gas with a radiative efficiency of 0.86 – a measure of a chemical’s effectiveness at warming the climate, expressed in watts per square metre per part per billion.

At present, the biggest contributor to climate change is carbon dioxide, mainly because its concentrations are so high – 393.1 parts per million in 2012 and growing, thanks to human activity. Many other gases contribute to this trend as well – such as nitrogen trifluoride and various chlorofluorocarbons (CFCs) – but are less involved in the overall warming effect because their concentrations are lower.

According to the research article, which appeared in a recent issue of Geophysical Research Letters, the concentrations of PFTBA are very small – about 0.18 parts per trillion by volume in the atmosphere (at least in Toronto, where it was detected). But even though the overall contribution of PFTBA is comparatively small, its effect is “on the same scale as some of the gases that the monitoring community is aware of.”
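For a sense of how small that is, compare the two mixing ratios quoted above – CO2 in parts per million, PFTBA in parts per trillion:

```python
# Comparing the quoted concentrations: CO2 at 393.1 parts per
# million versus PFTBA at 0.18 parts per trillion by volume.
co2_fraction = 393.1e-6
pftba_fraction = 0.18e-12
ratio = co2_fraction / pftba_fraction
print(f"CO2 molecules outnumber PFTBA by roughly {ratio:.1e} to 1")
```

The ratio works out to roughly two billion to one, which is why PFTBA’s overall contribution stays small despite its potency per molecule.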

According to 3M, a producer of PFTBA, the chemical has been sold for more than 30 years for the purpose of cooling semiconductor processing equipment and specialized military equipment, much in the same way that CFCs have been used. It is effective at transferring heat away from electronic components, and is stable, non-flammable, non-toxic, and doesn’t conduct electricity.

The chemical has an average lifespan of about 500 years in the lower atmosphere, and like CFCs, it has long been known to have the potential to damage the ozone layer. But until now, its ability to trap heat in the atmosphere had not been measured, nor had it been detected in the atmosphere. The reason PFTBA is so potent compared to other gases is that it absorbs heat that would normally escape the atmosphere.

Heat, or infrared radiation, comes in different "colors" (wavelengths), and each greenhouse gas is only able to absorb certain ones. PFTBA is different in that it manages to absorb colors that other greenhouse gases don't. It was after some was discovered on the university grounds by Professor Scott Mabury that his team began to consider whether any had made it into the atmosphere as well.

Shortly thereafter, they conducted a series of tests to measure the radiative efficiency of the chemical and then began looking for samples of it in the air. This involved deploying air pumps at three locations – the University of Toronto campus, Mt. Pleasant Cemetery and Woodbine Beach. The samples were then condensed and concentrated, and the PFTBA separated by weight.

The end result was that PFTBA was found in all samples, including those upwind of the University of Toronto, suggesting that it wasn't just coming from the chemistry building. However, the measurements were local and therefore not representative of global average concentrations of the chemical. Still, its discovery is an early indication of a potential danger.

According to Angela Hong, a PhD student at the UofT department of chemistry and the lead author of the paper, this danger lies in the combined effect PFTBA could have alongside other gases:

If you're suddenly going to add a greenhouse gas and it absorbs in that region, it's going to be very potent.

Considered per molecule, its effect is far more intense, since it is about 15 times heavier than carbon dioxide. What's more, PFTBA survives hundreds of years in the atmosphere, which means its effects are long-lasting. Fortunately, its use is regulated under a U.S. Environmental Protection Agency program that promotes alternatives to ozone-depleting chemicals.

In addition, chemicals that deplete the ozone layer are recognized by the Kyoto Protocol. As such, it should be an easy matter (from a legal standpoint, anyway) to legislate against its continued use. As 3M indicated in a recent press statement:

That regulation stipulates that PFCs [the class of chemical that PFTBA belongs to] should be used only where there are no other alternatives on the basis of performance and safety. 3M adheres to that policy globally.

It added that the company “has worked to limit the use of these materials to non-emissive applications” and emphasized that the concentration of PFTBA found in the atmosphere is very low.

Nevertheless, this represents good news and bad news when it comes to the ongoing issue of Climate Change. On the one hand, early detection like this is a good way of ensuring that gases contributing to warming can be identified and brought under control before their concentrations become significant. On the other, it shows us that there are more culprits contributing to warming than previously expected.

According to the most recent IPCC report, released in 2013, the likelihood of us reaching a critical tipping point – i.e. the point of no return with warming – this century is low. But that still leaves plenty of room for the problem to get worse before it gets better. One can only hope we get our acts together before it's too late.

Sources: cbc.ca, IO9

The Future is… Worms: Life Extension and Computer Simulations

Post-mortality is considered by most to be an intrinsic part of the so-called Technological Singularity. For centuries, improvements in medicine, nutrition and health have led to increased life expectancy. And in an age where so much more is possible – thanks to advances in cybernetics, biotech, nanotech, and medicine – it stands to reason that people will alter their physiques in order to slow the onset of age and extend their lives even further.

And as research continues, new and exciting finds seem to indicate that this future may be just around the corner. At the heart of it may be a series of experiments involving worms. At the Buck Institute for Research on Aging in California, researchers have been tweaking longevity-related genes in nematode worms in order to amplify their lifespans.

And the latest results caught even the researchers by surprise. By triggering mutations in two pathways known for lifespan extension – mutations that inhibit key molecules involved in insulin signaling (IIS) and the nutrient signaling pathway Target of Rapamycin (TOR) – they created an unexpected feedback effect that amplified the lifespan of the worms by a factor of five.

Ordinarily, a tweak to the TOR pathway results in a 30% lifespan extension in C. elegans worms, while mutations in IIS (daf-2) result in a doubling of lifespan. By combining the mutations, the researchers were expecting something around a 130% extension to lifespan. Instead, the worms lived the equivalent of about 400 to 500 human years.
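For what it's worth, the expectations mentioned above are simple arithmetic: adding the two single-mutant gains gives roughly a 130% extension, while treating them as independent multipliers gives a bit more. Either way, the observed result blows past both. A quick sketch:

```python
# Additive vs. synergistic lifespan extension, using the percentages quoted
# above. A sketch of the arithmetic only -- not real survival-curve analysis.
tor_gain = 0.30   # TOR mutation alone: +30% lifespan
iis_gain = 1.00   # IIS (daf-2) mutation alone: doubling (+100%)

additive = 1 + tor_gain + iis_gain                # naive expectation: 2.3x
multiplicative = (1 + tor_gain) * (1 + iis_gain)  # independent effects: 2.6x
observed = 5.0                                    # reported synergistic result

print(f"Additive expectation:       {additive:.1f}x normal lifespan")
print(f"Multiplicative expectation: {multiplicative:.1f}x")
print(f"Observed:                   {observed:.1f}x")
```

The gap between 2.6x and 5x is what makes the result "synergistic" rather than merely combined.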

As Dr. Pankaj Kapahi said in an official statement:

Instead, what we have here is a synergistic five-fold increase in lifespan. The two mutations set off a positive feedback loop in specific tissues that amplified lifespan. These results now show that combining mutants can lead to radical lifespan extension — at least in simple organisms like the nematode worm.

The positive feedback loop, say the researchers, originates in the germline tissue of worms – a sequence of reproductive cells that may be passed on to successive generations. This may be where the interactions between the two mutations are integrated; and if correct, it might apply to the pathways of more complex organisms. Towards that end, Kapahi and his team are looking to perform similar experiments in mice.

But long-term, Kapahi says that a similar technique could be used to produce therapies for aging in humans. It's unlikely that it would result in the dramatic increase to lifespan seen in worms, but it could be significant nonetheless. For example, the research could help explain why scientists are having a difficult time identifying single genes responsible for the long lives experienced by human centenarians:

In the early years, cancer researchers focused on mutations in single genes, but then it became apparent that different mutations in a class of genes were driving the disease process. The same thing is likely happening in aging. It’s quite probable that interactions between genes are critical in those fortunate enough to live very long, healthy lives.

A second worm-related story comes from OpenWorm, an international open-source project dedicated to the creation of a bottom-up computer model of a millimeter-sized nematode. As one of the simplest known multicellular life forms on Earth, the nematode is considered a natural starting point for creating computer-simulated models of organic beings.

In an important step forward, OpenWorm researchers have completed the simulation of the nematode's 959 cells – including its 302 neurons and 95 muscle cells – and their worm is wriggling around in fine form. Yet despite this basic simplicity, the nematode is not without its share of complex behaviors, such as feeding, reproducing, and avoiding being eaten.

To model the complex behavior of this organism, the OpenWorm collaboration (which began in May 2013) is developing a bottom-up description. This involves making models of the individual worm cells and their interactions, based on their observed functionality in the real-world nematodes. Their hope is that realistic behavior will emerge if the individual cells act on each other as they do in the real organism.

Fortunately, we know a lot about these nematodes. The complete cellular structure is known, as is rather comprehensive information concerning the organism's behavior in reaction to its environment. Included in our knowledge is the complete connectome, a comprehensive map of the neural connections (synapses) in the worm's nervous system.
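As a rough illustration of this bottom-up approach, consider a toy model in which each "cell" updates based only on its weighted connections to its neighbors, with behavior emerging from the interactions rather than being scripted. This is purely illustrative – the tiny connectome and all names below are invented, and OpenWorm's actual models are vastly more detailed:

```python
# Toy sketch of a connectome-driven, bottom-up simulation: each neuron's
# activity is computed only from its local weighted inputs. Illustrative
# only -- this made-up four-neuron "connectome" is not OpenWorm's.
import math

# neuron -> list of (target, weight); opposing motor weights are deliberate
connectome = {
    "sensor": [("inter", 0.8)],
    "inter": [("motor_a", 0.6), ("motor_b", -0.6)],
    "motor_a": [],
    "motor_b": [],
}

def step(activations, stimulus):
    """Propagate activity one tick through the weighted connections."""
    new = {n: 0.0 for n in activations}
    new["sensor"] = stimulus
    for src, targets in connectome.items():
        for dst, w in targets:
            new[dst] += w * activations[src]
    # squash to a bounded "firing rate"
    return {n: math.tanh(v) for n, v in new.items()}

acts = {n: 0.0 for n in connectome}
for t in range(3):
    acts = step(acts, stimulus=1.0)

# Opposing motor outputs of this kind could drive alternating muscle
# contractions -- i.e., wriggling -- without any top-down control.
print(acts["motor_a"], acts["motor_b"])
```

Scale this idea up to 302 real neurons, 95 muscle cells, and physically simulated body mechanics, and you have the shape of what the project is attempting.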

The big question is, assuming that the behavior of the simulated worms continues to agree with the real thing, at what stage might it be reasonable to call it a living organism? The usual definition of a living organism is behavioral: it extracts usable energy from its environment, maintains homeostasis, possesses a capacity to grow, responds to stimuli, reproduces, and adapts to its environment in successive generations.

If the simulation exhibits these behaviors, combined with realistic responses to its external environment, should we consider it to be alive? And just as importantly, what tests could be used to evaluate such a hypothesis? One possibility is an altered version of the Turing test – Alan Turing's proposed method for testing whether or not a computer could be called sentient.

In the Turing test, a computer is considered sentient and sapient if it can simulate the responses of a conscious sentient being so that an auditor can’t tell the difference. A modified Turing test might say that a simulated organism is alive if a skeptical biologist cannot, after thorough study of the simulation, identify a behavior that argues against the organism being alive.

And of course, this raises even larger questions. For one, is humanity on the verge of creating "artificial life"? And what, if anything, does that really look like? Could it just as easily be in the form of computer simulations as anthropomorphic robots and biomachinery? And if the answer to any of these questions is yes, then what exactly does that say about our preconceived notions about what life is?

If humanity is indeed moving into an age of “artificial life”, and from several different directions, it is probably time that we figure out what differentiates the living from the nonliving. Structure? Behavior? DNA? Local reduction of entropy? The good news is that we don’t have to answer that question right away. Chances are, we wouldn’t be able to at any rate.

And though it might not seem apparent, there is a connection between the former story and the latter. In addition to being able to prolong life through genetic engineering, the ability to simulate consciousness through computer-generated constructs might just prove a way to cheat death in the future. If complex life forms and connectomes (like that of the human brain) can be simulated, then people may be able to transfer their neural patterns before death and live on in simulated form indefinitely.

So… anti-aging, artificial life forms, and the potential for living indefinitely. And to think that it all begins with the simplest multicellular life form on Earth – the nematode worm. But then again, all life – nay, all of existence – depends upon the simplest of interactions, which in turn give rise to more complex behaviors and organisms. Where else would we expect the next leap in biotechnological evolution to come from?

And in the meantime, be sure to enjoy this video of OpenWorm's simulated nematode in action:


Sources:
IO9, cell.com, gizmag, openworm

Looking Forward: Science Stories to Watch for in 2014

The year of 2013 was a rather big one in terms of technological developments, be they in the field of biomedicine, space exploration, computing, particle physics, or robotics technology. Now that the New Year is in full swing, there are plenty of predictions as to what the next twelve months will bring. As they say, nothing ever occurs in a vacuum, and each new step in the long chain known as "progress" is built upon those that came before.

And with so many innovations and breakthroughs behind us, it will be exciting to see what lies ahead of us for the year of 2014. The following is a list containing many such predictions, listed in alphabetical order:

Beginning of Human Trials for Cancer Drug:
A big story that went largely unreported in 2013 came out of the Stanford School of Medicine, where researchers announced a promising strategy for developing a vaccine to combat cancer. Such a goal – using the immune system's killer T-cells to attack cancerous cells – has been dreamed of for years. The only roadblock to this strategy has been that cancer cells use a molecule known as CD47 to send a signal that fools T-cells, making them think that the cancer cells are benign.

However, researchers at Stanford have demonstrated that the introduction of an "anti-CD47 antibody" can intercept this signal, allowing T-cells and macrophages to identify and kill cancer cells. Stanford researchers plan to start human trials of this potential new cancer therapy in 2014, with the hope that it will be commercially available in a few years' time. A great hope with this new macrophage therapy is that it will, in a sense, create a personalized vaccination against a patient's particular form of cancer.

Combined with HIV vaccinations that have been shown not only to block the acquisition of the virus but even to kill it, 2014 may prove to be the year that the tide finally turns in the ongoing war against two of the deadliest diseases in the world.

Close Call for Mars:
A comet discovery back in 2013 created a brief stir when researchers noted that the comet in question – C/2013 A1 Siding Spring – would make a very close passage of the planet Mars on October 19th, 2014. Some even suspected it might impact the surface, creating all kinds of havoc for the world's small fleet of orbiting satellites and ground-based rovers.

Though refinements from subsequent observations have effectively ruled that out, the comet will still pass by Mars at a close 41,300 kilometers, just outside the orbit of its outer moon, Deimos. Ground-based observers will get to watch the comet close in on Mars through October, as will the orbiters and rovers on and above the Martian surface.

Deployment of the First Solid-State Laser:
The US Navy has been working diligently to create the next-generation of weapons and deploy them to the front lines. In addition to sub-hunting robots and autonomous aerial drones, they have also been working towards the creation of some serious ship-based firepower. This has included electrically-powered artillery guns (aka. rail guns); and just as impressively, laser guns!

Sometime in 2014, the US Navy expects the USS Ponce, with its single solid-state laser weapon, to be deployed to the Persian Gulf as part of an "at-sea demonstration". Although the Navy has been tight-lipped about the capabilities of this particular directed-energy weapon, it has indicated that its intended purpose is as a countermeasure against threats – including aerial drones and fast-moving small boats.

Discovery of Dark Matter:
For years, scientists have suspected that they are closing in on the discovery of Dark Matter. Since it was proposed in the 1930s, finding this strange mass – that makes up the bulk of the universe alongside “Dark Energy” – has been a top priority for astrophysicists. And 2014 may just be the year that the Large Underground Xenon experiment (LUX), located near the town of Lead in South Dakota, finally detects it.

Located deep underground to shield it from cosmic rays, the LUX experiment watches for Weakly Interacting Massive Particles (WIMPs) interacting with 370 kilograms of super-cooled liquid xenon. LUX is due to start another 300-day test run in 2014, and the experiment will add another piece to the puzzle posed by dark matter to modern cosmology. If all goes well, conclusive proof of the existence of this invisible, mysterious mass may finally be found!

ESA’s Rosetta Makes First Comet Landing:
This year, after over a decade of planning, the European Space Agency's Rosetta robotic spacecraft will rendezvous with Comet 67P/Churyumov-Gerasimenko. This will begin on January 20th, when the ESA will hail Rosetta and "awaken" its systems from their slumber. By August, the two will meet, in what promises to be the cosmic encounter of the year. After examining the comet in detail, Rosetta will then dispatch its Philae lander, equipped with harpoons and ice screws, to make the first-ever landing on a comet.

First Flight of Falcon Heavy:
2014 will be a busy year for SpaceX, which is expected to conduct more satellite deployments for customers and resupply missions to the International Space Station. They'll also be moving ahead with tests of their crew-rated version of the Dragon capsule. But one of the most interesting missions to watch for is the demo flight of the Falcon Heavy, which is slated to launch out of Vandenberg Air Force Base by the end of 2014.

This historic flight will mark the beginning of a new era of commercial space exploration and private space travel. It will also bring the dream of Elon Musk (founder and CEO of SpaceX and Tesla Motors, and co-founder of PayPal) of affordable space missions one step closer to fruition. As for what this will make possible, well… the list is endless.

Everything from space elevators and O'Neill space habitats to asteroid mining, missions to the Moon, Mars and beyond. And 2014 may prove to be the year that it all begins in earnest!

First Flight of the Orion:
In September of this coming year, NASA is planning on making the first launch of its new Orion Multi-Purpose Crew Vehicle. This will be a momentous event since it constitutes the first step in replacing NASA’s capability to launch crews into space. Ever since the cancellation of their Space Shuttle Program in 2011, NASA has been dependent on other space agencies (most notably the Russian Federal Space Agency) to launch its personnel, satellites and supplies into space.

The test flight, known as Exploration Flight Test 1 (EFT-1), will be a short uncrewed flight that tests the capsule during reentry after two orbits. In the long run, this test will help determine whether the first lunar orbital mission using an Orion MPCV can occur by the end of the decade. For as we all know, NASA has some BIG PLANS for the Moon, most of which revolve around creating a settlement there.

Gaia Begins Mapping the Milky Way:
Launched from the Kourou Space Center in French Guiana on December 19th of last year, the European Space Agency's Gaia space observatory will begin its historic astrometry mission this year. Relying on an advanced array of instruments to conduct spectrophotometric measurements, Gaia will provide detailed physical properties of each star observed, characterising its luminosity, effective temperature, gravity and elemental composition.

This will effectively create the most accurate map yet constructed of our Milky Way Galaxy, but it is also anticipated that many exciting new discoveries will occur due to spin-offs from this mission. This will include the discovery of new exoplanets, asteroids, comets and much more. Soon, the mysteries of deep space won't seem so mysterious any more. But don't expect it to get any less tantalizing!

International Climate Summit in New York:
While it still remains a hotly contested partisan issue, the scientific consensus is clear: Climate Change is real and is getting worse. In addition to environmental organizations and agencies, non-partisan entities, from insurance companies to the U.S. Navy, are busy preparing for rising sea levels and other changes. In September 2014, the United Nations will hold another Climate Summit to discuss what can be done.

This time around, the delegates from hundreds of nations will converge on the UN Headquarters in New York City. This comes one year before the UN is looking to conclude its Framework Convention on Climate Change, and the New York summit will likely herald more calls to action. Though it'll be worth watching and generate plenty of news stories, expect many of the biggest climate offenders worldwide to ignore calls for action.

MAVEN and MOM reach Mars:
2014 will be a red-letter year for those studying the Red Planet, mainly because two new missions are slated to begin. These include the Indian Space Research Organisation's Mars Orbiter Mission (MOM, aka. Mangalyaan-1) and NASA's Mars Atmosphere and Volatile EvolutioN (MAVEN) mission, which are due to arrive just two days apart – on September 24th and 22nd respectively.

Both orbiters will be tasked with studying Mars' atmosphere and determining what atmospheric conditions looked like billions of years ago, and what happened to turn the atmosphere into the thin, depleted layer it is today. Combined with the Curiosity and Opportunity rovers, ESA's Mars Express, NASA's Odyssey spacecraft and the Mars Reconnaissance Orbiter, they will help to unlock the secrets of the Red Planet.

Unmanned Aircraft Testing:
A lot of the action for the year ahead is in the area of unmanned aircraft, building on the accomplishments in recent years on the drone front. For instance, the US Navy is expected to continue running trials with the X-47B, the unmanned technology demonstrator aircraft that is expected to become the template for autonomous aerial vehicles down the road.

Throughout 2013, the Navy conducted several tests with the X-47B as part of its ongoing UCLASS (Unmanned Carrier Launched Airborne Surveillance and Strike) aircraft program. Specifically, they demonstrated that the X-47B is capable of making carrier-based takeoffs and landings. By mid-2014, it is expected that they will have made more key advances, even though the program is likely to take another decade before it is fully realized.

Virgin Galactic Takes Off:
And last, but not least, 2014 is the year that space tourism is expected to take off (no pun intended!). After many years of research, development and testing, Virgin Galactic's SpaceShipTwo may finally make its inaugural flights, flying out of the Mojave Spaceport and bringing tourists on an exciting (and expensive) ride into the upper atmosphere.

In late 2013, SpaceShipTwo passed a key milestone when its rocket engine was test-fired for an extended period and the craft achieved speeds and altitudes in excess of anything it had managed before. Having already conducted several successful glide and feathered-wing test flights, Virgin Galactic is confident that the craft has what it takes to ferry passengers on suborbital flights and bring them home safely.

On its inaugural flights, SpaceShipTwo will carry two pilots and six passengers, with seats going for $250,000 a pop. If all goes well, 2014 will be remembered as the year that suborbital space tourism officially began!

Yes, 2014 promises to be an exciting year. And I look forward to chronicling and documenting it as much as possible from this humble little blog. I hope you will all join me on the journey!

Sources: Universetoday, (2), med.stanford.edu, news.cnet, listosaur, sci.esa.int

News From Space: Space Planes and Space Colonies

The year of 2013 closed with many interesting stories about the coming age of space exploration. And they came from many fronts, including the frontiers of exploration (Mars and the outer Solar System) as well as right here at home, on the conceptual front. In the case of the latter, it seems that strides made in the field are leading to big plans for sending humans into orbit, and into deep space.

The first bit of news comes from Reaction Engines Limited, where it seems that the Skylon space plane is beginning to move from concept to reality. For some time now, the British company has attracted attention thanks to its plans to create a reusable aerospace jet powered by a series of hypersonic engines.

And after years of research and development, the hypersonic Sabre engine passed a critical heat-tolerance and cooling test. Because of this, Reaction Engines Limited won an important endorsement from the European Space Agency. Far from being a simple milestone, this test may prove to be historic. Or as Skymania's Paul Sutherland noted, it's "the biggest breakthrough in flight technology since the invention of the jet engine."

Now that Reaction Engines has proven that they can do this, the company will be looking for £250 million (approx $410 million) of investment for the next step in development. This will include the development of the LapCat, a hypersonic jet that will carry 300 passengers around the world in less than four hours; and the Skylon, which will carry astronauts, tourists, satellites and space station components into orbit.

Speaking at the press conference after the test in late November, ESA's Mark Ford had this to say:

ESA are satisfied that the tests demonstrate the technology required for the Sabre engine development. One of the major obstacles to a reusable vehicle has been removed. The gateway is now open to move beyond the jet age.

The Sabre engine is the crucial piece in the reusable space plane puzzle, which is why this test was so important. Once built and operational, Skylon will take off and land like a conventional plane, but still achieve orbit – using air-breathing jet propulsion at lower speeds, then switching to rockets fueled by onboard oxygen once it gets past a certain speed.

The recent breakthrough had to do with the development of a heat exchanger that is able to cool air sucked into the engine at high speed from 1,000 degrees Celsius to minus 150 degrees in one-hundredth of a second. It's this critical technology that will allow the Sabre engine to surpass the bounds of a traditional jet engine, by as much as twofold.

Alan Bond, the engineering genius behind the invention, had this to say about his brainchild:

These successful tests represent a fundamental breakthrough in propulsion technology. The Sabre engine has the potential to revolutionise our lives in the 21st century in the way the jet engine did in the 20th Century. This is the proudest moment of my life.

And of course, there’s a video of the engine in action. Check it out:


Second, and perhaps in response to these and other developments, the British Interplanetary Society is resurrecting a forty-year-old idea. This society, which in the 1930s came up with the idea of sending a multi-stage rocket and a manned lander to the Moon (a plan eerily prescient of the Apollo 11 mission some 30 years later), is now reconsidering plans for giant habitats in space.

To make the plan affordable and feasible, they are turning to a plan devised by Gerard O'Neill back in the 1970s. Commonly known as the O'Neill Cylinder, the plan calls for space-based human habitats consisting of giant rotating spaceships containing landscaped biospheres that can house up to 10 million people. The cylinder would rotate to provide gravity and – combined with the interior ecology – would simulate a real-world environment.

Jerry Stone of BIS’s SPACE (Study Project Advancing Colony Engineering) is trying to show that building a very large space colony is technically feasible. Part of what makes the plan work is the fact that O’Neill deliberately designed the structure using existing 1970s technology, materials and construction techniques, rather than adopting futuristic inventions.

Stone is bringing these plans up to date using today's technologies. Rather than building the shell from aluminium, for example, Stone argues tougher and lighter carbon composites could be used instead. Advances in solar cell and climate control technologies could also be used to make life easier and more comfortable in human space colonies.

One of the biggest theoretical challenges O’Neill faced in his own time was the effort and cost of construction. That, says Stone, will be solved when a new generation of much cheaper rocket launchers and spaceplanes has been developed (such as the UK-built Skylon). Using robot builders could also help, and other futuristic construction techniques like 3-D printing robots and even nanomachines and bacteria could be used.

And as Stone said, much of the material could be sourced offworld, taking advantage of the fact that this would be a truly space-age construction project:

Ninety per cent of the material to build the colonies would come from the Moon. We know from Apollo there’s silicon for the windows, and aluminium, iron and magnesium for the main structure. There’s even oxygen in the lunar soil.

Fans of Arthur C. Clarke's Rendezvous with Rama, the series Babylon 5 or the movie Elysium ought to instantly recognize this concept. In addition to being a very real scientific concept, it has also informed a great deal of science fiction and speculation. For some time, writers and futurists have been dreaming of a day when humanity might live in space habitats that can simulate terrestrial life.

Well, that day might be coming sooner than expected. And, as O'Neill and his contemporaries theorized at the time, it may be a viable solution to the possibility of humanity's extinction. Granted, we aren't exactly living in fear of nuclear holocaust anymore, but ecological collapse is still a threat! And with the Earth's population set to reach 12 billion by the 22nd century, it might be an elegant solution to getting some of those people offworld.

It's always an exciting thing when hopes and aspirations begin to become feasible. And though aerospace transit is likely to arrive a lot sooner than O'Neill habitats in orbit, the two are likely to complement each other. After all, jet planes that can reach orbit affordably and efficiently are the first step in making offworld living a reality!

Until next time, keep your eyes to the skies. Chances are, people will be looking back someday soon…

Sources: IO9, skymania, (2), bbc.com

The Future is Here: Wind Drones and Clean Buildings

It's no secret that wind power is one of the main clean forms of energy being considered as a viable alternative to coal, oil and gas. But much like solar, tidal and geothermal, the method has some flaws that are preventing it from being adopted in a more widespread fashion. However, as an infinitely renewable source of energy, it is likely just a matter of time before technical developments lead to its wholesale use.

The first challenge has to do with size. Currently, wind farms are massive operations, and many designers think they need to continue to get bigger in order to generate the kinds of electricity we currently need. However, a Netherlands-based startup named Ampyx Power is looking in another direction: an airborne wind turbine that they think could capture the same amount of energy as a large operation.

Basically, their design is a small glider plane attached by cable to a generator, which is deployed into the air and flies in figure eights. As it moves, the glider pulls on the cable, and the generator converts that movement into electricity. Since it isn't attached to a tower, it can soar nearly 2,000 feet into the air, catching stronger winds that produce about eight times more energy than the lower-altitude breezes that reach a normal wind turbine.
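The "eight times more energy" figure follows from basic aerodynamics: the power available in wind scales with the cube of wind speed, so doubling the speed yields 2³ = 8 times the power. A quick sketch (the wind speeds below are illustrative, not Ampyx's numbers):

```python
# Wind power density scales with the cube of wind speed:
#   P/A = 0.5 * rho * v^3   (watts per square meter of swept area)
def power_density(v_mps, rho=1.225):
    """Power per unit area; rho is air density in kg/m^3 (sea level)."""
    return 0.5 * rho * v_mps ** 3

low = power_density(6.0)     # illustrative tower-height breeze, m/s
high = power_density(12.0)   # illustrative stronger high-altitude wind, m/s

print(f"Low-altitude:  {low:.0f} W/m^2")
print(f"High-altitude: {high:.0f} W/m^2")
print(f"Ratio: {high / low:.0f}x")   # doubling the speed gives 8x the power
```

The cubic scaling is also why even modest gains in altitude can pay off disproportionately for a tethered glider.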

So in addition to producing more power than a typical wind turbine, the glider costs significantly less than its competitors. The average wind turbine weighs about 120 metric tons, while the glider system weighs in at a mere 363 kilograms (800 pounds). And in addition to being cheaper than other renewables, the process may even be cheaper than coal.

As Wolbert Allaart, the startup’s managing director, put it:

We’re replacing tons of steel and concrete. It’s a huge materials reduction, and we can produce the same amount of power. That obviously has an effect on cost as well… The whole reason why we’re doing this is because we think we can get the cost of a kilowatt-hour well below the price of coal.

And Ampyx is hardly alone in developing the technology. In fact, their design is similar to that of California-based Makani Power, a company that was acquired by Google earlier this year, whereas Ampyx raised its capital through a crowdfunding campaign. And though there are some differences in the designs and methods employed, both companies dream of a day when wind will replace coal and other dirty sources of energy.

Because the planes are so efficient, places that might not have worked for wind power in the past – like forests, where trees catch and redirect the wind – could be a fit for the system, so the market is wide open. And given his country’s growing interest in wind power, Allaart hopes to introduce it to the domestic market very soon:

In Holland, where we’re based, we now have a 4.3 billion Euro subsidy scheme for offshore wind. People are starting to wonder already, if we have a technology being developed in our own country that could provide offshore wind at a price more or less competitive with coal, why on Earth are we still subsidizing this so heavily? How fast this grows will depend on political will.

Another very cool wind-related story comes from Jakarta, where a massive tower is being planned that will be capable of generating all of its own power. Known as the Pertamina Energy Tower, it is the proposed headquarters of the Pertamina power company. The building will stand 99 stories in height and will gather all of its power from wind, solar, and geothermal energy.

When it comes to its wind operations, the building’s height plays to its advantage. At the top of the building, a funnel captures wind, sucks it inside, and speeds it up to run a series of vertical wind turbines. In this respect, the building operates like a giant, vertical wind tunnel. Solar energy will also be incorporated through panels that will cover the roofs of other buildings on the new campus.
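The funnel works on a simple principle: for roughly incompressible flow, the same mass of air that enters the wide mouth must pass through the narrower throat, so the air speeds up in proportion to the contraction – and since wind power scales with the cube of speed, even a modest contraction multiplies the energy available to the turbines. A rough sketch, with all numbers assumed purely for illustration:

```python
# Minimal sketch of the wind-funnel idea. By conservation of mass for
# (approximately) incompressible airflow, A_in * v_in = A_out * v_out,
# so narrowing the duct speeds the air up by the area ratio. Turbine
# power then grows with the cube of that speed. All numbers are assumed.

def funnel_exit_speed(inlet_area, exit_area, inlet_speed):
    """Continuity equation: exit speed for a contracting duct."""
    return inlet_speed * inlet_area / exit_area

v_in = 6.0                     # ambient wind at the tower top, m/s (assumed)
v_out = funnel_exit_speed(inlet_area=30.0, exit_area=10.0, inlet_speed=v_in)

print(v_out)                   # 18.0 m/s after a 3:1 contraction
print((v_out / v_in) ** 3)     # 27.0 -- available power grows 27-fold
```

Real ducts lose some of this gain to friction and turbulence, so the cube-law figure is an idealized upper bound rather than what the turbines would actually harvest.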

But perhaps the most impressive feat comes in the form of geothermal energy, a type that’s uniquely suited to Indonesia because the country is a volcanic island chain. Geothermal systems in Indonesia can tap directly into superheated sources of subterranean steam with a single pipe, unlike typical systems that are more complicated and expensive to engineer.

Scott Duncan, a director at Skidmore, Owings & Merrill LLP (SOM) – the architecture firm that led the project – describes it this way:

It would essentially provide an unlimited energy source for the tower and campus and could make the tower the world’s first energy-positive supertall building.

In addition to meeting this clean-energy trifecta, the design of the tower is focused as much on saving energy as on generating it. Sun-shading “leaves” on two sides of the building cut glare and block the brightest sunlight while still keeping the offices bright enough to avoid most artificial lighting. Instead of power-hungry air conditioners, the building uses water-based radiant cooling systems to keep temperatures even.

Along with other strategies, the energy-saving design elements mean that the campus – which will include a mosque, a performing arts and exhibition center, and sports facilities along with the office space – can keep energy use low enough that renewable power may be able to cover its entire energy needs. In short, the building could prove to be a model of energy-independence.

However, the motivations for this project go beyond the altruistic, and involve a good many practical considerations. For starters, Jakarta still has an unreliable power grid; if the campus generates its own power, work and play won’t get interrupted, and the buildings won’t have to rely on diesel generators when the city’s power goes down.

The technology is expected to be adopted elsewhere, particularly in China, where wind power is expanding all the time. Indonesia, despite its easy access to geothermal energy, is not the windiest place in the world. Cities strategically located along coastlines or in elevated regions would find the wind-funnel feature that much more useful, reducing their dependence on the other two forms of energy.

What’s more, this building is in many respects what one would call an arcology, and it happens to be only the second such structure currently planned for construction in the world. The other, un-coincidentally enough, is China’s Shanghai Tower, a building that is one-third green space and features a transparent second skin that surrounds it in a protective air envelope to control its internal temperature.

And with global energy prices increasing, sources of easily-accessible oil disappearing, and atmospheric CO2 levels steadily rising, we can expect to see more buildings like these going up all around the world. We’re also likely to see more creative and innovative forms of power generation popping up in our backyards. Much like peak oil, centralized grids and dependence on unclean energy are disappearing…

And in the meantime, enjoy this video of the Ampyx Power glider in action:


Sources: fastcoexist, (2)