Towards a Cleaner Future: The Molten Salt Reactor


What if you heard that there was such a thing as a 500-megawatt reactor that was clean, safe, cheap, and made to order? Considering that 500 MW is close to the output of a typical coal-fired power station, you might think it sounded too good to be true. But that’s the nature of technological revolutions, and the nuclear industry has been in dire need of one in recent years.

While it is true that the widespread use of nuclear energy could see to humanity’s energy needs into the indefinite future, the cost of building and maintaining so many facilities is highly prohibitive. What’s more, in the wake of the Fukushima disaster, nuclear power has suffered a severe image problem, spurred on by lobbyists from other industries who insist that their products are safer, cheaper to maintain, and not prone to meltdowns!

Submerged spent fuel elements with a blue glow at a nuclear MOX plant, where nuclear waste is recycled

As a result of all this, the stage now seems set for a major breakthrough, and researchers at MIT and Transatomic CEO Russ Wilcox seem to be stepping up to provide it. Last year, Wilcox said in an interview with Forbes that it was “a fabulous time to do a leapfrog move”. It sounded like a bold statement at the time, but recently Transatomic went a step further and claimed it was mobilizing its capital to make the leap happen.

Basically, the plan calls for the creation of a new breed of nuclear reactor, one that is miniaturized yet still produces a significant amount of power. Such efforts have been mounted in the past, mainly because simply scaling reactors up has never translated into proportionally greater output. In each case, however, the resulting designs produced quite modest output, usually on the order of 200 MW.


Enter Transatomic’s Molten Salt Reactor (MSR), a design capable of producing half the power of a large-scale reactor in a much smaller package. In addition, MSRs possess a number of advantages, not the least of which are safety and cost. For starters, they rely on coolants like fluoride or chloride salts instead of light or heavy water, which negates the need to pressurize the system and instantly reduces the dangers associated with super-heated, pressurized liquids.

What’s more, keeping the fuel-coolant mixture at low pressure also allows it to expand, which ensures that if overheating does take place, the medium will simply expand to the point that the fuel atoms are too far apart to sustain the nuclear reaction. This is what is called a “passive safety system” – one that kicks in automatically and does not require a full-scale shutdown in the event that something goes wrong.
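
To make that feedback loop intuitive, here is a deliberately oversimplified Python toy – not a reactor model, and every constant in it is invented purely for illustration – showing how a negative temperature coefficient of reactivity damps an overheating excursion instead of letting it run away:

```python
# Toy negative-feedback sketch, NOT a physical reactor model.
# All constants are assumptions chosen only to illustrate the idea that
# hotter, expanded fuel-salt becomes less reactive, which throttles power.

alpha = -0.003      # assumed reactivity change per deg C above nominal (negative = stabilizing)
T_nom = 700.0       # assumed nominal salt temperature, deg C
cooling = 0.05      # assumed fraction of excess heat carried away per step

temp, power = 760.0, 1.0   # start mid-transient: the salt is 60 C too hot

for step in range(10):
    rho = alpha * (temp - T_nom)            # expanded salt -> fewer fissions
    power = max(0.0, power * (1.0 + rho))   # power follows reactivity downward
    temp += 10.0 * (power - 1.0) - cooling * (temp - T_nom)
    print(f"step {step}: T = {temp:6.1f} C, relative power = {power:0.3f}")
```

The point is simply that the physics pushes power down as temperature rises, with no valves, pumps, or operators required.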


Last, but not least, is the addition of the so-called freeze plug – an actively cooled barrier that melts in the event of a power failure, allowing all nuclear material to drain automatically into a reinforced holding tank. These reactors are “walk away safe,” meaning that in the event of a power failure, accident, or general strike, the worst that could happen is a loss of service. In a post-Fukushima industry, such disaster-proof measures simply must be the future of nuclear power.

Then there is the cost factor. Transatomic claims its reactor will be capable of pumping out 500 megawatts for a total initial cost of about $1.7 billion, compared to 1,000 megawatts for an estimated $7 billion from a conventional plant. That’s about half the cost per megawatt, and the new reactor would also be small enough to be built in a central factory and then shipped to its destination, rather than requiring a multi-year construction project to build the plant and reactor on site.
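
For a quick sense of the arithmetic behind that comparison, using only the figures quoted above:

```python
# Cost per megawatt, using the figures cited in the article
msr_cost, msr_mw = 1.7e9, 500        # Transatomic's claimed MSR
big_cost, big_mw = 7.0e9, 1000       # conventional large reactor estimate

print(f"MSR:          ${msr_cost / msr_mw / 1e6:.1f}M per MW")   # ~ $3.4M/MW
print(f"Conventional: ${big_cost / big_mw / 1e6:.1f}M per MW")   # ~ $7.0M/MW
```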

The project has raised $1 million in investment so far, and Transatomic appears to be putting all its eggs in this one basket. Its researchers also claim the design is production-ready and that they are just waiting for orders to come in. And given the current energy crisis, it’s not likely to be long before government and industry come knocking!

Source: Extremetech.com

Towards a Cleaner Future: The Bloom Aquatic Habitat

When it comes to addressing Climate Change, scientists have known for some time that changing our habits is no longer enough to meet the challenge. In addition to adopting cleaner fuels and alternative energy, carbon capture – removing carbon dioxide gas from the air – will have to become an active part of our future. Beyond geoengineering processes, such as introducing sulfur dioxide into the upper atmosphere, carbon-capturing technologies will likely need to be built into our very habitats.

And that’s where the Bloom comes in – an artificial coastline habitat that also cultivates carbon-consuming phytoplankton. In a world characterized by rising ocean tides, shrinking coastlines, changing climates, and extreme weather, a water-based living space that can address the source of the problem seems like an ideal solution. In addition to being waterborne, the Bloom is hurricane-proof, semi-submersible, and even consumes pollution.

Designed by the French firm Sitbon, the structure is a proposed research station moored to the seabed with a system of cables; it would both house researchers and grow carbon-dioxide-absorbing phytoplankton. While it’s more of an experiment than a vision of what future housing will look like, the goal is to install these stations in the Indian Ocean as part of an attempt to monitor tsunamis and absorb carbon dioxide.

Alongside skyscrapers that utilize vertical agriculture, carbon-capturing artificial trees, and buildings with their own solar cells and windmills, this concept is part of a growing field of designs that seek to incorporate clean technology into modern living. And for those familiar with the concept of an arcology, it also calls to mind ideas such as the Lillypad City.


In this case and others like it, the idea is to build sustainable habitats that take advantage of rising sea levels and shifting coastlines, rather than add to the problem by proposing more urban sprawl farther inland. As the creators wrote in a recent press statement:

Bloom wishes to be a sustainable answer for rising waters by decreasing our carbon footprint while learning to live in accordance with our seas. Every factory would have its own bloom allowing it to absorb the CO2 that it created.

And even if it doesn’t pan out, funding for the design and its related technologies will drive innovation in the wider fields of sustainable architecture and clean energy. And who knows? It might make for some really awesome seaborne property!

Source: fastcoexist.com

Powered by the Sun: The Artificial Leaf

Despite progress made in recent decades, solar power still has some obstacles to overcome before it can be fully adopted. While the price of manufacturing and installing solar panels has dropped substantially thanks to several innovations, intermittency remains a problem. So long as solar power remains limited by both geography and weather, its use will remain limited as well.

And short of building Space-Based Solar Power (SBSP) arrays, or producing super-capacitor batteries with graphene – both of which are being explored – the only other option is to find ways to turn solar power into other forms of usable fuel. When the sun isn’t shining, people will need something else to power their homes, appliances, heating and AC. And given that the point is to reduce pollution, it will also have to be clean.

And that’s precisely what Daniel Nocera and his team are doing over at Harvard University. Their “artificial leaf” – a piece of silicon (a solar cell) coated with two catalysts – is a means of turning sunshine into hydrogen fuel. Basically, when sunlight shines on the leaf in water, it splits the water into bubbles of hydrogen and oxygen on either side, which can then be used in a fuel cell.
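
In rough terms – this is the textbook summary of water splitting, not Nocera’s exact catalyst chemistry – the two coated faces drive the two halves of the reaction:

```latex
\begin{align*}
\text{Oxygen-evolving side:} \quad & 2\,\mathrm{H_2O} \rightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \\
\text{Hydrogen-evolving side:} \quad & 4\,\mathrm{H^+} + 4\,e^- \rightarrow 2\,\mathrm{H_2} \\
\text{Overall:} \quad & 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{H_2} + \mathrm{O_2}
\end{align*}
```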

Efforts to build similar solar cells in the past have faltered, due largely to the costs involved. However, with the price of solar-related materials dropping in recent years, this latest device may prove commercially viable. Built at a larger scale, the device could provide a super-cheap, storable energy source – hydrogen – which could then be piped off and used in a fuel cell to make electricity. And combined with arrays of solar panels, we could have the energy crisis licked!

Nocera and his team first announced the technology in 2011, when he was still a chemist at MIT. Since that time, they have published a follow-up paper showing how the team has improved the leaf’s efficiency, laying out future challenges and how they might be overcome. Foremost among these is a field trial, with the eventual aim of building a commercial device for the developing world.

Beyond that, Nocera hopes to commercialize the technology through his company, the Massachusetts-based Sun Catalytix. Once realized, he plans to put his dream of giving the poor “their first 100 watts of energy” into action. Here’s hoping he succeeds. The poor need power, and the environment needs a break from all our polluting!

Thank you all for reading the latest installment of PBTS! And be sure to check out this video of the artificial leaf in action:

The Future is Here: The Air Scrubbing Skyscraper!

Air pollution has always been a problem in urban centers. But with the massive industrialization and urban expansion taking place in some of the most heavily populated regions of the world (China and India foremost among them), the issue of how to deal with increasing emissions is especially important. More and more, researchers and environmentalists are considering options that hit air pollution where it lives.

Two such individuals are Danny Mui and Benjamin Sahagun, a pair of architects who have devised a rather novel concept for dealing with the thick layers of carbon dioxide pollution that are so common to major urban centers. In essence, it is a pair of buildings that scrub CO2 emissions from the air, and thus marries the concept of Carbon Capture technology to urban planning.

Dubbed the CO2ngress Gateway Towers, the concept involves two crooked buildings outfitted with a filtration system. This system feeds the captured CO2 to algae grown in the building, which then convert it into biofuel for use in vehicles. In this respect, it is not unlike the artificial tree concept designed by Klaus Lackner, director of the Lenfest Center for Sustainable Energy at Columbia University.

Much like those “trees”, the carbon capture technology relies on an entirely natural process to absorb CO2 from the air and combine it with water, causing a chemical reaction that yields a fuel precursor that can easily be converted. This fuel can then be consumed as gasoline or ethanol, giving people the ability to keep burning fossil fuels while cleaner, more sustainable sources of fuel are researched.

Ultimately, the idea here is not to offer a be-all, end-all solution to the problem, but rather to buy the human race time to clean up its act. And by building carbon capture technology into large urban dwellings, the designers are looking to ensure that one of the many symptoms of urban growth – i.e. large residential towers – is part of the solution.

Said Mui and Sahagun on the Council on Tall Buildings and Urban Habitat (CTBUH) website:

The scrubbers are the first step in a process that generates fuel for a fleet of eco-friendly cars for building residents. The system raises public awareness of air pollution and its impact on the health of Chicagoans.

Aside from the scrubbers, the buildings boast some other impressive features to cut down on urban annoyances. These include a “double skin facade” – two layers of windows – that cuts down on outside traffic noise. In addition, the spaces on either side of the buildings’ central elevator core can be used as outdoor terraces for residents.

Apparently, Mui and Sahagun worked on the project while students at the Illinois Institute of Technology, where it earned them an honorable mention in the 2012 CTBUH student competition. According to Mui, they continued developing the design after the semester ended, but there are no immediate plans to build it.

However, given the growing interest in arcologies and urban structures that reduce our impact on the environment, it is likely to garner serious interest very soon. Especially in China – where air pollution is so severe that it contributes to as many as 750,000 deaths from respiratory illness a year, and where cities are still growing – buildings like this one could easily become the stone that kills two birds.

Sources: fastcoexist.com, bbc.com

The Future is Here: The (Super) Supercapacitor

Last year, researchers at UCLA made a fantastic, albeit accidental, discovery when a team of scientists led by chemist Richard Kaner devised an efficient method for producing high-quality sheets of graphene. This supermaterial, whose discoverers won the 2010 Nobel Prize in Physics, is a carbon material known for its incredible strength and flexibility, which is why it is already being considered for use in electronic devices, solar cells, transparent electrodes, and just about every other futuristic high-tech application.

Given that the previous method of producing graphene sheets (peeling flakes off graphite with Scotch tape) was not practical, the development of the new production process was already good news. However, something even more impressive happened when Maher El-Kady, a researcher in Kaner’s lab, wired a small square of their high-quality carbon sheets to a lightbulb.

After showing it to Dr. Kaner, the team quickly realized they had stumbled onto a supercapacitor material – the basis for an energy-storage device that charges and discharges far faster than a battery – with a greater storage capacity than anything currently on the market. Naturally, their imaginations were fired, and the discovery has been spreading like wildfire through the engineering and scientific community.

The immediate benefits of batteries that use this new material are obvious. Imagine having a PDA, tablet, or other mobile device that can be charged in a matter of seconds instead of hours. With batteries so quick to charge and able to store so much energy, the entire consumer electronics market would be revolutionized.
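
For those who want the (very general) physics: the energy any capacitor stores grows with its capacitance and the square of its operating voltage, while its charge time is set by simple resistance and capacitance rather than by slow battery chemistry – which is why graphene’s huge surface area and conductivity matter so much here:

```latex
E = \tfrac{1}{2}\, C\, V^{2}, \qquad \tau_{\text{charge}} \sim R\,C
```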

But looking ahead, even greater applications become clear. Imagine electric cars that need only a few minutes to recharge, making the gasoline engine all but obsolete. Graphene-based storage could also make an impact on the even greater issue of storing energy from solar and other renewable sources.

In the year since they made their discovery, the researchers report that El-Kady’s original fabrication process has become even more efficient. The original process involved placing a solution of graphite oxide on a plastic surface and then using a laser to reduce it into graphene. A year ago, the team could produce only a few sheets at a time, but they now have a scalable method, which could very quickly lead to manufacturing and wide-scale technological implementation.

As it stands, an electric car with a recharge time of a few minutes is still several years away. But Dr. Kaner and his team expect that graphene supercapacitors will find their way into the consumer world much sooner than anyone originally expected. According to Kaner, his lab is already courting partners in industry, so keep your eyes peeled!

Combined with new lithium-ion and nanofabricated battery technologies, we could be looking at a possible solution to the world’s energy problem right here. What’s more, it could be the breakthrough that makes solar, wind, and other renewable sources of energy feasible, efficient, and profitable enough to finally supplant fossil fuels as the main source of energy production worldwide.

Only time will tell… And be sure to check out the video of Dr. Kaner and El-Kady showing off the process that led to this discovery:


Source: IO9.com

Powered By the Sun: The Solar Island

As Climate Change becomes an ever-increasing problem, nations are turning to alternative technologies and geoengineering to offset its effects. This means significant investments in technologies such as solar cells and other clean energy sources. However, the question of where to put all the resulting arrays cannot be overlooked. Since we are trying to save the environment, it doesn’t exactly make sense to clear more tracts of land to make room for them.

Already, there is a land rush to build more solar power plants around the world. In the U.S., the Department of the Interior is currently processing leases for roughly 1.8 million acres in the West alone. Globally, solar photovoltaic (PV) capacity has been doubling annually, with another 16 gigawatts added in 2010 alone. At this rate, and considering how much space the average array needs, we could run out of room real fast!
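
To see why, here is a rough back-of-envelope estimate in Python. The footprint figure is my own round assumption (utility-scale PV is commonly quoted at a few acres per megawatt), so treat the result as an order of magnitude only:

```python
# Order-of-magnitude land requirement for one year's new solar capacity
new_capacity_mw = 16_000        # ~16 GW added globally in 2010 (figure from above)
hectares_per_mw = 2.5           # assumed footprint for utility-scale PV (varies widely)

area_km2 = new_capacity_mw * hectares_per_mw / 100.0   # 100 hectares = 1 km^2
print(f"Roughly {area_km2:.0f} square kilometres of land")   # ~400 km^2
```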

And yet, the one thing that accounts for the majority of the planet’s surface area has been sadly neglected up to this point. I am of course referring to the oceans, lakes, reservoirs, retention ponds, and all other natural or unnatural bodies of water. Covering more than two-thirds of the planet’s real estate, they are quickly being targeted as the new frontier for floating solar power plants, with companies and locations being considered from India to Europe to Napa Valley.

One of the more ambitious plans comes to us from Switzerland, where a proposed array will be built on Lake Neuchâtel later this year. A collaborative effort between the solar developer Nolaris and the Swiss energy company Viteos, the floating array will be the first of three set upon the lake. Each island will measure some 25 meters in diameter, be built from plastic and steel, and support 100 photovoltaic panels that rotate with the sun.

What’s more, this is just one of several ideas under consideration. Other companies pursuing the concept favor floating pontoons with individual photovoltaic assemblies on the water’s surface. In this case, concentrating lenses focus the sunlight on a solar cell while a simple motor, light sensors, and software rotate the cells to maximize power generation. In tropical climes, where many pilot projects are being considered and storms are quite common, the entire array would be able to submerge as the winds rise.

In other places, where land is particularly expensive, floating solar may even come to rival its land-based counterpart. In Australia, for example, a company named Sunengy is pushing its “Liquid Solar Array” technology, which it claims can match the power output of a typical hydroelectric dam while covering less than 10% of the reservoir’s surface. Sunengy is currently teaming up with the Indian giant Tata Power to build India’s first floating solar power plant, and estimates that if India used just 1% of its 11,500 square kilometers of captured water, it could generate the equivalent of 15 large coal-fired power stations.
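
A quick sanity check of that claim, using round numbers I have assumed for insolation and panel efficiency (Sunengy’s own assumptions may well differ):

```python
# Back-of-envelope check: 1% of India's captured water covered in floating PV
captured_water_km2 = 11_500
usable_fraction = 0.01              # "just 1%" of that surface
peak_sun_w_per_m2 = 1_000           # assumed bright midday insolation
panel_efficiency = 0.15             # assumed modest PV efficiency

area_m2 = captured_water_km2 * 1e6 * usable_fraction
peak_gw = area_m2 * peak_sun_w_per_m2 * panel_efficiency / 1e9
print(f"~{peak_gw:.0f} GW of peak capacity")   # ~17 GW

# Comparable to the nameplate capacity of roughly 15 one-gigawatt coal stations,
# though average solar output is lower once the capacity factor is accounted for.
```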

As the saying goes, necessity is the mother of invention. And as it stands, planet Earth needs energy, and needs to generate it in such a way that won’t mess up the environment any further or usher in the scourge of Climate Change. When the survival of our planet and our species is at stake, you can expect people to get very inventive. Very, very inventive!

Source: fastcoexist.com

Global Warming Slowed by Volcanoes

Klyuchevskaya Volcano. Credit: NASA Goddard Space Flight Center

Global mean temperatures have been rising in recent years, consistent with projections provided by Climate Change specialists and planetary ecologists. However, it now seems that the rate of increase has not been as steep as it should have been, thanks to a series of small-to-moderate volcanic eruptions that have spewed sunlight-blocking particles high into the atmosphere.

Between 2000 and 2010, the average atmospheric concentration of carbon dioxide rose from about 370 parts per million to nearly 390. According to Ryan Neely III, an atmospheric scientist at the University of Colorado, Boulder, if that uptick were the only factor driving climate change, the average global temperature would have risen about 0.2°C. But a surge in the concentration of light-scattering particles in the stratosphere countered as much as 25% of that potential temperature increase.
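
Taking the quoted figures at face value, the arithmetic of that offset is simple:

```python
# Rough arithmetic using only the numbers quoted above
expected_warming_c = 0.2          # deg C expected from the CO2 rise alone, 2000-2010
volcanic_offset_fraction = 0.25   # up to 25% countered by stratospheric particles

offset_c = expected_warming_c * volcanic_offset_fraction
print(f"Volcanic aerosols offset up to ~{offset_c:.2f} C")                 # ~0.05 C
print(f"Leaving roughly {expected_warming_c - offset_c:.2f} C realized")   # ~0.15 C
```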

San Pedro Volcano, as seen from the ISS

In addition, Neely and his colleagues ran a series of simulations indicating that the human contribution of aerosols to the stratosphere – which would have had a similar counteracting effect on the carbon – was minimal between 2000 and 2010. William Randel, an atmospheric scientist at the National Center for Atmospheric Research in Boulder, said that the pattern of stratospheric particulate variations during the past decade “shows the fingerprint of volcanoes, with the right episodes showing up at the right time.”

For some time now, researchers and ecologists have known that sulfur dioxide, a major byproduct of volcanic eruptions, has a global cooling effect. Once introduced into the upper atmosphere, the resulting particulate matter blocks solar radiation and prevents it from being absorbed by the Earth’s soil, water, and plant life. In fact, a massive series of eruptions that took place around the Cretaceous–Paleogene boundary is believed to be linked to the extinction of the dinosaurs.

For many years, geoengineers have considered releasing sulfur dioxide into the upper atmosphere in order to slow down the process of Climate Change – a measure intended to give Earth’s scientists more time to develop alternative fuels and its people more time to get their act together. At this juncture, however, it seems the planet has obliged us and given us a bit of a window, entirely unasked.

It’s good to know that human agency alone does not determine the course this planet will take. At the same time however, one should not get too enthused and think this means we’re in for a big reprieve. Based on the most recent data, humanity still only has a few decades before the worst begins to happen and our world slowly becomes uninhabitable.

Sources: sciencemag.org, Wired.com

 

Happy Birthday Copernicus!

As I learned not long ago, today is the 540th birthday of the late, great man who demonstrated that the Earth revolves around the Sun. And so I thought I’d take some time out of my busy (not so much today!) schedule to honor this great man and the massive contribution he made to astronomy, science, and our understanding of the universe.

Given the importance of these contributions, I shall do my best to pay homage to him while being as brief and succinct as I possibly can. Ready? Here goes…

Background:
Born in Toruń (Thorn), Poland on 19 February 1473, Mikolaj Kopernik was the youngest of four children born into a wealthy merchant family. Given this background, Copernicus’ family was able to provide an extensive education for their son, which took him from Toruń to Włocławek to Krakow, where he attended university. In this time, he learned to speak many languages – including Polish, Greek, Italian, German and Latin (the language of academia in his day) – and also showed himself to be adept at mathematics and science.

During this time, he also received a great deal of exposure to astronomy, since it was during his years in Krakow (1491-1495) that the Krakow astronomical-mathematical school was experiencing its heyday. He was also exposed to the writings of Aristotle and Averroes, and became very self-guided in his learning, collecting numerous books on the subject of astronomy for his personal library.

Leaving Krakow without taking a degree, Copernicus moved to Warmia (northern Poland), where he turned to the study of canon law, perhaps in part because of his family’s strong Roman Catholic background. However, his love for the humanities and astronomy never left him, and he devoted himself to these subjects even as he worked to obtain his doctorate in law. It was during his subsequent studies in Bologna, Italy, that he met the famous astronomer Domenico Maria Novara da Ferrara and became his disciple and assistant.

Under Novara, Copernicus began critiquing the logical contradictions in the two most popular systems of astronomy – Aristotle’s theory of homocentric spheres, and Ptolemy’s mechanism of eccentrics and epicycles – contradictions that would eventually lead him to doubt both models. In the early 1500s, while studying medicine at the University of Padua in Italy, he used the opportunity to pore over the library’s many ancient Greek and Latin texts for historical information about ancient astronomical, cosmological, and calendar systems.

In 1503, having finally earned his doctorate in canon law, Copernicus returned to Warmia, where he would spend the remaining 40 years of his life. It was here that all of his observations about the movements of the planets, and the contradictions in the prevailing astronomical models, would crystallize into his model of a heliocentric universe. However, fearing that publication of his theories would lead to official sanction from the church, he withheld his research until a year before he died.

It was only in 1542, after he had been seized with apoplexy and paralysis, that he sent his treatise, De revolutionibus orbium coelestium (On the Revolutions of the Heavenly Spheres), to Nuremberg to be published. It is said that on the day of his death – May 24th, 1543, at the age of 70 – he was presented with an advance copy of his book.

Impact and Legacy:
The church’s immediate reaction to the publication of Copernicus’ theories was quite limited. In time, Dominican scholars would seek to refute them with logical arguments and Thomist doctrine, objecting to everything from the positions of the planets in the sky to the very idea that the Earth could be in motion. However, in attempting to disprove Copernicus’ theory, his detractors merely fostered a debate that would provide the impetus for re-evaluating the field of physics and, eventually, proving the heliocentric model correct.

And in time, with the help of such astronomers and mathematicians as Galileo, the debate would come to a head. Using the telescope, an instrument he helped pioneer, Galileo was able to demonstrate that the apparent size of the planets at various times of the year did indeed conform to the heliocentric model, and that it was only the distortions of naked-eye observation that made them seem larger (hence closer to Earth) than they really were.

And although Galileo would eventually be forced to recant and spend his last years under house arrest, the Copernican system became the de facto model of astronomy thereafter, and would help to launch the Scientific Revolution, in which several long-established beliefs came to be challenged. These included the age of the Earth, the existence of other moons in our Solar System, universal gravitation, and the conception of the universe as a giant, rational clockwork mechanism.

Final Thoughts:
Naturally, there are purists who would point out that he was not the first to propose a heliocentric system. In fact, the concept of a universe with the sun at the center dates back to Ancient Greece. However, Copernicus was the first astronomer to propose a comprehensive model, one that would later be refined by Galileo Galilei.

Other purists would point out that his system, as he developed it, had numerous observational and mathematical flaws, and that it was only after Galileo’s telescopic observations of the heavens that his theories were made to work. But it is precisely because he was able to work out the truth of our corner of the universe without a reliable telescope that the accomplishment is so meaningful.

In Copernicus’ time, the Aristotelian and Ptolemaic models were still seen by the majority of astronomers as correct, regardless of church doctrine or religious bias. In purely mathematical terms, there was little reason to make the intuitive leap and suppose that the great minds on which Scholastic science was based had gotten it all wrong.

So when it comes right down to it, Copernicus was an intuitive genius of the sort seen only once in a lifetime. What’s more, his discoveries and their publication helped bring humanity out of the Dark Ages – a time when learning, and the hearts and minds of men, were still under the iron grip of the Church – and helped usher in the modern age of science.

And if I can get a bit polemical for a second, it is unfortunate that much of what Copernicus helped to overcome is once again prevalent in society today. In recent years, long-established scientific findings on subjects like evolution, global warming, and homosexuality have been challenged by individuals who claim they are lies or merely “theories” that have yet to be proven. In each case, it is clear what the agenda is, and once again faith and God are being used as justification.

In fact, despite the monumental growth in learning and the explosion in information sharing that has come with the digital age, it seems that misinformation is being spread like never before. Whereas previous generations could always blame ignorance or lack of education, we few who are privileged enough to live in a modern, secular, democratic and industrialized nation have no such excuses.

And yet, it seems that some decidedly medieval trends are determined to persist. Despite living in a time when the vast and infinite nature of the universe is plain to see, there are still those who would insist on making it smaller just so they can sleep soundly in their beds. As if that’s not enough, they feel the need to vilify that which they don’t understand, or openly threaten to kill those who preach it.

Sorry – like I said, polemical! And on this day of days, we can’t help but remember the lessons of history and how often they are ignored. So if I might offer a suggestion to everyone on this day, it would be to choose a subject you feel uninformed about and learn what you can about it. And do not trust just any source; consider the built-in biases and political slants of whatever it is you are reading. And if possible, go out and hug a scientist! Tell them you accept them, do not fear what they have to say, and will not be sending them death threats for doing what they do.

Happy 540th birthday Mikolaj Kopernik!

Should We Be Afraid? A List for 2013

In a recent study, the John J. Reilly Center at the University of Notre Dame published a list of possible threats that could emerge in the new year. The study, titled “Emerging Ethical Dilemmas and Policy Issues in Science and Technology”, sought to address the likely threats people might face as a result of recent developments, particularly in the fields of medical research, autonomous machines, 3D printing, Climate Change, and human enhancement.

The list contained eleven items, presented in random order so that people could assess which they thought was most important and vote accordingly. Each one was detailed and sourced to ensure readers understood the nature of the issue and where the information was obtained. They included:

1. Personalized Medicine:
Within the last ten years, the creation of fast, low-cost genetic sequencing has given the public direct access to genome sequencing and analysis, with little or no guidance from physicians or genetic counselors on how to process the information. Genetic testing may enable prevention and early detection of diseases and conditions, but it may also create a new set of moral, legal, ethical, and policy issues surrounding the use of these tests. These include equal access, privacy, terms of use, accuracy, and the possibility of a new age of eugenics.

2. Hacking medical devices:
Though no incidents have been reported (yet), there is concern that wireless medical devices could prove vulnerable to hacking. The US Government Accountability Office recently released a report warning of this, while Barnaby Jack – a hacker and director of embedded device security at IOActive Inc. – demonstrated the vulnerability of a pacemaker by breaching the security of the wireless device from his laptop and reprogramming it to deliver an 830-volt shock. Because many devices are programmed to allow doctors easy access in case reprogramming is necessary in an emergency, their designs are not geared toward security.

3. Driverless zipcars:
In three states – Nevada, Florida, and California – it is now legal for Google to operate its driverless cars. A human must still be in the vehicle, but not at the controls. Google also plans to marry this idea to the zipcar: fleets of automobiles shared by a group of users on an as-needed basis, with costs shared among them. These fully automated zipcars would change not only the way people travel but the entire urban/suburban landscape. And once they get going, ethical questions surrounding access, oversight, legality, and safety are likely to emerge.

4. 3-D Printing:
3D printing has astounded many scientists and researchers thanks to the sheer number of manufacturing possibilities it has created. At the same time, there is concern that some uses might be unethical, illegal, or just plain dangerous. Take, for example, recent efforts by groups such as Defense Distributed, which intends to use 3D printers to create “Wiki-weapons”, or the possibility that DNA assembly and bioprinting could yield infectious or dangerous agents.

5. Adaptation to Climate Change:
The effects of climate change are likely to be felt differently by different peoples around the world. Geography plays a role in susceptibility, but a nation’s level of development is also intrinsic to how its citizens are likely to adapt. What’s more, we need to address how we intend to manage and manipulate wild species and nature itself in order to preserve biodiversity. This warrants an ethical discussion, not to mention suggestions for how we will address these changes when they come.

6. Counterfeit Pharmaceuticals:
In developing nations, where life-saving drugs are most needed, low-quality and counterfeit pharmaceuticals are extremely common. Detecting such drugs requires expensive equipment that is often unavailable, and the expanding trade in pharmaceuticals is creating a need for legal measures to keep foreign markets from being flooded with cheap or ineffective knock-offs.

7. Autonomous Systems:
War machines and other robotic systems are evolving to the point where they can do away with human controllers or oversight. In the coming decades, machines that can perform surgery, carry out airstrikes, defuse bombs, and even conduct research and development are likely to be created, giving rise to a myriad of ethical, safety, and existential issues. Debate needs to be fostered on how this will affect us and what steps should be taken to ensure that the outcome remains foreseeable and controllable.

8. Human-animal hybrids:
Is interspecies research the next frontier in understanding humanity and curing disease, or a slippery slope, rife with ethical dilemmas, toward creating new species? So far, scientists have kept experimentation with human-animal hybrids at the cellular level and have received support for their research goals. But to some, even modest experiments involving animal embryos and human stem cells are an ethical violation. An examination of the long-term goals and potential consequences is arguably needed.

9. Wireless technology:
Mobile devices, PDAs, and wireless connectivity are having a profound effect in developed nations, with data usage doubling on an annual basis. As a result, telecommunications companies and government agencies are under intense pressure to regulate the radio frequency spectrum. The very way government and society do business, communicate, and conduct their most critical missions is changing rapidly. As such, a policy conversation is needed about how to make the most effective use of the precious radio spectrum, and how to close the digital access divide for underdeveloped populations.

10. Data collection/privacy:
With all the data being transmitted on a daily basis, privacy is a major and growing concern. Considering the amount of personal information a person gives up simply to participate in a social network, establish an email account, or install software on their computer, it is no surprise that hacking and identity theft are also major concerns. And now that data storage, microprocessors, and cloud computing have become inexpensive and widespread, a discussion is needed about what kinds of information gathering are acceptable and how readily a person should surrender details about their life.

11. Human enhancements:
A tremendous amount of progress has been made in recent decades when it comes to prosthetic, neurological, pharmaceutical, and therapeutic devices and methods. Naturally, there is warranted concern that progress in these fields will reach past addressing disabilities and restorative measures and venture into the realm of pure enhancement. With the line between the biological and the artificial being blurred, many are concerned that we may be entering an era where the two are indistinguishable, and where cybernetic, biotechnological, and other enhancements lead to a new form of competition in which people must alter their bodies to keep their jobs or avoid being left behind.

Feel scared yet? Well you shouldn’t. The issue here is about remaining informed about possible threats, likely scenarios, and how we as people can address and deal with them now and later. If there’s one thing we should always keep in mind, it is that the future is always in the process of formation. What we do at any given time controls the shape of it and together we are always deciding what kind of world we want to live in. Things only change because all of us, either through action or inaction, allow them to. And if we want things to go a certain way, we need to be prepared to learn all we can about the causes, consequences, and likely outcomes of every scenario.

To view the whole report, follow the link below. And to vote on which issue you think is the most important, click here.

Source: reilly.nd.edu

Should I Be Afraid of the Future?

Not that long ago, I discovered a site dedicated to taking speculations about the future, crunching data and trends, and producing visualizations of them. They already had me with their graph showing when future technologies will emerge and how they will be interrelated. But then came their visualizations of the future of education and health technology, both of which addressed the same issue – what can we expect within the next few decades, leading up to the middle of this century?

And now, the good folks at Envisioning Technology have created something truly informative and relevant. Entitled “Should I be afraid of the future?”, the infographic addresses all the big questions people might have when it comes to emerging technology, environmental perils, and the kind of technophobia that often results.

“Geophysical disasters, global warming, robot uprisings, zombie apocalypse, overpopulation, and last but not least the end of the Mayan calendar – humanity faces many threats! Will we survive the end of the year? And if we do, what’s next lurking around the corner? What is science fiction, what is science fact? Join in exploring the world of existential risks – but always remember what Carl Sagan said: ‘Extraordinary claims require extraordinary evidence.'”

The questions are broken down into three interrelated fields. First, there is Nature, covering such things as geological disasters, climate change, a possible ice age, and even astronomical events. Then comes Mankind, addressing possible factors such as war, apocalyptic scenarios, and overpopulation. And finally, there is Technology, which asks whether robots and AIs could turn hostile, and whether advances in nanotech, biotech, and neuroscience could prove harmful.

And of course, each question is addressed in a rational, sensible fashion, even when the questions themselves are based on irrational, myth-peddling paranoia. The Mayan Calendar, bio-outbreaks, every possible technophobic impulse, and even a zombie apocalypse are covered. But then again, the infographic is all about addressing fears. Fear, by its very nature, is irrational, and the only cure is information. A well-informed public is a safeguard not only against persecution and bigotry, but against a future full of existential risks.

Source: Envisioning Technology