Improving the efficiency of solar power – which is currently the most promising alternative energy source – is central to ensuring that it becomes an economically viable replacement for fossil fuels and other “dirty” energy sources. And while many solutions have emerged in recent years that have improved solar panel efficiency, many developments are also aimed at the other end of things – i.e. improving the storage capacity of solar batteries.
In the former case, a group of scientists working with the University of Utah believe they’ve discovered a method of substantially boosting solar cell efficiency. By adding a polychromat layer that separates and sorts incoming light, redirecting it to strike particular layers in a multijunction cell, they hope to create a commercial cell that can absorb more wavelengths of light, and therefore generate more energy per volume than conventional cells.
Traditionally, solar cell technology has struggled to overcome a significant efficiency problem. The type of substrate used dictates how much energy can be absorbed from sunlight – but each type of substrate (silicon, gallium arsenide, indium gallium arsenide, and many others) captures only a particular range of wavelengths. Cheap solar cells built on inexpensive silicon have a maximum theoretical efficiency of 34% and a practical (real-world) efficiency of around 22%.
At the other end of things, there are multijunction cells. These use multiple layers of substrates to capture a larger section of the sun’s spectrum and can reach up to 87% efficiency in theory – but are currently limited to 43% in practice. What’s more, multijunction cells are extremely expensive, with intricate wiring and precise structures, all of which lead to increased production and installation costs.
In contrast, the cell created by the University of Utah team used two layers – indium gallium phosphide (for visible light) and gallium arsenide (for infrared light). According to the research team, when their polychromat was added, the power efficiency increased by 16 percent. The team also ran simulations of a polychromat layer with up to eight different absorption layers and claims that it could potentially yield an efficiency increase of up to 50%.
However, there were some footnotes to their report that temper the good news. For one, the potential gain has not been tested yet, so any major increases in solar efficiency remain theoretical at this time. Second, the report states that the reported gain was a percentage of a percentage, meaning that if the original cell efficiency was 30%, then a gain of 16 percent means that the new efficiency is 34.8%. That’s still a substantial gain for a polychromat layer that is easily produced, but not as impressive as it first sounded.
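For readers who want the distinction spelled out, here is a minimal sketch in Python of the difference between a relative gain and an absolute one, using the illustrative 30% baseline from the example above (not measured values):

```python
# Illustrative arithmetic only: the 16% improvement is relative, not an
# absolute jump in conversion efficiency.
baseline_efficiency = 0.30          # hypothetical original cell efficiency (30%)
relative_gain = 0.16                # the reported 16 percent improvement

new_efficiency = baseline_efficiency * (1 + relative_gain)
absolute_gain = new_efficiency - baseline_efficiency

print(f"New efficiency: {new_efficiency:.1%}")   # 34.8%, not 46%
print(f"Absolute gain:  {absolute_gain:.1%}")    # 4.8 percentage points
```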
However, given that the biggest barrier to multijunction solar cell technology is manufacturing complexity and the associated cost, anything that boosts cell efficiency on the front end without requiring major changes to the manufacturing process will help with the long-term commercialization of the technology. Advances like this could help make multijunction cells cost-effective for personal deployment and allow them to scale in a similar fashion to cheaper devices.
In the latter case, where energy storage is concerned, a California-based startup called Enervault recently unveiled battery technology that could increase the amount of renewable energy utilities can use. The technology is based on inexpensive materials that researchers had largely given up on because batteries made from them didn’t last long enough to be practical. But the company says it has figured out how to make the batteries last for decades.
The technology is being demonstrated in a large battery at a facility in the California desert near Modesto, one that stores one megawatt-hour of electricity, enough to run 10,000 100-watt light bulbs for an hour. The company has been testing a similar, though much smaller, version of the technology for about two years with good results. It has also raised $30 million in funding, including a $5 million grant from the U.S. Department of Energy.
The technology is a type of flow battery, so called because the energy storage materials are in liquid form. They are stored in big tanks until they’re needed and then pumped through a relatively small device (called a stack) where they interact to generate electricity. Building bigger tanks is relatively cheap, so the more energy storage is needed, the better the economics become. That means the batteries are best suited for storing hours’ or days’ worth of electricity, and not delivering quick bursts.
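To illustrate why bigger tanks improve the economics, the back-of-the-envelope sketch below assumes a fixed price for the stack and a per-kilowatt-hour price for tanks and electrolyte; all figures are made up for illustration and do not come from Enervault:

```python
# Hypothetical figures for illustration only -- not Enervault's actual costs.
STACK_COST = 50_000.0        # fixed cost of the power-converting stack ($)
TANK_COST_PER_KWH = 150.0    # marginal cost of tanks + electrolyte ($ per kWh)

def cost_per_kwh(storage_kwh: float) -> float:
    """Total system cost divided by storage capacity."""
    return (STACK_COST + TANK_COST_PER_KWH * storage_kwh) / storage_kwh

# Cost per stored kWh falls as storage duration grows (100 kW stack assumed).
for hours in (1, 4, 12, 24):
    kwh = 100 * hours
    print(f"{hours:>2} h ({kwh:>4} kWh): ${cost_per_kwh(kwh):,.0f}/kWh")
```

The fixed stack cost gets spread over more and more stored energy, which is why flow batteries favor hours- or days-long storage rather than quick bursts.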
This is especially good news for solar and wind companies, which have remained plagued by problems of energy storage despite improvements in both yield and efficiency. Enervault says that when the batteries are produced commercially at even larger sizes, they will cost just a fifth as much as vanadium redox flow batteries, which have been demonstrated at large scales and are probably the type of flow battery closest to market right now.
And the idea is not limited to startups. Researchers at Harvard recently made a flow battery that could prove cheaper than Enervault’s, but the prototype is small and could take many years to turn into a marketable version. An MIT spinoff, Sun Catalytix, is developing an advanced flow battery as well, but its prototype is similarly small. And other types of inexpensive, long-duration batteries are being developed, using materials such as molten metals.
One significant drawback to the technology is that it’s less than 70 percent efficient, which falls short of the 90 percent efficiency of many batteries. The company says the economics still work out, but such a wasteful battery might not be ideal for large-scale renewable energy. More solar panels would have to be installed to make up for the waste. What’s more, the market for batteries designed to store hours of electricity is still uncertain.
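As a rough sense of what that efficiency gap means for generation capacity, here is a back-of-the-envelope sketch (not figures from the company or the article) of how much extra a solar array must generate to cover a battery's round-trip losses:

```python
# Back-of-the-envelope only: extra generation a lossy battery demands.
def generation_needed(delivered_kwh: float, round_trip_efficiency: float) -> float:
    """kWh that must be generated to deliver `delivered_kwh` through storage."""
    return delivered_kwh / round_trip_efficiency

need_70 = generation_needed(100, 0.70)   # ~143 kWh generated per 100 kWh delivered
need_90 = generation_needed(100, 0.90)   # ~111 kWh generated per 100 kWh delivered

print(f"70% battery: {need_70:.0f} kWh generated per 100 kWh delivered")
print(f"90% battery: {need_90:.0f} kWh generated per 100 kWh delivered")
print(f"Extra panels vs. a 90% battery: {need_70 / need_90 - 1:.0%}")   # ~29%
```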
A combination of advanced weather forecasts, responsive fossil-fuel power plants, better transmission networks, and smart controls for wind and solar power could delay the need for them. California is requiring its utilities to invest in energy storage but hasn’t specified what kind, and it’s not clear what types of batteries will prove most valuable in the near term, slow-charging ones like Enervault’s or those that deliver quicker bursts of power to make up for short-term variations in energy supply.
Tesla Motors, one company developing the latter type, hopes to make them affordable by producing them at a huge factory. Meanwhile, new developments and materials (e.g. graphene) are being considered all the time that could improve both the efficiency and storage capacity of batteries. And with solar panels and wind becoming increasingly cost-effective, it seems all but inevitable that storage methods will catch up.
Sources: extremetech.com, technologyreview.com
Jack Andraka is at it again! For those who follow this blog (or subscribe to Forbes or watch TED Talks), this young man probably needs no introduction. But if not, then you might not know that Andraka is the young man who – at 15 years of age – invented an inexpensive litmus test for detecting pancreatic cancer. This invention won him first prize at the 2012 Intel International Science and Engineering Fair (ISEF), and was followed up less than a year later with a handheld device that could detect cancer and even explosives.

As part of the project, Diggs and Andraka also developed an inexpensive water filter made out of plastic bottles. Next, they hope to do large-scale testing of their sensor in Maryland, where they live. They also want to develop a cell-phone-based sensor reader that lets users quickly evaluate water quality and post the test results online. Basically, it’s all part of what is fast becoming the digitization of health and medicine, where the sensors are portable and the information can be uploaded and shared.
The acquisition makes sense given that Silevo’s technology has the potential to reduce the cost of installing solar panels, SolarCity’s main business. But the decision to build a huge factory in the U.S. seems daring – especially given the recent failures of other U.S.-based solar manufacturers in the face of competition from Asia. Ultimately, however, SolarCity may have little choice, since it needs to find ways to reduce costs to keep growing.
Silevo isn’t the only company to produce high-efficiency solar cells. A version made by Panasonic is just as efficient, and SunPower makes ones that are significantly more so. But Silevo claims that its panels could be made as cheaply as conventional ones if it could scale its production capacity up from the current 32 megawatts to the factory Musk has planned, which is expected to produce 1,000 megawatts or more.
It’s official: all of Tesla’s electric car technology is now available for anyone to use. Yes, after hinting last weekend that he might be willing to do so, Musk announced this week that his company’s patents are now open source. In a blog post on the Tesla website, Musk explained his reasoning. Initially, Musk wrote, Tesla created patents out of concern that large car companies would copy the company’s electric vehicle technology and squash the smaller start-up.
But that turned out to be an unnecessary worry, as carmakers have by and large decided to downplay the viability and relevance of EV technology while continuing to focus on gasoline-powered vehicles. At this point, he thinks that opening things up to other developers will speed up electric car development. And after all, there’s something to be said for competition driving innovation.
And the move should come as no surprise. As the Hyperloop demonstrated, Musk is not above making grandiose gestures and allowing others to run with ideas he knows will be profitable. And as Musk himself pointed out in a webcast made after the announcement, his sister company SpaceX – which deals with the development of reusable space transports – has virtually no patents.
As it stands, auto emissions account for a large and growing share of greenhouse gas emissions. For decades now, the technology has been in development and the principles have all been known. However, whether it has been due to denial, intransigence, complacency, or all of the above, no major moves have been made to transition the auto industry toward vehicles that do not rely on fossil fuels.






Even though the most recently-built skyscrapers are helping change things by employing renewable energy and sustainable methods – like the 











Hyper Filter Skyscraper: Designed by Umarov Alexey of Russia, the Hyper Filter Skyscraper recognizes the threat of environmental pollution and seeks to merge carbon capture technology with the building’s design. Under today’s levels of pollution, harmful substances spread over hundreds of kilometers, so a whole region or even a country can act as a single pollution source. Hence the plan to place an air-scrubbing building at the heart of the problem – an urban core.
Manuel Gea González Hospital: Located in Mexico City, this hospital was unveiled last year. The building features a “smog-eating” façade that covers 2,500 square meters and carries a titanium dioxide coating that reacts with ambient ultraviolet light to neutralize elements of air pollution, breaking them down into less noxious compounds like water. This was Berlin-based Elegant Embellishments’ first full-scale installation, and its designers claim the façade negates the effects of 1,000 vehicles each day.
A simple vertical grid scaffold forms the framework and takes all the ingredients it needs for material propagation from the surrounding environment. Individual living spaces are built within this gridwork, which creates open square spaces between lattices that can then be filled by tenements. Its pattern of growth is defined by environmental factors such as wind, weather, and the saturation of carbon dioxide within the immediate atmosphere.



This Tuesday, the White House received the latest draft of the Climate Assessment Report, a scientific study produced by the National Climate Assessment to determine the impacts of climate change. In addition to outlining the risks it poses to various regions in the US, the report also addresses the apparent increase in the number of severe weather events that have taken place in the past few years, and how these events affect local economies and communities.
Henry Jacoby, co-director of the Joint Program on the Science and Policy of Global Change at MIT, was joined by other scientists and White House officials when he claimed that this is the most detailed and U.S.-focused scientific report on global warming. Above all, the report’s most chilling claim is that “Climate change, once considered an issue for a distant future, has moved firmly into the present.”
The report says the intensity, frequency, and duration of the strongest Atlantic hurricanes have increased since the early 1980s. Winter storms have increased in frequency and intensity and shifted northward since the 1950s, with heavy downpours increasing by 71 percent in the northeast alone. Heat waves are projected to intensify nationwide, with droughts in the southwest expected to get stronger. Sea levels have risen 20 centimeters since 1880 and are projected to rise between 0.3 and 1.2 meters by 2100.
As she described it, America is basically in a boxing match, and is currently on the ropes:
And then there’s more pollen because of warming weather and the effects of carbon dioxide on plants. Ragweed pollen season lengthened by 24 days in the Minnesota-North Dakota region between 1995 and 2011, the report says. In other parts of the Midwest, the pollen season has grown longer by anywhere from 11 to 20 days. And all of this has associated costs, not the least of which are damages, insurance costs, and health care expenses.
