After more than 20 years in the making, the Argus II bionic eye was finally approved this past February by the Food and Drug Administration for commercial sale in the US. For people suffering from the rare genetic condition known as retinitis pigmentosa – an inherited, degenerative eye disease that causes severe vision impairment and often blindness – this is certainly good news.
Developed by Second Sight, the Argus II is what is known as a "Retinal Prosthesis System" (RPS), one that addresses the main effect of retinitis pigmentosa: the diminished ability to distinguish light from dark. While it doesn't actually restore vision to people who suffer from this condition, it can improve their perception of light and dark, allowing them to identify the movement or location of objects.
The Argus II works by using a series of electrodes implanted onto the retina that are wirelessly connected to a video camera mounted on a pair of eyeglasses. The electrodes use electrical impulses transmitted from the camera to stimulate the part of the retina that allows for image perception. By circumventing the parts of the eye affected by the disease, the bionic device is a prosthetic in every sense of the word.
According to Suber S. Huang, director of the University Hospital Eye Institute’s Center for Retina and Macular Disease, the breakthrough treatment is:
[R]emarkable. The system offers a profound benefit for people who are blind from RP and who currently have no therapy available to them. Argus II allows patients to reclaim their independence and improve their lives.
Argus II boasts 20-plus years of research, three clinical trials, and more than $200 million in private and public investment behind it. Still, the system has been categorized by the FDA as a humanitarian use device, meaning there is a “reasonable assurance” that the device is safe and its “probable benefit outweighs the risk of illness or injury.”
Good news for people with vision impairment, and a big step in the direction of restoring sight. And of course, a possible step on the road to human enhancement and augmentation. As always, every development that is made in the direction of correcting human impairment offers the future possibility of augmenting otherwise unimpaired human beings.
As such, it might not be long before there are devices that can give the average human the ability to see in the invisible spectrum, such as IR and ultraviolet frequencies. Perhaps also something that can detect X-rays, gamma rays, and other harmful radiation. Given that the very definition of a cyborg is "a being with both organic and cybernetic parts", the integration of devices like this arguably marks the dawn of the cybernetic age.
And be sure to check out this promotional video by Second Sight showing how the device works:
Robots have been making quite the stir in the news lately. And no, that's not a delicious pun on the robotic bartender – aka the Makr Shakr – it's just a frank appraisal of the leaps and bounds by which robots, and their integration into society, are proceeding. Between developing machines that can imitate human movements, mimic human facial expressions, and carry out specialized tasks, it appears that we may actually be on the verge of a world where robots are a common feature.
Just a few days ago, DARPA and Boston Dynamics unveiled their most anthropomorphic robot to date – the Atlas Robot. And this came less than a month after the Global Future 2045 conference took place in Moscow, where Geminoid robot clones – so realistic that they were virtually indistinguishable from their human counterparts – were put on display. And yet, it seems that the Singularitarians and roboticists of the world were not yet finished for the season.
Now it appears that there is a robotic arm that is capable of performing another highly specialized task: painting. Created by a team at the University of Konstanz in Germany, the e-David is capable of the artistic variety of painting – not the kind that involves spraying enamel onto car frames, something robots have been doing for decades, much to the chagrin of auto workers.
Granted, it is not capable of "artistic inspiration"; instead, it takes a picture of what it wants to copy and works from there. What's more, e-David doesn't require programming directions that tell it how to paint, relying instead on a concept known as "visual optimization" to make its own decisions. After each brush stroke, e-David takes a picture, and its software calculates where the next stroke needs to fall, what colors are needed, and whether the area needs to be lighter or darker.
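To make the idea concrete, here's a minimal sketch (in Python, and emphatically not e-David's actual software) of what such a "paint, look, correct" feedback loop might look like. The greyscale canvas, the patch-sized "stroke", and the function names are all illustrative assumptions rather than anything published by the Konstanz team:

```python
import numpy as np

def visual_optimization_loop(target, canvas, max_strokes=500, patch=5, tol=0.02):
    """Greedy feedback loop: after each simulated 'stroke', re-examine the
    canvas and place the next stroke where it deviates most from the target."""
    h, w = target.shape
    for _ in range(max_strokes):
        error = target - canvas                      # the "take a picture and compare" step
        if np.mean(np.abs(error)) < tol:             # close enough to the source image
            break
        # find the spot whose tone is furthest from the target
        y, x = np.unravel_index(np.argmax(np.abs(error)), error.shape)
        y0, y1 = max(0, y - patch), min(h, y + patch)
        x0, x1 = max(0, x - patch), min(w, x + patch)
        # a 'stroke' nudges that patch lighter or darker toward the target tone
        canvas[y0:y1, x0:x1] += 0.5 * error[y0:y1, x0:x1]
    return canvas

# toy example: reproduce a random greyscale "photo" on a blank canvas
target = np.random.rand(64, 64)
result = visual_optimization_loop(target, np.zeros((64, 64)))
```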
In short, e-David can do the time-consuming and often monotonous task of reproducing original works of art, or cleaning them up, but it cannot create something all on its own. Now let's all join the artists of the world in breathing a collective sigh of relief. The team of university researchers described e-David's "process" in a release in which they stated:
We equipped a standard robot with all necessary means for painting. Five different brushes can be used, color can be selected from a repository with 24 colors, brushes can be cleaned, and colors can be distributed precisely on the canvas. The machine watches itself while painting and decides independently where to add new strokes. This way, paintings are created that are not completely defined by the programmer, but are the result of a visual optimization process.
While e-David isn't the first robot capable of painting, it is in a class by itself when it comes to the quality of the images it creates. Much like the supercomputer Iamus – which composed classical music that was performed by the London Symphony Orchestra and recorded on an album – it is impossible to tell when looking at the finished product whether the paintings were crafted by hand or machine. An interesting twist on the Turing Test, I think!
What’s next? A robot that can compose pop songs? I don’t think I can stand another version of “Friday”! And be sure to enjoy this video of e-David at work:
Judgement Day has come early this year! At least that's the impression I got when I took a look at this new DARPA prototype for a future robotic infantryman. With its anthropomorphic frame, servomotors and cables, sensor-clustered face, and the shining lights on its chest, this machine just screams Terminator! Yet surprisingly, it is being developed to help human beings. Yeah, that's what they said about Skynet, right before it nuked us!
Yes, this 6-foot, 330-pound robot, which was unveiled this past Thursday, was in fact designed as a testbed humanoid for disaster response. Designed to carry tools and tackle rough terrain, this robot – and those like it – are intended to operate in hazardous or disaster-stricken areas, assisting in rescue efforts and performing tasks that would ordinarily endanger the lives of human workers.
Funded by DARPA as part of its Robotics Challenge, the robot was developed by Boston Dynamics, the same people who brought you the AlphaDog – aka the Legged Squad Support System (LS3, pictured above) – and the Petman soldier robot. The former was developed as an all-terrain quadruped that could serve as an infantry-support vehicle, carrying a squad's heavy ordnance over rough terrain.
The latter, like Atlas, was developed as a testbed to see just how anthropomorphic a robot can be – i.e. whether or not it could move, run and jump with fluidity rather than awkward "robot" movements, and handle different surfaces. Some of you may recall seeing a video or two of it doing pushups and running on a treadmill back in 2011.
But Atlas represents something vastly different and more complex than these other two machines. It is designed not only to walk and carry things, but also to travel over rough terrain and climb using its hands and feet. Its head includes stereo cameras and a laser range finder to help it navigate its environment.
And, as Boston Dynamics claimed in a press release, the bot also possesses "sensate hands" that are capable of using human tools, and "28 hydraulically actuated degrees of freedom". Its only weakness, at present, is the electrical power supply it is tethered to. But other than that, it is the most "human" robot – purely in terms of physical capabilities – to date. Not only that, but it also looks pretty badass when seen in this full-profile pic, doesn't it?
The DARPA Robotics Challenge is designed to help evolve machines that can cope with disasters and hazardous environments like nuclear power plant accidents. The seven teams currently in the challenge will get their own Atlas bot and then program it until December, when trials will be held at the Homestead Miami Speedway in Florida – where they will be presented with a series of challenges.
In the meantime, check out the video below of the Atlas robot as it demonstrates its full range of motion while busting a move! Then tell me if the robot is any less frightening to you. Can't help but look at the full-length picture and imagine a plasma cannon in its hands, can you?
From the way people have been going on about 3D printing in the past few months, you’d think it was some kind of fad or something! But of course, there’s a reason for that. Far from being a simple prescriptive technology that requires us all to update our software or buy the latest version in order to “stay current”, 3D printing is ushering in a revolution that will literally change the world.
From design models and manufactured products, the range of possibilities is now venturing into printed food and even artificial organs. The potential for growth is undeniable, and the pace at which progress is happening is astounding. And on one of my usual jaunts through the tech journals and video-sharing websites, I found a few more examples of the latest applications.
First up is this story from Mashable, a social media news source, that discusses NYU student Marko Manriquez's new invention: the BurritoBot. Essentially a 3D food printer that uses tortillas, salsa, guacamole and other quintessential ingredients, the machine was built by Manriquez for his master's thesis using open-source hardware – including the ORD Bot, a 3D printing mechanical platform (pictured above).
The result is a food printer that can tailor-make burritos and other Mexican delights, giving users the ability to specify which ingredients they want, and in what proportions, all through an app on their smartphone. No demos are available online as of yet, but Mashable provides a pretty good breakdown of how it works, as well as Manriquez's inspiration and intent behind its creation:
Next up, there's Cornell University's food printer, which allows users to create desserts. In this CNN video, Chef David Arnold of the French Culinary Institute shows off the printer by creating a chocolate cake layer by layer, from dough and icing. A grad student from Cornell's Computational Synthesis Lab was on hand to explain that their design is also open-source, with the blueprints and technical design made available online so anyone can build their own.
As Chef Arnold explained, his kitchen has been using the printer to work with ingredients ranging from cookie dough to icing to masa – the corn meal that tortillas are made from. It also allows for a degree of accuracy that many cooks may not possess, while still offering plenty of opportunities to be creative. "The only real limitation now is that the product has to be able to go through a syringe," he said. "Other than that, the sky's the limit."
But even more exciting for some are the opportunities that are now being explored using metals. Using metal powder and an electron beam to form manufactured components, this type of "additive manufacturing" is capable of turning out parts that are amazingly complex – far more so than anything created through conventional machining.
In this next video, the crew from CNNMoney travels to Oak Ridge National Laboratory in Tennessee to speak to the Automation, Manufacturing and Robotics Group. This government-funded lab specializes in making parts that are basically "structures within structures" – the kind of things used in advanced prosthetic limbs, machinery, and robots. As they claim, this sort of manufacturing is made possible thanks to the new generation of 3D ABS and metal printers.
What's more, this new process is far more efficient. Compared to old-fashioned forms of machining, it consumes less energy and generates far less material waste. And the range of applications is extensive, embracing fields as divergent as robotics, construction, biomedicine, and aerospace. At present, the only real impediment is the cost of the equipment itself, but that is expected to come down as 3D printing and additive manufacturing achieve greater market penetration.
But of course, all of this pales in comparison to the prospect of 3D-printed buildings. As Behrokh Khoshnevis – a professor of Industrial & Systems Engineering at USC – explains in this last video from TEDxTalks, conventional construction methods are not only inefficient, labor-intensive and dangerous, they may very well be hampering development efforts in the poorer parts of the world.
As anyone with a rudimentary knowledge of poverty and underdevelopment knows, slums and shanty-towns suffer disproportionately from the problems of crime, disease, illiteracy, and infant mortality. Unfortunately, government efforts to create housing in regions where these types of communities are common are restrained by budgets and resource shortages. With one billion people living in shanties and slum-like shelters, a new means of creating shelter needs to be found for the 21st century.
The solution, according to Khoshnevis, lies in Contour Crafting and Automated Construction – a process which can create a custom house in just 20 hours! As a proponent of Computer-Assisted Design and Computer-Assisted Manufacturing (CAD/CAM), he sees automated construction as a cost-effective and less labor resource-intensive means of creating homes for these and other people who are likely to live in unsafe, unsanitary conditions.
The technology is already in place, so any claims that it is of a "theoretical nature" are moot. What's more, such processes are already being designed to construct settlements on the Moon, incorporating robotics and 3D printing with advanced computer-assisted simulations. As such, Khoshnevis is hardly alone in advocating similar uses here on planet Earth.
The benefits, as he outlines them, are dignity, safety, and far more sanitary conditions for the inhabitants, as well as the social benefits of breaking the pathological cycle of underdevelopment. Be sure to check out his video below. It’s a bit long, but very enlightening!
Once in a while, it's good to take stock of the future and see that it's not all creepy robots and questionable inventions. Much of the time, technological progress really does promise to make life better, and not just "more convenient". It's also especially good to see how it can be made to improve the lives of all people, rather than perpetuating the gap between the haves and the have-nots.
Until next time, keep your heads high and your eyes to the horizon!
It's a good day when a show like Futurama begins turning out new episodes. This past week's episode featured a story in which Bender began taking advantage of 3D printing to replicate a famous folk singer's one-of-a-kind guitar. Naturally, things got out of control, and the story was chock full of social commentary on the idea that the printing revolution might actually be ushering in an age where artificial replicas could infringe on the real thing.
For the life of me, I can't find clips of this episode anywhere. Guess it's too soon to expect anyone to upload it to YouTube, lazy pirates! But I found the next best thing: a time-lapse video of a Bender figurine being printed out on a MakerBot. It's set to the extended cut of Futurama's theme, and the result is a pretty cool replica of the jive-talking, amoral alcoholic robot himself!
We all know it's coming: the day when machines will be indistinguishable from human beings. And with a robot that is capable of imitating human body language and facial expressions, it seems we are that much closer to realizing it. It's known as the Geminoid HI-2, a robotic clone of its maker, famed Japanese roboticist Hiroshi Ishiguro.
Ishiguro unveiled his latest creation at this year’s Global Future 2045 conference, an annual get-together for all sorts of cybernetics enthusiasts, life extension researchers, and singularity proponents. As one of the world’s top experts on human-mimicking robots, Ishiguro wants his creations to be as close to human as possible.
Alas, this has been difficult, since human beings tend to fidget and experience involuntary tics and movements. But that's precisely what his latest bot excels at. Though it still requires a remote controller, the Ishiguro clone has all of his idiosyncrasies hard-wired into its frame, and can even give you dirty looks.
This is not the first robot Ishiguro has built, as his female androids Repliee Q1Expo and Geminoid F will attest. But above all, Ishiguro loves to make robotic versions of himself, since one of his chief aims with robotics is to make human proxies. As he said during his talk, “Thanks to my android, when I have two meetings I can be in two places simultaneously.” I honestly think he was only half-joking!
During the presentation, Ishiguro's robotic clone was on stage with him, where it realistically fidgeted as he pontificated and joked with the audience. The Geminoid was controlled from off-stage by an unseen technician, and it fidgeted, yawned, and made annoyed facial expressions throughout. At the end of the talk, Ishiguro's clone suddenly jumped to life and told a joke that startled the crowd.
In Ishiguro's eyes, robotic clones can already outperform humans at basic human behaviors thanks to modern engineering. And though they are not yet at the point where the term "android" can be applied, he believes it is only a matter of time before they can rival and surpass the real thing. Of course, roboticists and futurists also speak of the "uncanny valley" – that strange, off-putting feeling people get when robots begin to closely, but not quite perfectly, resemble humans. If said valley were a physical place, I think we can all agree that Ishiguro would be its damn mayor!
And judging by these latest creations, the time when robots are indistinguishable from humans may be coming sooner than we think. As you can see from the photos, there seems to be very little difference in appearance between his robots and their human counterparts. And those who viewed them live have attested to them being surprisingly life-like. And once they are able to control themselves and have an artificial neural net that can rival a human one in terms of complexity, we can expect them to mimic many of our other idiosyncrasies as well.
As usual, there are those who will respond to this news with anticipation and those who respond with trepidation. Where do you fall? Maybe these videos from the conference of Ishiguro’s inventions in action will help you make up your mind:
AR displays are becoming all the rage, thanks in no small part to Google Glass and other display glasses. And given the demand for and appeal of the technology, it seemed like only a matter of time before AR displays began providing real-time navigation for vehicles. Visor-mounted heads-up displays have been available for decades, but fully integrated displays have yet to be produced.
The LiveMap helmet is one such concept: a motorcycle helmet that superimposes information and directions onto the visor. Based in Moscow, the startup behind it seeks to combine a head-mounted display, built-in navigation, and Siri-like voice recognition. The helmet will have a translucent, color display projected onto the visor in the center of the field of vision, and a custom user interface – English-only at launch – based on Android.
This augmented-reality helmet display includes a light sensor for adjusting image brightness according to external light conditions, as well as an accelerometer, gyroscope, and digital compass for tracking head movements. Naturally, the company anticipated that concerns about rider safety would come up, hence the numerous safety features they've included.
For one, the digital helmet is cleverly programmed to display maps only when the rider’s speed is close to zero to avoid distracting them at high speeds. And for the sake of hands-free control, it comes equipped with a series of voice commands for navigation and referencing points of interest. No texting and driving with this thing!
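As a rough illustration of that speed-gating logic, here's a small sketch of my own – not LiveMap's firmware, and the threshold values and function names are assumptions:

```python
def should_show_map(speed_kmh: float, threshold_kmh: float = 5.0) -> bool:
    """Hypothetical gating rule: render the full map overlay only when the
    rider is (nearly) stationary; at speed, fall back to minimal cues."""
    return speed_kmh <= threshold_kmh

def display_brightness(ambient_lux: float, floor: float = 0.1) -> float:
    """Scale display brightness (0..1) with the light sensor's reading so the
    overlay stays readable in sunlight without dazzling the rider at night."""
    return min(1.0, max(floor, ambient_lux / 10000.0))

# riding at 40 km/h in bright daylight: no map, full brightness
print(should_show_map(40.0))        # False
print(display_brightness(25000.0))  # 1.0
```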
So far, the company has built some prototype hardware and software for the helmet with the help of grants from the Russian government, and is also seeking venture capital. However, it has found little within its home country, and has been forced to crowdfund via an Indiegogo campaign. As CEO Andrew Artishchev wrote on LiveMap's Indiegogo page:
Russian venture funds are not disposed to invest into hardware startups. They prefer to back up clones of successful services like Groupon, Airbnb, Zappos, Yelp, Booking, etc. They are not interested in producing hardware either.
All told, they are seeking to raise $150,000 to make press molds for the helmet capsule. At present, they have raised $5,989 with 31 days remaining. Naturally, perks are on offer, ranging from thank-yous and a poster (for donations of $1 to $25) to a test drive in a major city (Berlin, Paris, Rome, Moscow, Barcelona) for $100, with the grand prize – a helmet itself – going for a donation of $1,500.
And of course, the company has announced some "stretch goals", just in case people want to help them overshoot their target of $150,000. For $300,000, they will add Bluetooth with a headset profile to the helmet, and for $500,000, they will build in a high-resolution 13-megapixel photo and video camera. Good to have goals.
Personally, I'd help sponsor this, except for the fact that I don't have a motorbike and wouldn't know how to use it if I did. But a long drive across the Autobahn or the Amber Route would be totally boss! Speaking of which, check out the company's promotional video:
In a major developmental milestone, the X-47B made its first arrested landing aboard an aircraft carrier yesterday. This latest test, which comes after a successful arrested landing on an airstrip and a successful launch from an aircraft carrier, may help signal a new era for the use of unmanned aircraft in military operations.
For months now, the US Navy has been testing the Unmanned Combat Air System (UCAS) – the first drone aircraft that requires only minimal human intervention – pushing the boundaries in the hopes of determining how far the new autonomous air system can go. And with this latest landing, they have proven that the X-47B is capable of both launching from and landing at sea.
Aircraft landings on a carrier are a tricky endeavor even for experienced pilots, as the ship's flight deck is hardly spacious, and rises, falls, and sways with the ocean waves. To stop their forward momentum in the shortest distance possible, carrier aircraft have a hook on the underside of the fuselage that latches onto cables stretched across the flight deck. This means that pilots need to land precisely enough for the hook to catch a wire and bring the aircraft to a complete stop in time.
The test flight began when the drone took off from the Naval Air Station at Patuxent River, Md., and flew out to meet the USS George H.W. Bush at sea – a flight which took 35 minutes. Upon reaching the carrier – the same one from which it was catapult-launched this past May – it touched down and caught the number-three wire with its tailhook at a speed of 145 knots, coming to a dead stop in less than 350 feet. After the first landing, it was launched from the Bush's catapult and then made a second arrested landing.
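For a sense of the forces involved, here's a quick back-of-the-envelope calculation from those two figures – assuming, purely for illustration, that the deceleration is roughly constant over the run-out:

```python
# Average deceleration implied by the quoted figures, assuming the X-47B
# slows uniformly from 145 knots to a stop over 350 feet.
knots, distance_ft = 145, 350

v = knots * 0.514444          # touchdown speed in m/s (~74.6 m/s)
d = distance_ft * 0.3048      # stopping distance in m (~106.7 m)

a = v**2 / (2 * d)            # from v^2 = 2*a*d
print(f"{a:.1f} m/s^2, or about {a / 9.81:.1f} g")   # ~26 m/s^2, roughly 2.7 g
```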
The Navy tweeted about the success shortly after it happened, and Ray Mabus – Secretary of the Navy – followed that up with a press statement:
The operational unmanned aircraft soon to be developed have the opportunity to radically change the way presence and combat power are delivered from our aircraft carriers.
Naturally, there is still plenty of testing to be done before such drones can be considered ready for combat zones. Automated aerial refueling, for example, is scheduled for some time in 2014 – another aspect of the UCAS program the Navy is determined to try before deploying the aircraft in actual operations. Still, for fans and critics alike, this was a major step.
Which brings us to the darker side of this latest news. For many, a fleet of semi or fully-automated drones is a specter that induces serious terror. Earlier this year, the Obama administration sought to allay fears about the development of the X-47 and the ongoing use of UAVs in combat operations by claiming that steps would be taken to ensure that when it came to life and death decisions, a human would always be at the helm.
But of course, promises have been broken when it comes to the use of drones, which doesn’t inspire confidence here. Just eight days after the Obama Administration promised to cease clandestine operations where drones were used by the CIA to conduct operations in Pakistan, Yemen, and Somalia, one such drone was used to kill Wali ur-Rehman – the second in command of the Pakistani Taliban. This was a direct violation of Obama’s promise that UAVs would be used solely against Al-Qaeda and other known anti-US terrorist groups outside of Afghanistan.
What's more, the development of unmanned drones able to function with even less human oversight has only added to many people's fears about how, where, and against whom these drones will be used. Much has already gone on without the public's knowledge, thanks in part to the fact that only a handful of people are needed to control these aircraft from remote locations. If human agency is further removed, what will this mean for oversight, transparency, and ensuring such weapons are never turned on a nation's own citizens?
But of course, it is important to point out that the X-47B is but an experimental precursor to actual production models of a design that’s yet to be determined. At this point, it is not farfetched to assume that preventative measures will be taken to ensure that no autonomous drone will ever be capable of firing its weapons without permission from someone in the chain of command, or that human control will still be needed during combat phases of an operation. Considering the potential for harm and the controversy involved, it simply makes sense.
But of course, when it comes to issues like these, the words "trust us" and "don't worry" are too often offered by those spearheading the development. Much as with domestic surveillance and national security matters, concerned citizens are simply unwilling to accept the explanation that "this will never be used for evil" anymore. At this juncture, the public must stay involved and apprised, and safeguards must be instituted from the very beginning.
And be sure to check out this video of the X-47B making its first arrested landing. Regardless of the implications of this latest flight, you have to admit that it was pretty impressive:
When it comes to modern research and development, biomimetics appear to be the order of the day. By imitating the function of biological organisms, researchers seek to improve the function of machinery to the point that it can be integrated into human bodies. Already, researchers have unveiled devices that can do the job of organs, or bionic limbs that use the wearer’s nerve signals or thoughts to initiate motion.
But what of machinery that can actually send signals back to the user, registering pressure and stimulation? That's what researchers from the Georgia Institute of Technology have been working on of late, and it has inspired them to create a device that can do the job of the largest human organ of them all – our skin. Back in April, they announced that they had successfully created a brand of "smart skin" that is sensitive enough to rival the real thing.
In essence, the skin is a transparent, flexible array that uses 8,000 touch-sensitive transistors (aka taxels) that emit an electrical signal when agitated. Each of these comprises a bundle of some 1,500 zinc oxide nanowires, which connect to electrodes via a thin layer of gold, enabling the array to pick up changes in pressure as low as 10 kilopascals – roughly what human skin can detect.
Mimicking the sense of touch electronically has long been the dream of researchers, and it has traditionally been accomplished by measuring changes in resistance. But the team at Georgia Tech experimented with a different approach: measuring the tiny polarization changes that occur when piezoelectric materials such as zinc oxide are placed under mechanical stress. In these transistors, then, piezoelectric charges control the flow of current through the nanowires.
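To picture how such an array might be read out in software, here's a toy sketch of my own – the names and grid dimensions (roughly 8,000 cells) are assumptions, not Georgia Tech's code – treating each taxel as a pressure reading with the ~10 kPa detection floor mentioned above:

```python
import numpy as np

PRESSURE_FLOOR_KPA = 10.0   # sensitivity floor quoted for the array

def touch_map(pressures_kpa: np.ndarray) -> np.ndarray:
    """Return a boolean 'touch image': True wherever a taxel registers a
    pressure change at or above the array's ~10 kPa detection floor."""
    return pressures_kpa >= PRESSURE_FLOOR_KPA

# toy 92 x 87 grid (~8,000 taxels) with a light fingertip press in one spot
taxels = np.zeros((92, 87))
taxels[10:14, 20:24] = 35.0          # ~35 kPa over a few cells
print(touch_map(taxels).sum(), "taxels activated")
```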
In a recent news release, lead author Zhong Lin Wang of Georgia Tech’s School of Materials Science and Engineering said:
Any mechanical motion, such as the movement of arms or the fingers of a robot, could be translated to control signals. This could make artificial skin smarter and more like the human skin. It would allow the skin to feel activity on the surface.
This, when integrated to prosthetics or even robots, will allow the user to experience the sensation of touch when using their bionic limbs. But the range of possibilities extends beyond that. As Wang explained:
This is a fundamentally new technology that allows us to control electronic devices directly using mechanical agitation. This could be used in a broad range of areas, including robotics, MEMS, human-computer interfaces, and other areas that involve mechanical deformation.
This is not the first time that bionic limbs have come equipped with electrodes to enable sensation. In fact, the robotic hand designed by Silvestro Micera of the Ecole Polytechnique Federale de Lausanne in Switzerland seeks to do the same thing. Using electrodes that connect the fingertips, palm and index finger to the wearer's arm nerves, the device registers pressure and tension in order to help the wearer better interact with their environment.
Building on these two efforts, it is easy to get a glimpse of what future prosthetic devices will look like. In all likelihood, they will be skin-colored and covered with a soft “dermal” layer that is studded with thousands of sensors. This way, the wearer will be able to register sensations – everything from pressure to changes in temperature and perhaps even injury – from every corner of their hand.
As usual, the technology may have military uses, since the Defense Advanced Research Projects Agency (DARPA) is involved. For that matter, the U.S. Air Force, the U.S. Department of Energy, the National Science Foundation, and the Knowledge Innovation Program of the Chinese Academy of Sciences are all funding it as well. So don't be too surprised if bots wearing a convincing suit of artificial skin start popping up in your neighborhood!
Big public events are often used to showcase new technology: the Consumer Electronics Show in Las Vegas, the Bett Show in London, and now the Glastonbury outdoor festival in England, where the mobile phone company Vodafone recently chose to showcase a new line: the Power Shorts, an item of clothing that turns motion and even body heat into electricity.
The shorts were naturally a big hit, and quite appropriate for the venue, since they use motion (like dancing) to boost the battery life of your mobile devices. Created with help from scientists at the University of Southampton, the shorts incorporate a Power Pocket that contains foam-like ferroelectret materials with pockets of permanently charged surfaces. When the material gets squashed or deformed through movement, that kinetic energy is converted into electrical energy.
But for those who are looking for a way to charge their gear without exertion, Vodafone is also working on a Recharge Sleeping Bag. This bag apparently harvests body heat via the “Seebeck effect,” a process that produces a voltage from the temperature differences across a thermoelectric module.
These modules are printed on the fabric of the sleeping bag, which can supposedly transform an eight-hour snooze into eleven hours of smartphone battery life. As Stephen Beeby, a professor of electronic systems at the University of Southampton who worked on the innovations, explained:
One side of that is cold and the other is hot, and when you get a flow of heat through it you can create a voltage and a current. Voltage and current together equals electrical power.
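Put in symbols, that's the Seebeck relation V = S·ΔT feeding the familiar P = V·I. Here's a tiny worked example – with made-up numbers chosen only to show the arithmetic, not Vodafone's or Southampton's published specs:

```python
# Illustrative values only -- not published specs for the Recharge Sleeping Bag.
seebeck_coeff = 0.05   # effective module coefficient, volts per kelvin (assumed)
delta_T = 4.0          # body-to-ambient temperature difference in K (assumed)
current = 0.01         # current drawn by the charging circuit in amps (assumed)

voltage = seebeck_coeff * delta_T    # V = S * dT (the Seebeck effect)
power = voltage * current            # "voltage and current together equals electrical power"
energy_Wh = power * 8                # harvested over an 8-hour snooze

print(f"{voltage:.2f} V, {power*1000:.1f} mW, {energy_Wh*1000:.0f} mWh over 8 hours")
```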
And this is not the first time that Vodafone has chosen to unveil something new and innovative that just happens to harvest energy during a musical event. For those who attended the Isle of Wight Festival last year, the Vodafone Booster Brolly – a prototype parasol that keeps your phone charged while it keeps you dry – might ring a bell.
These are by no means the only examples of kinetic energy harvesting these days. For example, a piezoelectric rubber material produced by Princeton and Caltech a few years back is already being considered for shoes and other wearables as a means of recharging personal electronics.
And remember Pavegen, the rubber panels that turned runners' steps at the finish line of the Paris Marathon into actual electricity? This technology is already being adapted to provide electricity for a grammar school in Kent, England, utilizing the thousands of steps students take every day to keep the lights on.
Such concepts are likely to be powering just about all of our devices in the not-too-distant future, at least in part. And beyond personal electronics, piezoelectric generators are also sure to be turning up in buildings and public spaces in the near future. In addition to stairways, hallways, and sidewalks, any surface in the city that moves or is touched on a regular basis could be converted to provide power.
Very clean, and very renewable. People still do a great deal of getting around by foot these days, and if we can convert that motion into energy, so much the better!