The Future is Here: Glucose-Monitoring Contact Lenses

Earlier this year, Google announced that it was developing a contact lens capable of monitoring blood glucose levels. By monitoring a person’s glucose levels through their tears and sending that information to a smartphone, the device promised to do away with tests that require regular blood samples and pinpricks. And now, a partnership has been announced between Google and Novartis that will help see this project through to completion.

Alcon, the eye care division of Novartis – a Swiss multinational pharmaceutical company – recently joined Google’s project to commercialize “smart contact lens” technology. The project, which came out of the Google X blue-sky innovation arm of the company, aimed to utilize a “tiny wireless chip and miniaturized glucose sensor that are embedded between two layers of soft contact lens material,” in order to detect glucose levels present in tears.

At the time of the initial announcement in January, Google said its prototypes were able to take one glucose reading per second and that it was investigating ways for the device to act as an early warning system for the wearer should glucose levels become abnormal. All that was needed was a partner with the infrastructure and experience in the medical industry to put the prototypes into production.
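
To make that early-warning idea concrete, here is a minimal sketch of how per-second readings might be screened on a paired smartphone. The thresholds and function names are purely illustrative assumptions, not details of Google’s or Alcon’s design.

```python
# Hypothetical illustration only: thresholds and structure are assumptions,
# not part of Google's or Alcon's actual smart-lens design.
from typing import Iterable

LOW_MG_DL = 70    # assumed low-glucose warning threshold (mg/dL)
HIGH_MG_DL = 180  # assumed high-glucose warning threshold (mg/dL)

def check_readings(readings: Iterable[float]) -> list[str]:
    """Scan a stream of per-second tear-glucose readings and flag abnormal values."""
    alerts = []
    for second, glucose in enumerate(readings):
        if glucose < LOW_MG_DL:
            alerts.append(f"t={second}s: low glucose ({glucose:.0f} mg/dL)")
        elif glucose > HIGH_MG_DL:
            alerts.append(f"t={second}s: high glucose ({glucose:.0f} mg/dL)")
    return alerts

# Example: one reading per second, as in Google's prototype
print(check_readings([95, 88, 72, 65, 60]))
```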

Under the terms of the new agreement, Google will license the technology to Alcon “for all ocular medical uses” and the two companies will collaborate to develop the lens and bring it to market. Novartis says that it sees Google’s advances in the miniaturization of electronics as complementary to its own expertise in pharmaceuticals and medical devices. No doubt, the company also sees this as an opportunity to get in on the new trend of digitized, personalized medicine.

As Novartis said in a recent press release:

The agreement marries Google’s expertise in miniaturized electronics, low power chip design and microfabrication with Alcon’s expertise in physiology and visual performance of the eye, clinical development and evaluation, as well as commercialization of contact and intraocular lenses.

The transaction remains subject to antitrust approvals, but assuming it goes through, Alcon hopes it will help accelerate its product innovation. And with that, diabetics can look forward to yet another innovative device that simplifies the blood-monitoring process and offers early-warning detection that can help reduce the risk of heart disease, stroke, kidney failure, foot ulcers, loss of vision, and coma.

Sources: gizmag.com, novartis.com

Towards a Cleaner Future: Solar and Wind Drones

With supplies of easily accessible fossil fuels diminishing, pushing us towards dirtier sources of oil and natural gas (such as tar sands and fracking), researchers are looking for ways to make renewable energy more efficient and accessible. Towards this end, they are pushing the boundaries of what solar cells and wind turbines are capable of, but the constraints of land and weather limit where vast solar or wind farms can be set up.

Luckily, a UK-based company known as New Wave Energy has spent the last few years developing the technology to produce an army of power-generating drone aircraft to overcome these very problems. Basically, each craft is a flat platform measuring 20 x 20 meters (roughly 65 x 65 feet), fitted with solar panels and turbines to generate power from the sun and wind, and four small propellers that keep it aloft.

The drones would be capable of flying at altitudes of up to 15,240 meters (50,000 feet), putting them far above the clouds that can obscure the sun. The propellers would allow the craft to track the course of the sun to remain in optimal position for as long as possible. At these altitudes, the wind is also more consistent and powerful, which means smaller turbines can be used in place of the giant towers necessary down near the ground.

In terms of transmitting that power, the key is in the use of microwaves. In essence, power from the drones would be beamed down as low-energy microwaves and collected by antenna arrays on the ground. These antennas would then convert the electromagnetic radiation into usable DC power and send it to where it is needed.
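
As a rough illustration of why those conversion steps matter, the sketch below chains together assumed efficiencies for the microwave link; none of these figures come from New Wave Energy.

```python
# Back-of-envelope sketch; all efficiency figures are assumptions for illustration.
dc_to_rf = 0.80       # assumed efficiency converting on-board power to microwaves
beam_capture = 0.90   # assumed fraction of the beam collected by the ground antennas
rf_to_dc = 0.85       # assumed rectenna efficiency converting microwaves back to DC

link_efficiency = dc_to_rf * beam_capture * rf_to_dc
print(f"Fraction of generated power reaching the grid: {link_efficiency:.0%}")
# Roughly 61% with these numbers, so ground-side losses weigh heavily on the economics.
```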

One of the benefits of this design is that the proposed drone power plants wouldn’t need to land to refuel. Supposedly, they will be able to power themselves entirely with the energy generated on board and still produce 50 kW of power each. That means several thousand drones would be needed to power a large city of 205,000 homes.
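
A quick sanity check on that claim, using the quoted 50 kW per drone and an assumed average household draw (the per-home figure is an assumption for illustration, not from the company):

```python
# Rough fleet-size estimate; the household demand figure is an assumption.
homes = 205_000
avg_kw_per_home = 1.2   # assumed average continuous draw per household (kW)
kw_per_drone = 50.0     # on-board output quoted in the article

drones_needed = homes * avg_kw_per_home / kw_per_drone
print(f"Drones needed: {drones_needed:,.0f}")  # about 4,900 – i.e. "several thousand"
```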

However, these swarms of robotic power plants aren’t just a way to replace the power infrastructure we already have. They could be used to augment our current power supplies as demand increases, removing the need to expand large, expensive power plants. They could also bring power to remote areas with poor service, or restore power in regions affected by natural disasters.

Of course, the cost of building and deploying the drones will determine whether or not any of this is feasible. At present, the company plans to raise about $500,000 on Kickstarter to fund the construction of a prototype for testing and marketing. If this campaign turns out to be successful, the first flying power plant could be aloft within six months.

Combined with other improvements that are making wind and solar power more efficient and affordable, and future prospects for space-based solar power (SBSP) being explored by blue-sky ventures like Google X, we could be looking at a near future where solar and wind meet the lion’s share of our energy requirements.

Source: extremetech.com

Digital Eyewear Through the Ages

Given the sensation created by the recent release of Google Glass – a timely invention that calls to mind everything from ’80s cyberpunk to speculations about our cybernetic, transhuman future – a lot of attention has been focused lately on personalities like Steve Mann and Mark Spitzer, and on the history of wearable computers.

For decades now, visionaries and futurists have been working towards a day when all personal computers are portable and blend seamlessly into our daily lives. And with countless imitators coming forward to develop their own variants and hate crimes being committed against users, it seems like portable/integrated machinery is destined to become an issue no one will be able to ignore.

And so I thought it was high time for a little retrospective: a look back at the history of eyewear computers and digital devices to see how far they have come. From their humble beginnings with bulky backpacks and large, head-mounted displays, to the current age of small fixtures that can be worn as easily as glasses, things certainly have changed. And the future is likely to get even more fascinating, weird, and a little bit scary!

Sword of Damocles (1968):
Developed by Ivan Sutherland and his student Bob Sproull at the University of Utah in 1968, the Sword of Damocles was the world’s first head-mounted display. It consisted of a headband with a pair of small cathode-ray tubes attached to the end of a large, instrumented mechanical arm, through which head position and orientation were determined.

Hand positions were sensed via a hand-held grip suspended at the end of three fishing lines whose lengths were determined by the number of rotations sensed on each of the reels. Though crude by modern standards, this breakthrough technology would become the basis for all future innovation in the field of mobile computing, virtual reality, and digital eyewear applications.

WearComp Models (1980-84):
Built by Steve Mann (inventor of the EyeTap and considered to be the father of wearable computers) in 1980, the WearComp1 cobbled together many devices to create visual experiences. It included an antenna to communicate wirelessly and share video. In 1981, he designed and built a backpack-mounted wearable multimedia computer with text, graphics, and multimedia capability, including video.

By 1984, the same year that Apple’s Macintosh first shipped and William Gibson’s science fiction novel “Neuromancer” was published, he released the WearComp4 model. This latest version employed clothing-based signal processing, a personal imaging system with a left-eye display, and separate antennas for simultaneous voice, video, and data communication.

Private Eye (1989):
In 1989, Reflection Technology marketed the Private Eye head-mounted display, which scanned a vertical array of LEDs across the visual field using a vibrating mirror. The monochrome screen measured 1.25 inches on the diagonal, but images appeared to be the size of a 15-inch display viewed at a distance of 18 inches.

EyeTap Digital Eye (1998):
Steve Mann is considered the father of digital eyewear and of what he calls “mediated” reality. He is a professor in the department of electrical and computer engineering at the University of Toronto and an IEEE senior member, and also serves as chief scientist for the augmented reality startup Meta. The first version of the EyeTap was produced in the 1970s and was incredibly bulky by modern standards.

By 1998, he had developed the version that is commonly seen today, mounted over one ear and in front of one side of the face. It is worn in front of the eye, recording what is immediately in front of the viewer and superimposing digital imagery on the view. It uses a beam splitter to send the same scene to both the eye and a camera, and is tethered to a computer worn on the body in a small pack.

MicroOptical TASK-9 (2000):
Founded in 1995 by Mark Spitzer, who is now a director at the Google X lab, the company produced several patented designs which were bought up by Google after the company closed in 2010. One such design was the TASK-9, a wearable computer that could be attached to a set of glasses. Years later, MicroOptical’s viewers remain the lightest head-up displays available on the market.

Vuzix (1997-2013):
Founded in 1997, Vuzix created the first video eyewear to support stereoscopic 3D for the PlayStation 3 and Xbox 360. Since then, Vuzix has gone on to create the first commercially produced pass-through augmented reality headset, the Wrap 920AR (seen at bottom). The Wrap 920AR has two VGA video displays and two cameras that work together to provide the user with a view of the world that blends real-world input and computer-generated data.

Other products of note include the Wrap 1200VR, a virtual reality headset with numerous applications – everything from gaming and recreation to medical research – and the Smart Glasses M100, a hands-free display for smartphones. And since the Consumer Electronics Show of 2011, they have announced and released several heads-up AR displays that attach to glasses.


MyVu (2008-2012):
Founded in 1995, also by Mark Spitzer, MyVu developed several different types of wearable video display glasses before closing in 2012. The most famous was the Myvu Personal Media Viewer (pictured below), a set of display glasses released in 2008. These became instantly popular with the wearable computer community because they provided a cost-effective and relatively easy path to a DIY, small, single-eye, head-mounted display.

In 2010, the company followed up with the release of the Viscom digital eyewear (seen below), a device developed in collaboration with Spitzer’s other company, MicroOptical. This smaller head-mounted display comes with earphones and is worn over one eye like a pair of glasses, similar to the EyeTap.


Meta Prototype (2013):
Developed by Meta, a Silicon Valley startup funded with the help of a Kickstarter campaign and supported by Steve Mann, this wearable computing eyewear utilizes the latest in VR and projection technology. Unlike other display glasses, Meta’s eyewear enters 3D space and uses your hands to interact with the virtual world, combining the benefits of the Oculus Rift with those offered by “Sixth Sense” technology.

The Meta system includes stereoscopic 3D glasses and a 3D camera to track hand movements, similar to the portrayals of gestural control in movies like “Iron Man” and “Avatar.” In addition to display modules embedded in the lenses, the glasses include a portable projector mounted on top. This way, the user is able to both project and interact with computer simulations.

Google Glass (2013):
Developed by Google X as part of its Project Glass, the Google Glass device is a wearable computer with an optical head-mounted display (OHMD) that incorporates all the major advances made in the field of wearable computing over the past forty years. These include a smartphone-like hands-free format, a wireless internet connection, voice commands, and a full-color augmented-reality display.

Development began in 2011 and the first prototypes were previewed to the public at the Google I/O annual conference in San Francisco in June of 2012. Though they currently do not come with fixed lenses, Google has announced its intention to partner with sunglass retailers to equip them with regular and prescription lenses. There is also talk of developing contact lenses that come with embedded display devices.

Summary:
Well, that’s the history of digital eyewear in a nutshell. And as you can see, since the late 1960s the field has progressed by leaps and bounds. What was once a speculative and visionary pursuit has blossomed into a fully-fledged commercial field, with many different devices being produced for public consumption.

At this rate, who knows what the future holds? In all likelihood, the quest to make computers more portable and ergonomic will keep pace with the development of more sophisticated electronics and computer chips, miniaturization, biotechnology, nanofabrication and brain-computer interfacing.

The result will no doubt be tiny CPUs that can be implanted in the human body and integrated into our brains via neural chips and tiny electrodes. In all likelihood, we won’t even need voice commands at that point, because neuroscience will have developed a means to communicate directly to our devices via brainwaves. The age of cybernetics will have officially dawned!

Like I said… fascinating, weird, and a little bit scary!
