The Future is Here: The AR Bike Helmet

AR displays are becoming all the rage, thanks in no small part to Google Glass and other display glasses. And given the demand for and appeal of the technology, it seemed like only a matter of time before AR displays began providing real-time navigation for vehicles. Visor-mounted heads-up displays have been available for decades, but fully integrated displays have yet to be produced.

LiveMap is one such concept: a helmet that superimposes information and directions onto a bike-helmet visor. Based in Moscow, this startup seeks to combine a head-mounted display, built-in navigation, and Siri-like voice recognition. The helmet will have a translucent color display projected onto the visor in the center of the field of vision, and a custom user interface, English-only at launch, based on Android.

This augmented reality helmet display includes a light sensor for adjusting image brightness according to external light conditions, as well as an accelerometer, gyroscope, and digital compass for tracking head movements. Naturally, the company anticipated concerns about rider safety, and has included numerous safety features in response.
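The article only describes the behavior, not the implementation, but the light-sensor feature could work roughly like this hedged sketch (all lux values and the curve shape here are my own illustrative assumptions, not LiveMap's firmware):

```python
# Hypothetical sketch of ambient-light-driven brightness, as the article
# describes: dim the visor display indoors, run it at full power in sunlight.
import math

def display_brightness(ambient_lux: float,
                       min_brightness: float = 0.05,
                       max_brightness: float = 1.0,
                       full_sun_lux: float = 10_000.0) -> float:
    """Scale display brightness with ambient light, clamped to a safe range.

    Brightness perception is roughly logarithmic, so we scale on a log
    curve; the constants are assumptions for illustration only.
    """
    if ambient_lux <= 0:
        return min_brightness
    fraction = math.log10(1 + ambient_lux) / math.log10(1 + full_sun_lux)
    return max(min_brightness, min(max_brightness, fraction))

print(round(display_brightness(10_000.0), 2))  # full daylight -> 1.0
print(round(display_brightness(10.0), 2))      # dim room -> 0.26
```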

For one, the helmet is cleverly programmed to display maps only when the rider's speed is close to zero, to avoid distraction at high speed. And for the sake of hands-free control, it comes equipped with a series of voice commands for navigation and for referencing points of interest. No texting and driving with this thing!
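That speed-gating logic is simple enough to sketch. This is a toy illustration of the idea, with an assumed cutoff speed (the article only says "close to zero"):

```python
# Hypothetical sketch of the helmet's safety gating: show the full map only
# when the rider is nearly stationary, otherwise keep the overlay minimal.

SPEED_THRESHOLD_KMH = 5.0  # assumed cutoff; not a figure from LiveMap

def should_show_map(speed_kmh: float) -> bool:
    """Gate the map overlay on rider speed to avoid high-speed distraction."""
    return speed_kmh < SPEED_THRESHOLD_KMH

def render_frame(speed_kmh: float) -> str:
    # Turn-by-turn arrows stay visible; the full map appears only near a stop.
    return "arrows + map" if should_show_map(speed_kmh) else "arrows only"

print(render_frame(0.0))   # stopped at a light -> "arrows + map"
print(render_frame(60.0))  # cruising -> "arrows only"
```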

So far, the company has built some prototype hardware and software for the helmet with the help of grants from the Russian government, and is also seeking venture capital. However, it has found little within its home country, and has been forced to crowdfund via an Indiegogo campaign. As CEO Andrew Artishchev wrote on LiveMap's Indiegogo page:

Russian venture funds are not disposed to invest into hardware startups. They prefer to back up clones of successful services like Groupon, Airnb, Zappos, Yelp, Booking, etc. They are not interested in producing hardware either.

All told, the company is seeking to raise $150,000 to make press molds for the helmet capsule. At present, it has raised $5,989 with 31 days remaining. Naturally, perks are on offer, ranging from thank-yous and a poster (for donations of $1 to $25) to a test ride in a major city (Berlin, Paris, Rome, Moscow, or Barcelona) for $100, and a grand prize of a helmet itself for a donation of $1,500.

And of course, the company has announced some “Stretched Goals”, just in case people want to help them overshoot their target of $150,000. For $300,000, they will add Bluetooth with a headset profile to the helmet, and for $500,000, they will add a built-in high-resolution 13-megapixel photo and video camera. Good to have goals.

Personally, I’d help sponsor this, except for the fact that I don’t have a motorbike and wouldn’t know how to use one if I did. But a long ride along the Autobahn or the Amber Route would be totally boss! Speaking of which, check out the company’s promotional video:


The Birth of an Idea: The Computer Coat!

I’ve been thinking… which is not something novel for me; it just so happens that my thoughts have been a bit more focused lately. Specifically, I have an idea for an invention: something futuristic and practical that could very well be part of our collective computing future. With all the developments in the field of personal computing lately, and my ongoing efforts to keep track of them, I hoped I might eventually come up with an idea of my own.

Consider the growth in smartphones and personal digital assistants. In the last few years, we’ve seen companies produce working prototypes for paper-thin, flexible, and durable electronics. Then consider the growth in projection touchscreens, portable computing, and augmented reality. Could there be some middle ground here for something that incorporates all of the above?

Ever since I saw Pranav Mistry’s demonstration of a wearable computer that could interface with other devices, project its screen onto any surface, and be operated through simple gestures, I’ve been looking for a way to work this into fiction. But in the years since Mistry showed off his “SixthSense” technology, the possibilities have grown and been refined.

And then something happened. While at school, I noticed one of the kids wearing a jacket that had a hole near the lapel with a headphones icon above it. The little tunnel worked into the coat was designed to keep the cord to your iPod or phone safe and tucked away, and it got me thinking! Wires running through a coat, inset electrical gear, all the advancements made in the last few years. Who thinks about this kind of stuff, anyway? Who cares; it was the birth of an idea!

For example, it’s no longer necessary to carry big, bulky computer components on your person. With thin, flexible electronics, much like the new PaperTab, all the components one would need could be thin enough and flexible enough to be worked into the lining of a coat. These could include the CPU, a wireless router, and a hard drive.

Paper-thin zinc batteries, also under development, could be worked into the coat as well, with a power cord connected to them so they could be plugged into a socket and recharged. And since they too are paper-thin, they could be expected to move and shift with the coat, along with all the other electronics, without fear of breakage or malfunction.

And of course, there would be the screen itself, provided via a small camera and projector in the collar, which could be cast onto and interfaced with on any flat surface. Or, forget the projector entirely and just connect the whole thing to a set of glasses. Google is doing a good job on those, as is DARPA with its development of AR contact lenses. Either one would do in a pinch, and could be connected to the coat wirelessly or by wire.

Addendum: Shortly after publishing this, I realized that a power cord is totally unnecessary! Thanks to two key technologies, it could be possible to recharge the batteries using a combination of flexible graphene solar panels and M13 piezoelectric virus packs. The former could be attached to the back, where they would be wired into the coat’s power system, and the M13 packs could be placed in the arms, where the user’s movement would be harnessed to generate electricity. Total self-sufficiency, baby!
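Would that actually be self-sufficient? Here is a back-of-envelope sketch for the solar half of the idea. Every number in it is an assumption of mine (panel area, efficiency, battery capacity), not a figure from any product:

```python
# Back-of-envelope check, with assumed numbers: could a coat-back flexible
# solar panel recharge a small embedded battery on its own?

PANEL_AREA_M2 = 0.15           # assumed usable area on the back of a coat
SOLAR_IRRADIANCE_W_M2 = 1000.0 # full sun, standard test condition
PANEL_EFFICIENCY = 0.05        # flexible thin-film cells trail rigid silicon
BATTERY_WH = 10.0              # assumed capacity of the thin batteries

panel_watts = PANEL_AREA_M2 * SOLAR_IRRADIANCE_W_M2 * PANEL_EFFICIENCY
hours_to_charge = BATTERY_WH / panel_watts

print(f"{panel_watts:.1f} W of panel -> {hours_to_charge:.1f} h of full sun")
# -> "7.5 W of panel -> 1.3 h of full sun"
```

Under those (generous, full-sun) assumptions the idea at least isn't absurd, which is all a sketch like this can show.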

And then how about a wrist segment housing some basic controls, such as the power switch and a little screen? This little screen could act as a prompt, telling you that you have emails, texts, tweets, and updates available for download. Oh, and let’s not forget a USB port, where you can plug in an external hard drive or flash drive, or just hook up to another computer.

So that’s my idea, in a nutshell. I plan to work it into my fiction at the first available opportunity, as I consider it an idea that hasn’t been proposed before, at least not without freaky nanotech being involved! Look for it, and in the meantime, check out the video of Pranav Mistry’s TED talk, where he first showed off SixthSense technology. Oh, and just in case, you heard about the Computer Coat here first. Patent pending!

New Video Shows Google Glasses in Action

Google has released a new teaser video designed to expand Google Glass’ potential consumer base from the tech-savvy to what it refers to as “bold, creative individuals”. While the first video of their futuristic AR specs followed a New Yorker conducting mundane tasks through the city, this new clip hosts a dizzying array of activities designed to show just how versatile the product can be.

This includes people engaged in skydiving, horseback riding, walking the catwalk at a fashion show, and performing ballet. Quite the mixed bag! All the while, we are shown what it would look like to do these activities while wearing a set of Google glasses. The purpose here is not only to show their functionality, but to give people a taste of what an augmented world looks like.

And based on product information, videos, and still pictures from the Google Glass homepage, it also appears that these new AR glasses will take advantage of the latest in flexible technology. Much like the new breeds of smartphones and PDAs that will be making the rounds later this year, these glasses are bendable, flexible, and therefore much more durable than conventional glasses, which probably cost just as much!

Apparently, this is all in keeping with CEO and co-founder Larry Page’s vision of a world where Google products make their users smarter. In a 2004 interview, Page shared that vision with people, saying: “Imagine your brain is being augmented by Google.” These futurist sentiments may be a step closer now, thanks to a device that can provide on-the-spot information about whatever situation or environment we find ourselves in.

One thing is for sure, though: with the help of some AR specs, the middle man is effectively cut out. No longer are we required to aim our smartphones, perform image searches, or type things into a search engine (like Google!). Now we can just point, look, and wait for the glasses to identify what we are looking at and provide the requisite information.

Check out the video below:

AR Glasses Restore Sight to the Blind

As I’m sure most readers are aware, blindness comes in many forms. It’s not simply a matter of the afflicted not being able to see; in fact, there are many degrees of blindness, and in most cases depth perception is limited. But as it turns out, researchers at the University of Yamanashi in Japan have found a way to improve depth perception for the visually impaired using simple augmented reality glasses.

The process involved a pair of Wrap 920 ARs, an off-the-shelf brand of glasses that allows the wearer to interface with a PC, watch video, or surf the internet, all while staying mobile and carrying out daily chores. The team recorded the scene as seen from the angle of each eye, processed the two images on a quad-core Windows 7 machine, and then merged them as they would appear to a healthy eye.

Essentially, the glasses perform the task of rendering a scene as it would be seen through “binocular vision” – i.e. in 3D. By taking two images, merging them together, and defining what is near and what is far, the system frees the wearer’s brain from having to do that work itself. This in turn allowed test subjects to interact more freely and effectively with their test environment: a dinner table with chopsticks and food in small bowls, arguably a tricky meal to navigate!
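The core trick behind judging near versus far from two eye views is stereo disparity: an object close to you shifts more between the left- and right-eye images than a distant one. The Yamanashi team's actual pipeline is far more sophisticated, but a toy version of the matching step looks like this (all names and numbers here are illustrative):

```python
# Toy stereo-matching sketch: find the horizontal shift (disparity) that best
# aligns a patch of the left-eye view with the right-eye view along one
# scanline. Larger disparity means the object is closer to the viewer.

def best_disparity(left_row, right_row, x, patch=2, max_disparity=5):
    """Return the pixel shift that best matches left_row around x in right_row."""
    target = left_row[x - patch:x + patch + 1]
    best_d, best_cost = 0, float("inf")
    for d in range(max_disparity + 1):
        if x - d - patch < 0:
            break  # candidate window would run off the image
        candidate = right_row[x - d - patch:x - d + patch + 1]
        cost = sum((a - b) ** 2 for a, b in zip(target, candidate))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# A bright "object" at x=10 in the left eye appears at x=7 in the right eye:
# a disparity of 3 pixels, i.e. relatively near.
left = [0] * 20
right = [0] * 20
left[10] = 255
right[7] = 255
print(best_disparity(left, right, 10))  # -> 3
```

Real systems run this kind of search densely over the whole image and at much higher quality, which is exactly why the prototype needed a quad-core PC behind it.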

Naturally, the technology is still in its infancy. For one, the processed imagery has a fairly low resolution and frame rate, and the glasses must be connected to a laptop. Newer tech will provide better resolution, faster frame rates, and a larger viewport. In addition, mobile computing with smartphones and tablets ought to provide a greater degree of portability, to the point where all the required technology is in the glasses themselves.

Looking ahead, it is possible that there could be a form of AR glasses specially programmed to deliver this kind of vision correction. The glasses would then act as a prosthesis, giving people with visual impairment an increased level of visual acuity, bringing them one step closer to vision recovery. And since this is also a development that blurs the lines between humans and computers even more, it’s arguably another step closer to transhumanism!