Digital Eyewear Through the Ages

Given the sensation created by the recent release of Google Glass – a timely invention that calls to mind everything from ’80s cyberpunk to speculations about our cybernetic, transhuman future – a lot of attention has been focused lately on personalities like Steve Mann and Mark Spitzer, and on the history of wearable computers.

For decades now, visionaries and futurists have been working towards a day when all personal computers are portable and blend seamlessly into our daily lives. With countless imitators coming forward to develop their own variants – and even assaults being committed against users – it seems that portable, integrated machinery is destined to become an issue no one will be able to ignore.

And so I thought it was high time for a little retrospective: a look back at the history of computerized eyewear and digital devices to see how far the field has come. From its humble beginnings with bulky backpacks and large, head-mounted displays to the current age of small fixtures that can be worn as easily as glasses, things certainly have changed. And the future is likely to get even more fascinating, weird, and a little bit scary!

Sword of Damocles (1968):
Developed by Ivan Sutherland and his student Bob Sproull at the University of Utah in 1968, the Sword of Damocles was the world’s first head-mounted display. It consisted of a headband with a pair of small cathode-ray tubes attached to the end of a large instrumented mechanical arm, through which head position and orientation were determined.

Hand position was sensed via a hand-held grip suspended at the end of three fishing lines, the length of each determined by the number of rotations sensed on its reel. Though crude by modern standards, this breakthrough technology would become the basis for all future innovation in the fields of mobile computing, virtual reality, and digital eyewear.
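That three-line hand tracker is, at heart, a trilateration problem: three known anchor points plus three measured line lengths pin down the grip’s position in space. Here is a minimal sketch of the geometry, assuming an invented anchor layout and spool size (none of these numbers come from the original hardware):

```python
import math

# Hypothetical setup: three reel mount points sharing a 2 m ceiling height.
ANCHORS = [(0.0, 0.0, 2.0), (1.0, 0.0, 2.0), (0.0, 1.0, 2.0)]

def line_length(rotations, spool_circumference=0.05):
    """Convert a reel's rotation count to paid-out line length (meters)."""
    return rotations * spool_circumference

def trilaterate(l0, l1, l2):
    """Recover the grip position from three measured line lengths.

    Because the anchors share a height, subtracting the sphere equations
    cancels z and leaves a 2x2 linear system for (x, y); z then follows
    from one sphere equation, taking the below-ceiling root.
    """
    (x0, y0, z0), (x1, y1, _), (x2, y2, _) = ANCHORS
    # Rows of the linear system a*x + b*y = c, from subtracted sphere pairs.
    a1, b1 = 2 * (x1 - x0), 2 * (y1 - y0)
    c1 = l0**2 - l1**2 + x1**2 + y1**2 - x0**2 - y0**2
    a2, b2 = 2 * (x2 - x0), 2 * (y2 - y0)
    c2 = l0**2 - l2**2 + x2**2 + y2**2 - x0**2 - y0**2
    det = a1 * b2 - a2 * b1  # nonzero as long as anchors aren't collinear
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    z = z0 - math.sqrt(max(l0**2 - (x - x0)**2 - (y - y0)**2, 0.0))
    return x, y, z
```

The real system measured rotations mechanically rather than solving equations digitally, but the recoverable information is the same: three lengths fix a point.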

WearComp Models (1980-84):
Built in 1980 by Steve Mann (inventor of the EyeTap and considered the father of wearable computers), the WearComp1 cobbled together many devices to create visual experiences, and included an antenna to communicate wirelessly and share video. In 1981, he designed and built a backpack-mounted wearable multimedia computer with text, graphics, and multimedia capability, including video.

By 1984 – the same year that Apple’s Macintosh first shipped and William Gibson’s science fiction novel Neuromancer was published – he released the WearComp4 model. This latest version employed clothing-based signal processing, a personal imaging system with a left-eye display, and separate antennas for simultaneous voice, video, and data communication.

Private Eye (1989):
In 1989, Reflection Technology marketed the Private Eye head-mounted display, which scanned a vertical array of LEDs across the visual field using a vibrating mirror. The monochrome screen was 1.25 inches on the diagonal, but its images appeared equivalent to a 15-inch display viewed at a distance of 18 inches.
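The apparent-size claim comes down to visual angle: a display “looks like” a 15-inch screen at 18 inches if its optics make it subtend the same angle at the eye. A back-of-the-envelope check (function name and layout are mine; only the quoted figures come from the text):

```python
import math

def visual_angle_deg(diagonal_in, distance_in):
    """Angle (degrees) subtended by a display diagonal at a given distance."""
    return 2 * math.degrees(math.atan(diagonal_in / 2 / distance_in))

claimed = visual_angle_deg(15, 18)  # a 15" diagonal at 18": roughly 45 degrees
# Distance scales linearly with diagonal for a fixed angle, so the 1.25"
# panel matches the claim if its virtual image sits at the equivalent of:
equiv = 18 * 1.25 / 15              # 1.5 inches
```

In other words, the vibrating-mirror optics only had to place the tiny panel’s virtual image at an equivalent eye distance of about an inch and a half.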

EyeTap Digital Eye (1998):
Steve Mann is considered the father of digital eyewear and of what he calls “mediated” reality. He is a professor in the department of electrical and computer engineering at the University of Toronto and an IEEE senior member, and also serves as chief scientist for the augmented reality startup Meta. The first version of the EyeTap was produced in the 1970s and was incredibly bulky by modern standards.

By 1998, he had developed the version commonly seen today, mounted over one ear and in front of one side of the face. Worn in front of the eye, it records what is immediately in front of the viewer and superimposes digital imagery over the view. It uses a beam splitter to send the same scene to both the eye and a camera, and is tethered to a computer worn on the body in a small pack.

MicroOptical TASK-9 (2000):
MicroOptical was founded in 1995 by Mark Spitzer, who is now a director at the Google X lab. The company produced several patented designs, which were bought up by Google after it closed in 2010. One such design was the TASK-9, a wearable computer that attaches to a pair of glasses. Years later, MicroOptical’s line of viewers remains among the lightest head-up displays available on the market.

Vuzix (1997-2013):
Founded in 1997, Vuzix created the first video eyewear to support stereoscopic 3D for the PlayStation 3 and Xbox 360. Vuzix then went on to create the first commercially produced pass-through augmented reality headset, the Wrap 920AR (seen at bottom). The Wrap 920AR has two VGA video displays and two cameras that work together to provide the user with a view of the world that blends real-world input and computer-generated data.

Other products of note include the Wrap 1200VR, a virtual reality headset with numerous applications – everything from gaming and recreation to medical research – and the Smart Glasses M100, a hands-free display for smartphones. And since the Consumer Electronics Show of 2011, they have announced and released several heads-up AR displays that attach to glasses.


MyVu (2008-2012):
Founded in 1995, also by Mark Spitzer, MyVu developed several different types of wearable video display glasses before closing in 2012. The most famous was the Myvu Personal Media Viewer (pictured below), a set of display glasses released in 2008. These became instantly popular with the wearable computer community because they provided a cost-effective and relatively easy path to a DIY, small, single-eye, head-mounted display.

In 2010, the company followed up with the release of the Viscom digital eyewear (seen below), a device developed in collaboration with Spitzer’s other company, MicroOptical. This smaller head-mounted display comes with earphones and is worn over one eye like a pair of glasses, similar to the EyeTap.


Meta Prototype (2013):
Developed by Meta, a Silicon Valley startup funded with the help of a Kickstarter campaign and supported by Steve Mann, this wearable computing eyewear utilizes the latest in VR and projection technology. Unlike other display glasses, Meta’s eyewear operates in 3D space and lets you use your hands to interact with the virtual world, combining the benefits of the Oculus Rift with those offered by “Sixth Sense” technology.

The Meta system includes stereoscopic 3D glasses and a 3D camera to track hand movements, similar to the portrayals of gestural control in movies like “Iron Man” and “Avatar.” In addition to display modules embedded in the lenses, the glasses include a portable projector mounted on top. This way, the user is able to both project and interact with computer simulations.

Google Glass (2013):
Developed by Google X as part of its Project Glass, the Google Glass device is a wearable computer with an optical head-mounted display (OHMD) that incorporates all the major advances made in the field of wearable computing over the past forty years. These include a smartphone-like hands-free format, a wireless internet connection, voice commands, and a full-color augmented-reality display.

Development began in 2011 and the first prototypes were previewed to the public at the Google I/O annual conference in San Francisco in June of 2012. Though they currently do not come with fixed lenses, Google has announced its intention to partner with sunglass retailers to equip them with regular and prescription lenses. There is also talk of developing contact lenses that come with embedded display devices.

Summary:
Well, that’s the history of digital eyewear in a nutshell. As you can see, since the late ’60s the field has progressed by leaps and bounds. What was once a speculative and visionary pursuit has blossomed into a fully-fledged commercial field, with many different devices being produced for public consumption.

At this rate, who knows what the future holds? In all likelihood, the quest to make computers more portable and ergonomic will keep pace with the development of more sophisticated electronics and computer chips, miniaturization, biotechnology, nanofabrication and brain-computer interfacing.

The result will no doubt be tiny CPUs that can be implanted in the human body and integrated into our brains via neural chips and tiny electrodes. In all likelihood, we won’t even need voice commands at that point, because neuroscience will have developed a means to communicate directly to our devices via brainwaves. The age of cybernetics will have officially dawned!

Like I said… fascinating, weird, and a little bit scary!


New Video Shows Google Glasses in Action

In a recently released teaser video, Google seeks to expand Google Glass’ potential consumer base from the tech-savvy to what it refers to as “bold, creative individuals”. While the first video of their futuristic AR specs followed a New Yorker conducting mundane tasks through the city, this new clip showcases a dizzying array of activities designed to show just how versatile the product can be.

This includes people skydiving, horseback riding, walking the catwalk at a fashion show, and performing ballet. Quite the mixed bag! All the while, we are shown what it would look like to do these activities while wearing a set of Google glasses. The purpose here is not only to show their functionality, but to give people a taste of what an augmented world looks like.

And based on product information, videos, and still images from the Google Glass homepage, it also appears that these new AR glasses will take advantage of the latest in flexible technology. Much like the new breeds of smartphones and PDAs that will be making the rounds later this year, these glasses are bendable and flexible, and therefore much more survivable than conventional glasses, which probably cost just as much!

Apparently, this is all in keeping with CEO and co-founder Larry Page’s vision of a world where Google products make their users smarter. In a 2004 interview, Page shared that vision with people, saying: “Imagine your brain is being augmented by Google.” These futurist sentiments may be a step closer now, thanks to a device that can provide on-the-spot information about whatever situation or environment we find ourselves in.

One thing is for sure, though. With the help of some AR specs, the middle man is effectively cut out. No longer are we required to aim our smartphones, perform image searches, or type things into a search engine (like Google!). Now we can just point, look, and wait for the glasses to identify what we are looking at and provide the requisite information.

Check out the video below:

Transhumans by 2030?

The issue of transhumanism – the rise of a new type of humanity characterized by man-machine interfaces and augmented intelligence – is being debated quite fervently in some circles right now. But it seems that groups other than futurists and speculative fiction writers are joining the discussion. Recently, the National Intelligence Council, a US policy think-tank, released a 140-page report outlining major trends and technological developments we should expect in the next 20 years.

The report, entitled “Global Trends 2030: Alternative Worlds”, predicted several trends likely to come true in the near future. Among them are the end of U.S. global dominance, the rising power of individuals against states, a growing middle class that will increasingly challenge governments, and ongoing shortages of water, food, and energy. Predictions were also made concerning a future in which humans have been significantly modified by various technologies – what is often referred to as the dawn of the Transhuman Era.

Intrinsic to this new era are implants, prosthetics, and powered exoskeletons, which will become regular fixtures of human life. These will go beyond merely correcting for physical disabilities or injury, to the point where average humans are enhanced and become more productive. 2030 is the key year here, because it is by this point that the authors predict prosthetic performance will exceed that of organic limbs, and people will begin having prosthetics installed in order to augment themselves.

In addition, life extension therapies and medical advances which will be used predominantly by the elderly will become a means for otherwise healthy people to prolong their lives and maintain health and vitality for longer periods of time. Brain implants are expected to become a reality as well, ostensibly to allow people to have brain-controlled prosthetics, but also for the sake of enhanced memory and augmented thinking.

And of course, bionics are an important factor in all this. Researchers have already achieved breakthroughs with bionic limbs, and retinal attachments, artificial eyes, and even fully functioning organs are expected before 2030. On top of that, improvements in drugs, such as neuropharmaceuticals – drugs that enhance memory, attention, and speed of thought – and implants that assist in their delivery are expected to be making the rounds.

Finally, there is the matter of virtual and augmented reality systems, which are already becoming a reality thanks to things like Project Glass and recent innovations in PDAs. As the report notes: “Augmented reality systems can provide enhanced experiences of real-world situations. Combined with advances in robotics, avatars could provide feedback in the form of sensors providing touch and smell as well as aural and visual information to the operator.”

However, the big issues, according to the report, are cost and security. Most of these technologies will not be affordable to everyone, especially during the first few years of their existence. This could result in a two-tiered society in which the well-to-do live longer, healthier lives and hold a competitive advantage over “organics” – people of lesser means who are identifiable by their lack of enhancements. Also, developers will need to be on their guard against hackers who might attempt to subvert or infect these devices with tailor-made viruses.

Naturally, the report stressed the importance of maintaining uniform scientific progress, and a regulatory framework is certainly needed. What the CSER recently recommended is worth keeping in mind here: that some kind of regulatory framework be put in place before all of this becomes a reality. What’s more, public education is necessary, so that the current and next generations of human beings know what to expect and how to go about making informed choices.

To see the full report and learn more about the NIC, follow the link below:

National Intelligence Council: Who We Are

Source: IO9.com

The Future Is Here: Google Glasses!

It’s like something out of a cyberpunk wet dream. Long the subject of speculative science fiction, it seems that we now have a working prototype for a set of goggles that can handle our wireless and networking needs. Merging the concepts of Augmented Reality with a Head-Mounted Display (HMD), Google has created what are now known as the “Google Glasses”.

Also known as “Project Glass”, this device is the first working model for what is often referred to as wearable computing. While still being tested, the project has been unveiled, and Google Inc. has announced that it will now be conducting public trials to test the device’s portability and ergonomics.

But of course, some of the terminology needs a little explanation – for example, augmented reality. By definition, this is a live direct or indirect view of the real world with computer-generated imagery laid over top. One can be walking down the street or otherwise interacting with the world, while also viewing a desktop browser, a web page, or streaming video overlaid on the scene.
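That overlay idea reduces, at the pixel level, to a compositing step: generated imagery alpha-blended onto a camera frame. A minimal sketch, using synthetic stand-ins for both the camera frame and the rendered widget (real capture and rendering are out of scope):

```python
import numpy as np

def composite(frame, overlay, alpha):
    """Blend generated imagery onto a camera frame.

    `alpha` is a per-pixel opacity map in [0, 1]: 0 leaves the real-world
    pixel untouched, 1 replaces it with the generated pixel.
    """
    a = alpha[..., None]  # broadcast opacity over the color channels
    return (frame * (1.0 - a) + overlay * a).astype(frame.dtype)

# "Camera frame": a uniform gray 120x160 RGB image.
frame = np.full((120, 160, 3), 128, dtype=np.uint8)

# "Generated imagery": a green box in the top-left, 80% opaque there,
# fully transparent everywhere else.
overlay = np.zeros_like(frame)
overlay[10:40, 10:60] = (0, 255, 0)
alpha = np.zeros(frame.shape[:2])
alpha[10:40, 10:60] = 0.8

augmented = composite(frame, overlay, alpha)
```

Everything outside the box remains the untouched “real world”; inside it, the generated pixels dominate – which is all an AR display is doing, optically rather than digitally.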

According to Google, the glasses will function much like an iPhone with the Siri application, in that wearers will be able to get onto the internet using voice commands. If this goes through, Apple Inc. will have its work cut out for it if they want to remain top dog in the technology race. I wonder what Steve Jobs would have made of this, may he rest in peace!

The project is just one of several being worked on by Google X Lab’s team of crack engineers, which includes Babak Parviz, an electrical engineer who has also worked on putting displays into contact lenses; Steve Lee, a project manager and “geolocation specialist”; and Sebastian Thrun, who developed the Udacity education program and has worked on Google’s self-driving car project.

Naturally, this news is causing a great deal of excitement, but I can’t help but wonder if certain people – not the least of whom is William Gibson – aren’t getting just a tinge of self-satisfaction as well. You see, it was this Vancouver-based, American-born purveyor of cyberpunk who predicted both the use of “cyberspace goggles” and augmented reality many years ago. The former were featured extensively in his Sprawl Trilogy, and a similar device, known as Virtual Light glasses, made several appearances in his subsequent Bridge Trilogy.

What’s more, his latest books (known as the Bigend Trilogy) also made extensive mention of augmented reality before most people had heard of it. Beginning with Spook Country (2007), the second book in the series, he described an artist who used wireless signals and VR goggles to simulate the appearance of dead celebrities all over LA. This new type of touristic art, known as “locative art”, was the first time AR was mentioned in a pop culture context. In the third book of the series, Zero History (2010), he mentions the technology yet again, noting that it has been renamed “augmented reality” now that it’s more popular. As always, Gibson was on the cutting edge, or just ahead of the curve.

Click on the links below for a little “light reading” on the announcement:

https://plus.google.com/
http://bits.blogs.nytimes.com/2012/04/04/
http://www.washingtonpost.com/business/technology/