The First Government-Recognized Cyborg

Those who follow tech news are probably familiar with the name Neil Harbisson. As a futurist, and someone who was born with a condition known as achromatopsia – which means he sees everything in shades of gray – he spent much of his life looking to augment himself so that he could see what other people see. And roughly ten years ago, he succeeded by creating a device known as the “eyeborg”.

Also known as a cybernetic “third eye”, this device – which is permanently integrated into his person – allows Harbisson to “hear” colors by translating the visual information into specific sounds. After years of use, he is able to discern different colors based on their sounds with ease. But what’s especially interesting about this device is that it makes Harbisson a bona fide cyborg.

What’s more, Neil Harbisson is now the first person on the planet to have a passport photo that shows his cyborg nature. After a long battle with UK authorities, his passport now features a photo of him, eyeborg and all. And now, he is looking to help other cyborgs like himself gain more rights, mainly because of the difficulties such people have been facing in recent years.

Consider the case of Steve Mann, the man recognized as the “father of wearable computers”. Since the 1970’s, he has been working towards the creation of fully-portable, ergonomic computers that people can carry with them wherever they go. The result of this was the EyeTap, a wearable computer he invented in 1998 and then had grafted to his head.

Then, in July of 2012, he was ejected from a McDonald’s in Paris after several staff members tried to forcibly remove the wearable device. And in April of 2013, a bar in Seattle banned patrons from using Google Glass, declaring that “ass-kickings will be encouraged for violators.” Other businesses across the world have followed, fearing that people wearing these devices may be taking photos or video and posting them to the internet.

Essentially, Harbisson believes that recent technological advances mean there will be a rapid growth in the number of people with cybernetic implants in the near future, implants that will either assist them or give them enhanced abilities. As he put it in a recent interview:

Our instincts and our bodies will change. When you incorporate technology into the body, the body will need to change to accommodate; it modifies and adapts to new inputs. How we adapt to this change will be very interesting.

Other human cyborgs include Stelarc, a performance artist who has had a hearing ear implanted on his forearm; Kevin Warwick, the “world’s first human cyborg”, who has an RFID chip embedded beneath his skin, allowing him to control devices such as lights, doors and heaters; and “DIY cyborg” Tim Cannon, who has a self-administered body-monitoring device in his arm.

And though they are still in the minority, the number of people who live with integrated electronic or bionic devices is growing. In order to ensure that the transition Harbisson foresees is accomplished as painlessly as possible, he created the Cyborg Foundation in 2010. According to their website, the organization’s mission statement is to:

help humans become cyborgs, to promote the use of cybernetics as part of the human body and to defend cyborg rights [whilst] encouraging people to create their own sensory extensions.

And as mind-controlled prosthetics, implants, and other devices meant to augment a person’s senses, faculties, and ambulatory ability are introduced, we can expect people to begin to actively integrate them into their bodies. Beyond correcting for injuries or disabilities, the increasing availability of such technology is also likely to draw people looking to enhance their natural abilities.

In short, the future is likely to be a place in which cyborgs are a common feature of our society. The size and shape of that society is difficult to predict, but given that its existence is all but certain, we as individuals need to be able to address it. Not only is it an issue of tolerance, there’s also the need for informed decision-making when it comes to whether or not individuals want to make cybernetic enhancements a part of their lives.

Basically, there are some tough issues that need to be considered as we make our way into the future. And having a forum where they can be discussed in a civilized fashion may be the only recourse to a world permeated by prejudice and intolerance on the one hand, and runaway augmentation on the other.

In the meantime, it might not be too soon to look into introducing some regulations, just to make sure we don’t have any yahoos turning themselves into killer cyborgs in the near future! *PS: Bonus points for anyone who can identify which movie the photo above is taken from…

Sources: IO9.com, dezeen.com, eyeborg.wix.com

Digital Eyewear Through the Ages

Given the sensation created by the recent release of Google Glass – a timely invention that calls to mind everything from 80’s cyberpunk to speculations about our cybernetic, transhuman future – a lot of attention has been focused lately on personalities like Steve Mann, Mark Spitzer, and the history of wearable computers.

For decades now, visionaries and futurists have been working towards a day when all personal computers are portable and blend seamlessly into our daily lives. And with countless imitators coming forward to develop their own variants and hate crimes being committed against users, it seems like portable/integrated machinery is destined to become an issue no one will be able to ignore.

And so I thought it was high time for a little retrospective, a look back at the history of eyewear computers and digital devices to see how far they have come. From their humble beginnings with bulky backpacks and large, head-mounted displays, to the current age of small fixtures that can be worn as easily as glasses, things certainly have changed. And the future is likely to get even more fascinating, weird, and a little bit scary!

Sword of Damocles (1968):
Developed by Ivan Sutherland and his student Bob Sproull at the University of Utah in 1968, the Sword of Damocles was the world’s first head-mounted display system. It consisted of a headband with a pair of small cathode-ray tubes attached to the end of a large instrumented mechanical arm through which head position and orientation were determined.

Hand positions were sensed via a hand-held grip suspended at the end of three fishing lines whose lengths were determined by the number of rotations sensed on each of the reels. Though crude by modern standards, this breakthrough technology would become the basis for all future innovation in the field of mobile computing, virtual reality, and digital eyewear applications.
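Recovering a point in space from three measured tether lengths is a classic trilateration problem. The sketch below shows one standard way to solve it; this is a conceptual illustration of the geometry, not a reconstruction of Sutherland’s actual electronics, and all names in it are my own.

```python
import math

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Recover a 3D point from its distances (r1, r2, r3) to three
    known anchor points (p1, p2, p3). Of the two mirror solutions,
    this returns the one on the +z side of the anchors' plane."""
    def sub(a, b): return tuple(a[i] - b[i] for i in range(3))
    def dot(a, b): return sum(a[i] * b[i] for i in range(3))
    def norm(a): return math.sqrt(dot(a, a))
    def scale(a, s): return tuple(a[i] * s for i in range(3))
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])

    # Build an orthonormal frame: p1 at the origin, p2 on the x-axis.
    ex = scale(sub(p2, p1), 1.0 / norm(sub(p2, p1)))
    i = dot(ex, sub(p3, p1))
    ey_raw = sub(sub(p3, p1), scale(ex, i))
    ey = scale(ey_raw, 1.0 / norm(ey_raw))
    ez = cross(ex, ey)
    d = norm(sub(p2, p1))
    j = dot(ey, sub(p3, p1))

    # Intersect the three spheres in the local frame.
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = (r1**2 - r3**2 + i**2 + j**2 - 2 * i * x) / (2 * j)
    z = math.sqrt(max(r1**2 - x**2 - y**2, 0.0))

    return tuple(p1[k] + x*ex[k] + y*ey[k] + z*ez[k] for k in range(3))
```

On the Sword of Damocles, the three lengths came from counting reel rotations; once converted to distances, the grip position follows from exactly this kind of sphere intersection.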

WearComp Models (1980-84):
Built by Steve Mann (inventor of the EyeTap and considered to be the father of wearable computers) in 1980, the WearComp1 cobbled together many devices to create visual experiences. It included an antenna to communicate wirelessly and share video. In 1981, he designed and built a backpack-mounted wearable multimedia computer with text, graphics, video, and multimedia capability.

By 1984 – the same year that Apple’s Macintosh first shipped and William Gibson’s science fiction novel “Neuromancer” was published – he released the WearComp4 model. This latest version employed clothing-based signal processing, a personal imaging system with left eye display, and separate antennas for simultaneous voice, video, and data communication.

Private Eye (1989):
In 1989, Reflection Technology marketed the Private Eye head-mounted display, which scanned a vertical array of LEDs across the visual field using a vibrating mirror. The monochrome screen was 1.25 inches on the diagonal, but images appeared equivalent to a 15-inch display viewed at a distance of 18 inches.
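That equivalence is just a statement about visual angle: the tiny panel was optically magnified until it subtended the same angle as a 15-inch display at arm’s length. A quick check (my own arithmetic, not Reflection Technology’s spec sheet):

```python
import math

def apparent_diagonal_deg(diag_inches, distance_inches):
    """Angle (in degrees) subtended by a display's diagonal
    when viewed from the given distance."""
    return math.degrees(2 * math.atan((diag_inches / 2) / distance_inches))

# A 15-inch diagonal seen from 18 inches away:
angle = apparent_diagonal_deg(15, 18)  # roughly 45 degrees
```

So the Private Eye’s image filled a field of view of roughly 45 degrees on the diagonal, which is why such a small screen could feel like a desktop monitor.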

EyeTap Digital Eye (1998):
Steve Mann is considered the father of digital eyewear and what he calls “mediated” reality. He is a professor in the department of electrical and computer engineering at the University of Toronto and an IEEE senior member, and also serves as chief scientist for the augmented reality startup, Meta. The first version of the EyeTap was produced in the 1970’s and was incredibly bulky by modern standards.

By 1998, he had developed the version that is commonly seen today, mounted over one ear and in front of one side of the face. This version is worn in front of the eye, recording what is immediately in front of the viewer and superimposing the view as digital imagery. It uses a beam splitter to send the same scene to both the eye and a camera, and is tethered to a computer worn on the body in a small pack.

MicroOptical TASK-9 (2000):
Founded in 1995 by Mark Spitzer, who is now a director at the Google X lab, the company produced several patented designs that were bought up by Google after the company closed in 2010. One such design was the TASK-9, a wearable computer that could be attached to a set of glasses. For years afterward, MicroOptical’s line of viewers remained the lightest head-up displays available on the market.

Vuzix (1997-2013):
Founded in 1997, Vuzix created the first video eyewear to support stereoscopic 3D for the PlayStation 3 and Xbox 360. Since then, Vuzix went on to create the first commercially produced pass-through augmented reality headset, the Wrap 920AR (seen at bottom). The Wrap 920AR has two VGA video displays and two cameras that work together to provide the user a view of the world that blends real-world inputs and computer-generated data.

Other products of note include the Wrap 1200VR, a virtual reality headset that has numerous applications – everything from gaming and recreation to medical research – and the Smart Glasses M100, a hands-free display for smartphones. And since the Consumer Electronics Show of 2011, they have announced and released several heads-up AR displays that are attachable to glasses.


MyVu (2008-2012):
Founded in 1995, also by Mark Spitzer, MyVu developed several different types of wearable video display glasses before closing in 2012. The most famous was the Myvu Personal Media Viewer (pictured below), a set of display glasses released in 2008. These became instantly popular with the wearable computer community because they provided a cost-effective and relatively easy path to a DIY, small, single-eye, head-mounted display.

In 2010, the company followed up with the release of the Viscom digital eyewear (seen below), a device developed in collaboration with Spitzer’s other company, MicroOptical. This smaller head-mounted display comes with earphones and is worn over one eye like a pair of glasses, similar to the EyeTap.


Meta Prototype (2013):
Developed by Meta, a Silicon Valley startup that is being funded with the help of a Kickstarter campaign and supported by Steve Mann, this wearable computing eyewear utilizes the latest in VR and projection technology. Unlike other display glasses, Meta’s eyewear enters 3D space and uses your hands to interact with the virtual world, combining the benefits of the Oculus Rift and those being offered by “Sixth Sense” technology.

The Meta system includes stereoscopic 3D glasses and a 3D camera to track hand movements, similar to the portrayals of gestural control in movies like “Iron Man” and “Avatar.” In addition to display modules embedded in the lenses, the glasses include a portable projector mounted on top. This way, the user is able to both project and interact with computer simulations.

Google Glass (2013):
Developed by Google X as part of their Project Glass, the Google Glass device is a wearable computer with an optical head-mounted display (OHMD) that incorporates all the major advances made in the field of wearable computing over the past forty years. These include a smartphone-like hands-free format, wireless internet connection, voice commands and a full-color augmented-reality display.

Development began in 2011 and the first prototypes were previewed to the public at the Google I/O annual conference in San Francisco in June of 2012. Though they currently do not come with fixed lenses, Google has announced its intention to partner with sunglass retailers to equip them with regular and prescription lenses. There is also talk of developing contact lenses that come with embedded display devices.

Summary:
Well, that’s the history of digital eyewear in a nutshell. And as you can see, since the late 60’s, the field has progressed by leaps and bounds. What was once a speculative and visionary pursuit has now blossomed to become a fully-fledged commercial field, with many different devices being produced for public consumption.

At this rate, who knows what the future holds? In all likelihood, the quest to make computers more portable and ergonomic will keep pace with the development of more sophisticated electronics and computer chips, miniaturization, biotechnology, nanofabrication and brain-computer interfacing.

The result will no doubt be tiny CPUs that can be implanted in the human body and integrated into our brains via neural chips and tiny electrodes. In all likelihood, we won’t even need voice commands at that point, because neuroscience will have developed a means to communicate directly to our devices via brainwaves. The age of cybernetics will have officially dawned!

Like I said… fascinating, weird, and a little bit scary!


The Future is Here: The Cybernetic “Third Eye”

Achromatopsia is a rare form of color blindness that affects one in thirty-five thousand people. One such individual is Neil Harbisson, who was born with the genetic mutation that robbed him of the ability to see the world in anything other than black and white. But since 2004, he has been able to “hear” color, thanks to a body modification that has provided him with a cybernetic third eye.

This device is known as the “eyeborg”, and given that it constitutes a cybernetic enhancement, some have taken to calling Harbisson a genuine cyborg. For others, he’s an example of a posthuman era where cybernetic enhancements will be the norm. In either case, the way the eyeborg works was described in an article by Nautilus entitled “Encounters with the Posthuman”:

It transposes color into a continuous electronic beep, exploiting the fact that both light and sound are made up of waves of various frequencies. Red, at the bottom of the visual spectrum and with the lowest frequency, sounds the lowest, and violet, at the top, sounds highest. A chip at the back of Harbisson’s head performs the necessary computations, and a pressure-pad allows color-related sound to be conducted to Harbisson’s inner ear through the vibration of his skull, leaving his outer ears free for normal noise. Harbisson, who has perfect pitch, has learned to link these notes back to the colors that produced them.

Harbisson’s brain doesn’t convert those sounds back into visual information, so he still doesn’t know exactly what the color blue looks like. But he knows what it sounds like. As he explained to an audience at a TED Talks segment, he used to dress based on appearances. Now, he dresses in a way that sounds good. For example, the pink blazer, blue shirt and yellow pants he was wearing for the talk formed a C Major chord.
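The transposition described in the quoted passage can be sketched in a few lines of code. To be clear, Harbisson’s actual sonochromatic scale uses its own calibrated frequency table; the single-octave mapping below is a hypothetical illustration of the general idea, not his device’s calibration.

```python
# Toy sonification in the spirit of the eyeborg: map the visible
# spectrum onto one audible octave, red lowest and violet highest.

VISIBLE_NM = (700.0, 400.0)   # red ... violet, in nanometers
AUDIBLE_HZ = (220.0, 440.0)   # one octave: A3 ... A4

def color_to_tone(wavelength_nm):
    """Map a wavelength in [400, 700] nm to a pitch in [220, 440] Hz."""
    red_nm, violet_nm = VISIBLE_NM
    lo_hz, hi_hz = AUDIBLE_HZ
    # Normalize: red (700 nm) -> 0.0, violet (400 nm) -> 1.0
    t = (red_nm - wavelength_nm) / (red_nm - violet_nm)
    # Interpolate geometrically, so equal steps in color give
    # equal musical intervals rather than equal Hz differences.
    return lo_hz * (hi_hz / lo_hz) ** t

red_tone = color_to_tone(700)     # 220.0 Hz, the lowest tone
violet_tone = color_to_tone(400)  # 440.0 Hz, the highest tone
```

The geometric interpolation is the interesting design choice: pitch perception is logarithmic, so spacing hues evenly in frequency ratio is what lets a set of colors land on recognizable intervals, like the C Major chord of Harbisson’s outfit.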

This may sound like an abstract replacement for actual color perception, but in many ways, the eyeborg surpasses human chromatic perception. For example, the device is capable of distinguishing 360 different hues, and it allows him to hear ultraviolet and infrared light. So basically, you don’t need a UV index when you have the cybernetic third eye. All you need to do is take a look outside and instantly know if you need sunblock or not.

These and other extensions of human abilities are what led Harbisson to found the Cyborg Foundation, a society that is working to create cybernetic devices that compensate for and augment human senses. These include the “fingerborg” that replaces a finger with a camera, a “speedborg” that conveys how fast an object is moving with earlobe vibrations and – according to a promotional film – a “cybernetic nose” that allows people to perceive smells through electromagnetic signals.

In addition to helping people become cyborgs, the foundation claims to fight for cyborg rights. While this might sound like something out of science fiction, the recent backlash against wearers of Google Glass and the assault on Steve Mann are indications that such a society is increasingly necessary. In addition, Harbisson wants to find ways to fix devices like his eyeborg permanently to his skull, and recharge it with his blood.

For more information on the eyeborg and Project Cyborg, check out Harbisson’s website here. Neil Harbisson’s Project Cyborg promotional video is also available on Vimeo. And be sure to watch the video of Neil Harbisson at the TED Talks lecture:


Sources:
fastcoexist.com, nautil.us, eyeborgproject.com

The Future of the Classroom

As an educator, I find that technological innovation is a subject that comes up quite often. Not only are teachers expected to keep up with trends so they can adapt them into their teaching strategies and classrooms, and prepare children to use them; they are also forced to contend with how these trends are changing the very nature of education itself. If there was one thing we were told repeatedly in Teacher’s College, it was that times are changing, and we must change along with them.

And as history has repeatedly taught us, technological integration not only changes the way we do things, but the way we perceive things. As we come to be more and more dependent on digital devices, electronics and wireless communications to give us instant access to a staggering amount of information, we have to be concerned with how this will affect and even erode traditional means of information transmission. After all, how can readings and lectures be expected to keep kids’ attention when they are accustomed to lightning-fast videos, flash media, and games?


And let’s not forget the seminal infographic, “Envisioning the future of educational technology” by Envisioning Technology. A think tank dedicated to predicting tech trends, Envisioning Technology is one of many voices predicting that, in time, education will no longer require the classroom, and perhaps not even teachers, because modern communications have made the locale and the leader virtually obsolete.

Pointing to such trends as Massive Open Online Courses, several forecasters foresee a grand transformation in the not-too-distant future where all learning happens online and in virtual environments. These would be based around “microlearning”: moments where people access the desired information through any number of means (i.e. a Google search) and educate themselves without the need for instruction or direction.

The technical term for this future trend is Socialstructured Learning: an aggregation of microlearning experiences drawn from a rich ecology of content and driven not by grades but by social and intrinsic rewards. This trend may very well be the future, but the foundations of this kind of education lie far in the past. Leading philosophers of education – from Socrates to Plutarch, Rousseau to Dewey – talked about many of these ideals centuries ago. The only difference is that today, we have a host of tools to make their vision reality.

One such tool comes in the form of augmented reality displays, which are becoming more and more common thanks to devices like Google Glass, the EyeTap or the Yelp Monocle. Simply point at a location, and you are able to obtain information you want about various “points of interest”. Imagine then if you could do the same thing, but instead receive historic, artistic, demographic, environmental, architectural, and other kinds of information embedded in the real world?

This is the reasoning behind projects like HyperCities, a project from USC and UCLA that layers historical information on actual city terrain. As you walk around with your cell phone, you can point to a site and see what it looked like a century ago, who lived there, and what the environment was like. The Smithsonian also has a free app called Leafsnap, which allows people to identify specific species of trees and plants by simply snapping photos of their leaves.

In many respects, it reminds me of the impact these sorts of developments are having on politics and industry as well. Consider how quickly blogging and open source information has been supplanting traditional media – like print news, tv and news radio. Not only are these traditional sources unable to supply up-to-the-minute information compared to Twitter, Facebook, and live video streams, they are subject to censorship and regulations the others are not.

In terms of industry, programs like Kickstarter and Indiegogo – crowdsourcing, crowdfunding, and internet-based marketing – are making it possible to sponsor and fund research and development initiatives that would not have been possible a few years ago. Because of this, the traditional gatekeepers, a.k.a. corporate sponsors, are no longer required to dictate the pace and advancement of commercial development.

In short, we are entering into a world that is becoming far more open, democratic, and chaotic. Many people fear that into this environment, someone new will step in to act as “Big Brother”, or the pace of change and the nature of the developments will somehow give certain monolithic entities complete control over our lives. Personally, I think this is an outmoded fear, and that the real threat comes from the chaos that such open control and sourcing could lead to.

Is humanity ready for democratic anarchy – aka. Demarchy (a subject I am semi-obsessed with)? Do we even have the means to behave ourselves in such a free social arrangement? Opinion varies, and history is not the best indication. Not only is it loaded with examples of bad behavior, previous generations didn’t exactly have the same means we currently do. So basically, we’re flying blind… Spooky!

Sources: fastcoexist.com, envisioningtech.com

Of Cybernetic Hate Crimes

Last week, a bar in Seattle banned the use of Google Glass. The pub declared on their Facebook page that if anyone wanted to order a pint, they had better remove their $1500 pair of augmented reality display glasses beforehand. Citing the glasses’ potential to film or take pictures and post them on the internet, the bar owner unflinchingly declared that “ass-kickings will be encouraged for violators.”

This is the second case of what some are dubbing a new wave of “cybernetic hate crimes”. The first took place back in July 2012, when Steve Mann, a Canadian university professor known as the “father of wearable computing”, was physically assaulted at a McDonald’s in Paris, France. In this case, three employees took exception to his wearable computer and tried to physically remove it – an impossibility, since it is permanently screwed into his head – and then threw him out of the restaurant.

Taken together, these two incidents highlight a possible trend which could become commonplace as the technology grows in use. In some ways, this is a reflection of the fears critics have raised about the ways in which these new technologies could be abused. However, there are those who worry that these kinds of fears are likely to lead to people banning these devices and becoming intolerant of those who use them.

By targeting people who employ augmented reality, bionic eyes, or wearable computers, we are effectively stigmatizing a practice which may become the norm in the not too distant future. But Google responded to the incident with optimism and released a statement that cited shifting attitudes over time:

It is still very early days for Glass, and we expect that as with other new technologies, such as cell phones, behaviors and social norms will develop over time.

Yes, one can remember without much effort how similar worries were raised about smartphones and camera phones not that long ago, and their use has become so widespread that virtually all doubts about how they might be abused and what effect they would have on social norms have gone quiet. Still, doubts remain that with the availability of technologies that make it easier to monitor people, society is becoming more and more invasive.

But to this, Mann responds by raising what he had always hoped portable computing would result in. Back in the 1970s, when he first began working on the concept for his EyeTap, he believed that camera-embedded wearables could be both liberating and empowering. In a world permeated by security cameras and a sensory-sphere dominated by corporate memes, he foresaw these devices as a means for individuals to re-take control of their environment and protect themselves.

This was all in keeping with Mann’s vision of a future where wearable cameras and portable computers could allow for what he calls sousveillance — a way for people to watch the watchers and be at the ready to chronicle any physical assaults or threats. How ironic that his own invention allowed him to do just that when he himself was assaulted!

And in the current day and age, this vision may be even more important and relevant, given the rise in surveillance and repressive measures brought on in the wake of the “War on Terror”. As Mann himself has written:

Rather than tolerating terrorism as a feedback means to restore the balance, an alternative framework would be to build a stable system to begin with, e.g. a system that is self-balancing. Such a society may be built with sousveillance (inverse surveillance) as a way to balance the increasing (and increasingly one-sided) surveillance.

Raises a whole bunch of questions, doesn’t it? As dwindling privacy becomes more and more of an issue, and as most people respond to such concerns by dredging up dystopian scenarios, it might be helpful to remind ourselves that this is a form of technology that rests firmly in our hands – those of the consumers – not those of an overbearing government.

But then again, that doesn’t exactly ease the fears of a privacy invasion much, does it? Whether it is a few functionaries and bureaucrats monitoring us for the sake of detecting criminal behavior or acts of “sedition”, or a legion of cyberbullies and gawking masses scrutinizing our every move, being filmed and photographed against our will and having the results posted online is still pretty creepy.

But does that necessitate banning the use of this technology outright? Are we within our rights, as a society, to deny service to people sporting AR glasses, or to physically threaten them if they are unable or unwilling to remove them? And is this something that will only get better, or worse, with time?

Sources: IO9, news.cnet.com, eecg.toronto.edu

The Future Is Here: The EyeTap

There has been some rather interesting and revolutionary technology released lately, and a good deal of it involves the human eye. First there was Google Glass, then there were the VR contact lenses, and now the new EyeTap! This new technology, which is consistent with the whole 6th sense computing trend, uses the human eye as an actual display and camera… after a fashion.

Used in conjunction with a portable computer, the EyeTap combines the latest in display technology and Augmented Reality to allow for computer-mediated interaction with one’s environment. The device takes in images of the surrounding area and, with the assistance of the computer, augments, diminishes, or otherwise alters the user’s visual perception of what they see.

In addition, plans for the EyeTap include computer-generated displays so the user can also interface with the computer and do work while they’re AFK (Away From Keyboard, according to The Big Bang Theory). The figure below depicts the basic structure of the device and how it works.

Ambient light is taken in by the device just as it is by a normal eye, but the rays are then reflected by the Diverter. These rays are collected by a sensor (typically a CCD camera) while the computer processes the data. At this point, the Aremac display device (“camera” spelled backwards) redisplays the image as rays of light. These rays reflect again off the diverter and become collinear with the rays of light from the scene. What the viewer perceives is referred to as “Virtual Light”, which can either be altered or show the same image as before.
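In software terms, that capture, process, and redisplay cycle amounts to a mediation loop. The sketch below is a toy model of that loop; the real device does this optically at video rate, and the function names and pixel representation here are my own illustrative assumptions, not part of any actual EyeTap API.

```python
def mediate_frame(scene, augment=None, diminish=None):
    """One pass of the EyeTap-style capture -> mediate -> redisplay cycle.

    scene:    the rays collected off the diverter (here, a 2D list of pixels)
    augment:  optional step that adds computer-generated content to the frame
    diminish: optional step that attenuates or removes content from the frame
    """
    frame = [row[:] for row in scene]   # sensor capture (a copy of the scene)
    if diminish:
        frame = diminish(frame)         # the computer alters what is seen...
    if augment:
        frame = augment(frame)          # ...or adds to it
    return frame                        # "virtual light" sent back to the eye

# Example: dim the whole scene by half, then brighten one "highlighted" pixel.
scene = [[100, 100], [100, 100]]
halve = lambda f: [[p // 2 for p in row] for row in f]

def spotlight(f):
    f[0][0] = 255
    return f

out = mediate_frame(scene, augment=spotlight, diminish=halve)
# out == [[255, 50], [50, 50]]
```

The key property the optics guarantee, and the model only hints at, is that the output rays are collinear with the input rays, so the mediated image lands exactly where the real scene would have.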

While the technology is still very much under development, it represents a major step forward in terms of personal computing, augmented reality, and virtual interfacing. And if this sort of technology can be permanently implanted in the human eye, it will also be a major leap for cybernetics.

Once again, Gibson must be getting royalties! His fourth novel, the first of the Bridge Trilogy, was named Virtual Light and featured a type of display glasses that relied on this very technology in order to project display images in the user’s visual field. Damn that man always seems to be on top of things!

And just for fun, here’s a clip from the recent Futurama episode featuring the new eyePhone. Hilarious, if I do say so myself!