Digital Eyewear Through the Ages

Given the sensation created by the recent release of Google Glass – a timely invention that calls to mind everything from ’80s cyberpunk to speculations about our cybernetic, transhuman future – a lot of attention has been focused lately on personalities like Steve Mann and Mark Spitzer, and on the history of wearable computers.

For decades now, visionaries and futurists have been working towards a day when all personal computers are portable and blend seamlessly into our daily lives. And with countless imitators coming forward to develop their own variants – and even hate crimes being committed against users – it seems that portable, integrated machinery is destined to become an issue no one will be able to ignore.

And so I thought it was high time for a little retrospective: a look back at the history of eyewear computers and digital devices, to see how far the field has come. From its humble beginnings with bulky backpacks and large, head-mounted displays, to the current age of small fixtures that can be worn as easily as glasses, things certainly have changed. And the future is likely to get even more fascinating, weird, and a little bit scary!

Sword of Damocles (1968):
Developed by Ivan Sutherland and his student Bob Sproull at the University of Utah in 1968, the Sword of Damocles was the world’s first head-mounted display. It consisted of a headband bearing a pair of small cathode-ray tubes, attached to the end of a large instrumented mechanical arm through which head position and orientation were determined.

Hand positions were sensed via a hand-held grip suspended at the end of three fishing lines whose lengths were determined by the number of rotations sensed on each of the reels. Though crude by modern standards, this breakthrough technology would become the basis for all future innovation in the field of mobile computing, virtual reality, and digital eyewear applications.

WearComp Models (1980-84):
Built by Steve Mann (inventor of the EyeTap and considered the father of wearable computers) in 1980, the WearComp1 cobbled together many devices to create visual experiences, and included an antenna for communicating wirelessly and sharing video. In 1981, he designed and built a backpack-mounted wearable multimedia computer with text, graphics, multimedia, and video capability.

By 1984 – the same year that Apple’s Macintosh first shipped and William Gibson’s science fiction novel “Neuromancer” was published – he released the WearComp4 model. This latest version employed clothing-based signal processing, a personal imaging system with a left-eye display, and separate antennas for simultaneous voice, video, and data communication.

Private Eye (1989):
In 1989, Reflection Technology marketed the Private Eye head-mounted display, which scanned a vertical array of LEDs across the visual field using a vibrating mirror. The monochrome screen measured just 1.25 inches on the diagonal, but its images appeared equivalent to a 15-inch display viewed from 18 inches away.
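
To put that “apparent size” claim in perspective, here’s a quick back-of-the-envelope calculation in Python. The angular-size geometry is standard; the figures are simply the ones quoted above:

```python
import math

# Angular size of a display: theta = 2 * atan(d / (2 * D)),
# where d is the diagonal and D is the viewing distance.
def angular_size_deg(diagonal_in: float, distance_in: float) -> float:
    return math.degrees(2 * math.atan(diagonal_in / (2 * distance_in)))

# A 15-inch display viewed from 18 inches away:
print(round(angular_size_deg(15, 18), 1))  # ~45.2 degrees of visual field
```

In other words, that tiny 1.25-inch screen filled roughly the same 45-degree swath of the wearer’s visual field as a desktop monitor sitting right in front of them.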

EyeTap Digital Eye (1998):
Steve Mann is considered the father of digital eyewear and of what he calls “mediated” reality. He is a professor in the department of electrical and computer engineering at the University of Toronto and an IEEE senior member, and also serves as chief scientist for the augmented reality startup Meta. The first version of the EyeTap was produced in the 1970s and was incredibly bulky by modern standards.

By 1998, he had developed the version that is commonly seen today, mounted over one ear and in front of one side of the face. This version is worn in front of the eye, recording what is immediately in front of the viewer and superimposing digital imagery over that view. It uses a beam splitter to send the same scene to both the eye and a camera, and is tethered to a computer worn on the body in a small pack.

MicroOptical TASK-9 (2000):
Founded in 1995 by Mark Spitzer, who is now a director at the Google X lab, MicroOptical produced several patented designs that were bought up by Google after the company closed in 2010. One such design was the TASK-9, a wearable computer attachable to a set of glasses. Years later, MicroOptical’s viewers remain the lightest head-up displays available on the market.

Vuzix (1997-2013):
Founded in 1997, Vuzix created the first video eyewear to support stereoscopic 3D for the PlayStation 3 and Xbox 360. Since then, Vuzix has gone on to create the first commercially produced pass-through augmented reality headset, the Wrap 920AR (seen at bottom). The Wrap 920AR has two VGA video displays and two cameras that work together to give the user a view of the world that blends real-world input with computer-generated data.

Other products of note include the Wrap 1200VR, a virtual reality headset with numerous applications – everything from gaming and recreation to medical research – and the Smart Glasses M100, a hands-free display for smartphones. And since the 2011 Consumer Electronics Show, they have announced and released several heads-up AR displays that attach to glasses.


MyVu (2008-2012):
Founded in 1995, also by Mark Spitzer, MyVu developed several different types of wearable video display glasses before closing in 2012. The most famous was the Myvu Personal Media Viewer (pictured below), a set of display glasses released in 2008. These became instantly popular with the wearable computer community because they provided a cost-effective and relatively easy path to a DIY, small, single-eye, head-mounted display.

In 2010, the company followed up with the release of the Viscom digital eyewear (seen below), a device developed in collaboration with Spitzer’s other company, MicroOptical. This smaller head-mounted display comes with earphones and is worn over one eye like a pair of glasses, similar to the EyeTap.


Meta Prototype (2013):
Developed by Meta, a Silicon Valley startup funded with the help of a Kickstarter campaign and supported by Steve Mann, this wearable computing eyewear utilizes the latest in VR and projection technology. Unlike other display glasses, Meta’s eyewear enters 3D space and uses your hands to interact with the virtual world, combining the benefits of the Oculus Rift with those offered by “Sixth Sense” technology.

The Meta system includes stereoscopic 3D glasses and a 3D camera to track hand movements, similar to the portrayals of gestural control in movies like “Iron Man” and “Avatar.” In addition to display modules embedded in the lenses, the glasses include a portable projector mounted on top. This way, the user is able to both project and interact with computer simulations.

Google Glass (2013):
Developed by Google X as part of its Project Glass, the Google Glass device is a wearable computer with an optical head-mounted display (OHMD) that incorporates all the major advances made in the field of wearable computing over the past forty years. These include a smartphone-like hands-free format, a wireless internet connection, voice commands, and a full-color augmented-reality display.

Development began in 2011, and the first prototypes were previewed to the public at the Google I/O annual conference in San Francisco in June of 2012. Though the glasses do not currently come with fixed lenses, Google has announced its intention to partner with sunglass retailers to equip them with regular and prescription lenses. There is also talk of developing contact lenses with embedded display devices.

Summary:
Well, that’s the history of digital eyewear in a nutshell. And as you can see, since the late ’60s the field has progressed by leaps and bounds. What was once a speculative and visionary pursuit has blossomed into a fully-fledged commercial field, with many different devices being produced for public consumption.

At this rate, who knows what the future holds? In all likelihood, the quest to make computers more portable and ergonomic will keep pace with the development of more sophisticated electronics and computer chips, miniaturization, biotechnology, nanofabrication and brain-computer interfacing.

The result will no doubt be tiny CPUs that can be implanted in the human body and integrated into our brains via neural chips and tiny electrodes. In all likelihood, we won’t even need voice commands at that point, because neuroscience will have developed a means to communicate directly to our devices via brainwaves. The age of cybernetics will have officially dawned!

Like I said… fascinating, weird, and a little bit scary!


News From Space: Enceladus, the Jet-Powered Moon

The Cassini space probe is at it again, providing the people of Earth with rare glimpses of Saturn and its moons. And with this latest picturesque capture, released by NASA, the ESA and ASI back in April, we got to see the moon Enceladus spraying icy vapor off into space. For some time, scientists have known about the large collection of geysers located at the moon’s south pole. But thanks to Cassini, this was the first time they were caught (beautifully) on film.

Ever since Cassini first discovered these plumes of water in 2005, scientists have been trying to learn more about how they behave, what they are made of and – most importantly – where they are coming from. The working theory is that Enceladus has a liquid subsurface ocean, and that pressure from the rock and ice layers above, combined with heat from within, forces the water up through surface cracks near the moon’s south pole.

When this water reaches the surface it instantly freezes, sending plumes of water vapor, icy particles, and organic compounds hundreds of kilometers out into space. Cassini has flown through the spray several times now, and its instruments have detected that, aside from water and organic material, the icy particles contain salt.

Tests run on captured samples indicate that the salinity is the same as that of Earth’s oceans. These findings, combined with the presence of organic compounds, suggest that Enceladus may be one of the best candidates in the Solar System for finding life.

Much as with Europa, any life would be contained within the moon’s outer crust. But as we all know, life comes in many, many forms. Not all of it needs to be surface-dwelling in nature, and an atmosphere need not exist either. Granted, these things are essential for life to thrive, but not necessarily for it to exist.

What’s more, this could come in handy if manned missions to Enceladus ever do take place. Water is key to making hydrogen fuel, which could come in mighty handy if people ever set down and feel the need to terraform the place. Of course, they might want to make sure they aren’t depriving subterranean organisms of their livelihood first. Don’t want another Avatar situation on our hands!

Source: universetoday.com

Exploring the Universe with Robotic Avatars and Holodecks

Space exploration is littered with all kinds of hazards. In addition to the danger of dying from decompression, mechanical failures, micro-meteoroids, or just crashing into a big ball of rock, there are also the lesser-known problems created by low gravity, time dilation, and prolonged isolation. Given all that, wouldn’t it just be easier to send probes out to do the legwork, and use virtual technology to experience it back home?

That’s the idea being presented by Dr. Jeff Norris, one of the scientists working at NASA’s Jet Propulsion Laboratory in Pasadena, California. In a presentation at last year’s PAX Prime – entitled “NASA’s Got Game” – he spoke of the agency’s plans for telexploration: the process of exploring the universe using robotic avatars and holodecks rather than sending manned flights into deep space.

In the course of making this presentation, Norris noted several key advantages to this kind of exploration. In addition to being safer and cheaper, it’s also more readily available. Whereas deep-space exploration aboard ships with FTL engines – like the Alcubierre Drive concept currently being worked on – remains a distant prospect, robotic space probes and advanced telecommunications technology are available right now.

At the same time, telexploration is also more democratic. Whereas conventional space travel lets only a select few highly-trained, eminently qualified people witness the wonders of the universe, robotic avatars and holographic representations bring the experience home, where millions of people can feel the awe and wonder for themselves. And when you think about it, it’s something we’re already doing, thanks to the current generation of space probes, satellites and – of course! – the Curiosity rover.

Basically, rather than waiting for the warp drive, Norris believes another Star Trek technology – the holodeck – will be the more immediate future of space exploration, one that we won’t have to wait for. Yes, there are more than a few Star Trek motifs going on in this presentation, and a little Avatar too, but that’s to be expected. And as we all know, life can imitate art, and the truth is always stranger than fiction!

Check out the video of the presentation below:



News From Alpha Centauri!

It seems another star system is making the news. And much like Gliese 581, the subject is the discovery of a planet said to be Earth-like – in this case, in mass. Located in Alpha Centauri, a star system just 4.3 light years from our Solar System, this exoplanet is the closest yet discovered by scientists and astronomers.

Those with a penchant for science fiction will be immediately familiar with the name Alpha Centauri. As the closest star system to our own, it has been mentioned in, and used as the setting for, countless science fiction franchises. Star Trek, Transformers, and most recently Avatar have all made use of this binary system and its planets. But up until now, speculation as to its ability to actually support life (at least as we know it) has been just that.

Officially, the planet is known as Alpha Centauri Bb, the “b” indicating that it is the first planet found orbiting Alpha Centauri B, the smaller of the two stars in the binary system. It took a research team nearly four years to confirm the planet and determine that it boasts a mass similar to Earth’s. According to Xavier Dumusque, the lead author of the planet-discovering study: “This result represents a major step towards the detection of a twin Earth in the immediate vicinity of the sun.”

But of course, there’s a snag, at least as far as colonization is concerned. According to the same research team, Alpha Centauri Bb orbits closer to its host star than Mercury does to our Sun, and they estimate that surface temperatures average around 1200 degrees Celsius (2192 Fahrenheit). Forget Pandora – can you say Crematoria? If humans were ever to set foot on this world, it would only be after terraforming so radical that it completely altered the nature of the planet. Still, it is an exciting find, and another step along the road to locating nearby exoplanets that humanity might someday call home.

In the meantime, check out this video from the European Southern Observatory. It’s like Google Earth meets the Milky Way Galaxy – Google Galaxy! I like that!

Source: news.cnet.com

Transhumanism… The Shape of Things to Come?

“Your mind is software. Program it. Your body is a shell. Change it. Death is a disease. Cure it. Extinction is approaching. Fight it.”

-Eclipse Phase

A lot of terms are thrown around these days that allude to the possible shape of our future: words like Technological Singularity, extropianism, postmortal, posthuman, and transhuman. What do these words mean? What kind of future do they point to? Though they remain part of a school of thought that is still very much theoretical and speculative, this future appears to be becoming more likely every day.

Ultimately, the concept is pretty simple, in a complex, mind-bending sort of way. The theory has it that at some point in this or the next century, humanity will overcome death, scarcity, and all other limitations imposed on us by nature. The means vary, but it is believed that progress in any one or more of the following areas will make such a leap inevitable:

Artificial Intelligence:
The gradual evolution of computers – from punch cards to integrated circuits to networking – shows an exponential trend upwards. Given the concordant growth of memory capacity and processing speed, it is believed to be only a matter of time before computers are capable of independent reasoning. Progress is already being made in this domain, with the Google X lab’s neural net boasting a connectome of a billion connections.
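
To make that “connectome of connections” idea a little more concrete, here is a minimal sketch of a feedforward neural network in Python. Every weight counts as one “connection”; the layer sizes are made up for illustration, and this toy bears no resemblance in scale or training method to Google’s actual system:

```python
import numpy as np

# Made-up layer sizes for a toy feedforward network.
# Google X's 2012 net had on the order of a billion connections.
layer_sizes = [784, 256, 64, 10]

# Random weights (the "connections") and biases for each layer pair
rng = np.random.default_rng(0)
weights = [rng.standard_normal((m, n)) * 0.01
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Propagate an input vector through the network."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(0.0, x @ W + b)       # ReLU hidden layers
    return x @ weights[-1] + biases[-1]      # linear output layer

# The "connectome" of this toy net is simply its weight count:
print(sum(W.size for W in weights))  # 217,728 connections
print(forward(rng.standard_normal(784)).shape)  # (10,)
```

Scaling that list of layer sizes up until the weight count hits ten figures is, at bottom, what the billion-connection milestone refers to.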

As such, it is seen as inevitable that a machine will one day exist that is capable of surpassing a human being. Such machinery could even be merged with a human’s own mind, enhancing natural thought patterns and memory, and augmenting intelligence to a point immeasurable by modern standards.

Just think of the things we could think up once that’s possible. Well… you can’t exactly, but we can certainly postulate. For starters, such things as the Grand Unified Theory, the nature of time and space, quantum mechanics, and other mind-bendingly complex fields could suddenly make sense to us. What’s more, this would make further technological leaps that much easier.

Biology:
Here we have an area of development that falls into one of three categories. On the one hand, advancements in medical science could very well lead to the elimination of disease and the creation of mind-altering pharmaceuticals. On the other, there’s the eventual development of biotechnology: machinery that is grown rather than built, composed of DNA strands or other “programmable” material.

Lastly, there is the potential for cybernetics, a man-machine interface where the organic is merged with the artificial, whether in the form of implants, prosthetic limbs, or artificial organs. All of these, alone or in combination, would enhance a human being’s strength and mental capacity, and prolong their life.

This is the meaning behind the word postmortal. If human beings could live to the point where life could be considered indefinite (at least by current standards), the amount we could accomplish in a single lifetime could very well be immeasurable.

Nanotechnology:
The concept of machines so small that anything would be accessible to them – even the smallest components of matter – has been around for over half a century. However, it was not until the development of microcircuits and miniaturization that the concept graduated from pure speculation to scientific possibility.

Here again, the concept is simple, assuming you can wrap your head around the staggering technical aspects and implications. For starters, we are talking about machines measurable only on the nanoscale, meaning one to one hundred billionths of a meter (1×10⁻⁹ to 1×10⁻⁷ m). At this size, these machines would be capable of manipulating matter at the cellular or even the atomic level. This is where the staggering implications come in, when you realize that this kind of machinery could make just about anything possible.

For starters, all forms of disease would be conquerable, precious metals could be synthesized, seamless and self-regenerating structures could be made, and any and all consumer products could be created out of base matter. We’d be living in a world in which scarcity was a thing of the past and our current system of values and exchange had become meaningless. Buildings could build themselves – out of raw matter like dirt and pure scrap, no less – societies would become garbage-free, pollution could be eliminated, and manufactured goods could be made of materials both extra-light and near-indestructible.

Summary:
All of this progress, either alone or in combination, will add up to a future that we can’t even begin to fathom, and this is where the concept of the Technological Singularity comes in: if human beings were truly postmortal (evolved beyond death), society were postscarce (meaning food, water, fuel and other necessities would never be in short supply), and machines were capable of handling all our basic needs, the world on the far side of that transition would be impossible to predict from where we stand now.

For Futurists and self-professed Singularitarians, this trend is as desirable as it is inevitable. Citing such things as Moore’s Law (which describes the rate of computing progress) or Kurzweil’s Law of Accelerating Returns – which postulates that the rate of progress increases exponentially with each development – these voices claim that it is humanity’s destiny to conquer death and its inherent limitations. If one looks at the full range of human history – from the Neolithic Revolution to the Digital one – the trend seems clear and obvious.
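
For a sense of what that exponential trend actually implies, here’s a back-of-the-envelope sketch in Python. The two-year doubling period is the classic rule-of-thumb reading of Moore’s Law rather than a precise figure, and the projection past the present is illustration only:

```python
# Moore's Law as a formula: N(t) = N0 * 2 ** ((t - t0) / T),
# where T is the assumed doubling period in years.
N0 = 2_300        # transistors on the Intel 4004 (1971)
t0 = 1971
T = 2.0           # classic two-year doubling assumption

def transistors(year: float) -> float:
    """Projected transistor count for a chip in a given year."""
    return N0 * 2 ** ((year - t0) / T)

for year in (1971, 1991, 2011, 2031):
    print(year, f"{transistors(year):,.0f}")
# 1971 -> 2,300
# 1991 -> ~2.4 million
# 2011 -> ~2.4 billion
# 2031 -> ~2.5 trillion (if the trend were to hold)
```

Every twenty years buys a factor of roughly a thousand; that is the sense in which Singularitarians read history as an accelerating curve.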

For others, this prospect is both frightening and something to be avoided. When it comes right down to it, transhumanity means leaving behind all the things that make us human. And whereas some people think the Singularity will solve all human problems, others see it as merely an extension of a trend whereby our lives become increasingly complicated and dependent on machinery. And supposing that we do cross some kind of existential barrier, will we ever be able to turn back?

And of course, the more dystopian predictions warn against the cataclysmic possibilities of entrusting so much of our lives to automata, or worse, intelligent machines. Virtually every apocalyptic and dystopian scenario devised in the last sixty years has predicted that doom will result from the development of AI, cybernetics and other advanced technology. The most technophobic claim that the machinery will turn on humanity, while the more moderate warn against increased dependency, since we will be all the more vulnerable if and when the technology fails.

Naturally, there are many who fall somewhere in between and question both outlooks. In recent decades, scientists and speculative fiction writers have emerged who challenge the idea that technological progress will automatically lead to the rise of dystopia. Citing the undeniable trend towards greater and greater levels of material prosperity brought about by the industrial revolution and the post-war era – something often ignored by people who choose to emphasize the downsides – these voices believe that the future will be neither utopian nor dystopian. It will simply be…

Where do you fall?

Immortality Is On The Way!

William Gibson must get a kick out of news items like these. According to a recent article over at IO9, an entrepreneur named Dmitry Itskov and a team of Russian scientists are developing a project that could render humans immortal – after a fashion – by the year 2045. According to the plan, which is called the 2045 Initiative, they hope to create a fully functional, holographic avatar of a human being.

At the core of this avatar will be an artificial brain containing all the thoughts, memories, and emotions of the person being simulated. Given the advancements in the field of computer technology – including the Google neural net – the team estimates that it won’t be long before a construct can be made that can store the sum total of a human mind.

If this concept sounds familiar, chances are you’ve been reading either Gibson’s Sprawl trilogy or Ray Kurzweil’s wish list. Intrinsic to the former’s cyberpunk novels and the latter’s futurist predictions is the concept of people merging their intelligence with machines for the sake of preserving their very essence for all time. Men like Kurzweil want this technology because it will ensure them the ability to live forever, while novelists like Gibson predicted that it would be something the mega-rich alone would have access to.

Which brings me to another aspect of this project. It seems that Itskov has gone to great lengths to secure investment capital to realize this dream, including an open letter to the world’s 1,226 wealthiest citizens – everyone on Forbes magazine’s list of the world’s richest people – offering them a chance to invest and make their mark on history. If any of them have chosen to invest, it’s pretty obvious why: being so rich and powerful, they can’t be too crazy about the idea of dying. In addition, the process isn’t likely to come cheap. Hence, if and when the technology is realized, the world’s richest people will be the first to create avatars of themselves.

There’s no indication yet of when the technology will be commercially viable for, say, the rest of us. But the team has provided a helpful infographic of when the project’s various steps will be realized (see above). The dates are a little flexible, but they anticipate being able to create a robotic copy of a human body (i.e. an android) within three to eight years. In eight to thirteen, they expect to build a robotic body capable of housing a brain. In eighteen to twenty-three, a robotic humanoid with a mechanical brain that can house human memories will be realizable. And last, and most impressive, will be a holographic program capable of preserving a person’s memories and neural patterns (a.k.a. their personality) indefinitely.

You have to admit, this kind of technology raises an awful lot of questions. For one, there are the inevitable social consequences. If the wealthiest citizens in the world are never going to die, what becomes of their spoiled children? Do they no longer inherit their parents’ wealth, or do they simply live on forever too? And won’t it cramp their style, knowing that mommy and daddy are living forever in the box next to theirs?

What’s more, if there’s no generational turnover, won’t this affect the whole nature and culture of wealth? Wealth is, by its very nature, something passed on from generation to generation, ensuring the creation of elites and their influence over society. In this scenario, the same people are likely to exert influence generation after generation, wielding a sort of power that is virtually godlike.

And let’s not forget the immense spiritual and existential implications! Does technology like this disprove the concept of the immortal soul, or its very transcendent nature? If the human personality can be reduced to a connectome, which can in turn be digitized and stored, then what room is left for the soul? Or, alternately, if the soul really does exist, won’t people who partake in this experiment be committing the ultimate sin?

All stuff to ponder as the project either approaches realization or falls flat on its face, leaving such matters for future generations to wrestle with. In the meantime, we shouldn’t worry too much. As this century progresses and technology grows, we will have plenty of other chances to desecrate the soul. And given the advance of overpopulation and climate change, odds are we’ll be dying off before any of those plans reach fruition. Always look on the bright side, as they say 😉

Top 15 Things You’ll Never Hear a Geek Say

This doesn’t happen very often, but it seems I gave myself an idea with that last post. And to think it all came of an inappropriate joke! Essentially, I began to joke about the kinds of things geeks never say, and even listed a few. And it got me thinking: one thing this blog has been missing up until now is a top ten list of things you’ll never hear a geek say. Well, I tried to limit myself to ten, but it was impossible, and so I expanded it a little to incorporate just a few more. I know, top 15 lists aren’t as impressive, but what can you do? Sometimes, you just gotta be inclusive!

Oh, and a word of warning: if you’re not a fan of inappropriate jokes, back yourselves up now, because you don’t want to see some of the things I’ve written here. As a self-professed geek, I can say these things, but rest assured, my test audience (my wife) didn’t exactly react well 😉 You’ve been warned! Anyhoo, here they are, the top 15 things geeks never say, in ascending order:

15. “Jane Austen is a far superior writer to Ursula K. Le Guin”

14. “I would have liked to have seen less cleavage and more character development from 7 of 9” (substitute any female lead from Star Trek)

13. “The blue chick from Avatar was so not hot”

12. “George R.R. Martin didn’t hit his stride until the third book”

11. “There’s no way I’m paying 50 bucks for a video game!”

10. “I can see no difference between Captains Kirk and Picard”

9. “Blade Runner totally needs to be remade”

8. “I have no strong opinions either way on the remake”

7. “They made a show called The Big Bang Theory?”

6. “I didn’t see the new Star Trek movie, but I hear it’s good.”

5. “Frank Herbert would be so proud of what’s become of the Dune franchise”

4. “You’ll never catch me jerking off to anime!”

3. “The prequels are just as good as the original trilogy”

2. “The movie was just as good as the book”

1. “George Lucas didn’t rape my childhood!”

And I’m still thinking of some doozies! Oh well, maybe this can become a thing, but only top 10 lists from now on. Fifteen at a time is not a pace I foresee being able to keep up for long. Until next time: geek out, get your geek on, and keep on geeking in the free world! Have I found enough ways to work that into a clichéd saying?