Revolution in Virtual Reality: Google’s Cardboard Headset

With its acquisition of Oculus VR, maker of the Rift headset, Facebook appeared ready to corner the new virtual reality market. But at its annual I/O conference, Google declared that it was staking its own claim. At the end of the search giant's keynote address, Sundar Pichai announced that everyone in attendance would get a nondescript cardboard package, but was coy about its contents. Turns out, it's the firm's attempt at a do-it-yourself VR headset.

Known as Cardboard, the headset was handed out as part of a goodie bag, alongside the choice between a brand new LG G Watch and a Samsung Gear Live smartwatch. Intended as a do-it-yourself starter kit, Google Cardboard is a head-mounted housing unit that turns your smartphone and a handful of everyday items into a VR headset. With a $10 lens kit, $7 worth of magnets, two Velcro straps, a rubber band, and an optional near-field communication sticker tag, you can have your very own VR headset for a fraction of the price.

You can use household materials to build one, and a rubber band holds your smartphone in place on the front of the device. Assembly instructions, plans and links for where to source the needed parts (like lenses) — as well as an SDK — are available on the project's website. Google hopes that by making the tech inexpensive (unlike offerings from, say, Oculus), developers will be able to make VR apps that reach a wider audience.

According to some early reviews, the entire virtual reality experience is surprisingly intuitive, and all the more impressive considering how simple it is. And while the quality doesn't quite match the Oculus Rift's dual OLED Full HD screens, and it lacks positional tracking (meaning you can't lean into something the way you would in real life), the Cardboard is able to create the 3D effect using just a single phone screen and some specialized lenses.
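For the curious, the trick can be sketched in a few lines: the phone renders the scene twice, side by side, from two slightly offset virtual cameras, and pre-warps each half so the lenses undo the distortion they introduce. The eye spacing and distortion coefficients below are illustrative assumptions, not Google's actual calibration values:

```python
def eye_positions(center_x, ipd=0.064):
    """Left/right virtual camera x-positions, separated by a typical
    interpupillary distance of ~64 mm (an assumed average)."""
    half = ipd / 2.0
    return center_x - half, center_x + half

def lens_distortion(x, y, k1=0.22, k2=0.24):
    """Simple radial distortion model r' = r * (1 + k1*r^2 + k2*r^4).
    The renderer warps each eye's image by the inverse of this so the
    lens cancels it out. The k1/k2 values are made-up examples."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

left, right = eye_positions(0.0)
print(left, right)                # two cameras 64 mm apart
print(lens_distortion(0.0, 0.0))  # the lens center is unaffected
print(lens_distortion(0.5, 0.0))  # points near the edge shift outward
```

Each eye then sees its own perspective through its lens, and the brain fuses the two views into depth. Google's SDK handles this plumbing for developers, which is a big part of Cardboard's appeal.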

Meanwhile, Google has created some great demos within the Cardboard app, showcasing the kind of experiences people can expect moving forward. Right now, the Cardboard app features simple demonstrations: Google Earth, Street View, Windy Day, and more. But it's just a small taste of what's possible. And anyone willing to put some time into assembling their own cardboard headset can get involved. Never before has virtual reality been so accessible, or so cheap.

And that was precisely the purpose behind the development of this device. Originally concocted by David Coz and Damien Henry at the Google Cultural Institute in Paris as part of the company's "20 percent time" initiative, the program was started with the aim of inspiring a lower-cost model for VR development. After an early prototype wowed Googlers, a larger group was tasked with building out the idea, and the current Cardboard headset was born.

As Google's new page for the device's development reads:

Virtual reality has made exciting progress over the past several years. However, developing for VR still requires expensive, specialized hardware. Thinking about how to make VR accessible to more people, a group of VR enthusiasts at Google experimented with using a smartphone to drive VR experiences.

Beyond hardware, on June 25th the company also released a self-described experimental software development kit for Cardboard experiences. Cardboard also has an Android companion app, which is required to use Google's own VR-specific applications, called Chrome Experiments. Some use cases Google cites now are flyover tours in Google Earth, full-screen YouTube video viewing, and first-person art exhibit tours.

As Google said in a related press release:

By making it easy and inexpensive to experiment with VR, we hope to encourage developers to build the next generation of immersive digital experiences and make them available to everyone.

Oculus Rift is still the most promising version of virtual reality right now, and with Facebook at the helm, there are some tremendous resources behind the project. But with Cardboard, Google is opening up VR to every single Android developer, which we hope will lead to some really awesome stuff down the road. Even if you can't lean in to inspect dials in front of you, or look around corners, the potential of Cardboard is tremendous. Imagine not only the kinds of experiences we'll see, but also augmented reality using your phone's camera.

But Cardboard is still very early in development. It's only been a few weeks since it debuted at Google I/O, and the device still only works with Android. But with availability on such a wide scale, it could very quickly become the go-to VR platform out there. All you need are some magnets, Velcro, a rubber band, lenses, and a pizza box. And be sure to check out this demo of the device, courtesy of "Hands-On" by TechnoBuffalo:


Sources:
cnet.com, technobuffalo.com, engadget.com

The Internet of Things: AR and Real World Search

When it comes to the future, it is clear that the concept of the "Internet of Things" holds sway. This idea – which states that all objects will someday be identifiable thanks to virtual representations on the internet – is at the center of a great deal of the innovation that drives our modern economy. Be it wearables, wireless, augmented reality, voice or image recognition, the technologies that help us combine the real with the virtual are on the rise.

And so it's really no surprise that innovators are looking to take augmented reality to the next level. The fruit of some of this labor is Blippar, a market-leading image-recognition and augmented reality platform. Lately, they have been working on a proof of concept for Google Glass showing that 3-D searches are doable. This sort of technology is already available in the form of apps for smartphones, but what's lacking is a central database that could turn any device into a visual search engine.

As Ambarish Mitra, the head of Blippar, stated, AR is already gaining traction among consumers thanks to some of the world's biggest industrial players recognizing the shift to visually mediated lifestyles. Examples include IKEA's interactive catalog, Heinz's AR recipe booklet, and Amazon's recent integration of the Flow AR technology into its primary shopping app. As this trend continues, we will need a Wikipedia-like database for 3-D objects that will be available to us anytime, anywhere.

Social networks and platforms like Instagram, Pinterest, Snapchat and Facebook have all driven a cultural shift in the way people exchange information. This takes the form of text updates, instant messaging, and uploaded images. But as the saying goes, “a picture is worth a thousand words”. In short, information absorbed through visual learning has a marked advantage over that which is absorbed through reading and text.

In fact, a recent NYU study found that people retain close to 80 percent of the information they consume through images, versus just 10 percent of what they read. If people were able to regularly consume rich content from the real world through their devices, they could learn, retain, and express ideas and information more effectively. Naturally, there will always be situations where text-based search is the most practical tool, but many searches arise from real-world experiences.

Right now, text is the only option available, and oftentimes people are unable to describe what they are looking for. But image-recognition technology that could turn any smartphone, tablet, or wearable device into a scanner capable of identifying any 3-D object would vastly simplify things. Information could be absorbed more efficiently, using an object's features to pull up information from a rapidly learning engine.
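To make that a little more concrete, here is a toy sketch of one building block such an engine might use: a perceptual "average hash" that reduces an image to a short fingerprint which survives small changes in lighting and noise. Real platforms like Blippar rely on far more robust features; the tiny 3x3 "images" below are invented purely for illustration:

```python
def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns a tuple
    of bits: 1 where a pixel is above the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Number of differing bits; a small distance means a likely match."""
    return sum(a != b for a, b in zip(h1, h2))

catalog_item = [[200, 200, 10], [200, 10, 10], [10, 10, 10]]
snapshot     = [[190, 205, 20], [198, 15, 5], [12, 8, 11]]   # same scene, noisy
unrelated    = [[10, 200, 10], [200, 10, 200], [10, 200, 10]]

print(hamming(average_hash(catalog_item), average_hash(snapshot)))   # 0 (match)
print(hamming(average_hash(catalog_item), average_hash(unrelated)))  # 3 (different)
```

Two photos of the same object land a short Hamming distance apart, so looking something up becomes a nearest-fingerprint search against the database.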

For better or for worse, the designs of wearable consumer electronics have come to reflect a new understanding in the past few years. Basically, they have come to be extensions of our senses, much as Marshall McLuhan wrote in his 1964 book Understanding Media: The Extensions of Man. Google Glass is representative of this revolutionary change, a step in the direction of users interacting with the environment around them through technology.

Leading tech companies are already investing time and money into the development of their own AR products, and countless patents and research allocations are being made with every passing year. Facebook's acquisition of the virtual reality company Oculus VR is the most recent example, but even Samsung received a patent earlier this year for a camera-based augmented reality keyboard that is projected onto the fingers of the user.

Augmented reality has already proven itself to be a serious industry – with 60 million users and around half a billion dollars in global revenues in 2013 alone. It's expected to exceed $1 billion annually by 2015, and combined with a Google Glass-type device, AR could eventually allow individuals to build vast libraries of data that will be the foundation for finding any 3-D object in the physical world.

In other words, the Internet of Things will become one step closer, with an evolving database of visual information at the base of it that is becoming ever larger and (in all likelihood) smarter. Oh dear, I sense another Skynet reference coming on! And in the meantime, enjoy this video that showcases Blippar’s vision of what this future of image overlay and recognition will look like:


Sources: wired.com, dashboardinsight.com, blippar.com

Digital Eyewear Through the Ages

Given the sensation created by the recent release of Google Glass – a timely invention that calls to mind everything from '80s cyberpunk to speculations about our cybernetic, transhuman future – a lot of attention has been focused lately on personalities like Steve Mann and Mark Spitzer, and on the history of wearable computers.

For decades now, visionaries and futurists have been working towards a day when all personal computers are portable and blend seamlessly into our daily lives. And with countless imitators coming forward to develop their own variants and hate crimes being committed against users, it seems like portable/integrated machinery is destined to become an issue no one will be able to ignore.

And so I thought it was high time for a little retrospective: a look back at the history of eyewear computers and digital devices to see how far they have come. From humble beginnings with bulky backpacks and large, head-mounted displays, to the current age of small fixtures that can be worn as easily as glasses, things certainly have changed. And the future is likely to get even more fascinating, weird, and a little bit scary!

Sword of Damocles (1968):
Developed by Ivan Sutherland and his student Bob Sproull at the University of Utah in 1968, the Sword of Damocles was the world's first head-mounted display. It consisted of a headband with a pair of small cathode-ray tubes attached to the end of a large instrumented mechanical arm, through which head position and orientation were determined.

Hand positions were sensed via a hand-held grip suspended at the end of three fishing lines whose lengths were determined by the number of rotations sensed on each of the reels. Though crude by modern standards, this breakthrough technology would become the basis for all future innovation in the field of mobile computing, virtual reality, and digital eyewear applications.
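Recovering a 3-D position from three line lengths is a classic trilateration problem: each measured length is a distance from a known reel anchor, and the three spheres those distances define intersect at the grip. A sketch of the math, with reel positions invented purely for illustration:

```python
import math

def _sub(a, b): return tuple(x - y for x, y in zip(a, b))
def _add(a, b): return tuple(x + y for x, y in zip(a, b))
def _scale(a, s): return tuple(x * s for x in a)
def _dot(a, b): return sum(x * y for x, y in zip(a, b))
def _norm(a): return math.sqrt(_dot(a, a))
def _unit(a): return _scale(a, 1.0 / _norm(a))
def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Position of a point at distances r1, r2, r3 from anchors p1, p2, p3,
    taking the solution below the anchor plane (the grip hangs down)."""
    ex = _unit(_sub(p2, p1))                      # frame: p1 at origin, p2 on x-axis
    i = _dot(ex, _sub(p3, p1))
    ey = _unit(_sub(_sub(p3, p1), _scale(ex, i)))
    ez = _cross(ex, ey)
    d = _norm(_sub(p2, p1))
    j = _dot(ey, _sub(p3, p1))
    x = (r1*r1 - r2*r2 + d*d) / (2*d)
    y = (r1*r1 - r3*r3 + i*i + j*j) / (2*j) - (i/j) * x
    z = -math.sqrt(max(0.0, r1*r1 - x*x - y*y))
    return _add(p1, _add(_scale(ex, x), _add(_scale(ey, y), _scale(ez, z))))

# Three reels mounted overhead (coordinates in meters, assumed layout)
reels = ((0.0, 0.0, 2.0), (1.0, 0.0, 2.0), (0.0, 1.0, 2.0))
grip = (0.4, 0.3, 1.0)  # "true" position, used here to fake the line lengths
lengths = [_norm(_sub(grip, r)) for r in reels]
print(trilaterate(*reels, *lengths))  # recovers ~(0.4, 0.3, 1.0)
```

The Sword of Damocles did this with reel rotation counts instead of floating-point numbers, but the geometry is the same.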

WearComp Models (1980-84):
Built by Steve Mann (inventor of the EyeTap and considered to be the father of wearable computers) in 1980, the WearComp1 cobbled together many devices to create visual experiences. It included an antenna to communicate wirelessly and share video. In 1981, he designed and built a backpack-mounted wearable multimedia computer with text, graphics, and multimedia capability, as well as video capability.

By 1984, the same year that Apple's Macintosh first shipped and William Gibson's science fiction novel Neuromancer was published, he released the WearComp4 model. This latest version employed clothing-based signal processing, a personal imaging system with a left-eye display, and separate antennas for simultaneous voice, video, and data communication.

Private Eye (1989):
In 1989, Reflection Technology marketed the Private Eye head-mounted display, which scanned a vertical array of LEDs across the visual field using a vibrating mirror. The monochrome screen was 1.25 inches on the diagonal, but images appeared to be the size of a 15-inch display viewed from 18 inches away.

EyeTap Digital Eye (1998):
Steve Mann is considered the father of digital eyewear and what he calls "mediated" reality. He is a professor in the department of electrical and computer engineering at the University of Toronto and an IEEE senior member, and also serves as chief scientist for the augmented reality startup Meta. The first version of the EyeTap was produced in the 1970s and was incredibly bulky by modern standards.

By 1998, he had developed the version that is commonly seen today, mounted over one ear and in front of one side of the face. This version is worn in front of the eye, recording what is immediately in front of the viewer and superimposing the view as digital imagery. It uses a beam splitter to send the same scene to both the eye and a camera, and is tethered to a computer worn on the body in a small pack.

MicroOptical TASK-9 (2000):
Founded in 1995 by Mark Spitzer, who is now a director at the Google X lab, MicroOptical produced several patented designs, which were bought up by Google after the company closed in 2010. One such design was the TASK-9, a wearable computer that attaches to a set of glasses. Years later, MicroOptical's line of viewers remains among the lightest head-up displays available on the market.

Vuzix (1997-2013):
Founded in 1997, Vuzix created the first video eyewear to support stereoscopic 3D for the PlayStation 3 and Xbox 360. Since then, Vuzix has gone on to create the first commercially produced pass-through augmented reality headset, the Wrap 920AR (seen at bottom). The Wrap 920AR has two VGA video displays and two cameras that work together to provide the user with a view of the world that blends real-world input and computer-generated data.

Other products of note include the Wrap 1200VR, a virtual reality headset with numerous applications – everything from gaming and recreation to medical research – and the Smart Glasses M100, a hands-free display for smartphones. And since the Consumer Electronics Show of 2011, they have announced and released several heads-up AR displays that attach to glasses.


MyVu (2008-2012):
Founded in 1995, also by Mark Spitzer, MyVu developed several different types of wearable video display glasses before closing in 2012. The most famous was the Myvu Personal Media Viewer (pictured below), a set of display glasses released in 2008. These became instantly popular with the wearable computer community because they provided a cost-effective and relatively easy path to a DIY, small, single-eye, head-mounted display.

In 2010, the company followed up with the release of the Viscom digital eyewear (seen below), a device developed in collaboration with Spitzer's other company, MicroOptical. This smaller head-mounted display comes with earphones and is worn over one eye like a pair of glasses, similar to the EyeTap.


Meta Prototype (2013):
Developed by Meta, a Silicon Valley startup funded with the help of a Kickstarter campaign and supported by Steve Mann, this wearable computing eyewear utilizes the latest in VR and projection technology. Unlike other display glasses, Meta's eyewear enters 3D space and uses your hands to interact with the virtual world, combining the benefits of the Oculus Rift with those offered by "Sixth Sense" technology.

The Meta system includes stereoscopic 3D glasses and a 3D camera to track hand movements, similar to the portrayals of gestural control in movies like "Iron Man" and "Avatar." In addition to display modules embedded in the lenses, the glasses include a portable projector mounted on top. This way, the user is able to both project and interact with computer simulations.

Google Glass (2013):
Developed by Google X as part of their Project Glass, the Google Glass device is a wearable computer with an optical head-mounted display (OHMD) that incorporates all the major advances made in the field of wearable computing over the past forty years. These include a smartphone-like hands-free format, wireless internet connection, voice commands and a full-color augmented-reality display.

Development began in 2011 and the first prototypes were previewed to the public at the Google I/O annual conference in San Francisco in June of 2012. Though they currently do not come with fixed lenses, Google has announced its intention to partner with sunglass retailers to equip them with regular and prescription lenses. There is also talk of developing contact lenses that come with embedded display devices.

Summary:
Well, that's the history of digital eyewear in a nutshell. And as you can see, since the late '60s, the field has progressed by leaps and bounds. What was once a speculative and visionary pursuit has now blossomed into a fully-fledged commercial field, with many different devices being produced for public consumption.

At this rate, who knows what the future holds? In all likelihood, the quest to make computers more portable and ergonomic will keep pace with the development of more sophisticated electronics and computer chips, miniaturization, biotechnology, nanofabrication and brain-computer interfacing.

The result will no doubt be tiny CPUs that can be implanted in the human body and integrated into our brains via neural chips and tiny electrodes. In all likelihood, we won’t even need voice commands at that point, because neuroscience will have developed a means to communicate directly to our devices via brainwaves. The age of cybernetics will have officially dawned!

Like I said… fascinating, weird, and a little bit scary!


IFA 2013!

There is certainly no shortage of electronics shows happening this year! It seems that I just finished getting through all the highlights from Touch Taiwan, which happened back in August. And then September comes around and I start hearing all about IFA 2013. For those unfamiliar with this consumer electronics exhibition, IFA stands for Internationale Funkausstellung Berlin, which loosely translated means the Berlin Radio Show.

As you can tell from the name, this annual exhibit has some deep roots. Beginning in 1924, the show was intended to give electronics producers the chance to present their latest products and developments to the general public, as well as to showcase the latest in technology. From radios and cathode-ray display boxes (i.e. televisions) to personal computers and PDAs, the show has come a long way, and this year's promised to be a doozy as well.

Of all those who presented this year, Sony seems to have made the biggest impact. In fact, they very nearly stole the show with the presentation of their new smartphones, cameras, and tablets. But it was their new Xperia Z1 smartphone that really garnered attention, given all the fanfare that preceded it. Check out the video by TechRadar:


However, their new Vaio Tap 11 tablet also got quite a bit of fanfare. In addition to a Haswell chip (Core i3, i5 or i7), a six-hour battery, full Windows connectivity, a camera, a stand, 128GB to 512GB of solid-state storage, and a wireless keyboard, the tablet has Near Field Communication (NFC), which comes standard on smartphones these days.

This technology allows the tablet to communicate with other devices and enables data transfer simply by touching them together or bringing them into close proximity. The wireless keyboard also attaches to the device via a battery port, which allows for constant charging, and the entire thing comes in a very thin package. Check out the video by Engadget:


Then there was the Samsung Galaxy Gear smartwatch, an exhibit which was equally anticipated and proved to be quite entertaining. Initially, the company had announced that their new smartwatch would incorporate flexible technology, which proved not to be the case. Instead, they chose to release a watch comparable to Apple's rumored smartwatch design.

But as you can see, the end result is still pretty impressive. In addition to telling time, it also has many smartphone-like options, like being able to take pictures, record and play videos, and link to your other devices via Bluetooth. And of course, you can also phone, text, instant message and download all kinds of apps. Check out the hands-on video below:


Toshiba also made a big splash with their exhibit featuring an expanded line of tablets, notebooks and hybrids, as well as Ultra High-Definition TVs. Of note was their M9 design, a next-generation concept that merges the latest in display and networking technology – i.e. the ability to connect to the internet or your laptop, allowing you to stream video, display pictures, and play games on a big ass display!

Check out the video, and my apologies for the fact that this and the next one are in German. There were no English translations:


And then there was their Cloud TV presentation, a form of "smart TV" that merges the best of a laptop with that of a television. Basically, this means that a person can watch video-on-demand, use social utilities, network, and save their files via cloud storage, all from their couch using a handheld remote. It's like watching TV, but with all the perks of a laptop computer – one that also has a very big screen!


And then there was the HP Envy Recline, an all-in-one PC with a hinge that allows the massive touchscreen to pivot over the edge of a desk and into the user's lap. Clearly, ergonomics and adaptability were what inspired this idea, and many could not tell if it was a brilliant idea or the most enabling invention since the La-Z-Boy recliner. Still, you have to admit, it looks pretty cool:


Lenovo and Acer also attracted showgoers with their new lineups of smartphones, tablets, and notebooks. And countless more came to show off the latest in their wares and pimp out their own versions of the latest and greatest developments. The show ran from September 6th to 11th, and countless videos, articles, and testimonials are still making it to the fore.

For many of the products, release dates are still pending. But all those who attended managed to come away with the understanding that when it comes to computing, networking, gaming, mobile communications, and just plain lazing, the technology is moving by leaps and bounds. Soon enough, we are likely to have flexible technology available in all smart devices, and not just in the displays.

Nanofabricated materials are also likely to create cases capable of morphing and changing shape, going from a smartwatch to a smartphone to a smart tablet. For more on that, check out this video from Epic Technology, which showcases the most anticipated gadgets for 2014. These include transparent devices, robots, curved OLED TVs, next-generation smartphones, the PS4, the Oculus Rift, and of course, Google Glass.

I think you’ll agree, next year’s gadgets are even more impressive than this year’s gadgets. Man, the future is moving fast!


Sources:
b2b.ifa-berlin.com, technologyguide.com, telegraph.co.uk, techradar.com

News From Space: Walk on Mars with VR

Virtual Reality, which was once the stuff of a cyberpunk wet dream, has grown somewhat stagnant in recent years. Large, bulky headsets, heavy cables, and graphics that were low-definition and two-dimensional just didn't seem to capture the essence of the concept. However, thanks to the Oculus Rift, the technology known as Virtual Reality has been getting a new lease on life.

Though it is still in the development phase, the makers of the Oculus Rift have mounted some impressive demos. Though still somewhat limited – using it with a mouse is counterintuitive, and using it with a keyboard prevents using your body to scan virtual environments – the potential is certainly there, and the only question at this point is how to expand on it and give users the ability to do more.

One group that is determined to explore its uses is NASA, which used it in combination with an Omni treadmill to simulate walking on Mars. Already, the combination of these two technologies has allowed gamers to do some pretty impressive things: moving through and interacting with an immersive environment (mainly shooting and blowing things up), which is what VR is meant to allow.

NASA’s Jet Propulsion Laboratory, however, went a step beyond this by combining the Omni and a stereoscopic 360-degree panorama of Mars to create a walking-on-Mars simulator. The NASA JPL team was able to give depth to the image so users could walk around an image of the Martian landscape. This is perhaps the closest normal folks will ever get to walking around on a “real” alien planet.
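The depth part is the key trick: capture the scene from two vantage points a known distance apart, and a feature's horizontal shift between the two images (its disparity) reveals how far away it is. A minimal sketch of the relationship, with camera numbers that are illustrative rather than JPL's actual rover calibration:

```python
def depth_from_disparity(disparity_px, focal_px=1000.0, baseline_m=0.3):
    """Distance (meters) to a point whose images lie disparity_px apart
    in a rectified stereo pair: depth = focal * baseline / disparity."""
    if disparity_px <= 0:
        return float("inf")  # zero disparity means the point is at infinity
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(30.0))  # 10.0 m: a nearby boulder shifts a lot
print(depth_from_disparity(3.0))   # 100.0 m: distant rocks barely shift
```

Run over every pixel of a stereoscopic panorama, this yields a depth map, which is what lets the simulator place the Martian terrain around you instead of painting it on a flat backdrop.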

Along with the Martian terrain, JPL created a demo wherein the user could wander around the International Space Station. The JPL team also found that, for all the sophisticated imagery beamed back to Earth, it is no substitute for being immersed in an environment. Using a rig similar to the Rift and Omni could help researchers better orient themselves with alien terrain, and thus better plan missions and experiments.

Looking to the long run, this kind of technology could be a means of creating "telexploration" (or Immersive Space Exploration) – a process where astronauts would be able to explore alien environments by connecting to a rover's or satellite's camera feed and controlling its movements. In a way that is similar to teleconferencing, people would be able to conduct true research on an alien environment while feeling like they were actually there.

Already, scientists at the Mars Science Laboratory have been doing just that with Curiosity and Opportunity, but the potential to bring this immersive experience to others is something many NASA and other space scientists want to see in the near future. What's more, it is a cheap alternative to actually sending manned missions to other planets and star systems.

By simply beaming images back and allowing users to remotely control the robotic platform that is sending them, the best of both worlds can be had at a fraction of the cost. What's more, it will allow people other than astronauts to witness and feel involved in the process of exploration, something that social media and live broadcasts from space are already allowing.

As usual, it seems that the age of open and democratic space travel is on its way, my friends. And as usual, there’s a video clip of the Oculus Rift and the Omni treadmill bringing a walk on Mars to life. Check it out:


Sources:
extremetech.com, engadget.com