Top Stories from CES 2014

The Consumer Electronics Show has been in full swing for two days now, and already the top spots for the most impressive technology of the year have been selected. Granted, opinion is divided, and there are many top contenders, but between displays, gaming, smartphones, and personal devices, there’s been no shortage of technologies to choose from.

And having sifted through some news stories from the front lines, I have decided to compile a list of what I think the most impressive gadgets, displays and devices of this year’s show were. And as usual, they range from the innovative and creative, to the cool and futuristic, with some quirky and fun things holding up the middle. And here they are, in alphabetical order:

As an astronomy enthusiast, and someone who enjoys hearing about new and innovative technologies, Celestron’s Cosmos 90GT WiFi Telescope was quite the story. Hoping to make astronomy more accessible to the masses, this new telescope is the first that can be controlled by an app over WiFi. Once paired, the system guides stargazers through the cosmos as directions flow from the app to the motorized scope base.

In terms of computing, Lenovo chose to breathe some new life into the oft-declared dying industry of desktop PCs this year, thanks to the unveiling of their Horizon 2. Its 27-inch touchscreen can go fully horizontal, becoming both a gaming and media table. The large touch display has a novel pairing technique that lets you drop multiple smartphones directly onto the screen, as well as group, share, and edit photos from them.

Next up is the latest set of display glasses to take the world by storm, courtesy of the Epson Smart Glass project. Ever since Google Glass was unveiled in 2012, other electronics and IT companies have been racing to produce a similar product, one that can make heads-up display tech, WiFi connectivity, internet browsing, and augmented reality portable and wearable.

Epson was already moving in that direction back in 2011 when they released their BT100 augmented reality glasses. And now, with their Moverio BT-200, they’ve clearly stepped up their game. In addition to being 60 percent lighter than the previous generation, the system comes in two parts – a pair of glasses and a control unit.

The glasses feature a tiny LCD-based projection lens system and optical light guide, which project digital content onto a transparent virtual display (960 x 540 resolution). They also include a camera for video and still capture, as well as AR marker detection. With the incorporation of third-party software, and taking advantage of the internal gyroscope and compass, a user can even create 360 degree panoramic environments.

At the other end, the handheld controller runs on Android 4.0, has a textured touchpad control surface, built-in Wi-Fi connectivity for video content streaming, and up to six hours of battery life.


The BT-200 smart glasses are currently being demonstrated at Epson’s CES booth, where visitors can experience a table-top virtual fighting game with AR characters, a medical imaging system that allows wearers to see through a person’s skin, and an AR assistance app to help perform unfamiliar tasks.

This year’s CES also featured a ridiculous number of curved screens. Samsung seemed particularly proud of its garish, curved LCD TVs, and even booked headliners like Mark Cuban and Michael Bay to promote them. In the latter case, this didn’t go so well. However, one curved-screen device actually seemed appropriate – the LG G Flex 6-inch smartphone.

When it comes to massive curved screens, only one person can benefit from the sweet spot of the display – that focal point in the center where they feel enveloped. But in the case of the LG G Flex, the subtle bend in the screen allows for less light intrusion from the sides, and it distorts your own reflection just enough to obscure any distracting glare. Granted, it’s not exactly the flexible tech I was hoping to see, but it’s something!

In the world of gaming, two contributions made a rather big splash this year. The first was PlayStation Now, a game streaming service just unveiled by Sony that lets gamers instantly play their games on a PS3, PS4, or PS Vita without downloading them, and always in the most updated version. Plus, it gives users the ability to rent titles they’re interested in, rather than buying the full copy.

Then there was the Maingear Spark, a gaming desktop designed to run Valve’s gaming-centric SteamOS (and Windows) that measures just five inches square and weighs less than a pound. This is a big boon for gamers who usually have to deal with gaming desktops that are bulky, heavy, and don’t fit well on an entertainment stand next to other gaming devices, an HD box, and anything else you might have there.

Next up, there is a device that helps consumers navigate the world of iris identification, which is becoming all the rage. It’s known as the Myris Eyelock, a simple, straightforward gadget that takes a quick video of your eyeball, has you log in to your various accounts, and then automatically signs you in, without you ever having to type in your password.

So basically, you can utilize this new biometric ID system by carrying your iris scan on your person wherever you go. Then, rather than having to remember multiple (and no doubt complicated) passwords in an age when identity theft is becoming increasingly problematic, you can rely on a marker that leaves no doubt as to your identity. And at less than $300, it’s an affordable option, too.
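The exact matching algorithm Myris uses isn’t public, but iris-recognition systems generally work on “iris codes” – binary feature vectors extracted from an eye video – compared by Hamming distance, since two scans of the same eye are never bit-identical. Here is a minimal, purely illustrative sketch of that idea; all names and the threshold value are hypothetical.

```python
# Illustrative sketch only: compare binary iris codes by fractional Hamming
# distance, and release stored credentials when the live scan is close enough
# to the enrolled one. Real codes are ~2048 bits; these toy codes are 16 bits.

def hamming_distance(code_a, code_b):
    """Fraction of bits that differ between two equal-length iris codes."""
    assert len(code_a) == len(code_b)
    diff = sum(1 for a, b in zip(code_a, code_b) if a != b)
    return diff / len(code_a)

def authenticate(live_code, enrolled_code, credentials, threshold=0.32):
    """Return the stored credentials if the live scan matches the enrolled one."""
    if hamming_distance(live_code, enrolled_code) <= threshold:
        return credentials          # match: auto-fill the user's saved logins
    return None                     # reject: too many bits differ

enrolled = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
live_ok  = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # 1 bit off: same eye
impostor = [0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0]  # every bit off

print(authenticate(live_ok, enrolled, {"email": "p4ssw0rd"}))   # credentials returned
print(authenticate(impostor, enrolled, {"email": "p4ssw0rd"}))  # prints None
```

The threshold exists because biometric matching is inherently fuzzy: the system trades off false accepts against false rejects rather than demanding an exact match.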

And what would an electronics show be without a little drone technology? The Parrot MiniDrone was this year’s crowd pleaser: a palm-sized, camera-equipped, remotely piloted quad-rotor. However, this model has the added feature of two six-inch wheels, which allow it to zip across floors, climb walls, and even move across ceilings! A truly versatile personal drone.

 

Another very interesting display this year was the Scanadu Scout, the world’s first real-life tricorder. First unveiled back in May of 2013, the Scout represents the culmination of years of work by the NASA Ames Research Center to produce the world’s first non-invasive medical scanner. And this year, they chose to showcase it at CES and let people test it out on themselves and each other.

All told, the Scanadu Scout can measure a person’s vital signs – including heart rate, blood pressure, and temperature – without ever touching them. All that’s needed is to place the scanner above your skin, wait a moment, and voila! Instant vitals. The sensor will begin a pilot program with 10,000 users this spring, the first key step toward FDA approval.

And of course, no CES would be complete without a toy robot or two. This year, it was the WowWee MiP (Mobile Inverted Pendulum) that put on a big show. Basically, it is an eight-inch bot that balances itself on dual wheels (like a Segway) and can be controlled by hand gestures or a Bluetooth-connected phone, or roll around autonomously.

Its sensitivity to commands and its ability to balance while zooming across the floor are super impressive. While on display, many were shown carrying trays around (sometimes with another MiP on the tray). And, a real crowd pleaser, the MiP can even dance. Always got to throw in something for the retro ’80s crowd, the people who grew up with the SICO robot, Jinx, and other friendly automatons!

But perhaps most impressive of all, at least in my humble opinion, was the display of the prototype iOptik AR contact lens. While most of the attention on high-tech eyewear has been focused on wearables like Google Glass of late, other developers have been steadily working towards display devices that are small enough to wear over your pupil.

Developed by the Washington-based company Innovega with support from DARPA, the iOptik is a heads-up display built into a set of contact lenses. And this year, the first fully-functioning prototypes are being showcased at CES. Acting as a micro-display, the glasses project a picture onto the contact lens, which works as a filter to separate the real-world view from the digital environment and then interlaces them into one image.

Embedded in the contact lenses are micro-components that enable the user to focus on near-eye images. Light projected by the display (built into a set of glasses) passes through the center of the pupil and then works with the eye’s regular optics to focus the display on the retina, while light from the real-life environment reaches the retina via an outer filter.

This creates two separate images on the retina, which are then superimposed to create one integrated image – or augmented reality. It also offers an alternative to traditional near-eye displays, which create the illusion of an object in the distance so as not to hinder regular vision. At present, the device still requires clearance from the FDA before it becomes commercially available, which may come in late 2014 or early 2015.


Well, it’s certainly been an interesting year, once again, in the world of electronics, robotics, personal devices, and wearable technology. And it manages to capture the pace of change that is increasingly coming to characterize our lives. According to the tech site Mashable, this year’s show was characterized by televisions with 4K resolution, wearables, biometrics, the internet of personalized and data-driven things, and of course, 3-D printing and imaging.

And as always, there were plenty of videos showcasing tons of interesting concepts and devices that were featured this year. Here are a few that I managed to find and thought were worthy of passing on:

Internet of Things Highlights:


Motion Tech Highlights:


Wearable Tech Highlights:


Sources: popsci.com, (2), cesweb, mashable, (2), gizmag, (2), news.cnet

Digital Eyewear Through the Ages

Given the sensation created by the recent release of Google Glass – a timely invention that calls to mind everything from ’80s cyberpunk to speculations about our cybernetic, transhuman future – a lot of attention has been focused lately on personalities like Steve Mann and Mark Spitzer, and on the history of wearable computers.

For decades now, visionaries and futurists have been working towards a day when all personal computers are portable and blend seamlessly into our daily lives. And with countless imitators coming forward to develop their own variants and hate crimes being committed against users, it seems like portable/integrated machinery is destined to become an issue no one will be able to ignore.

And so I thought it was high time for a little retrospective – a look back at the history of eyewear computers and digital devices to see how far they have come. From humble beginnings with bulky backpacks and large, head-mounted displays, to the current age of small fixtures that can be worn as easily as glasses, things certainly have changed. And the future is likely to get even more fascinating, weird, and a little bit scary!

Sword of Damocles (1968):
Developed by Ivan Sutherland and his student Bob Sproull at the University of Utah in 1968, the Sword of Damocles was the world’s first head-mounted display system. It consisted of a headband with a pair of small cathode-ray tubes attached to the end of a large instrumented mechanical arm, through which head position and orientation were determined.

Hand positions were sensed via a hand-held grip suspended at the end of three fishing lines whose lengths were determined by the number of rotations sensed on each of the reels. Though crude by modern standards, this breakthrough technology would become the basis for all future innovation in the field of mobile computing, virtual reality, and digital eyewear applications.

WearComp Models (1980-84):
Built by Steve Mann (inventor of the EyeTap and considered to be the father of wearable computers) in 1980, the WearComp1 cobbled together many devices to create visual experiences. It included an antenna to communicate wirelessly and share video. In 1981, he designed and built a backpack-mounted wearable multimedia computer with text, graphics, and multimedia capability, as well as video capability.

By 1984 – the same year that Apple’s Macintosh first shipped and William Gibson’s science fiction novel “Neuromancer” was published – he released the WearComp4 model. This latest version employed clothing-based signal processing, a personal imaging system with a left-eye display, and separate antennas for simultaneous voice, video, and data communication.

Private Eye (1989):
In 1989, Reflection Technology marketed the Private Eye head-mounted display, which scanned a vertical array of LEDs across the visual field using a vibrating mirror. The monochrome screen was 1.25 inches on the diagonal, but images appeared equivalent to a 15-inch display viewed at a distance of 18 inches.

EyeTap Digital Eye (1998):
Steve Mann is considered the father of digital eyewear and what he calls “mediated” reality. He is a professor in the department of electrical and computer engineering at the University of Toronto and an IEEE senior member, and also serves as chief scientist for the augmented reality startup Meta. The first version of the EyeTap was produced in the 1970s and was incredibly bulky by modern standards.

By 1998, he had developed the version that is commonly seen today, mounted over one ear and in front of one side of the face. This version is worn in front of the eye, recording what is immediately in front of the viewer and superimposing the view as digital imagery. It uses a beam splitter to send the same scene to both the eye and a camera, and is tethered to a computer worn on the body in a small pack.

MicroOptical TASK-9 (2000):
Founded in 1995 by Mark Spitzer, who is now a director at the Google X lab, the company produced several patented designs, which were bought up by Google after the company closed in 2010. One such design was the TASK-9, a wearable computer attachable to a set of glasses. Years later, MicroOptical’s line of viewers remains among the lightest head-up displays available on the market.

Vuzix (1997-2013):
Founded in 1997, Vuzix created the first video eyewear to support stereoscopic 3D for the PlayStation 3 and Xbox 360. Since then, Vuzix went on to create the first commercially produced pass-through augmented reality headset, the Wrap 920AR (seen at bottom). The Wrap 920AR has two VGA video displays and two cameras that work together to provide the user a view of the world which blends real world inputs and computer generated data.

Other products of note include the Wrap 1200VR, a virtual reality headset with numerous applications – everything from gaming and recreation to medical research – and the Smart Glasses M100, a hands-free display for smartphones. And since the Consumer Electronics Show of 2011, they have announced and released several heads-up AR displays that are attachable to glasses.


MyVu (2008-2012):
Founded in 1995, also by Mark Spitzer, MyVu developed several different types of wearable video display glasses before closing in 2012. The most famous was their Myvu Personal Media Viewer (pictured below), a set of display glasses released in 2008. These became instantly popular with the wearable computer community because they provided a cost-effective and relatively easy path to a DIY, small, single-eye, head-mounted display.

In 2010, the company followed up with the release of the Viscom digital eyewear (seen below), a device developed in collaboration with Spitzer’s other company, MicroOptical. This smaller head-mounted display comes with earphones and is worn over one eye like a pair of glasses, similar to the EyeTap.


Meta Prototype (2013):
Developed by Meta, a Silicon Valley startup funded with the help of a Kickstarter campaign and supported by Steve Mann, this wearable computing eyewear utilizes the latest in VR and projection technology. Unlike other display glasses, Meta’s eyewear enters 3D space and uses your hands to interact with the virtual world, combining the benefits of the Oculus Rift with those offered by “Sixth Sense” technology.

The Meta system includes stereoscopic 3D glasses and a 3D camera to track hand movements, similar to the portrayals of gestural control in movies like “Iron Man” and “Avatar.” In addition to display modules embedded in the lenses, the glasses include a portable projector mounted on top. This way, the user is able to both project and interact with computer simulations.

Google Glass (2013):
Developed by Google X as part of their Project Glass, the Google Glass device is a wearable computer with an optical head-mounted display (OHMD) that incorporates all the major advances made in the field of wearable computing for the past forty years. These include a smartphone-like hands-free format, wireless internet connection, voice commands and a full-color augmented-reality display.

Development began in 2011 and the first prototypes were previewed to the public at the Google I/O annual conference in San Francisco in June of 2012. Though they currently do not come with fixed lenses, Google has announced its intention to partner with sunglass retailers to equip them with regular and prescription lenses. There is also talk of developing contact lenses that come with embedded display devices.

Summary:
Well, that’s the history of digital eyewear in a nutshell. And as you can see, since the late 60’s, the field has progressed by leaps and bounds. What was once a speculative and visionary pursuit has now blossomed to become a fully-fledged commercial field, with many different devices being produced for public consumption.

At this rate, who knows what the future holds? In all likelihood, the quest to make computers more portable and ergonomic will keep pace with the development of more sophisticated electronics and computer chips, miniaturization, biotechnology, nanofabrication and brain-computer interfacing.

The result will no doubt be tiny CPUs that can be implanted in the human body and integrated into our brains via neural chips and tiny electrodes. In all likelihood, we won’t even need voice commands at that point, because neuroscience will have developed a means to communicate directly to our devices via brainwaves. The age of cybernetics will have officially dawned!

Like I said… fascinating, weird, and a little bit scary!


The Future is Here: Augmented Reality Storybooks

Disney has always been at the forefront of technological innovation whenever and wherever their animation is concerned. Augmented reality has been a part of their operations for quite some time, usually in the form of displays put on at Epcot Center or their Haunted Mansion. But now, they are bringing their efforts in AR to the kind of standard storybook that you would read to your children before bedtime.

Thanks to innovations provided by the Nintendo DS, the PSP, tablets and smartphones, books have become alive and interactive in ways that were simply not possible ten or twenty years ago. However, one cannot deny that ebooks simply do not have the same kind of old world charm and magic that paperbacks do. Call it nostalgic appeal or tradition, but reading to a child from a bound tome just seems somehow more meaningful to most people.

And that’s where Disney’s HideOut project comes into play: a mobile projector used to create an augmented reality storybook. How it works is simple enough and, in a way, involves merging the best of electronic and paper media. Within the book, certain parts are printed using special infrared-absorbing ink, so that sentences and images can be tracked.

The mobile projector, in turn, uses a built-in camera to sense the ink, then projects digital images onto the page’s surface that are animated to interact with the markers. In this way, it knows to show certain images when parts of the book call for them to be displayed, and can turn normal pictures into 3D animated segments.
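Disney hasn’t published HideOut’s actual tracking pipeline, but the sensing step described above can be illustrated with a toy sketch: since the special ink absorbs infrared, it shows up as dark pixels in the camera’s IR frame, and the centroid of that dark region tells the projector where on the page to anchor its animation. Everything here (function names, threshold, frame) is hypothetical.

```python
# Illustrative sketch: threshold a toy IR camera frame and locate the
# centroid of the dark (ink-absorbing) region, i.e. the tracked marker.

def find_marker(ir_frame, threshold=50):
    """Return the (row, col) centroid of pixels darker than `threshold`,
    or None if no marker ink is visible in the frame."""
    dark = [(r, c)
            for r, row in enumerate(ir_frame)
            for c, value in enumerate(row)
            if value < threshold]
    if not dark:
        return None
    mean_row = sum(r for r, _ in dark) / len(dark)
    mean_col = sum(c for _, c in dark) / len(dark)
    return (mean_row, mean_col)

# Toy 4x6 IR frame: 255 = bright paper, 0 = infrared-absorbing ink.
frame = [
    [255, 255, 255, 255, 255, 255],
    [255,   0,   0, 255, 255, 255],
    [255,   0,   0, 255, 255, 255],
    [255, 255, 255, 255, 255, 255],
]
print(find_marker(frame))  # centroid of the ink blob: (1.5, 1.5)
```

A real system would run this per frame, distinguish multiple marker shapes, and warp the projected animation to the page’s pose rather than a single point, but the core idea is the same.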

And storybooks aren’t the only application being investigated by Disney. They have also been experimenting with game concepts, where a user moves a mobile projector around a board, causing a character to avoid enemies. In another scenario, a character projected onto a surface interacts with tangible objects placed around it. This would not only be entertaining to a child, but could be educational as well.

The applications also extend to the world of work, as the demo below shows. In this case, HideOut projects a file system onto the top of a desk, allowing the user to choose folders by aiming the projector, not unlike how a person selects channels or options using a Wii remote by aiming it at a sensor bar. And the technology could even be used on smartphones and mobile devices, allowing people to interact with their phone, Facetime, or Skype on larger surfaces.

And of course, Disney is not the only company developing this kind of AR interactive technology, nor are they the first. Products like ColAR, an app that brings your coloring book images to life, and Eye of Judgment, an early PS3 game that scanned CCG cards and animated the characters on-screen, are already on the market. And while there does not appear to be a release date for Disney’s HideOut device just yet, it’s likely to be making the rounds within a few years tops.

For anyone familiar with the world of Augmented Reality and computing, this is likely to call to mind what Pranav Mistry demonstrated with his Sixth Sense technology, something which is being adopted by numerous developers for mobile computing. Since he first unveiled his concept back in 2009, the technology has been improving and the potential for commercial applications has been keeping pace.

In just a few years time, every storybook is likely to come equipped with its own projector. And I wouldn’t be surprised if it quickly becomes the norm to see people out on the streets interacting with images and worlds that only they can see. And those of us who are old enough will think back to a time when only crazy people did this!

In the meantime, check out this demo of the Disney’s HideOut device in action:


Source: extremetech.com

The Future of the Classroom

As an educator, I find that technological innovation is a subject that comes up quite often. Not only are teachers expected to keep up with trends so they can adapt them into their teaching strategies and classrooms and prepare children to use them, they are also forced to contend with how these trends are changing the very nature of education itself. If there was one thing we were told repeatedly in Teacher’s College, it was that times are changing, and we must change along with them.

And as history has repeatedly taught us, technological integration not only changes the way we do things, but the way we perceive things. As we come to be more and more dependent on digital devices, electronics, and wireless communications to give us instant access to a staggering amount of information, we have to be concerned with how this will affect and even erode traditional means of information transmission. After all, how can readings and lecture series be expected to keep kids’ attention when they are accustomed to lightning-fast videos, flash media, and games?


And let’s not forget the seminal infographic, “Envisioning the future of educational technology”, by Envisioning Technology. One of many think tanks dedicated to predicting tech trends, they foresee that, in time, education will no longer require the classroom, or perhaps even teachers, because modern communications have made the locale and the leader virtually obsolete.

Pointing to such trends as Massive Open Online Courses, several forecasters foresee a grand transformation in the not too distant future where all learning happens online and in virtual environments. These would be based around “microlearning”: moments where people access the desired information through any number of means (e.g. a Google search) and educate themselves without the need for instruction or direction.

The technical term for this future trend is “socialstructured learning”: an aggregation of microlearning experiences drawn from a rich ecology of content and driven not by grades but by social and intrinsic rewards. This trend may very well be the future, but the foundations of this kind of education lie far in the past. Leading philosophers of education – from Socrates to Plutarch, Rousseau to Dewey – talked about many of these ideals centuries ago. The only difference is that today, we have a host of tools to make their vision a reality.

One such tool comes in the form of augmented reality displays, which are becoming more and more common thanks to devices like Google Glass, the EyeTap or the Yelp Monocle. Simply point at a location, and you are able to obtain information you want about various “points of interest”. Imagine then if you could do the same thing, but instead receive historic, artistic, demographic, environmental, architectural, and other kinds of information embedded in the real world?

This is the reasoning behind projects like HyperCities, a project from USC and UCLA that layers historical information on actual city terrain. As you walk around with your cell phone, you can point at a site and see what it looked like a century ago, who lived there, and what the environment was like. The Smithsonian also has a free app called Leafsnap, which allows people to identify specific tree species simply by snapping photos of their leaves.

In many respects, it reminds me of the impact these sorts of developments are having on politics and industry as well. Consider how quickly blogging and open source information has been supplanting traditional media – like print news, tv and news radio. Not only are these traditional sources unable to supply up-to-the-minute information compared to Twitter, Facebook, and live video streams, they are subject to censorship and regulations the others are not.

In terms of industry, programs like Kickstarter and Indiegogo – crowdsourcing, crowdfunding, and internet-based marketing – are making it possible to sponsor and fund research and development initiatives that would not have been possible a few years ago. Because of this, the traditional gatekeepers, a.k.a. corporate sponsors, are no longer required to dictate the pace and advancement of commercial development.

In short, we are entering into a world that is becoming far more open, democratic, and chaotic. Many people fear that someone new will step into this environment to act as “Big Brother”, or that the pace of change and the nature of these developments will somehow give certain monolithic entities complete control over our lives. Personally, I think this is an outmoded fear, and that the real threat comes from the chaos that such open control and sourcing could lead to.

Is humanity ready for democratic anarchy – aka. Demarchy (a subject I am semi-obsessed with)? Do we even have the means to behave ourselves in such a free social arrangement? Opinion varies, and history is not the best indication. Not only is it loaded with examples of bad behavior, previous generations didn’t exactly have the same means we currently do. So basically, we’re flying blind… Spooky!

Sources: fastcoexist.com, envisioningtech.com

Robots Meet the Fashion Industry

Robotics has come a long way in recent years. Why, just take a look at NASA’s X1 robotic exoskeleton, the Robonaut, robotaxis and podcars, the mind-controlled EMT robot suit, Stompy the giant robot, Kenshiro and Roboy, and the 3D printed android. I suppose it was only a matter of time before the world of fashion looked at this burgeoning marketplace and said “me too!”

And here are just some of the first attempts to merge the two worlds. First up, there’s the robot mannequin, a means of making window shopping more fun for consumers. Known as the MarionetteBot, this automaton has already made several appearances in shops in Japan and can be expected to make debut appearances across Asia, North America, and the EU soon enough!

Check out the video below to see the robot in action. Designed for the Japanese fashion company United Arrows, the mannequin uses a Kinect to capture and help analyze the movements of a person while a motor moves a total of 16 wires to match the person’s pose. Though it is not yet fast or limber enough to perfectly mimic the moves of a person, the technology shows promise, and has provided many a window-shopper with plenty of entertainment!


And next up, there’s the equally impressive FitBot, a shape-shifting mannequin that is capable of emulating thousands of body types. Designed by the British virtual shopping company Fits.Me, the FitBot is designed to help take some of the guesswork out of online shopping, where a good 25% of purchases are regularly returned because they were apparently the wrong size.

But with the FitBots, along with a virtual fitting room, customers will be able to see right away what the clothes will look like on them. The only downside is that you will have to know your exact measurements, because that’s what the software will use to adjust the bot’s body. Click here to visit the company’s website and see how the virtual fitting room works, and be sure to check out their video below:


What does the future hold for the fashion industry and high-tech? Well, customers are already able to see what they look like using augmented reality displays, and can get pictures thanks to tablet and mobile phone apps that present them with the image before making a purchase. Not only does it take a lot of the legwork out of the process, it’s much more sanitary as far as trying on clothes is concerned. And in a world where clothing can be printed on site, it would be downright necessary.

The "magic mirror"

But in the case of online shopping, it’s likely to take the form of a Kinect device in your computer, which scans your body and lets you know what size to get. How cool/lazy would that be? Oh, and as for those AR displays that put you in the clothes you want? They should come with a disclaimer: Objects in mirror are less attractive than they appear!

Source: en.akihabaranews.com, technabob.com

New Video Shows Google Glasses in Action

Google has recently released a teaser video designed to expand Google Glass’ potential consumer base from the tech-savvy to what it refers to as “bold, creative individuals”. While the first video of their futuristic AR specs followed a New Yorker conducting mundane tasks through the city, this new clip hosts a dizzying array of activities designed to show just how versatile the product can be.

This includes people engaged in skydiving, horseback riding, catwalking at a fashion show, and performing ballet. Quite the mixed bag! All the while, we are shown what it would look like to do these activities while wearing a set of Google glasses. The purpose here is not only to show their functionality, but to give people a taste of what an augmented world looks like.

And based on product information, videos, and still pictures from the Google Glass homepage, it also appears that these new AR glasses will take advantage of the latest in flexible technology. Much like the new breeds of smartphones and PDAs which will be making the rounds later this year, these glasses are bendable, flexible, and therefore much more survivable than conventional glasses, which probably cost just as much!

Apparently, this is all in keeping with CEO and co-founder Larry Page’s vision of a world where Google products make their users smarter. In a 2004 interview, Page shared that vision with people, saying: “Imagine your brain is being augmented by Google.” These futurist sentiments may be a step closer now, thanks to a device that can provide on-the-spot information about whatever situation or environment we find ourselves in.

One thing is for sure though. With the help of some AR specs, the middle man is effectively cut out. No longer are we required to aim our smartphones, perform image searches, or type things into a search engine (like Google!). Now we can just point, look, and wait for the glasses to identify what we are looking at and provide the requisite information.

Check out the video below:

AR Glasses Restore Sight to the Blind

As I’m sure most readers are aware, blindness comes in many forms. It’s not simply a matter of the afflicted not being able to see. In fact, there are many degrees of blindness and in most cases, depth perception is limited. But as it turns out, researchers at the University of Yamanashi in Japan have found a way to improve depth perception for the visually challenged using simple augmented reality glasses.

The process involved a pair of Wrap 920 ARs, an off-the-shelf brand of glasses that allow their wearer to interface with their PC, watch video, or surf the internet, all while staying mobile and carrying out their daily chores. The team then recorded images as seen by the wearer from the angle of both eyes, processed them with a quad-core Windows 7 machine, and then merged the images as they would appear to a healthy eye.

Essentially, the glasses perform the task of rendering a scene as it would be seen through “binocular vision” – i.e. in 3D. By taking two images, merging them together, and defining what is near and what is far by their relative resolution, they were able to free the wearer’s brain from having to do it for them. This in turn allowed the wearer to interact more freely and effectively with the test environment: a dinner table with chopsticks and food in small bowls, arguably a tricky meal to navigate!
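The Yamanashi team’s exact processing isn’t described here, but the core of recovering depth from two eye views is disparity: a nearby object shifts more between the left and right images than a distant one. This toy sketch (all names and values are illustrative, not the researchers’ code) finds that shift for a single image patch by sum-of-absolute-differences block matching along one scanline; real systems such as OpenCV’s stereo matchers do this densely over whole images.

```python
# Illustrative sketch: estimate the horizontal disparity of one patch
# between a left and right scanline. Larger disparity = closer object.

def sad(patch_a, patch_b):
    """Sum of absolute differences between two equal-length patches."""
    return sum(abs(a - b) for a, b in zip(patch_a, patch_b))

def disparity(left_row, right_row, x, block=3, max_shift=5):
    """Shift (in pixels) that best aligns the left patch at x with the
    right scanline, searching leftward shifts up to max_shift."""
    patch = left_row[x:x + block]
    best_shift, best_cost = 0, float("inf")
    for d in range(max_shift + 1):
        if x - d < 0:
            break  # shifted patch would fall off the image edge
        cost = sad(patch, right_row[x - d:x - d + block])
        if cost < best_cost:
            best_shift, best_cost = d, cost
    return best_shift

# A bright 3-pixel feature sits at x=6 in the left view and x=4 in the right:
left  = [0, 0, 0, 0, 0, 0, 9, 8, 9, 0, 0, 0]
right = [0, 0, 0, 0, 9, 8, 9, 0, 0, 0, 0, 0]
print(disparity(left, right, 6))  # prints 2: the feature is relatively near
```

Once disparities are known, depth follows from the camera geometry (depth is inversely proportional to disparity), which is exactly the cue the glasses hand back to a wearer who lacks binocular fusion.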

Naturally, the technology is still in its infancy. For one, the processed imagery has a fairly low resolution and frame rate, and it requires the glasses to be connected to a laptop. Newer tech will provide better resolution, faster frame rates, and a larger viewport. In addition, mobile computing with smartphones and tablets ought to provide a greater degree of portability, to the point where all the required technology is in the glasses themselves.

Looking ahead, it is possible that there could be a form of AR glasses specially programmed to deliver this kind of vision correction. The glasses would then act as a prosthesis, giving people with visual impairment an increased level of visual acuity and bringing them one step closer to vision recovery. And since this is also a development that will blur the lines between humans and computers even more, it’s arguably another step closer to transhumanism!

Source: Extremetech.com