The Future of Devices: The Wearable Tech Boom

The wearable computing revolution that has been taking place in recent years has drawn in developers and tech giants from all over the world. Its roots run deep, dating back to the late 60's with the Sword of Damocles concept and to the early 80's with the work of Steve Mann. But in recent years, thanks to the development of Google Glass, the case for wearable tech has moved beyond hobbyists and enthusiasts and into the mainstream.

And with display glasses now accounted for, the latest boom in development appears to be centered on smart watches and similar devices. These range from fitness trackers with just a few features to wrist-mounted versions of smartphones that boast the same constellation of functions and apps (email, phone, text, Skype, etc.). And as always, the big-name companies are coming forward with their own concepts and designs.

First, there's the much-anticipated Apple iWatch, which is still in the rumor stage. The company has been working on this project since late 2012, but has begun accelerating the process as it tries to expand its family of mobile devices to the wrist. Apple has already started work on trademarking the name in a number of countries in preparation for a late 2014 launch, perhaps in October, with the device entering mass production in July.

And though it’s not yet clear what the device will look like, several mockups and proposals have been leaked. And recent reports from sources like Reuters and The Wall Street Journal have pointed towards multiple screen sizes and price points, suggesting an array of different band and face options in various materials to position it as a fashion accessory. It is also expected to include a durable sapphire crystal display, produced in collaboration with Apple partner GT Advanced.

While the iWatch will perform some tasks independently using the new iOS 8 platform, it will be dependent on a compatible iOS device for functions like receiving messages, voice calls, and notifications. It is also expected to feature wireless charging capabilities, advanced mapping abilities, and possibly near-field communication (NFC) integration. An added bonus, as indicated by Apple's recent filing for patents associated with their "Health" app, is the inclusion of biometric and health sensors.

Along with serving as a companion device to the iPhone and iPad, the iWatch will be able to measure multiple different health-related metrics. Consistent with the features of a fitness band, these will include things like a pedometer, calories burned, sleep quality, heart rate, and more. The iWatch is said to include 10 different sensors to track health and fitness, providing an overall picture of health and making the health-tracking experience more accessible to the general public.

Apple has reportedly designed iOS 8 with the iWatch in mind, and the two are said to be heavily reliant on one another. The iWatch will likely take advantage of the "Health" app introduced with iOS 8, which may display all of the health-related information gathered by the watch. Currently, Apple is gearing up to begin mass production of the iWatch, and has been testing the device's fitness capabilities with professional athletes such as Kobe Bryant, who will likely go on to promote the iWatch following its release.

Not to be outdone, Google launched its own smartwatch platform – known as Android Wear – at this year's I/O conference. Android Wear is the company's software platform for linking smartwatches from companies including LG, Samsung and Motorola to Android phones and tablets. A preview of Wear was introduced this spring, but the I/O conference provided more details on how it will work and made it clear that the company is investing heavily in the notion that wearables are the future.

Android Wear takes much of the functionality of Google Now – an intelligent personal assistant – and uses the smartwatch as a home for receiving notifications and context-based information. In the case of travel, for example, Android Wear will push relevant flight, weather and other information directly to the watch, where the user can tap and swipe their way through it and use embedded prompts and voice control to take further actions, like dictating a note with reminders to pack rain gear.

For the most part, Google had already revealed most of what Wear will be able to do in its preview, but its big on-stage debut at I/O was largely about getting app developers to buy into the platform and to keep designing with a peripheral wearable interface in mind. Apps can be designed to harness different Android Wear "intents." For example, the Lyft app takes advantage of the "call me a car" intent and can be set as the default means of hailing a ride when you tell your smartwatch to find you a car.
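For illustration, here is a minimal Java sketch of how an app might register to handle a Wear voice intent of this kind. The action string and extra name below are recalled from the Android Wear documentation of the period and should be treated as assumptions, not as Lyft's actual implementation; the intent filter itself would live in AndroidManifest.xml.

    // Hypothetical sketch of handling the "call me a car" voice intent on Android Wear.
    // The matching intent filter would be declared in AndroidManifest.xml, e.g.:
    //   <action android:name="com.google.android.gms.actions.RESERVE_TAXI_RESERVATION" />
    //   <category android:name="android.intent.category.DEFAULT" />
    public class HailRideActivity extends android.app.Activity {
        @Override
        protected void onCreate(android.os.Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // Destination, if spoken, would arrive as an intent extra (assumed name).
            String destination = getIntent().getStringExtra("destination");
            requestRide(destination);
            finish();
        }

        private void requestRide(String destination) {
            // Placeholder: a real app would hand this off to a service that calls its ride API.
            android.util.Log.i("HailRide", "Requesting ride to " + destination);
        }
    }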

Google officials also claimed at I/O that the same interface behind Android Wear will also power their new Android Auto and Android TV, two other integrated services that allow users to interface with their car and television via a mobile device. So don't be surprised if you see someone unlocking or starting their car by talking into their watch in the near future. The first Android Wear watches – the Samsung Gear Live and the LG G Watch – are available to pre-order, and the round-faced Motorola Moto 360 is expected to come out later this summer.

All of these steps in integration and wearable technology are signs of an emergent trend, one where just about everything – from personal devices to automobiles and even homes – is smart and networked together, thus giving rise to a world where everything is remotely accessible. This concept, otherwise known as the "Internet of Things", is expected to become the norm in the next 20 years, and will include other technologies like display contacts and mediated (aka augmented) reality.

And be sure to check out this concept video of the Apple iWatch:


Sources:
cnet.com, (2), macrumors.com, engadget.com, gizmag.com

Computex 2014

Earlier this month, Computex 2014 wrapped up in Taipei. And while this trade show may not have all the glitz and glamor of its counterpart in Vegas (aka the Consumer Electronics Show), it is still an important launch pad for new IT products slated for release during the second half of the year. Compared to other venues, the Taiwanese event is more formal, more business-oriented, and geared toward people who love to tinker with their PCs.

For instance, it’s an accessible platform for many Asian vendors who may not have the budget to head to Vegas. And in addition to being cheaper to set up booths and show off their products, it gives people a chance to look at devices that wouldn’t often be seen in the western parts of the world. The timing of the show is also perfect for some manufacturers. Held in June, the show provides a fantastic window into the second half of the year.

For example, big-name brands like Asus typically use the event to launch a wide range of products. This year, these included such items as the super-slim Asus Book Chi and the multi-mode Book V, which, like their other products, demonstrate that the company has a flair for innovation that easily rivals the big western and Korean names. In addition, Intel, a long-time stalwart at Computex, premiered its fanless reference-design tablet that runs on the Llama Mountain chipset.

And much like CES, there were plenty of cool gadgets to be seen. These included a GPS tracker that can be attached to a dog collar to track a pet's movements; a hardy new Fujitsu laptop that showcases Japanese designers' aim to make gear that is both waterproof and dustproof; the Rosewill Chic-C powerbank, which consists of 1,000mAh battery packs that attach together to provide additional power and even charge gadgets; and the Altek Cubic compact camera that fits in the palm of the hand.

And then there was the Asus wireless storage device, a gadget that looks like an air freshener but is actually wireless storage that can be paired with a smartphone using near-field communication (NFC) technology – essentially transferring info simply by bringing a device into close proximity with it. And as always, there were plenty of cameras, display headsets, mobile devices, and wearables. This last category was particularly prevalent, largely in the form of look-alikes of big-name wearables.

By and large, the devices displayed this year were variations on a similar theme: wrist-mounted fitness trackers, smartwatches, and head-mounted smartglasses. The SiMEye smartglass display, for example, was clearly inspired by Google Glass, and even bears a strong resemblance to it. Though the show admittedly favored imitation over innovation, it did showcase a major trend in the computing and tech industry.

In his keynote speech, Microsoft's Nick Parker talked about the age of ubiquitous computing, and the "devices we carry on us, as opposed to with us." What this means is that we may very well be entering a PC-less age, where computing is embedded in devices of ever-diminishing size. Eventually, it could even be miniaturized to the point where it is stitched into our clothing or accessed through contact lenses, never mind glasses or headsets!

Sources: cnet.com, (2), (3), computextaipei.com

The Internet of Things: AR and Real World Search

When it comes to the future, it is clear that the concept of the "Internet of Things" holds sway. This idea – which states that all objects will someday be identifiable thanks to virtual representations on the internet – is at the center of a great deal of the innovation that drives our modern economy. Be it wearables, wireless, augmented reality, or voice and image recognition, the technologies that help us combine the real with the virtual are on the rise.

And so it's really no surprise that innovators are looking to take augmented reality to the next level. The fruit of some of this labor is Blippar, a market-leading image-recognition and augmented reality platform. Lately, they have been working on a proof of concept for Google Glass showing that 3-D searches are doable. This sort of technology is already available in the form of apps for smartphones, but what is lacking is a central database that could turn any device into a visual search engine.

As Ambarish Mitra, the head of Blippar, stated, AR is already gaining traction among consumers thanks to some of the world's biggest industrial players recognizing the shift to visually mediated lifestyles. Examples include IKEA's interactive catalog, Heinz's AR recipe booklet and Amazon's recent integration of the Flow AR technology into its primary shopping app. As this trend continues, we will need a Wikipedia-like database for 3-D objects that will be available to us anytime, anywhere.

Social networks and platforms like Instagram, Pinterest, Snapchat and Facebook have all driven a cultural shift in the way people exchange information. This takes the form of text updates, instant messaging, and uploaded images. But as the saying goes, “a picture is worth a thousand words”. In short, information absorbed through visual learning has a marked advantage over that which is absorbed through reading and text.

In fact, a recent NYU study found that people retain close to 80 percent of the information they consume through images, versus just 10 percent of what they read. If people are able to regularly consume rich content from the real world through their devices, they could learn, retain, and express ideas and information more effectively. Naturally, there will always be situations where text-based search is the most practical tool, but many searches arise from real-world experiences.

Right now, text is the only option available, and oftentimes people are unable to adequately describe what they are looking for. But image-recognition technology that could turn any smartphone, tablet or wearable device into a scanner capable of identifying any 3-D object would vastly simplify things. Information could be absorbed in a more efficient way, using an object's features to pull up information from a rapidly learning engine.

For better or for worse, the designs of wearable consumer electronics have come to reflect a new understanding in the past few years. Basically, they have come to be extensions of our senses, much as Marshall McLuhan wrote in his 1964 book Understanding Media: The Extensions of Man. Google Glass is representative of this revolutionary change, a step in the direction of users interacting with the environment around them through technology.

Leading tech companies are already investing time and money into the development of their own AR products, and countless patents and research allocations are being made with every passing year. Facebook's acquisition of the virtual reality company Oculus VR is the most recent example, but even Samsung received a patent earlier this year for a camera-based augmented reality keyboard that is projected onto the fingers of the user.

Augmented reality has already proven itself to be a multi-million-dollar industry – with 60 million users and around half a billion dollars in global revenues in 2013 alone. It's expected to exceed $1 billion annually by 2015, and combined with a Google Glass-type device, AR could eventually allow individuals to build vast libraries of data that will be the foundation for finding any 3-D object in the physical world.

In other words, the Internet of Things will come one step closer, with an evolving database of visual information at its base that is becoming ever larger and (in all likelihood) smarter. Oh dear, I sense another Skynet reference coming on! In the meantime, enjoy this video that showcases Blippar's vision of what this future of image overlay and recognition will look like:


Source: wired.com, dashboardinsight.com, blippar.com

The Future is Here: Google Robot Cars Hit Milestone

It's no secret that among its many kooky and futuristic projects, self-driving cars are something Google hopes to make real within the next few years. Late last month, Google's fleet of autonomous automobiles reached an important milestone. After many years of testing out on the roads of California and Nevada, they logged well over one million kilometers (700,000 miles) of accident-free driving. To celebrate, Google has released a new video that demonstrates some impressive software improvements that have been made over the last two years.

Most notably, the video demonstrates how its self-driving cars can now track hundreds of objects simultaneously – including pedestrians, a cyclist signaling a turn, a stop sign held by a crossing guard, and traffic cones. This is certainly exciting news for Google and enthusiasts of automated technology, as it demonstrates the ability of the vehicles to obey the rules of the road and react to situations that are likely to emerge and require decisions to be made.

In the video, we see Google's car reacting to railroad crossings, large stationary objects, roadwork signs and cones, and cyclists. In the case of cyclists, not only can the cars discern whether a cyclist wants to move left or right, they even watch out for cyclists coming up from behind when making a right turn. And while the demo certainly makes the whole process seem easy and fluid, there is actually a considerable amount of work going on behind the scenes.

For starters, there is around $150,000 worth of equipment in each car performing real-time LIDAR and 360-degree computer vision – a complex and computationally intensive task. The software powering the whole process is also the result of years of development. Basically, every single driving situation that can possibly occur has to be anticipated and then painstakingly programmed into the software. This is an important qualifier when it comes to these "autonomous vehicles": they are not capable of independent judgement, only of following pre-programmed instructions.

While a lot has been said about the expensive LIDAR hardware, the most impressive aspect of these innovations is the computer vision. While LIDAR provides a very good idea of the lay of the land and the position of large objects (like parked cars), it doesn't help with spotting speed limits or "construction ahead" signs, or with telling whether what's ahead is a cyclist or a railroad crossing barrier. And Google has certainly demonstrated plenty of adeptness in this area in the past, what with their latest versions of Street View and their Google Glass project.

Naturally, Google says that it has lots of issues to overcome before its cars are ready to move out from their home town of Mountain View, California and begin driving people around. For instance, the road maps need to be finely tuned and expanded, and Google is likely to sell map packages in the future in the same way that apps are sold for smartphones. In the meantime, the adoption of technologies like adaptive cruise control (ACC) and lane keep assist (LKA) will bring lots of almost-self-driving cars to the road over the next few years.

In the meantime, be sure to check out the video of the driverless car in action:


Source:
extremetech.com

The Future is Here: Zombie Fitness App!

Fleeing a horde of flesh-eating zombies? There's an app for that. Seriously though, it seems that some cheeky IT developer recently created a fitness app for Google Glass that motivates runners by letting them know if the pace they are setting would be enough to flee from a pursuing zombie. But of course, that's just one option that comes with this Glass application – known as Race Yourself – which first previewed this past January.

Mainly, the app seeks to take advantage of Google Glass display technology, which allows runners to see their progress in real time without having to check a watch or other device, or rely on a series of chimes. Using the Glass heads-up display, it lets users keep track of time, distance, and calories simply by taking a quick glance at the screen. And it comes complete with some games, including running from zombies or fleeing a giant boulder (a la Raiders of the Lost Ark).
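Just to make the core mechanic concrete, here is a toy Java sketch (my own illustration, not Race Yourself's actual logic) that computes a runner's pace from distance and elapsed time and checks it against an assumed zombie speed:

    // Toy pace check: is the runner's current speed enough to outrun a pursuing zombie?
    // The zombie speed below is an assumption made purely for illustration.
    public class ZombiePace {
        static final double ZOMBIE_SPEED_M_PER_S = 2.5;

        // Pace in minutes per kilometer.
        static double paceMinPerKm(double meters, double seconds) {
            return (seconds / 60.0) / (meters / 1000.0);
        }

        static boolean outrunningZombie(double meters, double seconds) {
            return (meters / seconds) > ZOMBIE_SPEED_M_PER_S;
        }

        public static void main(String[] args) {
            double meters = 1200, seconds = 360; // 1.2 km covered in 6 minutes
            System.out.printf("Pace: %.1f min/km, escaping the horde: %b%n",
                    paceMinPerKm(meters, seconds), outrunningZombie(meters, seconds));
        }
    }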

While it is still in development, early reviews state that the app would be of use to both casual runners and those training for a big race. In addition to keeping track of your time and distance, runners are able to see how many calories they've burned and the pace they are setting. In the end, these useful stats, which can be consulted at a glance, are the real point of the app. The games (which require far more concentration) are just a fun bonus.

In addition to the zombie chase and the fleeing boulder, these include running against an Olympic athlete (100 meters in 9 seconds), racing against a train to save a woman lying on the tracks, or racing against your own speed during the last 50 meters of your run, which is where the name Race Yourself comes from. Runners using the app can expect two hours of battery life, which is more than enough for a good workout.

Richard Goodrum, COO of Race Yourself, says that the app will be launching later this year, at about the same time that Glass opens up to the general public. It will be joined by apps like Strava Cycling, which offers similar stats to cyclists. And while you're waiting, be sure to check out this video of the app in action:


Source:
fastcoexist.com

The Future is Here: Google Glass for the Battlefield

Wearing a Google Glass headset in public may get you called a "hipster", "poser", or (my personal favorite) "glasshole". But not surprisingly, armies around the world are looking to turn portable displays into a reality. Combined with powered armor and computer-assisted aiming, display glasses are part of just about every advanced nation's Future Soldier program.

Q-Warrior is one such example, the latest version of helmet-mounted display technology from BAE Systems’ Q-Sight line. The 3D heads-up display provides full-color, high resolution images and overlays data and a video stream over the soldier’s view of the real world. In short, it is designed to provide soldiers in the field with rapid, real-time “situational awareness”.

The Q-Warrior also includes enhanced night vision, waypoints and routing information, the ability to identify hostile and non-hostile forces, track personnel and assets, and coordinate small unit actions. As Paul Wright, the soldier systems business development lead at BAE Systems' Electronic Systems, said in a recent statement:

Q-Warrior increases the user’s situational awareness by providing the potential to display ‘eyes-out’ information to the user, including textual information, warnings and threats. The biggest demand, in the short term at least, will be in roles where the early adoption of situational awareness technology offers a defined advantage.

The display is being considered for use as part of the Army Tactical Assault Light Operator Suit (TALOS) system, a powered exoskeleton with liquid armor capable of stopping bullets and the ability to apply wound-sealing foam that is currently under development.

As Lt. Col. Karl Borjes, a U.S. Army Research, Development and Engineering Command (RDECOM) science adviser, said in a statement:

[The] requirement is a comprehensive family of systems in a combat armor suit where we bring together an exoskeleton with innovative armor, displays for power monitoring, health monitoring, and integrating a weapon into that — a whole bunch of stuff that RDECOM is playing heavily in.

The device is likely to be used by non-traditional military units with reconnaissance roles, such as Forward Air Controllers/Joint Tactical Aircraft Controllers (JTACS) or with Special Forces during counter terrorist tasks. The next level of adoption could be light role troops such as airborne forces or marines, where technical systems and aggression help to overcome their lighter equipment.

More and more, life in the military is beginning to imitate art – in this case, Iron Man or Starship Troopers (the novel, not the movie). In addition to powered exoskeletons and heads-up displays, concepts currently in development include battlefield robots, autonomous aircraft and ships, and even directed-energy weapons.

And of course, BAE Systems was sure to make a promotional video showcasing the concept and technology behind it. Be sure to visit the company's website for additional footage, photos and descriptions of the Q-Warrior system. Check it out below:


Sources: wired.com, baesystems.com

Top Stories from CES 2014

The Consumer Electronics Show has been in full swing for two days now, and already the top spots for the most impressive technology of the year have been selected. Granted, opinion is divided, and there are many top contenders, but between displays, gaming, smartphones, and personal devices, there's been no shortage of technologies to choose from.

And having sifted through some news stories from the front lines, I have decided to compile a list of what I think the most impressive gadgets, displays and devices of this year’s show were. And as usual, they range from the innovative and creative, to the cool and futuristic, with some quirky and fun things holding up the middle. And here they are, in alphabetical order:

As an astronomy enthusiast, and someone who enjoys hearing about new and innovative technologies, Celestron's Cosmos 90GT WiFi Telescope was quite the story for me. Hoping to make astronomy more accessible to the masses, this new telescope is the first that can be controlled by an app over WiFi. Once paired, the system guides stargazers through the cosmos as directions flow from the app to the motorized scope base.

In terms of computing, Lenovo chose to breathe some new life into the oft-declared dying industry of desktop PCs this year, thanks to the unveiling of their Horizon 2. Its 27-inch touchscreen can go fully horizontal, becoming both a gaming and media table. The large touch display has a novel pairing technique that lets you drop multiple smartphones directly onto the screen, as well as group, share, and edit photos from them.

Next up is the latest set of display glasses to take the world by storm, courtesy of the Epson Smart Glass project. Ever since Google Glass was unveiled in 2012, other electronics and IT companies have been racing to produce a similar product, one that can make heads-up display tech, WiFi connectivity, internet browsing, and augmented reality portable and wearable.

Epson was already moving in that direction back in 2011 when they released their BT100 augmented reality glasses. And now, with their Moverio BT200, they’ve clearly stepped up their game. In addition to being 60 percent lighter than the previous generation, the system has two parts – consisting of a pair of glasses and a control unit.

The glasses feature a tiny LCD-based projection lens system and optical light guide which project digital content onto a transparent virtual display (960 x 540 resolution), and have a camera for video and stills capture, or AR marker detection. With the incorporation of third-party software, and by taking advantage of the internal gyroscope and compass, a user can even create 360-degree panoramic environments.
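As a rough illustration of how gyroscope and compass readings can be stitched into a stable view direction for that kind of panorama, here is a generic complementary-filter sketch in Java. This is a common sensor-fusion technique, not Epson's actual implementation, and the blend factor is an assumption:

    // Fuse a gyroscope yaw rate (fast but drifting) with a compass heading (absolute but noisy).
    public class HeadingFilter {
        private double headingDeg = 0.0;          // current fused heading estimate
        private static final double ALPHA = 0.98; // trust the gyro short-term, the compass long-term

        public double update(double gyroYawRateDegPerSec, double compassHeadingDeg, double dtSec) {
            // Integrate the gyro rate, then nudge the result toward the compass reading.
            double gyroEstimate = headingDeg + gyroYawRateDegPerSec * dtSec;
            double error = shortestDiff(compassHeadingDeg, gyroEstimate);
            headingDeg = normalize(gyroEstimate + (1.0 - ALPHA) * error);
            return headingDeg;
        }

        // Smallest signed angle (in degrees) from 'current' to 'target'.
        private static double shortestDiff(double target, double current) {
            double d = normalize(target - current);
            return d > 180.0 ? d - 360.0 : d;
        }

        private static double normalize(double deg) {
            double d = deg % 360.0;
            return d < 0 ? d + 360.0 : d;
        }
    }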

At the other end, the handheld controller runs on Android 4.0, has a textured touchpad control surface, built-in Wi-Fi connectivity for video content streaming, and up to six hours of battery life.


The BT-200 smart glasses are currently being demonstrated at Epson's CES booth, where visitors can experience a table-top virtual fighting game with AR characters, a medical imaging system that allows wearers to see through a person's skin, and an AR assistance app to help perform unfamiliar tasks.

This year's CES also featured a ridiculous number of curved screens. Samsung seemed particularly proud of its garish, curved LCD TVs, and even booked headliners like Mark Cuban and Michael Bay to promote them. In the latter case, this didn't go so well. However, one curved-screen device actually seemed appropriate – the LG G Flex 6-inch smartphone.

When it comes to massive curved screens, only one person can benefit from the sweet spot of the display – that focal point in the center where they feel enveloped. But in the case of the LG G Flex, the subtle bend in the screen allows for less light intrusion from the sides, and it distorts your own reflection just enough to obscure any distracting glare. Granted, it's not exactly the flexible tech I was hoping to see, but it's something!

In the world of gaming, two contributions made a rather big splash this year. The first was PlayStation Now, a game-streaming service just unveiled by Sony that lets gamers instantly play titles on a PS3, PS4, or PS Vita without downloading them, always in the most up-to-date version. Plus, it gives users the ability to rent titles they're interested in, rather than buying the full copy.

Then there was the Maingear Spark, a gaming desktop designed to run Valve's gaming-centric SteamOS (and Windows) that measures just five inches square and weighs less than a pound. This is a big boon for gamers who usually have to deal with gaming desktops that are bulky, heavy, and don't fit well on an entertainment stand next to other gaming devices, an HD box, and anything else you might have there.

Next up, there is a device that helps consumers navigate the world of iris identification, which is becoming all the rage. It's known as the Myris Eyelock, a simple, straightforward gadget that takes a quick video of your eyeball, has you log in to your various accounts, and then automatically signs you in, without you ever having to type in your password.

So basically, you can carry this new biometric ID system on your person wherever you go. And then, rather than go through the process of remembering multiple (and no doubt complicated) passwords – at a time when identity theft is becoming increasingly problematic – you can present a marker that leaves no doubt as to your identity. And at less than $300, it's an affordable option, too.

And what would an electronics show be without showcasing a little drone technology? And the Parrot MiniDrone was this year’s crowd pleaser: a palm-sized, camera-equipped, remotely-piloted quad-rotor. However, this model has the added feature of two six-inch wheels, which affords it the ability to zip across floors, climb walls, and even move across ceilings! A truly versatile personal drone.

 

Another very interesting display this year was the Scanadu Scout, the world's first real-life tricorder. First unveiled back in May of 2013, the Scout represents the culmination of years of work by the NASA Ames Research Center to produce the world's first non-invasive medical scanner. And this year, they chose to showcase it at CES and let people test it out on themselves and each other.

All told, the Scanadu Scout can measure a person's vital signs – including their heart rate, blood pressure, and temperature – without ever touching them. All that's needed is to place the scanner above your skin, wait a moment, and voila! Instant vitals. The sensor will begin a pilot program with 10,000 users this spring, the first key step toward FDA approval.

And of course, no CES would be complete without a toy robot or two. This year, it was the WowWee MiP (Mobile Inverted Pendulum) that put on a big show. Basically, it is an eight-inch bot that balances itself on dual wheels (like a Segway), can be controlled by hand gestures or a Bluetooth-connected phone, or can simply roll around autonomously.

Its sensitivity to commands and its ability to balance while zooming across the floor are super impressive. While on display, many units were shown carrying a tray around (sometimes with another MiP on the tray). And, a real crowd pleaser, the MiP can even dance. You've always got to throw in something for the retro 80's crowd, the people who grew up with the SICO robot, Jinx, and other friendly automatons!

But perhaps most impressive of all, at least in my humble opinion, was the display of the prototype for the iOptik AR contact lens. While most of the attention on high-tech eyewear has focused on wearables like Google Glass of late, other developers have been steadily working towards display devices that are small enough to wear over your pupil.

Developed by the Washington-based company Innovega with support from DARPA, the iOptik is a heads-up display built into a set of contact lenses. And this year, the first fully functioning prototypes are being showcased at CES. Acting as a micro-display, the glasses project a picture onto the contact lens, which works as a filter to separate the real-world environment from the digital one and then interlaces them into one image.

Embedded in the contact lenses are micro-components that enable the user to focus on near-eye images. Light projected by the display (built into a set of glasses) passes through the center of the pupil and then works with the eye's regular optics to focus the display on the retina, while light from the real-life environment reaches the retina via an outer filter.

This creates two separate images on the retina which are then superimposed to create one integrated image, or augmented reality. It also offers an alternative to traditional near-eye displays, which create the illusion of an object in the distance so as not to hinder regular vision. At present, the iOptik still requires clearance from the FDA before it becomes commercially available, which may come in late 2014 or early 2015.


Well, it's certainly been an interesting year, once again, in the world of electronics, robotics, personal devices, and wearable technology. And it manages to capture the pace of change that is increasingly coming to characterize our lives. According to the tech site Mashable, this year's show was characterized by televisions with 4K resolution, wearables, biometrics, the internet of personalized and data-driven things, and of course, 3-D printing and imaging.

And as always, there were plenty of videos showcasing tons of interesting concepts and devices that were featured this year. Here are a few that I managed to find and thought were worthy of passing on:

Internet of Things Highlights:


Motion Tech Highlights:


Wearable Tech Highlights:


Sources: popsci.com, (2), cesweb, mashable, (2), gizmag, (2), news.cnet

The First Government-Recognized Cyborg

Those who follow tech news are probably familiar with the name Neil Harbisson. As a futurist, and someone who was born with a condition known as achromatopsia – which means he sees everything in shades of gray – he spent much of his life looking to augment himself so that he could see what other people see. And roughly ten years ago, he succeeded by creating a device known as the "eyeborg".

Also known as a cybernetic "third eye", this device – which is permanently integrated into his person – allows Harbisson to "hear" colors by translating the visual information into specific sounds. After years of use, he is able to discern different colors based on their sounds with ease. But what's especially interesting about this device is that it makes Harbisson a bona fide cyborg.
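Purely as a toy illustration of what "translating color into sound" can mean in code (and not the calibration Harbisson's eyeborg actually uses), here is a small Java sketch that maps a pixel's hue onto an audible frequency:

    // Toy color-to-sound mapping: hue (0-360 degrees) to a pitch between ~200 Hz and ~1,000 Hz.
    // The linear mapping is an arbitrary illustration, not the eyeborg's real scheme.
    public class ColorSonifier {
        static double hueToFrequency(double hueDegrees) {
            double normalized = (hueDegrees % 360.0) / 360.0;
            return 200.0 + normalized * 800.0;
        }

        // Convert an RGB sample to its hue, then to a frequency.
        static double pixelToFrequency(int r, int g, int b) {
            float[] hsb = java.awt.Color.RGBtoHSB(r, g, b, null);
            return hueToFrequency(hsb[0] * 360.0);
        }

        public static void main(String[] args) {
            System.out.printf("Pure red  -> %.0f Hz%n", pixelToFrequency(255, 0, 0));
            System.out.printf("Pure blue -> %.0f Hz%n", pixelToFrequency(0, 0, 255));
        }
    }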

What's more, Neil Harbisson is now the first person on the planet to have a passport photo that shows his cyborg nature. After a long battle with UK authorities, his passport now features a photo of him, eyeborg and all. And now, he is looking to help other cyborgs like himself gain more rights, mainly because of the difficulties such people have been facing in recent years.

Consider the case of Steve Mann, the man recognized as the “father of wearable computers”. Since the 1970’s, he has been working towards the creation of fully-portable, ergonomic computers that people can carry with them wherever they go. The result of this was the EyeTap, a wearable computer he invented in 1998 and then had grafted to his head.

Then, in July of 2012, he was ejected from a McDonald's in Paris after several staff members tried to forcibly remove the wearable device. And in April of 2013, a bar in Seattle banned patrons from using Google Glass, declaring that "ass-kickings will be encouraged for violators." Other businesses across the world have followed, fearing that people wearing these devices may be taking photos or video and posting them to the internet.

Essentially, Harbisson believes that recent technological advances mean there will be a rapid growth in the number of people with cybernetic implants in the near future, implants that will either assist them or give them enhanced abilities. As he put it in a recent interview:

Our instincts and our bodies will change. When you incorporate technology into the body, the body will need to change to accommodate; it modifies and adapts to new inputs. How we adapt to this change will be very interesting.

Other human cyborgs include Stelarc, a performance artist who has implanted a hearing ear on his forearm; Kevin Warwick, the "world's first human cyborg", who has an RFID chip embedded beneath his skin, allowing him to control devices such as lights, doors and heaters; and "DIY cyborg" Tim Cannon, who has a self-administered body-monitoring device in his arm.

And though they are still in the minority, the number of people who live with integrated electronic or bionic devices is growing. In order to ensure that the transition Harbisson foresees is accomplished as painlessly as possible, he created the Cyborg Foundation in 2010. According to their website, the organization’s mission statement is to:

help humans become cyborgs, to promote the use of cybernetics as part of the human body and to defend cyborg rights [whilst] encouraging people to create their own sensory extensions.

And as mind-controlled prosthetics, implants, and other devices meant to augment a person's senses, faculties, and ambulatory ability are introduced, we can expect people to begin to actively integrate them into their bodies. Beyond correcting for injuries or disabilities, the increasing availability of such technology is also likely to draw people looking to enhance their natural abilities.

In short, the future is likely to be a place in which cyborgs are a common feature of our society. The size and shape of that society is difficult to predict, but given that its existence is all but certain, we as individuals need to be able to address it. Not only is it an issue of tolerance, there's also the need for informed decision-making when it comes to whether or not individuals will make cybernetic enhancements a part of their lives.

Basically, there are some tough issues that need to be considered as we make our way into the future. And having a forum where they can be discussed in a civilized fashion may be the only recourse to a world permeated by prejudice and intolerance on the one hand, and runaway augmentation on the other.

In the meantime, it might not be too soon to look into introducing some regulations, just to make sure we don't have any yahoos turning themselves into killer cyborgs in the near future! *PS: Bonus points for anyone who can identify which movie the photo above is taken from…

Sources: IO9.com, dezeen.com, eyeborg.wix.com

Immortality Inc: Google’s “Calico”

Google has always been famous for investing in speculative ventures and future trends. Between their robot cars, Google Glass, the development of AI (the Google Brain), high-speed travel (the Hyperloop), and alternative energy, there seems to be no limit to what Brin and Page's company will take on. And now, with Calico, Google has made the burgeoning industry of life extension its business.

The newly formed company has been set up to "focus on health and well-being, in particular the challenge of aging and associated diseases." Those were the words of Google co-founder Larry Page, who issued a two-part press release back in September. From this, it is known that Calico will focus on life extension and improvement. But in what way, and with what business model, the company has yet to explain.

What does seem clear at this point is that Art Levinson, the chairman of Apple and former CEO of Genentech (a pioneer in biotech), will be the one to head up this new venture. His history of working his way up from research scientist to CEO of Genentech makes him the natural choice, since he will bring medical connections and credibility to a company that's currently low on both.

Google Health, the company's last foray into the health industry, was a failure. This site, which launched in 2008 and shut down in 2011, was a personal health information centralization service that allowed Google users to volunteer their health records. Once entered, the site would provide them with a merged health record, information on conditions, and possible interactions between drugs, conditions, and allergies.

In addition, the reasons for the company's venture into the realm of health and aging may have something to do with Larry Page's own recent health concerns. For years, Page has struggled with vocal nerve strain, which led him to make a significant donation to research into the problem. But clearly, Calico aims to go beyond simple health problems and cures for known diseases.

In a comment to Time Magazine, Page stated that a cure for cancer would only extend the average human lifespan by three years. They want to think bigger than that, which could mean addressing the actual causes of aging – the molecular processes that break down cells. Given that Google Ventures included life-extension technology as part of their recent bid to attract engineering students, Google's top brass might have a slightly different idea.

And while this might all sound a bit farfetched, the concepts of life extension and even clinical immortality have been serious pursuits for some time. We tend to think of aging as a fact of life, something that is as inevitable as it is irreversible. However, a number of plausible scenarios have already been discussed that could slow or even end this process, ranging from genetic manipulation and nanotechnology to implant technology and cellular therapy.

Whether or not Calico will get into any of these fields remains to be seen. But keep in mind that this is the company that has proposed setting aside land for no-holds-barred experimentation and even talked about building a space elevator with a straight face. I wouldn't be surprised if they started building cryogenic tanks and jars for preserving disembodied brains before long!

Source: extremetech.com, (2), content.time.com

Digital Eyewear Through the Ages

Given the sensation created by the recent release of Google Glass – a timely invention that calls to mind everything from 80's cyberpunk to speculations about our cybernetic, transhuman future – a lot of attention has been focused lately on personalities like Steve Mann and Mark Spitzer, and on the history of wearable computers.

For decades now, visionaries and futurists have been working towards a day when all personal computers are portable and blend seamlessly into our daily lives. And with countless imitators coming forward to develop their own variants and hate crimes being committed against users, it seems like portable/integrated machinery is destined to become an issue no one will be able to ignore.

And so I thought it was high time for a little retrospective, a look back at the history of eyewear computers and digital devices to see how far they have come. From humble beginnings with bulky backpacks and large, head-mounted displays, to the current age of small fixtures that can be worn as easily as glasses, things certainly have changed. And the future is likely to get even more fascinating, weird, and a little bit scary!

Sword of Damocles (1968):
Developed by Ivan Sutherland and his student Bob Sproull at the University of Utah in 1968, the Sword of Damocles was the world's first head-mounted display. It consisted of a headband with a pair of small cathode-ray tubes attached to the end of a large instrumented mechanical arm, through which head position and orientation were determined.

Hand positions were sensed via a hand-held grip suspended at the end of three fishing lines whose lengths were determined by the number of rotations sensed on each of the reels. Though crude by modern standards, this breakthrough technology would become the basis for all future innovation in the field of mobile computing, virtual reality, and digital eyewear applications.
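To make that geometry concrete, here is a rough Java sketch (my own illustration, not Sutherland's code) of how three measured line lengths – each obtained by multiplying reel rotations by the reel circumference – can be converted into a 3-D hand position by trilateration, assuming the three anchor points are known:

    // Trilateration sketch: recover a hand position from three line lengths anchored at known points.
    public class Trilateration {
        static class Vec3 {
            final double x, y, z;
            Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
            Vec3 sub(Vec3 o) { return new Vec3(x - o.x, y - o.y, z - o.z); }
            Vec3 add(Vec3 o) { return new Vec3(x + o.x, y + o.y, z + o.z); }
            Vec3 scale(double s) { return new Vec3(x * s, y * s, z * s); }
            double dot(Vec3 o) { return x * o.x + y * o.y + z * o.z; }
            Vec3 cross(Vec3 o) { return new Vec3(y * o.z - z * o.y, z * o.x - x * o.z, x * o.y - y * o.x); }
            double norm() { return Math.sqrt(dot(this)); }
            Vec3 unit() { return scale(1.0 / norm()); }
        }

        // Each line length is the number of reel rotations times the reel circumference.
        static double lineLength(double rotations, double reelCircumference) {
            return rotations * reelCircumference;
        }

        // p1..p3 are the anchor points of the three lines; r1..r3 are the measured lengths.
        static Vec3 locate(Vec3 p1, Vec3 p2, Vec3 p3, double r1, double r2, double r3) {
            Vec3 ex = p2.sub(p1).unit();
            double i = ex.dot(p3.sub(p1));
            Vec3 ey = p3.sub(p1).sub(ex.scale(i)).unit();
            Vec3 ez = ex.cross(ey);
            double d = p2.sub(p1).norm();
            double j = ey.dot(p3.sub(p1));
            double x = (r1 * r1 - r2 * r2 + d * d) / (2 * d);
            double y = (r1 * r1 - r3 * r3 + i * i + j * j) / (2 * j) - (i / j) * x;
            // Two mirror solutions exist; take the one on the side where the grip hangs below the anchors.
            double z = -Math.sqrt(Math.max(0, r1 * r1 - x * x - y * y));
            return p1.add(ex.scale(x)).add(ey.scale(y)).add(ez.scale(z));
        }

        public static void main(String[] args) {
            Vec3 a = new Vec3(0, 0, 2), b = new Vec3(1, 0, 2), c = new Vec3(0, 1, 2); // anchors overhead
            Vec3 hand = new Vec3(0.3, 0.4, 1.0); // true position, used here only to fake the measurements
            Vec3 found = locate(a, b, c, hand.sub(a).norm(), hand.sub(b).norm(), hand.sub(c).norm());
            System.out.printf("Recovered hand position: (%.2f, %.2f, %.2f)%n", found.x, found.y, found.z);
        }
    }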

WearComp Models (1980-84):
Built by Steve Mann (inventor of the EyeTap and considered to be the father of wearable computers) in 1980, the WearComp1 cobbled together many devices to create visual experiences. It included an antenna to communicate wirelessly and share video. In 1981, he designed and built a backpack-mounted wearable multimedia computer with text, graphics, and multimedia capability, as well as video capability.

By 1984 – the same year that Apple's Macintosh first shipped and William Gibson's science fiction novel Neuromancer was published – he released the WearComp4 model. This latest version employed clothing-based signal processing, a personal imaging system with a left-eye display, and separate antennas for simultaneous voice, video, and data communication.

Private Eye (1989):
In 1989, Reflection Technology marketed the Private Eye head-mounted display, which scanned a vertical array of LEDs across the visual field using a vibrating mirror. The monochrome screen was 1.25 inches on the diagonal, but the image appeared to be the size of a 15-inch display viewed at a distance of 18 inches.

EyeTap Digital Eye (1998):
Steve Mann is considered the father of digital eyewear and what he calls “mediated” reality. He is a professor in the department of electrical and computer engineering at the University of Toronto and an IEEE senior member, and also serves as chief scientist for the augmented reality startup, Meta. The first version of the EyeTap was produced in the 1970’s and was incredibly bulky by modern standards.

By 1998, he had developed the version that is commonly seen today, mounted over one ear and in front of one side of the face. This version is worn in front of the eye, recording what is immediately in front of the viewer and superimposing digital imagery onto that view. It uses a beam splitter to send the same scene to both the eye and a camera, and is tethered to a computer worn on the body in a small pack.

MicroOptical TASK-9 (2000):
Founded in 1995 by Mark Spitzer, who is now a director at the Google X lab, the company produced several patented designs, which were bought up by Google after the company closed in 2010. One such design was the TASK-9, a wearable computer that is attachable to a set of glasses. Years later, MicroOptical's line of viewers remains among the lightest head-up displays available on the market.

Vuzix (1997-2013):
Founded in 1997, Vuzix created the first video eyewear to support stereoscopic 3D for the PlayStation 3 and Xbox 360. Since then, Vuzix went on to create the first commercially produced pass-through augmented reality headset, the Wrap 920AR (seen at bottom). The Wrap 920AR has two VGA video displays and two cameras that work together to provide the user with a view of the world that blends real-world inputs and computer-generated data.

Other products of note include the Wrap 1200VR, a virtual reality headset that has numerous applications – everything from gaming and recreation to medical research – and the Smart Glasses M100, a hands-free display for smartphones. And since the Consumer Electronics Show of 2011, they have announced and released several heads-up AR displays that are attachable to glasses.


MyVu (2008-2012):
Founded in 1995, also by Mark Spitzer, MyVu developed several different types of wearable video display glasses before closing in 2012. The most famous was their Myvu Personal Media Viewer (pictured below), a set of display glasses released in 2008. These became instantly popular with the wearable computer community because they provided a cost-effective and relatively easy path to a DIY, small, single-eye, head-mounted display.

In 2010, the company followed up with the release of the Viscom digital eyewear (seen below), a device developed in collaboration with Spitzer's other company, MicroOptical. This smaller head-mounted display device comes with earphones and is worn over one eye like a pair of glasses, similar to the EyeTap.


Meta Prototype (2013):
Developed by Meta, a Silicon Valley startup that is being funded with the help of a Kickstarter campaign and supported by Steve Mann, this wearable computing eyewear utilizes the latest in VR and projection technology. Unlike other display glasses, Meta's eyewear enters 3D space and uses your hands to interact with the virtual world, combining the benefits of the Oculus Rift with those being offered by "Sixth Sense" technology.

The Meta system includes stereoscopic 3D glasses and a 3D camera to track hand movements, similar to the portrayals of gestural control in movies like "Iron Man" and "Avatar." In addition to display modules embedded in the lenses, the glasses include a portable projector mounted on top. This way, the user is able to both project and interact with computer simulations.

Google Glass (2013):
Developed by Google X as part of their Project Glass, the Google Glass device is a wearable computer with an optical head-mounted display (OHMD) that incorporates all the major advances made in the field of wearable computing for the past forty years. These include a smartphone-like hands-free format, wireless internet connection, voice commands and a full-color augmented-reality display.

Development began in 2011 and the first prototypes were previewed to the public at the Google I/O annual conference in San Francisco in June of 2012. Though they currently do not come with fixed lenses, Google has announced its intention to partner with sunglass retailers to equip them with regular and prescription lenses. There is also talk of developing contact lenses that come with embedded display devices.

Summary:
Well, that’s the history of digital eyewear in a nutshell. And as you can see, since the late 60’s, the field has progressed by leaps and bounds. What was once a speculative and visionary pursuit has now blossomed to become a fully-fledged commercial field, with many different devices being produced for public consumption.

At this rate, who knows what the future holds? In all likelihood, the quest to make computers more portable and ergonomic will keep pace with the development of more sophisticated electronics and computer chips, miniaturization, biotechnology, nanofabrication and brain-computer interfacing.

The result will no doubt be tiny CPUs that can be implanted in the human body and integrated into our brains via neural chips and tiny electrodes. In all likelihood, we won’t even need voice commands at that point, because neuroscience will have developed a means to communicate directly to our devices via brainwaves. The age of cybernetics will have officially dawned!

Like I said… fascinating, weird, and a little bit scary!
