The Future of Devices: The Wearable Tech Boom

The wearable computing revolution that has been taking place in recent years has drawn in developers and tech giants from all over the world. Its roots are deep, dating back to the late 60’s with the “Sword of Damocles” head-mounted display concept and to the early work of Steve Mann in the 70’s and 80’s. But in recent years, thanks to the development of Google Glass, the case for wearable tech has moved beyond hobbyists and enthusiasts and into the mainstream.

And with display glasses now accounted for, the latest boom in development appears to be centered on smartwatches and similar devices. These range from fitness trackers with just a few features to wrist-mounted versions of smartphones that boast the same constellation of functions and apps (email, phone, text, Skype, etc.). And as always, the big-name companies are coming forward with their own concepts and designs.

First, there’s the much-anticipated Apple iWatch, which is still in the rumor stage. The company has been working on this project since late 2012, but has begun accelerating the process as it tries to expand its family of mobile devices to the wrist. Apple has already started work on trademarking the name in a number of countries in preparation for a late 2014 launch – perhaps in October – with the device expected to enter mass production in July.

And though it’s not yet clear what the device will look like, several mockups and proposals have been leaked. Recent reports from sources like Reuters and The Wall Street Journal have pointed towards multiple screen sizes and price points, suggesting an array of different band and face options in various materials to position it as a fashion accessory. It is also expected to include a durable sapphire crystal display, produced in collaboration with Apple partner GT Advanced.

While the iWatch will perform some tasks independently using the new iOS 8 platform, it will be dependent on a compatible iOS device for functions like receiving messages, voice calls, and notifications. It is also expected to feature wireless charging capabilities, advanced mapping abilities, and possibly near-field communication (NFC) integration. An added bonus, as indicated by Apple’s recent filing for patents associated with its “Health” app, is the inclusion of biometric and health sensors.

Along with serving as a companion device to the iPhone and iPad, the iWatch will be able to measure a number of health-related metrics. Consistent with the features of a fitness band, these will include things like steps taken, calories burned, sleep quality, heart rate, and more. The iWatch is said to include 10 different sensors to track health and fitness, providing an overall picture of health and making the health-tracking experience more accessible to the general public.

Apple has reportedly designed iOS 8 with the iWatch in mind, and the two are said to be heavily reliant on one another. The iWatch will likely take advantage of the “Health” app introduced with iOS 8, which may display all of the health-related information gathered by the watch. Currently, Apple is gearing up to begin mass production of the iWatch, and has been testing the device’s fitness capabilities with professional athletes such as Kobe Bryant, who will likely go on to promote it following its release.

Not to be outdone, Google launched its own smartwatch platform – known as Android Wear – at this year’s I/O conference. Android Wear is the company’s software for linking smartwatches from companies including LG, Samsung and Motorola to Android phones and tablets. A preview of Wear was introduced this spring; the I/O conference provided more details on how it will work and made it clear that the company is investing heavily in the notion that wearables are the future.

Android Wear takes much of the functionality of Google Now – an intelligent personal assistant – and uses the smartwatch as a home for receiving notifications and context-based information. In the case of travel, Android Wear will push relevant flight, weather and other information directly to the watch, where the user can tap and swipe their way through it and use embedded prompts and voice control to take further actions, like dictating a note with reminders to pack rain gear.
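To give a sense of how apps plug into this model, here is a minimal sketch in Java of posting a notification with a voice-input action that Wear can surface on a paired watch, assuming the 2014-era v4 support library; the flight details and the NoteReceiver class are hypothetical stand-ins, not part of any real app.

```java
// A minimal sketch (v4 support library circa 2014) of a notification with a
// voice-reply action that Android Wear surfaces on a paired watch.
import android.app.PendingIntent;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.support.v4.app.NotificationCompat;
import android.support.v4.app.NotificationManagerCompat;
import android.support.v4.app.RemoteInput;

public class TravelNotifier {

    /** Hypothetical receiver that would store the transcribed voice note. */
    public static class NoteReceiver extends BroadcastReceiver {
        @Override
        public void onReceive(Context ctx, Intent intent) {
            // RemoteInput.getResultsFromIntent(intent) holds the dictated text.
        }
    }

    public static void notifyFlightUpdate(Context ctx) {
        // Voice input the watch collects when the user taps the action.
        RemoteInput voiceInput = new RemoteInput.Builder("voice_note")
                .setLabel("Add a note")  // e.g. "remind me to pack rain gear"
                .build();

        PendingIntent notePending = PendingIntent.getBroadcast(
                ctx, 0, new Intent(ctx, NoteReceiver.class), 0);

        NotificationCompat.Action noteAction =
                new NotificationCompat.Action.Builder(
                        android.R.drawable.ic_btn_speak_now, "Note", notePending)
                        .addRemoteInput(voiceInput)
                        .build();

        NotificationCompat.Builder builder = new NotificationCompat.Builder(ctx)
                .setSmallIcon(android.R.drawable.stat_notify_sync)
                .setContentTitle("Flight delayed")       // illustrative text
                .setContentText("Now departing 6:45 PM")
                .extend(new NotificationCompat.WearableExtender()
                        .addAction(noteAction));

        NotificationManagerCompat.from(ctx).notify(1, builder.build());
    }
}
```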

Google had already revealed most of what Wear will be able to do in its preview, so the platform’s big on-stage debut at I/O was largely about getting app developers to buy in and to design with a peripheral wearable interface in mind. Apps can be designed to harness different Android Wear “intents.” For example, the Lyft app takes advantage of the “call me a car” intent and can be set as the default means of hailing a ride when you tell your smartwatch to find you a car.
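On the developer side, opting into one of these intents amounts to declaring it in the app manifest and handling the resulting launch. Below is a hedged sketch in Java: the action string matches what Google documented for the “call me a car” voice command at the time, but HailRideActivity and requestRide() are hypothetical names, not Lyft’s actual code.

```java
// Sketch: an activity that Android Wear can launch for the "call me a car"
// voice action. The manifest would declare an intent filter along the lines of:
//   <action android:name="com.google.android.gms.actions.RESERVE_TAXI_RESERVATION" />
//   <category android:name="android.intent.category.DEFAULT" />
import android.app.Activity;
import android.os.Bundle;

public class HailRideActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // If the user set this app as their default for the intent, speaking
        // "OK Google, call me a car" into the watch lands here.
        requestRide();
        finish();
    }

    /** Hypothetical: contact the ride service with a pickup request. */
    private void requestRide() {
        // e.g. send the phone's last known location to the service's backend.
    }
}
```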

Google officials also claimed at I/O that the same interface behind Android Wear will power their new Android Auto and Android TV, two other integrated services that allow users to interface with their car and television via a mobile device. So don’t be surprised if you see someone unlocking or starting their car by talking into their watch in the near future. The first Android Wear watches – the Samsung Gear Live and the LG G Watch – are available to pre-order, and the round-faced Motorola Moto 360 is expected to come out later this summer.

All of these steps in integration and wearable technology are signs of an emergent trend, one where just about everything – from personal devices to automobiles and even homes – is smart and networked together, thus giving rise to a world where everything is remotely accessible. This concept, otherwise known as the “Internet of Things”, is expected to become the norm in the next 20 years, and will include other technologies like display contacts and mediated (aka augmented) reality.

And be sure to check out this concept video of the Apple iWatch:


Sources:
cnet.com (two articles), macrumors.com, engadget.com, gizmag.com

The Future is Here: The Thumbles Robot Touch Screen

Smartphones and tablets, with their high-resolution touchscreens and ever-increasing number of apps, are all very impressive. And though some apps are even able to make content jump from the screen in 3D, the vast majority are still confined to two dimensions and limited in terms of interaction. More and more, interface designers are attempting to break this fourth wall and make information something that you can really feel and move with your own two hands.

Take the Thumbles, an interactive screen created by James Patten of Patten Studio. Rather than a conventional 2D touchscreen, this desktop interface combines touch screens with tiny robots that act as interactive controls. Whenever a new button would normally pop up on the screen, a robot drives up instead, parking precisely for the user to grab it, turn it, or rearrange it. And the idea is surprisingly versatile.

As the video below demonstrates, the robots serve all sorts of functions. In various applications, they appear as grabbable hooks at the ends of molecules, twistable knobs in a sound and video editor, trackable police cars on traffic maps, and swappable space ships in a video game. If you move or twist one robot, another robot can mirror the movement perfectly. And thanks to their omnidirectional wheels, the robots always move with singular intent, driving in any direction without turning first.
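To make the “any direction without turning first” point concrete, here is a short Java sketch of standard omni-wheel kinematics: each wheel simply spins at the projection of the desired velocity onto its own drive direction, plus a spin term. The three-wheel layout and dimensions are generic assumptions, not details Patten Studio has published about the Thumbles.

```java
// Why omni wheels allow translation in any direction without turning: each
// wheel's speed is the projection of the desired body velocity onto that
// wheel's (tangential) drive direction. Assumes three wheels 120 degrees
// apart at radius R from the robot's center -- illustrative numbers only.
public class OmniDrive {
    static final double R = 0.05; // wheel distance from center, meters (assumed)

    /** Wheel speeds for a desired body velocity (vx, vy) and spin rate omega. */
    public static double[] wheelSpeeds(double vx, double vy, double omega) {
        double[] mountAngles = {0, 2 * Math.PI / 3, 4 * Math.PI / 3};
        double[] speeds = new double[3];
        for (int i = 0; i < 3; i++) {
            speeds[i] = -Math.sin(mountAngles[i]) * vx
                      +  Math.cos(mountAngles[i]) * vy
                      +  R * omega;
        }
        return speeds;
    }
}
```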

Naturally, there are questions about the practicality of this technology where size is concerned. While it makes sense for instances where space isn’t a primary concern, it doesn’t exactly work for a smartphone or tablet touchscreen. In that case, the means simply don’t exist to create robots small enough to wander around the tiny screen space and act as interfaces. But in police stations, architecture firms, industrial design settings, or military command centers, the Thumbles and systems like it are sure to be all the rage.

Consider another example shown in the video, where we see a dispatcher who is able to pick up and move a police car to a new location to dispatch it. Whereas a dispatcher is currently required to listen for news of a disturbance, check an available list of vehicles, see who is close to the scene, and then call that police officer to go to that scene, this tactile interface streamlines such tasks into quick movements and manipulations.
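In software terms, the streamlining amounts to collapsing that whole lookup-and-phone-call workflow into a single event. The Java sketch below is purely illustrative; Thumbles’ actual API is not public, and every name here is invented.

```java
// Toy sketch: placing a physical token on a map location becomes one event
// that carries both the unit and its destination. All names are hypothetical.
public class DispatchSurface {

    public interface DispatchListener {
        void onUnitDispatched(String unitId, double lat, double lon);
    }

    private final DispatchListener listener;

    public DispatchSurface(DispatchListener listener) {
        this.listener = listener;
    }

    /** Called by the tabletop tracker when a token is set down on the map. */
    public void onTokenPlaced(String unitId, double lat, double lon) {
        // One physical gesture replaces listening, list-checking, and calling.
        listener.onUnitDispatched(unitId, lat, lon);
    }
}
```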

The same holds true for architects who want to move design features around on a CAD model; corporate officers who need to visualize their business model; landscapers who want to see what a stretch of earth will look like once they’ve raised a section of land, changed the drainage, planted trees or bushes, etc.; and military planners who need to direct different units on a battlefield (or in a natural disaster) in real-time, responding to changing circumstances more quickly and effectively, and with far less confusion.

Be sure to check out the demo video below, showing the Thumbles in action, and visit Patten Studio on their website.


Sources: fastcodesign.com, pattenstudio.com

The Future is Here: Google Robot Cars Hit Milestone

It’s no secret that, amongst its many kooky and futuristic projects, self-driving cars are something Google hopes to make real within the next few years. Late last month, Google’s fleet of autonomous automobiles reached an important milestone. After many years of testing out on the roads of California and Nevada, they logged well over one million kilometers (700,000 miles) of accident-free driving. To celebrate, Google has released a new video that demonstrates some impressive software improvements made over the last two years.

Most notably, the video demonstrates how its self-driving cars can now track hundreds of objects simultaneously – including pedestrians, cyclists signaling turns, stop signs held by crossing guards, and traffic cones. This is certainly exciting news for Google and enthusiasts of automated technology, as it demonstrates the ability of the vehicles to obey the rules of the road and react to situations that are likely to emerge and require decisions to be made.
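For a rough sense of what “tracking hundreds of objects simultaneously” involves, here is a toy Java sketch of the simplest form of the bookkeeping: greedily matching each new detection to the nearest existing track, and spawning a new track when nothing is close enough. Google’s actual pipeline is far more sophisticated; everything here, including the distance gate, is an assumption for illustration.

```java
// Toy multi-object tracker: greedy nearest-neighbor association of incoming
// detections (x, y positions) to existing tracks. Real systems add motion
// models, uncertainty, and robust assignment; this only illustrates the idea.
import java.util.List;

public class Tracker {
    static final double MAX_MATCH_DIST = 2.0; // meters, assumed matching gate

    public static class Track {
        double x, y;
        Track(double x, double y) { this.x = x; this.y = y; }
    }

    public static void update(List<Track> tracks, List<double[]> detections) {
        for (double[] d : detections) {
            Track best = null;
            double bestDist = MAX_MATCH_DIST;
            for (Track t : tracks) {
                double dist = Math.hypot(t.x - d[0], t.y - d[1]);
                if (dist < bestDist) { bestDist = dist; best = t; }
            }
            if (best != null) { best.x = d[0]; best.y = d[1]; } // update match
            else tracks.add(new Track(d[0], d[1]));             // new object
        }
    }
}
```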

In the video, we see Google’s car reacting to railroad crossings, large stationary objects, roadwork signs and cones, and cyclists. In the case of cyclists, not only are the cars able to discern whether a cyclist wants to move left or right, they even watch out for cyclists coming up from behind when making a right turn. And while the demo certainly makes the whole process seem easy and fluid, there is actually a considerable amount of work going on behind the scenes.

For starters, there is around $150,000 worth of equipment in each car performing real-time LIDAR scanning and 360-degree computer vision – a complex and computationally intensive task. The software powering the whole process is also the result of years of development. Basically, every driving situation that can possibly occur has to be anticipated and then painstakingly programmed into the software. This is an important qualifier when it comes to these “autonomous vehicles”: they are not capable of independent judgement, only of following pre-programmed instructions.
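As a caricature of that “pre-programmed instructions” point, behavior in such a system amounts to explicit case analysis over recognized situations rather than learned judgement. The Java sketch below uses hypothetical situation labels, not Google’s actual taxonomy.

```java
// Illustrative only: driving behavior as an explicit case analysis over
// perceived situations. The labels and responses are invented for this sketch.
enum Situation { CYCLIST_SIGNALING_LEFT, CROSSING_GUARD_STOP_SIGN,
                 RAILROAD_CROSSING, CLEAR }

class RulePlanner {
    static String decide(Situation s) {
        switch (s) {
            case CYCLIST_SIGNALING_LEFT:   return "yield and hold back";
            case CROSSING_GUARD_STOP_SIGN: return "stop until sign is lowered";
            case RAILROAD_CROSSING:        return "stop; proceed when clear";
            default:                       return "continue at planned speed";
        }
    }
}
```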

While a lot has been said about the expensive LIDAR hardware, the most impressive aspect of the innovations is the computer vision. While LIDAR provides a very good idea of the lay of the land and the position of large objects (like parked cars), it doesn’t help with spotting speed limit or “construction ahead” signs, or with telling whether what’s ahead is a cyclist or a railroad crossing barrier. And Google has certainly demonstrated plenty of adeptness with computer vision in the past, what with the latest versions of Street View and its Google Glass project.

Naturally, Google says that it has lots of issues to overcome before its cars are ready to move out from their home town of Mountain View, California, and begin driving people around. For instance, the road maps need to be finely tuned and expanded, and Google is likely to sell map packages in the future in the same way that apps are sold for smartphones. Meanwhile, the adoption of technologies like adaptive cruise control (ACC) and lane keep assist (LKA) will bring lots of almost-self-driving cars to the road over the next few years.

In the meantime, be sure to check out the video of the driverless car in action:


Source:
extremetech.com