Revolution in Virtual Reality: Google’s Cardboard Headset

With the acquisition of the Oculus Rift headset, Facebook appeared ready to corner the new virtual reality market. But at its annual I/O conference, Google declared that it was staking its own claim. At the end of the search giant’s keynote address, Sundar Pichai announced that everyone in attendance would get a nondescript cardboard package, but was coy about its contents. Turns out, it’s the firm’s attempt at a do-it-yourself VR headset.

Known as Cardboard, copies of the headset were handed out as part of a goodie bag, alongside the choice between a brand new LG G Watch or Samsung Gear Live smartwatch. Intended to be a do-it-yourself starter kit, Google Cardboard is a head-mounted housing unit for your smartphone that turns everyday items into a VR headset. With a $10 lens kit, $7 worth of magnets, two Velcro straps, a rubber band, and an optional near-field communication sticker tag, you can have your very own VR headset for a fraction of the price.

You can use household materials to build one, with a rubber band holding your smartphone in place on the front of the device. Assembly instructions, plans and links for where to source the needed parts (like lenses) — as well as an SDK — are available on the project’s website. Google hopes that by making the tech inexpensive (unlike offerings from, say, Oculus), developers will be able to make VR apps that reach a wider audience.

According to some early reviews, the entire virtual reality experience is surprisingly intuitive, and impressive considering how simple it is. And while the quality doesn’t quite match the Oculus Rift’s dual OLED Full HD screens, and it lacks positional tracking (meaning you can’t lean into something the way you would in real life), the Cardboard is able to create the 3D effect using just a single phone screen and some specialized lenses.
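
The single-screen trick described above can be sketched in a few lines: the phone renders the scene twice, once per eye, into the left and right halves of the display, with each virtual camera shifted by half the distance between the eyes; the lenses then magnify each half so the brain fuses the two offset views into depth. This is an illustrative model only, with assumed names and numbers, not the actual Cardboard SDK.

```python
IPD_METERS = 0.064  # typical human interpupillary distance (assumed value)

def eye_viewports(screen_w, screen_h):
    """Split a landscape phone screen into side-by-side eye viewports."""
    half = screen_w // 2
    return {
        "left":  {"x": 0,    "y": 0, "w": half, "h": screen_h},
        "right": {"x": half, "y": 0, "w": half, "h": screen_h},
    }

def eye_camera_offsets(ipd=IPD_METERS):
    """Each eye's camera sits half the IPD to either side of the head."""
    return {"left": -ipd / 2, "right": +ipd / 2}

def render_stereo(screen_w, screen_h, draw_scene):
    """draw_scene(viewport, x_offset) stands in for rasterizing one eye's view."""
    frames = {}
    offsets = eye_camera_offsets()
    for eye, vp in eye_viewports(screen_w, screen_h).items():
        frames[eye] = draw_scene(vp, offsets[eye])
    return frames
```

The small horizontal offset between the two rendered views is what produces the parallax the brain reads as depth; without head-position tracking, though, the offset never changes as you lean, which is exactly the limitation noted above.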

Meanwhile, Google has created some great demos within the Cardboard app, showcasing the kind of experiences people can expect moving forward. Right now, the Cardboard app features simple demonstrations: Google Earth, Street View, Windy Day, and more. But it’s just a small taste of what’s possible. And anyone willing to put some time into assembling their own cardboard headset can get involved. Never before has virtual reality been so accessible, or so cheap.

And that was precisely the purpose behind the development of this device. Originally concocted by David Coz and Damien Henry at the Google Cultural Institute in Paris as part of the company’s “20 percent time” initiative, the program was started with the aim of inspiring a low-cost model for VR development. After an early prototype wowed Googlers, a larger group was tasked with building out the idea, and the current Cardboard headset was born.

As it reads on Google’s new page for the device’s development:

Virtual reality has made exciting progress over the past several years. However, developing for VR still requires expensive, specialized hardware. Thinking about how to make VR accessible to more people, a group of VR enthusiasts at Google experimented with using a smartphone to drive VR experiences.

Beyond hardware, on June 25th the company also released a self-described experimental software development kit for Cardboard experiences. Cardboard also has an Android companion app, which is required to use Google’s own VR-specific applications, known as Chrome Experiments. Use cases Google currently cites include flyover tours in Google Earth, full-screen YouTube video viewing, and first-person art exhibit tours.

As Google said in a related press release:

By making it easy and inexpensive to experiment with VR, we hope to encourage developers to build the next generation of immersive digital experiences and make them available to everyone.

Oculus Rift is still the most promising version of virtual reality right now, and with Facebook at the helm, there are some tremendous resources behind the project. But with Cardboard, Google is opening up VR to every single Android developer, which we hope will lead to some really awesome stuff down the road. Even if you can’t lean in to inspect dials in front of you, or look behind corners, the potential of Cardboard is tremendous. Imagine not only the kinds of experiences we’ll see, but also augmented reality using your phone’s camera.

But Cardboard is still very early in development. It’s only been a few weeks since it debuted at Google I/O, and the device still only works with Android. But with availability on such a wide scale, it could very quickly become the go-to VR platform out there. All you need are some magnets, Velcro, a rubber band, lenses and a pizza box. And be sure to check out this demo of the device, courtesy of “Hands-On” by TechnoBuffalo:


Sources:
cnet.com, technobuffalo.com, engadget.com

The Future of Devices: The Wearable Tech Boom

The wearable computing revolution that has been taking place in recent years has drawn in developers and tech giants from all over the world. Its roots run deep, dating back to the late ’60s and early ’80s with the Sword of Damocles concept and the work of Steve Mann. But in recent years, thanks to the development of Google Glass, the case for wearable tech has moved beyond hobbyists and enthusiasts and into the mainstream.

And with display glasses now accounted for, the latest boom in development appears to be centered on smartwatches and similar devices. These range from fitness trackers with just a few features to wrist-mounted versions of smartphones that boast the same constellations of functions and apps (email, phone, text, Skyping, etc.). And as always, the big-name companies are coming forward with their own concepts and designs.

First, there’s the much-anticipated Apple iWatch, which is still in the rumor stage. The company has been working on this project since late 2012, but has begun accelerating the process as it tries to expand its family of mobile devices to the wrist. Apple has already started work on trademarking the name in a number of countries in preparation for a late 2014 launch, perhaps in October, with the device entering mass production in July.

And though it’s not yet clear what the device will look like, several mockups and proposals have been leaked. And recent reports from sources like Reuters and The Wall Street Journal have pointed towards multiple screen sizes and price points, suggesting an array of different band and face options in various materials to position it as a fashion accessory. It is also expected to include a durable sapphire crystal display, produced in collaboration with Apple partner GT Advanced.

While the iWatch will perform some tasks independently using the new iOS 8 platform, it will be dependent on a compatible iOS device for functions like receiving messages, voice calls, and notifications. It is also expected to feature wireless charging capabilities, advanced mapping abilities, and possibly near-field communication (NFC) integration. An added bonus, as indicated by Apple’s recent filing for patents associated with their “Health” app, is the inclusion of biometric and health sensors.

Along with serving as a companion device to the iPhone and iPad, the iWatch will be able to measure multiple health-related metrics. Consistent with the features of a fitness band, these will include things like a pedometer, calories burned, sleep quality, heart rate, and more. The iWatch is said to include 10 different sensors to track health and fitness, providing an overall picture of health and making the health-tracking experience more accessible to the general public.

Apple has reportedly designed iOS 8 with the iWatch in mind, and the two are said to be heavily reliant on one another. The iWatch will likely take advantage of the “Health” app introduced with iOS 8, which may display all of the health-related information gathered by the watch. Currently, Apple is gearing up to begin mass production on the iWatch, and has been testing the device’s fitness capabilities with professional athletes such as Kobe Bryant, who will likely go on to promote the iWatch following its release.

Not to be outdone, Google launched its own smartwatch platform – known as Android Wear – at this year’s I/O conference. Android Wear is the company’s software platform for linking smartwatches from companies including LG, Samsung and Motorola to Android phones and tablets. A preview of Wear was introduced this spring, but the I/O conference provided more details on how it will work and made it clear that the company is investing heavily in the notion that wearables are the future.

Android Wear takes much of the functionality of Google Now – an intelligent personal assistant – and uses the smartwatch as a home for receiving notifications and context-based information. For travel, Android Wear will push relevant flight, weather and other information directly to the watch, where the user can tap and swipe their way through it and use embedded prompts and voice control to take further actions, like dictating a note with reminders to pack rain gear.

For the most part, Google had already revealed most of what Wear will be able to do in its preview, but its big on-stage debut at I/O was largely about getting app developers to buy into the platform and to design with a peripheral wearable interface in mind. Apps can be designed to harness different Android Wear “intents.” For example, the Lyft app takes advantage of the “call me a car” intent and can be set as the default means of hailing a ride when you tell your smartwatch to find you a car.
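
The intent mechanism described above boils down to a registry: several apps can declare that they handle a named action, and the user picks a default that voice commands are routed to. Here is a conceptual sketch of that idea in Python; the class, action names, and app list are all invented for illustration, and this is not the real Android Wear API.

```python
class IntentRegistry:
    """Toy model of action-based app dispatch, as on a wearable platform."""

    def __init__(self):
        self.handlers = {}   # action name -> list of apps that handle it
        self.defaults = {}   # action name -> the user's chosen default app

    def register(self, action, app):
        """An app declares it can handle a given action."""
        self.handlers.setdefault(action, []).append(app)

    def set_default(self, action, app):
        """The user picks which handler a voice command should go to."""
        if app not in self.handlers.get(action, []):
            raise ValueError(f"{app} does not handle {action}")
        self.defaults[action] = app

    def dispatch(self, action):
        """Route a command to the default handler, else the first handler."""
        app = self.defaults.get(action)
        if app is None:
            candidates = self.handlers.get(action, [])
            app = candidates[0] if candidates else None
        return app

registry = IntentRegistry()
registry.register("call_me_a_car", "Lyft")
registry.register("call_me_a_car", "SomeOtherApp")
registry.set_default("call_me_a_car", "Lyft")
```

The design point is that the watch owns the vocabulary of actions while apps compete to fulfill them, which is what makes "find me a car" work without the user naming an app at all.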

Google officials also claimed at I/O that the same interface behind Android Wear will power their new Android Auto and Android TV, two other integrated services that allow users to interface with their car and television via a mobile device. So don’t be surprised if you see someone unlocking or starting their car by talking into their watch in the near future. The first Android Wear watches – the Samsung Gear Live and the LG G Watch – are available to pre-order, and the round-faced Motorola Moto 360 is expected to come out later this summer.

All of these steps in integration and wearable technology are signs of an emergent trend, one where just about everything, from personal devices to automobiles and even homes, is smart and networked together – thus giving rise to a world where everything is remotely accessible. This concept, otherwise known as the “Internet of Things”, is expected to become the norm in the next 20 years, and will include other technologies like display contacts and mediated (a.k.a. augmented) reality.

And be sure to check out this concept video of the Apple iWatch:


Sources:
cnet.com, (2), macrumors.com, engadget.com, gizmag.com

Latest Anthology Sample: Ember Storm!

The past few months have been a busy and productive time for the people behind the Yuva anthology. Not only have we taken on a host of new writers who adventurously volunteered to join us and share their passion for science fiction, but they have also managed to produce some solid first and even second drafts. In addition, several members who have been with the project from the beginning have completed final drafts that merit sharing right now!

And this time, it’s Amber Iver and Goran Zidar’s Ember Storm, which they just put the final touches on. In this story, we are given front-row seats to a crisis in progress – as told from the points of view of two down-and-out maintenance workers, and a small family unit caught in the thick of things. Here’s a sample from the beginning; hope you all enjoy! And remember, there’s more where this came from once the book is published:

_____

“Hey, Charlie, do you hear that?”

“Leave me alone, Rhina,” Charlie grunted and pulled his cap down over his face. “I’m trying to sleep here.”

“The environmental alarm’s going off.” Rhina moved over to the console and brought up the display.

“So?”

Rhina studied the screen for a moment. “So it looks like there’s a storm coming.”

“Good.”

“Good?”

“Yeah, it means I’ve got nothing else to do but kick back and study the inside of my eyelids.”

“Wake up, idiot.” Rhina tossed a PAD at her colleague’s supine form.

“Hey! What was that for?”

“Strap in. I’m taking us back.”

Charlie let out a huge sigh as he got to his feet and stumbled across to Rhina. She could smell the alcohol on his breath as he loomed over her and tried to get his eyes to focus on the screen.

“You won’t make it.”

“What do you mean?”

Charlie stabbed a finger at a coloured line on the screen. “Front’s coming in fast, it’ll hit before we reach the colony. We might as well just wait it out here.”

“Well I’m gonna try anyway.” Rhina reached forward and touched the ignition. “I don’t relish the idea of spending the next few hours with just your drunk arse for company.”

“That’s harsh.” Charlie’s face twisted in mock disappointment. “I’ll just be asleep on the floor. You won’t even notice I’m here.”

“Even asleep you’re crap company. Now strap yourself in, we’re leaving.”

*                    *                     *

“Good morning, Miss Siera. It’s time to wake up.”

“Just ten more minutes, please,” Siera said, sleep making her words run into each other.

The room was suddenly bathed in sunlight.

“Hey!” Siera was forced to shield her eyes from the bright light.

“Your mother’s instructions were quite clear, miss.”

Siera squinted as she threw the covers aside and strode across the room, snatching the PAD from David’s loose grasp. “Leave my PAD alone.” Her fingers danced over the screen and soon the light in the room dimmed to a more manageable level. “Why do I need to be up? It’s the weekend.”

“Isn’t this the day you’re to make lunch for your father?”

Siera sucked a breath, her drowsiness banished.

“Oh, no. I forgot.”

“That’s why I am here, miss.”

Siera smiled and leaned forward to kiss David lightly on the cheek. “What would I do without you?”

David raised a hand to his face, the latex skin of his cheek still warm where Siera’s lips touched him. “Your appreciation is welcome but not necessary, miss. I am simply doing what I have been programmed to do.”

“If you’re going to look like a human being, I’m going to treat you like one,” she said as she scooped a bundle of clothes from the floor then ran to the bathroom.

“I am not responsible for my appearance. It was your father who constructed me. I had no say in the matter at all.”

Siera called from the bathroom. “None of us do, David. You’ve got more in common with humans than you realise.”

David shrugged. “I must say I don’t really think about it.”

Siera emerged from the bathroom. “Well you should. You’re part of this family, you know. You’re like the big brother I never had.”

“Well this big brother needs you to go to the kitchen.”

“Hang on a minute, I need my wrist com.”

Siera looked around the room quickly but couldn’t see the wearable communication device anywhere. She moved to the bedside table and rummaged through the drawer to no avail.

“Don’t just stand there. Help me find it,” she said, as she started tearing the sheets off her bed.

“When was the last time you saw it?”

Siera raised an eyebrow as she looked at David. “Are you kidding me?”

“You asked me to help.”

“How is that helping? Just look for it.”

David walked to the bathroom and returned a few seconds later holding the wrist com. “Here you go, miss.”

Siera ran up to him and enveloped him in a firm embrace. “Thank you, David. You’re a life saver.”

“As I said before, your thanks are not necessary.”

Siera clipped the device onto her wrist then looked at the mess she’d created in her room. “Oops … Mum’s going to kill me.”

“Don’t worry, miss. You go to the kitchen; I’ll stay and clean this up for you.”

Siera opened her mouth to say thank you, but David placed a finger on her lips. “Go. Your mother is waiting for you.”

Siera gave her untidy room one last glance then sped down the hall to the kitchen. The sound of pots and pans clanking told her that her mum and sister had started without her, and she hoped that she hadn’t missed too much of the preparation. Cooking with fresh ingredients, on an actual stove, like they did on Earth in the old days was a real treat, and one that didn’t happen very often.

Her mum, Tara, looked up as Siera entered the kitchen. “Good, you’re finally up. You can start by cleaning up Meghan’s mess.”

Her four year old sister, Meghan, sat with a broad grin as she stirred a bowl of dark coloured sauce. With each turn of the spoon, more of the sticky substance spilled on the bench and dripped onto the floor.

“Give the bowl to Siera, sweetie,” Tara said. “Then go wash your hands before we start on the next part.”

Meghan did as she was told, and Siera was left standing with a sticky mess to clean up. “I probably should have gotten up earlier, eh?”

Her mum glanced up. “I didn’t say a word.”

Siera set to cleaning the mess her sister created. “What’re we making?”

“It’s called Mongolian barbeque. The protein sequencer has replicated a few different kinds of meat, and I was able to pick up some garlic and onions from the market as well as something that tastes a bit like plum.”

“The sauce smells good.”

“Try some,” her mother suggested.

Siera dipped a finger in the sauce and placed it in her mouth. The sweet, spicy flavour of the fruit combined with the garlic and other ingredients exploded in her mouth.

“Oh my god, that’s amazing.”

Tara smiled. “Much better than synth food isn’t it?”

“I’ll say. Pity we can’t eat like this all the time.”

“It wouldn’t be special if we did it every day.”

“I suppose.” Siera took another taste.

“Enough of that, we’ve got a lot to do before your father and Joey get here.”

Siera placed the bowl of delicious sauce down on the bench and finished wiping the floor while her mother used a knife to cut the replicated meat into strips. When Tara was done she took the meat and placed it into the bowl of sauce using her fingers to knead the mixture together.

“What can I do now?” Siera asked.

“Can you grind some pepper in here while I do this? There should be some in the pantry.”

Siera opened the pantry door and hunted around for the pepper grinder. She picked it up and shook it. “I think we’re out of pepper, mum.”

“You’re sure?”

Siera rolled her eyes. “Yes, mum, I’m sure. Can we do without it?”

“It won’t be the same without pepper. I need you to run up to the market and get some.”

“Can’t David do it?”

Tara gave Siera a serious look. “I thought you wanted to help.”

“I do but–”

“Well this is helping. Take my chit and go to the market. Don’t worry; there’ll still be lots to do when you get back.”

Siera left their home, and walked along the open streets of the colony to the market. It was a clear day, and Yuva’s orange sun bathed the habitat with light and warmth, but this close to the light side of the planet, warmth was rarely an issue.

Their colony was built in the new style; a new style for Yuva.

The market and other amenities were located at the center of the colony, with the residential population surrounding it. It was a civic model that dated back to ancient times. No matter how far humanity had come, some things would never change.

People here lived and worked in detached buildings, with streets and walkways linking them together beneath a massive plasteel dome that shielded them from radiation and the elements. The terraformers had been able to make the air of Yuva breathable, but the planet’s ozone layer remained weak.

It was possible for a person to go outside the dome, but unless they wore a suit their skin would suffer from dangerous levels of ultraviolet radiation.

Siera’s wrist com buzzed as she crested a rise in the street.

“Now what’s she forgotten?” she muttered as she checked the device.

LEVEL 5 STORM WARNING

Environment hazard protocols in place

Her heart raced and she lifted her gaze to look out past the colony’s dome. A thin line of grey marked the horizon. The storm was still a long way off, but she’d lived here long enough to know that it would be here in no time at all.

Tech News: Google Seeking “Conscious Homes”

In Google’s drive for world supremacy, a good number of start-ups and developers have been bought up. Between their acquisition of eight robotics companies in the space of six months back in 2013 and their ongoing buyout of anyone in the business of aerospace, voice and facial recognition, and artificial intelligence, Google seems determined to have a controlling interest in all fields of innovation.

And in what is their second-largest acquisition to date, Google announced earlier this month that they intend to get in on the business of smart homes. The company in question is Nest Labs, a home automation company that was founded by former Apple engineers Tony Fadell and Matt Rogers in 2010 and is behind the creation of the Learning Thermostat and the Protect smoke and carbon monoxide detector.

The Learning Thermostat, the company’s flagship product, works by learning a home’s heating and cooling preferences over time, removing the need for manual adjustments or programming. Wi-Fi networking and a series of apps also let users control and monitor the unit from afar, consistent with one of the biggest tenets of smart home technology: connectivity.
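
The "learning" behavior described above can be pictured as a very small loop: record the user's manual adjustments by hour of day, then, once observations exist for an hour, use their average as that hour's setpoint. This is a deliberately toy model to illustrate the idea, not Nest's actual algorithm, and all names and temperatures are assumptions.

```python
from collections import defaultdict

class LearningThermostat:
    """Toy schedule learner: remembers what the user chose, hour by hour."""

    def __init__(self, default_c=20.0):
        self.default_c = default_c
        self.history = defaultdict(list)  # hour of day -> temps the user chose

    def manual_adjust(self, hour, temp_c):
        """User turns the dial: remember the choice made at this hour."""
        self.history[hour].append(temp_c)

    def setpoint(self, hour):
        """Use the average past choice for this hour, else the default."""
        temps = self.history.get(hour)
        if not temps:
            return self.default_c
        return sum(temps) / len(temps)

therm = LearningThermostat()
therm.manual_adjust(7, 21.0)   # warmer on waking
therm.manual_adjust(7, 22.0)
therm.manual_adjust(23, 17.0)  # cooler overnight
```

After a week or two of dial-turning, a schedule like this has effectively written itself, which is why the article says manual programming becomes unnecessary.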

Similarly, the Nest Protect, a combination smoke and carbon monoxide detector, works by differentiating between burnt toast and real fires. Whenever it detects smoke, one alarm goes off, which can be quieted by simply waving your hand in front of it. But in a real fire, or where deadly carbon monoxide is detected, a much louder alarm sounds to alert its owners.

In addition, the device sends a daily battery status report to the Nest mobile app – the same one that controls the thermostats – and is capable of connecting with other units in the home. And, since Nest is building a platform for all its devices, if a Nest thermostat is installed in the same home, the Protect can automatically shut it down in the event that carbon monoxide is detected.

According to a statement released by co-founder Tony Fadell, Nest will continue to be run in-house, but will be partnered with Google in their drive to create a conscious home. On his blog, Fadell explained his company’s decision to join forces with the tech giant:

Google will help us fully realize our vision of the conscious home and allow us to change the world faster than we ever could if we continued to go it alone. We’ve had great momentum, but this is a rocket ship. Google has the business resources, global scale, and platform reach to accelerate Nest growth across hardware, software, and services for the home globally.

Yes, and I’m guessing that the $3.2 billion price tag added a little push as well! Needless to say, some wondered why Apple didn’t try to snatch up this burgeoning company, seeing as how it’s being run by two of its former employees. But according to Fadell, Google founder Sergey Brin “instantly got what we were doing and so did the rest of the Google team” when they got a Nest demo at the 2011 TED conference.

In a press release, Google CEO Larry Page had this to say about bringing Nest into their fold:

They’re already delivering amazing products you can buy right now – thermostats that save energy and smoke/[carbon monoxide] alarms that can help keep your family safe. We are excited to bring great experiences to more homes in more countries and fulfill their dreams!

But according to some, this latest act by Google goes way beyond wanting to develop devices. Sara Watson at Harvard University’s Berkman Center for Internet and Society is one such person; she believes Google is now a company obsessed with viewing everyday activities as “information problems” to be solved by machine learning and algorithms.

Consider Google’s fleet of self-driving vehicles as an example, not to mention their many forays into smartphone and deep learning technology. The home is no different, and a Google-enabled smart home of the future, using a platform such as the Google Now app – which already gathers data on users’ travel habits – could adapt energy usage to your life in even more sophisticated ways.

Seen in these terms, Google’s long-term plans of being at the forefront of the new technological paradigm – where smart technology knows and anticipates our needs and everything is at our fingertips – certainly become clearer. I imagine that their next goal will be to facilitate the creation of household AIs, machine minds that monitor everything within our household, provide maintenance, and ensure energy efficiency.

However, another theory has it that this is in keeping with Google’s push into robotics, led by the former head of Android, Andy Rubin. According to Alexis C. Madrigal of the Atlantic, Nest always thought of itself as a robotics company, as evidenced by the fact that their VP of technology is none other than Yoky Matsuoka – a roboticist and artificial intelligence expert from the University of Washington.

During an interview with Madrigal back in 2012, she explained why this was. Apparently, Matsuoka saw Nest as being positioned right in a place where it could help machine and human intelligence work together:

The intersection of neuroscience and robotics is about how the human brain learns to do things and how machine learning comes in to augment that.

In short, Nest is a cryptorobotics company that deals in sensing, automation, and control. It may not make a personable, humanoid robot, but it is producing machine intelligences that can do things in the physical world. Seen in this respect, the acquisition was not so much part of Google’s drive to possess all our personal information, but a mere step along the way towards the creation of a working artificial intelligence.

It’s a Brave New World, and it seems that people like Musk and Page, and a slew of futurists determined to make it happen, are at the center of it.

Sources: cnet.news.com, (2), newscientist.com, nest.com, theatlantic.com

The Future of Smart Living: Smart Homes

At this year’s Consumer Electronics Show, one of the tech trends to watch was the concept of the Smart Home. Yes, in addition to 4K televisions, curved OLEDs, smart car technology and wearables, a new breed of in-home technology that extends far beyond the living room made some serious waves. And after numerous displays and presentations, it seems that future homes will involve connectivity and seamless automation.

To be fair, some smart home devices – such as connected light bulbs and thinking thermostats – have made their way into homes already. But by the end of 2014, a dizzying array of home devices is expected to appear, communicating across the Internet and your home network from every room in the house. It’s like the Internet of Things meets modern living, creating solutions that are right at your fingertips (via your smartphone).

But in many ways, the companies at the vanguard of this movement are still drawing the map, and several questions loom. For example, how will your connected refrigerator and your connected light bulbs talk to each other? Should the interface for the connected home always be the cell phone, or some other wirelessly connected device?

Such was the topic of debate at this year’s CES Smart Home Panel. The panel featured GE Home & Business Solutions Manager John Ouseph; Nest co-founder and VP of Engineering Matt Rogers; Revolv co-founder and Head of Marketing Mike Soucie; Philips’ Head of Technology, Connected Lighting George Yianni; Belkin Director of Product Management Ohad Zeira, and CNET Executive Editor Rich Brown.

Specific technologies showcased this year that combined connectivity and smart living included the Samsung Lumen Smart Home Control Panel. This device is basically a way to control all the devices in your home, including the lighting, climate control, and sound and entertainment systems. It also networks with all your wireless devices (especially if they’re made by Samsung!) to run your home even when you’re not inside it.

Ultimately, Samsung hopes to release a souped-up version of this technology that can be integrated with any device in the home. Basically, it would be connected to everything from the washer and dryer to the refrigerator and even household robots, letting you know when the dishes are done, the clothes need to be flipped, the best-before dates are about to expire, and the last time your house was vacuumed.


As already noted, intrinsic to the Smart Home concept is the idea of integration with smartphones and other devices. Hence, Samsung was sure to develop a Smart Home app that allows people to connect to all their smart devices via WiFi, even when out of the home. For example, people who forget to turn off the lights and the appliances can do so even from the road or the office.

These features can be activated by voice, and several systems can be controlled at once through specific commands (i.e. “going to bed” turns the lights off and the temperature down). Cameras also monitor the home and give the user the ability to survey other rooms in the house, keeping a remote eye on things while away or in another room. And users can even answer the phone when in another room.
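Under the hood, a command like “going to bed” is just a scene: one spoken phrase fanning out to several device actions. The sketch below models that mapping in Python; the device names, actions, and `send` callback are all invented for illustration, not Samsung’s actual API.

```python
# Each scene maps a spoken phrase to a list of (device, action) pairs.
SCENES = {
    "going to bed": [
        ("lights", "off"),
        ("thermostat", "lower"),
        ("doors", "lock"),
    ],
    "leaving home": [
        ("lights", "off"),
        ("appliances", "off"),
        ("cameras", "arm"),
    ],
}

def run_scene(phrase, send):
    """Look up a phrase and send each (device, action) pair to the hub."""
    actions = SCENES.get(phrase.lower().strip())
    if actions is None:
        return []  # unrecognized phrase: do nothing
    return [send(device, action) for device, action in actions]

# A stand-in for a real device hub: record what would have been sent.
log = []
run_scene("going to bed", lambda d, a: log.append((d, a)) or (d, a))
```

The appeal of the scene model is that adding a new device to the home means appending one pair to a list, rather than teaching the voice recognizer anything new.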

Check out the video of the Smart Home demonstration below:


Other companies made presentations as well. For instance, LG previewed their own software that would allow people to connect and communicate with their home. It’s known as HomeChat, an app based on Natural Language Processing (NLP) that lets users send texts to their compatible LG appliances. It works on Android, BlackBerry, iOS, Nokia Asha, and Windows Phone devices as well as OS X and Windows computers.

This represents a big improvement over last year’s Smart ThinQ, a set of similar applications that debuted at CES 2013. According to many tech reviewers, the biggest problem with those apps was that each one was developed for a specific appliance. Not so with HomeChat, which allows for wireless control over every integrated device in the home.

Then there’s Aura, a re-imagined alarm clock that monitors your sleep patterns to promote rest and well-being. Unlike previous sleep monitoring devices, which monitor sleep but do not intervene to improve it, the Aura is fitted with a mattress sensor that monitors your movements in the night, as well as a series of multi-colored LED lights that “hack” your circadian rhythms.

In the morning, its light glows blue like daytime light, signaling you to wake up when it’s optimal, based upon your stirrings. At night, the LED glows orange and red like a sunset and turns itself off when you fall asleep. The designers hope that this mix of cool and warm light can fill in where the seasons fall short, and coax your body into restful homeostasis.
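
The "wake when it's optimal" idea can be reduced to a simple rule: given a target alarm time and an early-wake window, wake at the first detected stirring inside the window, and fall back to the alarm time if none occurs. This is a purely illustrative sketch of that rule, not Aura's actual logic; times are minutes past midnight.

```python
def choose_wake_minute(target_min, window_min, stirrings):
    """Pick a wake time from a target alarm, an early window, and
    a sorted list of minutes-of-day when movement was detected."""
    window_start = target_min - window_min
    for t in stirrings:
        if window_start <= t <= target_min:
            return t       # user is already semi-awake: wake gently now
    return target_min      # no stirring in the window: use the alarm time

# e.g. alarm at 7:00 (420), 30-minute window, stirring detected at 6:47 (407)
```

Waking during light sleep rather than deep sleep is the whole point of pairing the mattress sensor with the light: the sensor supplies the stirrings, the light does the waking.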

Meanwhile, the Aura will send your nightly sleep report to the cloud via Wi-Fi, and you can check in on your own rest via the accompanying smartphone app. The entire body is also touch-sensitive, and its core LEDs – which are generally bright and piercing – are cleverly projected into an open-air orb, diffusing the light while evoking the shape of the sun. And to deactivate the alarm, people need only trigger the sensor by getting out of bed.

Then there was Mother, a robotic wellness monitor produced by French inventor Rafi Haladjian. This small, Russian-doll-shaped device is basically an internet base station with four sensor packs that track 15 different parts of your life. Each sensor is small enough to fit in your pocket to track your steps, affix to your door to act as a security alarm, or stick to your coffee maker to track how much you’re drinking and when you need more beans.

And though the name may sound silly or tongue-in-cheek, it is central to Haladjian’s vision of what the “Internet of things” holds for us. More and more, smart and sensor-laden devices are manifesting as wellness accessories, ranging from fitness bands to wireless blood pressure and heart rate monitors. The problem is, each of these devices requires its own app to operate, and the proliferation of devices is leading to a whole lot of digital clutter.

As Haladjian said in a recent interview with Co.Design:

Lots of things that were manageable when the number of smart devices was scarce, become unbearable when you push the limit past 10. You won’t be willing to change 50 batteries every couple of weeks. You won’t be willing to push the sync button every day. And you can’t bear to have 50 devices sending you notifications when something happens to them!

And last, but not least, there was the Keecker – a robotic video projector that may just be the future of video entertainment. Not only is this robot able to wheel around the house like a Roomba, it can also sync with smartphones and display anything on your smart devices – from email, to photos, to videos. And it has a battery charge that lasts a week, so no cords are needed.

Designed by Pierre Lebeau, a former product manager at Google, the robot is programmed to follow its human owner from room to room like a little butler (via the smartphone app). Its purpose is to create an immersive media environment by freeing the screen from its fixed spot and projecting it wherever there is enough surface space.


In this respect, it’s not unlike the Omnitouch or other projection smartscreens, which utilize projectors and motion-capture technology to let people turn any surface into a screen. The design even includes features found in other smart home devices – like the Nest smoke detector or the Spotter – allowing it to measure a home’s CO2 levels and temperature, or alert users to unusual activity when they aren’t home.

Lebeau and his company will soon be launching a Kickstarter campaign to finance bringing the technology to the open market. And though it has yet to launch, the cost of the robot is expected to be between $4000 and $5000.

Sources: cnet.com, (2), (3), (4), fastcodesign, (2), (3), (4)

Judgement Day Update: Google Robot Army Expanding

Last week, Google announced that it will be expanding its menagerie of robots, thanks to a recent acquisition. The announcement came on Dec. 13th, when the tech giant confirmed that it had bought out the engineering company known as Boston Dynamics. This company, which has had several lucrative contracts with DARPA and the Pentagon, has been making headlines in recent years thanks to its advanced robot designs.

Based in Waltham, Massachusetts, Boston Dynamics has gained an international reputation for machines that walk with an uncanny sense of balance, can navigate tough terrain on four feet, and even run faster than the fastest humans. The names BigDog, Cheetah, WildCat, Atlas and the Legged Squad Support System (LS3) have all become synonymous with the next generation of robotics, an era when machines can handle tasks too dangerous or too dirty for most humans to do.

More impressive is the fact that this is the eighth robotics company that Google has acquired in the past six months. Thus far, the company has been tight-lipped about what it intends to do with this expanding robot-making arsenal. But Boston Dynamics and its machines bring significant cachet to Google’s robotic efforts, which are being led by Andy Rubin, the Google executive who spearheaded the development of Android.

The deal is also the clearest indication yet that Google is intent on building a new class of autonomous systems that might do anything from warehouse work to package delivery and even elder care. And considering the many areas of scientific and technological advancement Google is involved in – everything from AI and IT to smartphones and space travel – it is not surprising to see them branching out in this way.

Boston Dynamics was founded in 1992 by Marc Raibert, a former professor at the Massachusetts Institute of Technology. And while it has not sold robots commercially, it has pushed the limits of mobile and off-road robotics technology, thanks to its ongoing relationship with – and funding from – DARPA. Early on, the company also did consulting work for Sony on consumer robots like the Aibo robotic dog.

Speaking on the subject of the recent acquisition, Raibert had nothing but nice things to say about Google and the man leading the charge:

I am excited by Andy and Google’s ability to think very, very big, with the resources to make it happen.

Videos uploaded to YouTube featuring the robots of Boston Dynamics have been extremely popular in recent years. For example, the video of their four-legged, gas-powered BigDog walker has been viewed 15 million times since it was posted in 2008. In the comments, many people expressed dismay at the prospect of such robots eventually becoming autonomous killing machines with the potential to murder us.

In response, Dr. Raibert has emphasized repeatedly that he does not consider his company to be a military contractor – it is merely trying to advance robotics technology. Google executives said the company would honor existing military contracts, but that it did not plan to move toward becoming a military contractor on its own. In many respects, this acquisition is likely just an attempt to acquire more talent and resources as part of a larger push.

Google’s other robotics acquisitions include companies in the United States and Japan that have pioneered a range of technologies, including software for advanced robot arms, grasping technology and computer vision. Mr. Rubin has also said that he is interested in advancing sensor technology. He has called his robotics effort a “moonshot,” but has declined to describe specific products that might come from the project.

He has, however, also said that he expects initial product development to go on for some time, indicating that Google commercial robots of some nature would not be available for several more years. Google declined to say how much it paid for its newest robotics acquisition and said that it did not plan to release financial information on any of the other companies it has recently bought.

Considering the growing power and influence Google is having over technological research – be it in computing, robotics, neural nets or space exploration – it might not be too soon to assume that they are destined to one day create the supercomputer that will try to kill us all. In short, Google will play Cyberdyne to Skynet and unleash the Terminators. Consider yourself warned, people! 😉

Source: nytimes.com

Judgement Day Update: Artificial Muscles for Robots

It’s a science fiction staple: the android or humanoid robot opens up its insides to reveal a network of gears or brightly-lit cables running underneath. However, as the science behind making androids improves, we are moving farther and farther away from this sci-fi cliche. In fact, thanks to recent advancements, robots in the future may look a lot like us when you strip away their outer layers.

It’s what is known as biomimetics, the science of creating technology that mimics biology. And the latest breakthrough in this field comes from the National University of Singapore’s Faculty of Engineering, where researchers have developed the world’s first “robotic” muscle. Much like the real thing, this artificial tissue extends to five times its original length and has the potential to lift 80 times its own weight.

In addition to being a first in robotics, this new development is exciting because it resolves a central problem that has plagued robots since their inception. In the 1960s, John W. Campbell Jr., editor of Analog Science Fiction magazine, pointed out this problem when he outlined a scenario where a man is chased across rough country by a mad scientist’s horde of killer robots.

In this scenario, the various models chasing the man were stymied by obstacles that he could easily overcome: they sank in mud, couldn’t jump over logs or get around rocks, and got tangled up in bushes. In the end, the only robots capable of keeping up with him were so light and underpowered that he was able to tear them apart with his bare hands.

This is a far cry from another science fiction staple, the one which presents robots as powerful automatons that can bend steel girders and carry out immense feats of strength. While some robots certainly can do this, they are extremely heavy and use hydraulics for the heavy lifting. Pound for pound, they’re actually very weak compared to a human, being capable of lifting only half their weight.

Another problem is the fact that robots using gears and motors, pneumatics, or hydraulics lack fine control. They tend to move in jerky motions and have to pause between each move, giving rise to a form of motion that we like to call “the robot”. Basically, it is very difficult to make a robot that is capable of delicate, smooth movements, the kind humans and animals take for granted.

For some time now, scientists and researchers have been looking to biomimetics to achieve the long-sought-after dream of smaller, stronger robots that are capable of more refined movements. And taken in tandem with other developments – such as the Kenshiro robot developed by roboticists at the University of Tokyo – that time might finally be here.

Developed by a four-person team led by Dr. Adrian Koh – from the NUS Engineering Science Program and Department of Civil and Environmental Engineering – the new artificial muscle is an example of an electroactive polymer. Basically, this is a combination of dielectric elastomer and rubber that changes shape when stimulated by an electric field. In this respect, the artificial muscle is much like an organic one, using electrical stimulus to trigger movement.
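To get a feel for the principle, the actuation of a dielectric elastomer is commonly modeled to first order by the Maxwell stress, p = ε0·εr·E², the electrostatic pressure that squeezes the film when a voltage is applied across it. The sketch below is purely illustrative – the function name, drive voltage, film thickness and permittivity are my assumptions, not specs of the NUS muscle:

```python
# First-order model of dielectric elastomer actuation: the electrostatic
# (Maxwell) pressure on the film is p = eps0 * eps_r * E^2, where E is the
# electric field across the film. All numbers are illustrative.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def maxwell_pressure(eps_r, voltage, thickness):
    """Actuation pressure (Pa) on an elastomer film with relative
    permittivity eps_r, driven at `voltage` (V) across `thickness` (m)."""
    e_field = voltage / thickness       # electric field, V/m
    return EPS0 * eps_r * e_field ** 2  # pressure, Pa

# Example: a 5 kV drive across a 50-micron acrylic film (eps_r ~ 4.7)
p = maxwell_pressure(eps_r=4.7, voltage=5000, thickness=50e-6)
print(f"Actuation pressure: {p / 1e3:.1f} kPa")  # ~416 kPa
```

Note the quadratic dependence on the field: halving the film thickness at the same voltage quadruples the pressure, which is why these actuators are driven as very thin films at kilovolt potentials.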

 

Robots using artificial muscles would be a far cry from clanking mechanical men. They would be much more lifelike, capable of facial expression and precise, graceful movements. They would also have superhuman strength, yet weigh the same as a person. In addition, the polymer used to fabricate the muscles may have more general applications in machines, such as cranes.

An added bonus of the polymer is that it can convert and store energy, which means it’s possible to design robots that power themselves after charging for only minutes. In a statement released by his department, Dr. Koh highlighted the benefits of the design and what it is capable of doing:

Our novel muscles are not just strong and responsive. Their movements produce a by-product – energy. As the muscles contract and expand, they are capable of converting mechanical energy into electrical energy. Due to the nature of this material, it is capable of packing a large amount of energy in a small package. We calculated that if one were to build an electrical generator from these soft materials, a 10 kg (22 lb) system is capable of producing the same amount of energy of a one-ton electrical turbine.

Dr. Koh also indicated that robots equipped with these types of muscles “will be able to function in a more human-like manner – and outperform humans in strength.” Theoretically, such polymer-based tissues could extend to ten times their original length and lift up to 500 times their own weight, though the current version isn’t anywhere near that limit just yet.

In the meantime, Dr Koh and his team have applied for a patent for the artificial muscle and are continuing work on it. They predict that within five years they could have a robot arm that is half the size and weight of a human arm, yet could win an arm wrestling match. And the applications are limitless, ranging from robotic servants to search and rescue bots and heavy robot laborers. And let’s not forget that cybernetic arms that boast that kind of increased strength are also likely to become a popular prosthetic and enhancement item.

And for those who are naturally afraid of a future where super-human robots that have the strength to tear us limb from limb are walking among us, let me remind you that we still have Asimov’s “Three Laws of Robotics” to fall back on. Never mind what happened in the terrible movie adaptation, those laws are incontrovertible and will work… I hope!

Sources: gizmag.com, engadget.com, 33rdsquare.com

Cool Video: “Kara”, by Quantic Dream

I just came across this very interesting video over at Future Timeline, where the subject in question was how, by the 22nd century, androids would one day be indistinguishable from humans. To illustrate the point, the writers used a video produced by Quantic Dream, a motion capture and animation studio that produces 3D sequences for video games as well as its own video shorts and proprietary technologies.

The video below is entitled “Kara”, a video short that was developed for the PS3 and presented during the 2012 Game Developers Conference in San Francisco. A stunning visual feat and the winner of the Best Experimental Film award at the International LA Shorts Film Fest 2012, Kara tells the story of an AX 400 third-generation android getting assembled and initiated.

Naturally, things go wrong during the process when a “bug” is encountered. I shan’t say more seeing as how I don’t want to spoil the movie, but trust me when I say it’s quite poignant and manages to capture the issue of emerging intelligence quite effectively. As the good folks at Future Timeline used this video to illustrate, the 22nd century is likely to see a new type of civil rights movement, one which has nothing to do with “human rights”.

Enjoy!

The Future of Medicine: Smartphone Medicine!

It’s no secret that the exponential growth in smartphone use has been paralleled by a similar growth in what they can do. Every day, new and interesting apps are developed which give people the ability to access new kinds of information, interface with other devices, and even perform a range of scans on themselves. It is these latter two aspects of development that are especially exciting, as they are opening the door to medical applications.

Yes, in addition to temporary tattoos and tiny medimachines that can be monitored from your smartphone or other mobile computing device, there is also a range of apps that allow you to test your eyesight and even conduct ultrasounds on yourself. But perhaps most impressive is the new Smartphone Spectrometer, an iPhone program which will allow users to diagnose their own illnesses.

Consisting of an iPhone cradle, phone and app, this spectrometer costs just $200 and has the same level of diagnostic accuracy as a $50,000 machine, according to Brian Cunningham, a professor at the University of Illinois, who developed it with his students. Using the phone’s camera and a series of optical components in the cradle, the machine detects the light spectrum passing through a liquid sample.

This liquid can be urine, blood, or any other bodily fluid that exhibits traces of harmful infection. By comparing the sample’s spectrum to the spectra of target molecules, such as toxins or bacteria, it’s possible to work out how much of each is in the sample. In short, a quickie diagnosis for the cost of a fancy new phone.
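The comparison step can be sketched with the Beer-Lambert law, a standard spectrometry relation: absorbance A = ε·l·c links how strongly the sample attenuates light at a given wavelength to the concentration of the absorbing molecule. The function names and numbers below are illustrative assumptions, not details of Cunningham’s device:

```python
# Beer-Lambert sketch: from measured light intensities to an estimated
# concentration of a target molecule. Numbers are illustrative only.
import math

def absorbance(intensity_in, intensity_out):
    """A = log10(I0 / I): how strongly the sample attenuates the light."""
    return math.log10(intensity_in / intensity_out)

def concentration(absorbance_value, epsilon, path_length_cm):
    """Solve Beer-Lambert (A = epsilon * l * c) for concentration, mol/L.
    epsilon is the target molecule's molar absorptivity, L/(mol*cm)."""
    return absorbance_value / (epsilon * path_length_cm)

# Example: the camera sees 25% of the light transmitted through a 1-cm
# cartridge; the target molecule absorbs with epsilon = 12000 L/(mol*cm).
A = absorbance(intensity_in=1.0, intensity_out=0.25)
c = concentration(A, epsilon=12000, path_length_cm=1.0)
print(f"Estimated concentration: {c:.2e} mol/L")
```

In practice a full spectrometer does this across many wavelengths at once and fits the result against reference spectra, but the single-wavelength case captures the core idea: known absorptivity plus measured attenuation yields concentration.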

Granted, there are limitations at this point. For one, the device is nowhere near as efficient as its industrial counterpart: whereas the automated $50,000 version can process up to 100 samples at a time, the iPhone spectrometer can only do one. But by the time Cunningham and his team commercialize the design, they hope to have increased that efficiency by a few orders of magnitude.

On the plus side, the device is far more portable than any other known spectrometer. Whereas a lab is fixed in place and has to process thousands of samples at any given time, leading to waiting lists, this device can be used just about anywhere. In addition, there’s no loss of accuracy. As Cunningham explained:

We were using the same kits you can use to detect cancer markers, HIV infections, or certain toxins, putting the liquid into our cartridge and measuring it on the phone. We have compared the measurements from full pieces of equipment, and we get the same outcome.

Cunningham is currently filing a patent application and looking for investment. He also has a grant from the National Science Foundation to develop an Android version. And while he doesn’t think smartphone-based devices will replace standard spectrometry machines with long track records and F.D.A. approval, he does believe they could enable more testing.

This is especially true in countries where government-regulated testing is harder to come by, or where medical facilities are under-supplied and waiting lists are prohibitively long. With diseases like cancer and HIV, early detection can be the difference between life and death, which is a major advantage, according to Cunningham:

In the future, it’ll be possible for someone to monitor themselves without having to go to a hospital. For example, that might be monitoring their cardiac disease or cancer treatment. They could do a simple test at home every day, and all that information could be monitored by their physician without them having to go in.

But of course, the new iPhone spectrometer is not alone. Many other variations are coming out, such as the PublicLaboratory Mobile Spectrometer, or Android’s own version of the Spectral Workbench. And of course, this all calls to mind the miniature spectrometer of Jack Andraka, the 16-year-old who invented a low-cost litmus test for pancreatic cancer and who won the 2012 Intel International Science and Engineering Fair (ISEF). That’s him in the middle of the picture below:

It’s the age of mobile medicine, my friends. Thanks to miniaturization, nanofabrication, wireless technology, mobile devices, and an almost daily rate of improvement in medical technology, we are entering into an age where early detection and cost-saving devices are making medicine more affordable and accessible.

In addition, all this progress is likely to add up to many lives being saved, especially in developing regions or low-income communities. It’s always encouraging when technological advances have the effect of narrowing the gap between the haves and the have nots, rather than widening it.

And of course, there’s a video of the smartphone spectrometer at work, courtesy of Cunningham’s research team and the University of Illinois:


Source: fast.coexist.com

Judgement Day Update: Geminoid Robotic Clones

We all know it’s coming: the day when machines will be indistinguishable from human beings. And with a robot that is capable of imitating human body language and facial expressions, it seems we are that much closer to realizing it. It’s known as the Geminoid HI-2, a robotic clone of its maker, famed Japanese roboticist Hiroshi Ishiguro.

Ishiguro unveiled his latest creation at this year’s Global Future 2045 conference, an annual get-together for all sorts of cybernetics enthusiasts, life extension researchers, and singularity proponents. As one of the world’s top experts on human-mimicking robots, Ishiguro wants his creations to be as close to human as possible.

Alas, this has been difficult, since human beings tend to fidget and experience involuntary tics and movements. But that’s precisely what his latest bot excels at. Though it still requires a remote controller, the Ishiguro clone has all of his idiosyncrasies hard-wired into its frame, and can even give you dirty looks.

This is not the first robot Ishiguro has built, as his female androids Repliee Q1Expo and Geminoid F will attest. But above all, Ishiguro loves to make robotic versions of himself, since one of his chief aims with robotics is to make human proxies. As he said during his talk, “Thanks to my android, when I have two meetings I can be in two places simultaneously.” I honestly think he was only half-joking!

During the presentation, Ishiguro’s robotic clone was on stage with him, where it realistically fidgeted as he pontificated and joked with the audience. The Geminoid was controlled from off-stage by an unseen technician, who made it fidget, yawn, and pull annoyed facial expressions. At the end of the talk, Ishiguro’s clone suddenly jumped to life and told a joke that startled the crowd.

In Ishiguro’s eyes, robotic clones can outperform humans at basic human behaviors thanks to modern engineering. And though they are not yet to the point where the term “android” can be applied, he believes it is only a matter of time before they rival and surpass the real thing. Roboticists and futurists refer to the “uncanny valley” – that strange, off-putting feeling people get when robots begin to increasingly resemble humans. If said valley were a physical place, I think we can all agree that Ishiguro would be its damn mayor!

And judging by these latest creations, the time when robots are indistinguishable from humans may be coming sooner than we think. As you can see from the photos, there seems to be very little difference in appearance between his robots and their human counterparts. And those who viewed them live have attested to them being surprisingly life-like. And once they are able to control themselves and have an artificial neural net that can rival a human one in terms of complexity, we can expect them to mimic many of our other idiosyncrasies as well.

As usual, there are those who will respond to this news with anticipation and those who respond with trepidation. Where do you fall? Maybe these videos from the conference of Ishiguro’s inventions in action will help you make up your mind:

Ishiguro Clone:


Geminoid F:

Sources: fastcoexist.com, geminoid.jp