Judgement Day Update: Cheetah Robot Unleashed!

There have been lots of high-speed bio-inspired robots in recent years, as exemplified by Boston Dynamics' WildCat. But MIT's Cheetah robot, which made its big debut earlier this month, is in a class by itself. In addition to being able to run at impressive speeds, bound, and jump over obstacles, this particular biomimetic robot is driven by batteries and electric motors rather than by a gasoline engine and hydraulics, and can function untethered (i.e. not connected to a power source).

While gasoline-powered robots are still very much bio-inspired, they depend on sheer power to try to match the force and speed of their flesh-and-blood counterparts. They're also pretty noisy, as the demonstration of the WildCat certainly showed (video below). MIT's Cheetah takes the alternative route of applying less power but doing so more efficiently, more closely mimicking the musculoskeletal system of a living creature.

This is not only a reversal of contemporary robotics practice, but a break from history. Historically, to make a robot run faster, engineers made the legs move faster. The alternative is to keep the same stride frequency, but to push down harder against the ground with each step. As MIT's Sangbae Kim explained:

Our robot can be silent and as efficient as animals. The only things you hear are the feet hitting the ground… Many sprinters, like Usain Bolt, don’t cycle their legs really fast. They actually increase their stride length by pushing downward harder and increasing their ground force, so they can fly more while keeping the same frequency.

MIT’s Cheetah uses much the same approach as a sprinter, combining custom-designed high-torque-density electric motors made at MIT with amplifiers that control the motors (also a custom MIT job). These two technologies, combined with a bio-inspired leg, allow the Cheetah to apply exactly the right amount of force to successfully bound across the ground and navigate obstacles without falling over.

When it wants to jump over an obstacle, it simply pushes down harder; and as you can see from the video below, the results speak for themselves. For now, the Cheetah can run untethered at around 16 km/h (10 mph) across grass, and hurdle obstacles up to 33 centimeters high. The Cheetah currently bounds – a fairly simple gait where the front and rear legs move almost in unison – but galloping, where all four legs move asymmetrically, is the ultimate goal.

With a new gait, and a little byte surgery to the control algorithms, MIT hopes that the current Cheetah can hit speeds of up to 48 km/h (30 mph), which would make it the fastest untethered quadruped robot in the world. While this is still a good deal slower than the real thing – real cheetahs can sprint at over 100 km/h (60 mph) – it will certainly constitute another big step for biomimetics and robotics.

Be sure to check out the video of the Cheetah's test, and see how it differs from the Boston Dynamics/DARPA WildCat tests from October of last year:



Source:
extremetech.com

News From Space: Astronaut Robots

As if it weren't bad enough that they are replacing workers here on Earth, now they are being designed to replace us in space! At least, that's the general idea behind Google and NASA's collaborative effort to make SPHERES (Synchronized Position Hold, Engage, Reorient, Experimental Satellites). As the name suggests, these robots are spherical, floating machines that use small CO2 thrusters to move about and perform chores usually done by astronauts.

Earlier this month, NASA announced its plan to launch some SPHERES aboard an unmanned Cygnus spacecraft to the International Space Station to begin testing. That launch took place on July 11th, and the testing has since begun. Powered by Tango, Google's prototype smartphone that comes with 3D sensors that map the environment around it, the three satellites were used to perform routine tasks.

NASA has sent SPHERES to the ISS before, but all they could really do was move around using their small CO2 thrusters. With the addition of a Tango "brain", though, the hope is that the robots will actually be able to assist astronauts with some tasks, or even carry out some mundane chores entirely. In addition, the mission is to prepare the robots for long-term use and harmonize them with the ISS environment.

This will consist of the ISS astronauts testing the SPHERES' ability to fly around and dock themselves to recharge (since their batteries only last 90 minutes), and using the Tango phones to map the Space Station three-dimensionally. This data will be fed into the robots so they have a baseline for their flight patterns. The smartphones will be attached to the robots for future imaging tasks, and they will help with mathematical calculations and transmitting a Wi-Fi signal.

In true science fiction fashion, the SPHERES project began in 2000, after MIT professor David W. Miller was inspired by the Star Wars scene where Luke Skywalker is trained in handling a lightsaber by a small flying robot. Miller asked his students to create a similar robot for the aerospace industry. Their creations were then sent to the ISS in 2006, where they have been ever since.

As these early SPHERES aren't equipped with tools, they will mostly just fly around the ISS, testing out their software. The eventual goal is to have a fleet of these robots flying around in formation, fixing things, docking with and moving things about, and autonomously looking for misplaced items. If SPHERES can also perform EVAs (extra-vehicular activities, or space walks), then the risks of being an astronaut would be significantly reduced.

In recent years there has been a marked shift towards the use of off-the-shelf hardware in space (and military) applications. This is partly due to tighter budgets, and partly because modern technology has become pretty damn sophisticated. As Chris Provencher, SPHERES project manager, said in an interview with Reuters:

We wanted to add communication, a camera, increase the processing capability, accelerometers and other sensors [to the SPHERES]. As we were scratching our heads thinking about what to do, we realized the answer was in our hands. Let’s just use smartphones.

The SPHERES system is currently planned to be in use on the ISS until at least 2017. Combined with NASA's Robonaut, there are some fears that this is the beginning of a trend where astronauts are replaced entirely by robots. But considering how long it would take to visit a nearby star, maybe that's not such a bad thing. At least until all of the necessary terraforming has been carried out in advance of the settlers.

So perhaps robots will only be used to do the heavy lifting, or the work that is too dull, dangerous or dirty for regular astronauts – just like drones. Hopefully, they won’t be militarized though. We all saw how that went! And be sure to check out this video of SPHERES being upgraded with Project Tango, courtesy of Google’s Advanced Technology and Projects group (ATAP):


Sources:
nasa.gov, extremetech.com, techtimes.com

Judgement Day Update: Terminators at I/O 2014

google_terminatorsWe’ve all thought about it… the day when super-intelligent computer becomes self-aware and unleashes a nuclear holocaust, followed shortly thereafter by the rise of the machines (cue theme from Terminator). But as it turns out, when the robot army does come to exterminate humanity, at two humans might be safe – Google co-founders Larry Page and Sergey Brin to be precise.

Basically, they’ve uploaded a killer-robots.txt file to their servers that instructs T-800 and T-1000 Terminators to spare the company’s co-founders (or “disallow” their deaths). Such was the subject of a totally tongue-in-cheek presentation at this year’s Google I/O at the Moscone Center in San Fransisco, which coincided with the 20th anniversary of the Robots.txt file.

This tool, which was created in 1994, instructs search engines and other automated bots to avoid crawling certain pages or directories of a website. The industry has done a remarkable job staying true to the simple text file in the two decades since; Google, Bing, and Yahoo still obey its directives. The changes they uploaded read like this, just in case you're planning on adding your name to the "disallow" list:

[Screenshot: the killer-robots.txt "disallow" entries]
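Joking aside, the underlying robots.txt format really is that simple: `User-agent` lines name a crawler, and `Disallow` lines list the paths it should avoid. As an illustrative sketch (the Terminator agent names follow the joke and are not real crawlers), Python's standard library can parse such a file:

```python
from urllib import robotparser

# A hypothetical file in the spirit of Google's killer-robots.txt Easter egg.
rules = """\
User-agent: T-1000
User-agent: T-800
Disallow: /+LarryPage
Disallow: /+SergeyBrin
""".splitlines()

parser = robotparser.RobotFileParser()
parser.parse(rules)

# A well-behaved Terminator must now leave the co-founders alone:
print(parser.can_fetch("T-1000", "/+LarryPage"))  # False: explicitly disallowed
print(parser.can_fetch("T-1000", "/search"))      # True: no rule matches
```

Any path not covered by a `Disallow` rule is allowed by default, which is why compliance has always been voluntary: the file is a request, not an enforcement mechanism.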

While that tool didn't exactly take the rise of the machines into account, its appearance on Google's website as an Easter egg did add some levity to a company that is already being accused of facilitating the creation of killer robots. And while calling Google's proposed line of robots "killer" seems both premature and extreme, that did not stop a protester from interrupting the I/O 2014 keynote address.

Basically, as Google's senior VP of technical infrastructure Urs Hölzle spoke about their cloud platform, the unidentified man stood up and began screaming "You all work for a totalitarian company that builds machines that kill people!" As you can see from the video below, Hölzle did his best to take the interruption in stride and continued with the presentation. The protester was later escorted out by security.

This wasn’t the first time that Google has been the source of controversy over the prospect of building “killer robots”. Ever since Google acquired Boston Dynamics and seven other robots companies in the space of six months (between and June and Dec of 2013), there has been some fear that the company has a killer machine in the works that it will attempt to sell to the armed forces.

Naturally, this is all part of a general sense of anxiety that surrounds developments being made across multiple fields. Whereas some concerns have crystallized into dedicated and intelligent calls for banning autonomous killer machines in advance – a.k.a. the Campaign to Stop Killer Robots – others have resulted in the kinds of irrational outbursts observed at this year's I/O.

Needless to say, if Google does begin developing killer robots, or just starts militarizing its line of Boston Dynamics acquisitions, we can expect just about everyone who can access (or hack their way into) the killer-robots.txt file to be adding their names. And it might not be too soon to update the list to include the T-X, Replicants, and any other killer robots we can think of!

And be sure to check out the video of the “killer robot” protester speaking out at 2014 I/O:


Sources: 
theverge.com

The Future is Here: First Android Newscasters in Japan

This past week, Japanese scientists unveiled what they claim is the world's first news-reading android. The adolescent-looking "Kodomoroid" – an amalgamation of the Japanese word "kodomo" (child) and "android" – and the "Otonaroid" ("otona" meaning adult) introduced themselves at an exhibit entitled Android: What is a Human?, which is being presented at Tokyo's National Museum of Emerging Science and Innovation (Miraikan).

The androids were flanked by robotics professor Hiroshi Ishiguro and Miraikan director Mamoru Mori. Kodomoroid delivered news of an earthquake and an FBI raid to amazed reporters in Tokyo, and even poked fun at her creator, telling Ishiguro: "You're starting to look like a robot!" Otonaroid then fluffed her lines when asked to introduce herself, excusing herself by saying, "I'm a little bit nervous."

Both androids will be working at Miraikan and interacting with visitors, as part of Ishiguro's studies into human reactions to the machines. Ishiguro is well known for his work with "geminoids", robots that bear a frightening resemblance to their creators. As part of his lecture process, Ishiguro takes his geminoid with him when he travels and even lets it deliver his lectures for him. During an interview with AFP, he explained the reasoning behind this latest exhibit:

This will give us important feedback as we explore the question of what is human. We want robots to become increasingly clever. We will have more and more robots in our lives in the future.

Granted, the unveiling did have its share of bugs. For her part, Otonaroid looked as if she could use some rewiring before beginning her new role as the museum's science communicator, her lips out of sync and her neck movements symptomatic of a bad night's sleep. But Ishiguro insisted both would prove invaluable to his continued research, as museum visitors get to have conversations with the 'droids and operate them as extensions of their own bodies.

And this is just one of many forays into a world where the line between robots and humans is becoming blurred. After a successful debut earlier this month, a chatty humanoid called Pepper is set to go on sale as a household companion in Japan starting next year. Designed by SoftBank using technology acquired from French robotics company Aldebaran, each robot will cost around $2,000 – about the same as a laptop.

Pepper can communicate through emotion, speech, or body language, and it's equipped with both mics and proximity sensors. It will be possible to install apps and upgrade the unit's functionality, the plan being to make Pepper far smarter than when you first bought it. It already understands 4,500 Japanese words; but perhaps more impressively, Pepper can apparently read its owner's tone of voice to gauge their disposition.

Aldebaran CEO Bruno Maisonnier claims that robots that can recognize human emotion will change the way we live and communicate. And this is certainly a big step towards getting robots into our daily lives, at least if you live in Japan (the only place Pepper will be available for the time being). He also believes this is the start of a "robotic revolution" where robotic household companions that can understand and interact with their human owners will become the norm.

Hmm, a world where robots are increasingly indistinguishable from humans, can do human jobs, and are capable of understanding and mimicking our emotions. Oh, and they live in our houses too? Yeah, I’m just going to ignore the warning bells going off in my head now! And in the meantime, be sure to check out these videos of Kodomoroid and Otonaroid and Pepper being unveiled for the first time:

World’s First Android Newscasters:


Aldebaran’s Pepper:


Sources:
cnet.com, gizmodo.com, engadget.com, nydailynews.com

Stephen Hawking: AI Could Be a “Real Danger”

In a hilarious appearance on "Last Week Tonight" – John Oliver's HBO show – guest Stephen Hawking spoke about some rather interesting concepts. Among these were "imaginary time" and, more interestingly, artificial intelligence. And much to the surprise of Oliver, and perhaps more than a few viewers, Hawking was not too keen on the idea of the latter. In fact, his predictions were just a tad bit dire.

Of course, this is not the first time Oliver has had a scientific authority on his show, as demonstrated by his recent episode which dealt with climate change and featured guest speaker Bill Nye "The Science Guy". When asked about the concept of imaginary time, Hawking explained it as follows:

Imaginary time is like another direction in space. It’s the one bit of my work science fiction writers haven’t used.

In sum, imaginary time has something to do with time that runs in a different direction to the time that guides the universe and ravages us on a daily basis. And according to Hawking, the reason why sci-fi writers haven't built stories around imaginary time is apparently that "they don't understand it". As for artificial intelligence, Hawking replied without any sugar-coating:

Artificial intelligence could be a real danger in the not too distant future. [It could] design improvements to itself and outsmart us all.

Oliver, channeling his inner 9-year-old, asked: "But why should I not be excited about fighting a robot?" Hawking offered a very scientific response: "You would lose." And in that respect, he was absolutely right. One of the greatest concerns with AI, for better or for worse, is that a superior intelligence, left to its own devices, would find ways to produce better and better machines without human oversight or intervention.

At worst, this could lead to the machines concluding that humanity is no longer necessary. At best, it would lead to an earthly utopia where machines address all our worries. But in all likelihood, it will lead to a future where the pace of technological change is impossible to predict. As history has repeatedly shown, technological change brings with it all kinds of social and political upheaval. If it becomes a runaway effect, humanity will find it impossible to keep up.

Keeping things light, Oliver began to worry that Hawking wasn't talking to him at all, and that this could instead be a computer spouting wisdom. To which Hawking replied: "You're an idiot." Oliver also wondered whether, given that there may be many parallel universes, there might be one where he is smarter than Hawking. "Yes," replied the physicist. "And also a universe where you're funny."

Well at least robots won’t have the jump on us when it comes to being irreverent. At least… not right away! Check out the video of the interview below:


Source: cnet.com

The Future is Here: Roombot Transforming Furniture

Robotic arms and other mechanisms have long been used to make or assemble furniture; but thus far, no one has ever created robots that are capable of becoming furniture. Swiss researchers are aiming to change that with Roombots, reconfigurable robotic modules that connect to each other to change shape and transform into different types of furniture, based on the needs and specifications of users.

Created by the Biorobotics Laboratory (BioRob) at École polytechnique fédérale de Lausanne (EPFL), the self-assembling Roombots attach to each other via connectors which enable them to take on the desired shape. The team's main goal is to create self-assembling interactive furniture that can be used in a variety of ways. They were designed primarily to help the disabled or elderly by morphing to suit their needs.

Like LEGO bricks, Roombots can be stacked upon each other to create various structures and/or combined with furniture and other objects, changing not only their shape but also their functionality. For instance, a person lying down on a Roombot bed could slowly be moved into a seated position, or a table could scoot over to a corner or tilt itself to help a book slide into a person's hands. The team has reached a number of significant milestones, such as having the Roombots move freely, bringing all this multi-functionality closer.

Each 22 cm-long module (which is made up of four half-spheres) has a wireless connection, a battery, and three motors that allow the module to pivot with three degrees of freedom. Each module also has retractable "claws" that are used to attach to other pieces to form larger structures. With a series of rotations and connections, the modules can change shape and become any of a variety of objects. A special surface with holes adapted to the Roombots' mechanical claws can also allow the modules to anchor to a wall or floor.
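Based on that description, each module's state can be pictured as three independently driven rotary joints plus retractable connectors. This is an illustrative model only, not EPFL's actual control software:

```python
from dataclasses import dataclass, field

@dataclass
class RoombotModule:
    """Toy model of one 22 cm Roombot module: three rotary degrees of
    freedom plus retractable claws for latching onto neighbouring
    modules or connector plates."""
    joint_angles_deg: list = field(default_factory=lambda: [0.0, 0.0, 0.0])
    claws_extended: bool = False
    attached_to: set = field(default_factory=set)

    def pivot(self, joint: int, delta_deg: float) -> None:
        # Each of the three motors turns one joint; keep angles in [0, 360).
        self.joint_angles_deg[joint] = (self.joint_angles_deg[joint] + delta_deg) % 360

    def latch(self, other: "RoombotModule") -> None:
        # Claws must be extended before two modules can join.
        self.claws_extended = True
        self.attached_to.add(id(other))

m1, m2 = RoombotModule(), RoombotModule()
m1.pivot(0, 90)   # rotate the first joint a quarter turn
m1.latch(m2)      # grab a neighbour to form a two-module structure
```

Chained pivots of latched modules are what let a stack of these "bricks" reshape itself into a stool, a table leg, or a crawling pair.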

The Roombots can even climb up a wall or over a step, when the surface is outfitted with connector plates. They're also capable of picking up connector plates and arranging them to form, say, a table's surface. Massimo Vespignani, a PhD student at BioRob, explained the purpose of this design and its advantages in a recent interview with Gizmag:

We start from a group of Roombot modules that might be stacked together for storage. The modules detach from this pile to form structures of two or more modules. At this point they can start moving around the room in what we call off-grid locomotion…

A single module can autonomously reach any position on a plane (this being on the floor, walls, or ceiling), and overcome a concave edge. In order to go over convex edges two modules need to collaborate…

The advantage would be that the modules can be tightly packed together for transportation and then can reconfigure into any type of structure (for example a robotic manipulator)…

We can ‘augment’ existing furniture by placing compatible connectors on it and attaching Roombots modules to allow it to move around the house.
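Vespignani's claim that a single module can reach any position on a plane can be illustrated with a toy planner: treat the floor as a grid of anchor points and search for a sequence of pivot steps from one anchor to the next. This is a sketch of the idea, not BioRob's actual planning code:

```python
from collections import deque

def pivot_path(start, goal, blocked=frozenset(), size=8):
    """Breadth-first search over a size x size grid of anchor points.
    Each pivot step moves the module one cell up, down, left, or right."""
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (x, y), path = frontier.popleft()
        if (x, y) == goal:
            return path  # shortest sequence of anchor points
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in seen and nxt not in blocked):
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None  # goal unreachable, e.g. walled off by obstacles

# Route around two blocked anchor points:
route = pivot_path((0, 0), (3, 3), blocked={(1, 0), (1, 1)})
```

Concave and convex edges complicate the real geometry, as Vespignani notes, but on any connected surface of anchor points the search idea is the same.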

The range of applications for this kind of robotics is virtually infinite. For example, as seen in the video below, a set of Roombots serving as the feet of a table not only lets it move around the room and come to its owner, but adjust its height as well. Auke Ijspeert, head of BioRob, envisions that this type of customization could be used for physically challenged people, who could greatly benefit from furniture that adapts to their needs and movements.

As he said in a recent statement:

It could be very useful for disabled individuals to be able to ask objects to come closer to them, or to move out of the way. [They could also be used as] ‘Lego blocks’ [for makers to] find their own function and applications.

Meanwhile, design students at ENSCI Les Ateliers in France have come up with several more ideas for uses of Roombots, such as flower pots that can move from window to window around a building, and modular lighting components and sound systems. Similar to MIT's more complex self-assembling M-Blocks – programmable cube robots with no external moving parts – Roombots represent a step in the direction of self-assembling robots that are capable of taking on just about any task.

For instance, imagine a series of small robotic modules that could be used for tasks like repairing bridges or buildings during emergencies. Simply release them from their container and feed them the instructions, and they assemble to prop up an earthquake-stricken structure or a fallen bridge. At the same time, it is a step in the direction of smart matter and nanotechnology: a futuristic vision that sees the very building blocks of everyday objects as programmable, reconfigurable materials that can change shape or properties as needed.

To get a closer, more detailed idea of what the Roombot can do, check out the video below from EPFL News:


Sources:
gizmag.com, cnet.com, kurzweilai.net

The Future is Here: The Thumbles Robot Touch Screen

Smartphones and tablets, with their high-resolution touchscreens and ever-increasing number of apps, are all well and good. And though some apps are even able to jump from the screen in 3D, the vast majority are still limited to two dimensions and offer limited interaction. More and more, interface designers are attempting to break this fourth wall and make information something that you can really feel and move with your own two hands.

Take the Thumbles, an interactive screen created by James Patten of Patten Studio. Rather than a conventional 2D touchscreen that responds only to your fingertips, this desktop interface combines a touch screen with tiny robots that act as interactive controls. Whenever a new button would normally pop up on the screen, a robot drives up instead, precisely parking for the user to grab it, turn it, or rearrange it. And the idea is surprisingly versatile.

As the video below demonstrates, the robots serve all sorts of functions. In various applications, they appear as grabbable hooks at the ends of molecules, twistable knobs in a sound and video editor, trackable police cars on traffic maps, and swappable spaceships in a video game. If you move or twist one robot, another robot can mirror the movement perfectly. And thanks to their omnidirectional wheels, the robots always move with singular intent, driving in any direction without turning first.
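That "any direction without turning first" behaviour comes from standard omnidirectional-drive kinematics: each wheel's rim speed is a projection of the desired body velocity plus a rotation term. A sketch for a hypothetical three-wheel base (the mount angles and radius here are made up for illustration, not Patten Studio's specs):

```python
import math

def omni_wheel_speeds(vx, vy, omega, base_radius=0.05,
                      mount_angles_deg=(90, 210, 330)):
    """Rim speed (m/s) for each omni wheel so the robot translates at
    (vx, vy) m/s while spinning at omega rad/s, with wheels mounted
    tangentially at the given angles around the chassis."""
    speeds = []
    for a in mount_angles_deg:
        th = math.radians(a)
        # Project the body velocity onto this wheel's rolling direction,
        # then add the contribution of body rotation.
        speeds.append(-math.sin(th) * vx + math.cos(th) * vy + base_radius * omega)
    return speeds

# Pure rotation drives all three wheels at the same rim speed:
spin = omni_wheel_speeds(0.0, 0.0, 2.0)
```

Because any (vx, vy, omega) maps directly to three wheel speeds, the robot can slide sideways or diagonally while keeping its heading fixed, which is exactly what lets a Thumble park itself precisely under a fingertip.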

Naturally, there are questions about the practicality of this technology where size is concerned. While it makes sense for instances where space isn't a primary concern, it doesn't exactly work for a smartphone or tablet touchscreen. In that case, the means simply don't exist to create robots small enough to wander around the tiny screen space and act as interfaces. But in police stations, architecture firms, industrial design settings, or military command centers, the Thumbles and systems like it are sure to be all the rage.

Consider another example shown in the video, where a dispatcher is able to pick up and move a police car to a new location to dispatch it. Whereas a dispatcher is currently required to listen for news of a disturbance, check an available list of vehicles, see who is close to the scene, and then call that police officer to go to that scene, this tactile interface streamlines such tasks into quick movements and manipulations.

The same holds true for architects who want to move design features around on a CAD model; corporate officers who need to visualize their business model; landscapers who want to see what a stretch of earth will look like once they've raised a section of land, changed the drainage, or planted trees or bushes; and military planners who need to direct different units on a battlefield (or in a natural disaster) in real time, responding to changing circumstances more quickly, more effectively, and with far less confusion.

Be sure to check out the demo video below, showing the Thumbles in action, and visit Patten Studio on their website.


Sources: fastcodesign.com, pattenstudio.com