Judgement Day Update: Cheetah Robot Unleashed!

There have been lots of high-speed bio-inspired robots in recent years, as exemplified by Boston Dynamics’ WildCat. But MIT’s Cheetah robot, which made its big debut earlier this month, is in a class by itself. In addition to being able to run at impressive speeds, bound, and jump over obstacles, this particular biomimetic robot is driven by batteries and electric motors rather than a gasoline engine and hydraulics, and can function untethered (i.e. not connected to a power source).

While gasoline-powered robots are still very much bio-inspired, they depend on sheer power to try to match the force and speed of their flesh-and-blood counterparts. They’re also pretty noisy, as the demonstration of the WildCat certainly showed (video below). MIT’s Cheetah takes the alternate route of applying less power but doing so more efficiently, more closely mimicking the musculoskeletal system of a living creature.

This is not only a reversal of contemporary robotics practice, but a break from history. Historically, to make a robot run faster, engineers made the legs move faster. The alternative is to keep the same stride frequency, but to push down harder against the ground with each step. As MIT’s Sangbae Kim explained:

Our robot can be silent and as efficient as animals. The only things you hear are the feet hitting the ground… Many sprinters, like Usain Bolt, don’t cycle their legs really fast. They actually increase their stride length by pushing downward harder and increasing their ground force, so they can fly more while keeping the same frequency.

MIT’s Cheetah uses much the same approach as a sprinter, combining custom-designed high-torque-density electric motors made at MIT with amplifiers that control the motors (also a custom MIT job). These two technologies, combined with a bio-inspired leg, allow the Cheetah to apply exactly the right amount of force to successfully bound across the ground and navigate obstacles without falling over.

When it wants to jump over an obstacle, it simply pushes down harder; and as you can see from the video below, the results speak for themselves. For now, the Cheetah can run untethered at around 16 km/h (10 mph) across grass, and hurdle obstacles up to 33 centimeters high. The Cheetah currently bounds – a fairly simple gait where the front and rear legs move almost in unison – but galloping, where all four legs move asymmetrically, is the ultimate goal.

With a new gait, and a little byte surgery to the control algorithms, MIT hopes that the current Cheetah can hit speeds of up to 48 km/h (30 mph), which would make it the fastest untethered quadruped robot in the world. While this is still a good deal slower than the real thing – real cheetahs can run at upwards of 96 km/h (60 mph) – it will certainly constitute another big step for biomimetics and robotics.

Be sure to check out the video of the Cheetah’s test, and see how it differs from the Boston Dynamics/DARPA WildCat tests from October of last year:



Source:
extremetech.com

News From Space: Astronaut Robots

As if it weren’t bad enough that they are replacing workers here on Earth, now they are being designed to replace us in space! At least, that’s the general idea behind Google and NASA’s collaborative effort to make SPHERES (Synchronized Position Hold, Engage, Reorient, Experimental Satellites). As the name suggests, these robots are spherical, floating machines that use small CO2 thrusters to move about and perform chores usually done by astronauts.

Earlier this month, NASA announced its plan to launch some SPHERES aboard an unmanned Cygnus spacecraft to the International Space Station to begin testing. That launch took place on July 11th, and the testing has since begun. Powered by Tango, Google’s prototype smartphone that comes with 3D sensors to map the environment around it, the three satellites were used to perform routine tasks.

NASA has sent SPHERES to the ISS before, but all they could really do was move around using their small CO2 thrusters. With the addition of a Tango “brain”, though, the hope is that the robots will actually be able to assist astronauts with some tasks, or even carry out some mundane chores entirely on their own. In addition, the mission is to prepare the robots for long-term use and harmonize them with the ISS’ environment.

This will consist of the ISS astronauts testing the SPHERES’ ability to fly around and dock themselves to recharge (since their batteries only last 90 minutes), and using the Tango phones to map the Space Station three-dimensionally. This data will be fed into the robots so they have a baseline for their flight patterns. The smartphones will remain attached to the robots for future imaging tasks, and they will help with mathematical calculations and transmitting a Wi-Fi signal.

In true science fiction fashion, the SPHERES project began in 2000 after MIT professor David W. Miller was inspired by the “Star Wars” scene where Luke Skywalker is trained in handling a lightsaber by a small flying robot. Miller asked his students to create a similar robot for the aerospace industry. Their creations were then sent to the ISS in 2006, where they have been ever since.

As these early SPHERES aren’t equipped with tools, they will mostly just fly around the ISS, testing out their software. The eventual goal is to have a fleet of these robots flying around in formation, fixing things, docking with and moving things about, and autonomously looking for misplaced items. If SPHERES can also perform EVAs (extra-vehicular activity, space walks), then the risk of being an astronaut would be significantly reduced.

In recent years there has been a marked shift towards the use of off-the-shelf hardware in space (and military) applications. This is partly due to tighter budgets, and partly because modern technology has become pretty damn sophisticated. As Chris Provencher, SPHERES project manager, said in an interview with Reuters:

We wanted to add communication, a camera, increase the processing capability, accelerometers and other sensors [to the SPHERES]. As we were scratching our heads thinking about what to do, we realized the answer was in our hands. Let’s just use smartphones.

The SPHERES system is currently planned to be in use on the ISS until at least 2017. Combined with NASA’s Robonaut, there are some fears that this is the beginning of a trend where astronauts are replaced entirely by robots. But considering how long it would take to visit a nearby star, maybe that’s not such a bad thing. At least until all of the necessary terraforming has been carried out in advance of the settlers.

So perhaps robots will only be used to do the heavy lifting, or the work that is too dull, dangerous or dirty for regular astronauts – just like drones. Hopefully, they won’t be militarized though. We all saw how that went! And be sure to check out this video of SPHERES being upgraded with Project Tango, courtesy of Google’s Advanced Technology and Projects group (ATAP):


Sources:
nasa.gov, extremetech.com, techtimes.com

Judgement Day Update: Terminators at I/O 2014

We’ve all thought about it… the day when a super-intelligent computer becomes self-aware and unleashes a nuclear holocaust, followed shortly thereafter by the rise of the machines (cue theme from Terminator). But as it turns out, when the robot army does come to exterminate humanity, at least two humans might be safe – Google co-founders Larry Page and Sergey Brin, to be precise.

Basically, they’ve uploaded a killer-robots.txt file to their servers that instructs T-800 and T-1000 Terminators to spare the company’s co-founders (or “disallow” their deaths). Such was the subject of a totally tongue-in-cheek presentation at this year’s Google I/O at the Moscone Center in San Francisco, which coincided with the 20th anniversary of the robots.txt file.

This tool, which was created in 1994, instructs search engines and other automated bots to avoid crawling certain pages or directories of a website. The industry has done a remarkable job staying true to the simple text file in the two decades since; Google, Bing, and Yahoo still obey its directives. The changes they uploaded read like this, just in case you’re planning on adding your name to the “disallow” list:

[Screenshot of the killer-robots.txt additions]
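For those reading along without the image: the file is real and publicly reachable at google.com/killer-robots.txt, and at the time of writing it contains just these directives:

```
User-Agent: T-1000
User-Agent: T-800
Disallow: /+LarryPage
Disallow: /+SergeyBrin
```

In ordinary robots.txt usage, User-Agent names the bot being addressed and Disallow lists the paths it must not touch – so here, any visiting T-800 or T-1000 is politely instructed to keep away from Messrs. Page and Brin.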

While that tool didn’t exactly take the rise of the machines into account, its appearance on Google’s website as an Easter egg did add some levity for a company that is already being accused of facilitating the creation of killer robots. And while calling Google’s proposed line of robots “killer” seems both premature and extreme, that did not stop a protester from interrupting the I/O 2014 keynote address.

Basically, as Google’s senior VP of technical infrastructure Urs Hölzle spoke about their cloud platform, the unidentified man stood up and began screaming “You all work for a totalitarian company that builds machines that kill people!” As you can see from the video below, Hölzle did his best to take the interruption in stride and continued with the presentation. The protester was later escorted out by security.

This wasn’t the first time that Google has been the source of controversy over the prospect of building “killer robots”. Ever since Google acquired Boston Dynamics and seven other robotics companies in the space of six months (between June and December of 2013), there has been some fear that the company has a killer machine in the works that it will attempt to sell to the armed forces.

Naturally, this is all part of a general sense of anxiety that surrounds developments being made across multiple fields of robotics and AI. Whereas some concerns have crystallized into dedicated and intelligent calls for banning autonomous killer machines in advance – a.k.a. the Campaign to Stop Killer Robots – others have resulted in the kinds of irrational outbursts observed at this year’s I/O.

Needless to say, if Google does begin developing killer robots, or just starts militarizing its line of Boston Dynamics acquisitions, we can expect just about everyone who can access (or hack their way into) the killer-robots.txt file to add their names. And it might not be too soon to update the list to include the T-X, Replicants, and any other killer robots we can think of!

And be sure to check out the video of the “killer robot” protester speaking out at 2014 I/O:


Sources: 
theverge.com

The Future is Here: First Android Newscasters in Japan

This past week, Japanese scientists unveiled what they claim is the world’s first news-reading android. The adolescent-looking “Kodomoroid” – an amalgamation of the Japanese word “kodomo” (child) and “android” – and “Otonaroid” (“otona” meaning adult) introduced themselves at an exhibit entitled Android: What is a Human?, which is being presented at Tokyo’s National Museum of Emerging Science and Innovation (Miraikan).

The androids were flanked by robotics professor Hiroshi Ishiguro and Miraikan director Mamoru Mori. Kodomoroid delivered news of an earthquake and an FBI raid to amazed reporters in Tokyo, and even poked fun at her creator, telling Ishiguro, “You’re starting to look like a robot!” Otonaroid then fluffed her lines when asked to introduce herself, excusing herself with, “I’m a little bit nervous.”

Both androids will be working at Miraikan and interacting with visitors, as part of Ishiguro’s studies into human reactions to such machines. Ishiguro is well known for his work with “geminoids”, robots that bear a frightening resemblance to their creator. As part of his lecture process, Ishiguro takes his geminoid with him when he travels and even lets it deliver his lectures for him. During an interview with AFP, he explained the reasoning behind this latest exhibit:

This will give us important feedback as we explore the question of what is human. We want robots to become increasingly clever. We will have more and more robots in our lives in the future…

Granted, the unveiling did have its share of bugs. For her part, Otonaroid looked as if she could use some rewiring before beginning her new role as the museum’s science communicator, with her lips out of sync and her neck movements symptomatic of a bad night’s sleep. But Ishiguro insisted both would prove invaluable to his continued research as museum visitors get to have conversations with the ‘droids and operate them as extensions of their own bodies.

And this is just one of many forays into a world where the line between robots and humans is becoming blurred. After a successful debut earlier this month, a chatty humanoid called Pepper is set to go on sale as a household companion in Japan starting next year. Designed by SoftBank using technology acquired from French robotics company Aldebaran, each robot will cost around $2,000 – about the same as a laptop.

Pepper can communicate through emotion, speech, or body language, and it’s equipped with both mics and proximity sensors. It will be possible to install apps and upgrade the unit’s functionality, the plan being to make Pepper far smarter than when you first bought it. It already understands 4,500 Japanese words, but perhaps more impressively, Pepper can apparently read its owner’s tone of voice to gauge their disposition.

Aldebaran CEO Bruno Maisonnier claims that robots that can recognize human emotion will change the way we live and communicate. And this is certainly a big step towards getting robots into our daily lives, at least if you live in Japan (the only place Pepper will be available for the time being). He also believes this is the start of a “robotic revolution” where robotic household companions that can understand and interact with their human owners will become the norm.

Hmm, a world where robots are increasingly indistinguishable from humans, can do human jobs, and are capable of understanding and mimicking our emotions. Oh, and they live in our houses too? Yeah, I’m just going to ignore the warning bells going off in my head now! And in the meantime, be sure to check out these videos of Kodomoroid and Otonaroid and Pepper being unveiled for the first time:

World’s First Android Newscasters:


Aldebaran’s Pepper:


Sources:
cnet.com, gizmodo.com, engadget.com, nydailynews.com

Stephen Hawking: AI Could Be a “Real Danger”

In a hilarious appearance on “Last Week Tonight” – John Oliver’s HBO show – guest Stephen Hawking spoke about some rather interesting concepts. Among these were “imaginary time” and, more interestingly, artificial intelligence. And much to the surprise of Oliver, and perhaps more than a few viewers, Hawking was not too keen on the idea of the latter. In fact, his predictions were just a tad bit dire.

Of course, this is not the first time Oliver has had a scientific authority on his show, as demonstrated by a recent episode that dealt with climate change and featured guest speaker Bill Nye “The Science Guy”. When asked about the concept of imaginary time, Hawking explained it as follows:

Imaginary time is like another direction in space. It’s the one bit of my work science fiction writers haven’t used.

In sum, imaginary time has something to do with time that runs in a different direction to the time that guides the universe and ravages us on a daily basis. And according to Hawking, the reason why sci-fi writers haven’t built stories around imaginary time is simply that “They don’t understand it”. As for artificial intelligence, Hawking replied without any sugar-coating:

Artificial intelligence could be a real danger in the not too distant future. [For your average robot could simply] design improvements to itself and outsmart us all.

Oliver, channeling his inner 9-year-old, asked: “But why should I not be excited about fighting a robot?” Hawking offered a very scientific response: “You would lose.” And in that respect, he was absolutely right. One of the greatest concerns with AI, for better or for worse, is that a superior intelligence, left to its own devices, would find ways to produce better and better machines without human oversight or intervention.

At worst, this could lead to the machines concluding that humanity is no longer necessary. At best, it would lead to an earthly utopia where machines address all our worries. But in all likelihood, it will lead to a future where the pace of technological change is impossible to predict. As history has repeatedly shown, technological change brings with it all kinds of social and political upheaval. If it becomes a runaway effect, humanity will find it impossible to keep up.

Keeping things light, Oliver began to worry that Hawking wasn’t talking to him at all – that this could instead be a computer spouting wisdom. To which Hawking replied: “You’re an idiot.” Oliver also wondered whether, given that there may be many parallel universes, there might be one where he is smarter than Hawking. “Yes,” replied the physicist. “And also a universe where you’re funny.”

Well at least robots won’t have the jump on us when it comes to being irreverent. At least… not right away! Check out the video of the interview below:


Source: cnet.com

The Future is Here: Roombot Transforming Furniture

Robotic arms and other mechanisms have long been used to make or assemble furniture; but thus far, no one has ever created robots that are capable of becoming furniture. Swiss researchers are aiming to change that with Roombots, reconfigurable robotic modules that connect to each other to change shape and transform into different types of furniture, based on the needs and specifications of users.

Created by the Biorobotics Laboratory (BioRob) at École polytechnique fédérale de Lausanne (EPFL), the self-assembling Roombots attach to each other via connectors, which enable them to take on the desired shape. The team’s main goal is to create self-assembling interactive furniture that can be used in a variety of ways. The modules were designed primarily to help the disabled or elderly by morphing to suit their needs.

Like LEGO bricks, Roombots can be stacked upon each other to create various structures and/or combined with furniture and other objects, changing not only their shape but also their functionality. For instance, a person lying down on a Roombot bed could slowly be moved into a seated position, or a table could scoot over to a corner or tilt itself to help a book slide into a person’s hands. The team has reached a number of significant milestones – such as having the Roombots move freely – that bring all this multi-functionality closer.

Each 22 cm-long module (which is made up of four half-spheres) has a wireless connection, a battery, and three motors that allow the module to pivot with three degrees of freedom. Each module also has retractable “claws” that are used to attach to other pieces to form larger structures. With a series of rotations and connections, the modules can change shape and become any of a variety of objects. A special surface with holes adapted to the Roombots’ mechanical claws can also allow the modules to anchor to a wall or floor.

The Roombots can even climb up a wall or over a step, when the surface is outfitted with connector plates. They are also capable of picking up connector plates and arranging them to form, say, a table’s surface. Massimo Vespignani, a PhD student at BioRob, explained the purpose of this design and its advantages in a recent interview with Gizmag:

We start from a group of Roombot modules that might be stacked together for storage. The modules detach from this pile to form structures of two or more modules. At this point they can start moving around the room in what we call off-grid locomotion…

A single module can autonomously reach any position on a plane (this being on the floor, walls, or ceiling), and overcome a concave edge. In order to go over convex edges two modules need to collaborate…

The advantage would be that the modules can be tightly packed together for transportation and then can reconfigure into any type of structure (for example a robotic manipulator)…

We can ‘augment’ existing furniture by placing compatible connectors on it and attaching Roombots modules to allow it to move around the house.

The range of applications for this kind of robotics is virtually infinite. For example, as seen in the video below, a set of Roombots serving as a table’s feet can not only let it move around the room and come to its owner, but adjust its height as well. Auke Ijspeert, head of BioRob, envisions that this type of customization could be used for physically challenged people, who could greatly benefit from furniture that adapts to their needs and movements.

As he said in a recent statement:

It could be very useful for disabled individuals to be able to ask objects to come closer to them, or to move out of the way. [They could also be used as] ‘Lego blocks’ [for makers to] find their own function and applications.

Meanwhile, design students at ENSCI Les Ateliers in France have come up with several more ideas for uses of Roombots, such as flower pots that can move from window to window around a building, and modular lighting components and sound systems. Similar to MIT’s more complex self-assembling M-Blocks – programmable cube robots with no external moving parts – Roombots represent a step in the direction of self-assembling robots that are capable of taking on just about any task.

For instance, imagine a series of small robotic modules that could be used for tasks like repairing bridges or buildings during emergencies. Simply release them from their container and feed them the instructions, and they assemble to prop up an earthquake-stricken structure or a fallen bridge. At the same time, it is a step in the direction of smart matter and nanotechnology – a futuristic vision that sees the very building blocks of everyday objects as programmable, reconfigurable materials that can change shape or properties as needed.

To get a closer, more detailed idea of what the Roombot can do, check out the video below from EPFL News:


Sources:
gizmag.com, cnet.com, kurzweilai.net

The Future is Here: The Thumbles Robot Touch Screen

Smartphones and tablets, with their high-resolution touchscreens and ever-increasing number of apps, are all very impressive and good. But though some apps are even able to jump from the screen in 3D, the vast majority are still confined to two dimensions and limited in terms of interaction. More and more, interface designers are attempting to break this fourth wall and make information something that you can really feel and move with your own two hands.

Take the Thumbles, an interactive screen created by James Patten from Patten Studio. Rather than your conventional 2D touchscreen that responds to the touch of your fingers, this desktop interface combines touch screens with tiny robots that act as interactive controls. Whenever a new button would normally pop up on the screen, a robot drives up instead, parking precisely for the user to grab it, turn it, or rearrange it. And the idea is surprisingly versatile.

As the video below demonstrates, the robots serve all sorts of functions. In various applications, they appear as grabbable hooks at the ends of molecules, twistable knobs in a sound and video editor, trackable police cars on traffic maps, and swappable space ships in a video game. If you move or twist one robot, another robot can mirror the movement perfectly. And thanks to their omnidirectional wheels, the robots always move with singular intent, driving in any direction without turning first.

Naturally, there are concerns about the practicality of this technology where size is concerned. While it makes sense for instances where space isn’t a primary concern, it doesn’t exactly work for a smartphone or tablet touchscreen. In that case, the means simply don’t exist to create robots small enough to wander around the tiny screen space and act as interfaces. But in police stations, architecture firms, industrial design settings, or military command centers, the Thumbles and systems like it are sure to be all the rage.

Consider another example shown in the video, where we see a dispatcher who is able to pick up and move a police car to a new location to dispatch it. Whereas a dispatcher is currently required to listen for news of a disturbance, check an available list of vehicles, see who is close to the scene, and then call that police officer to go to that scene, this tactile interface streamlines such tasks into quick movements and manipulations.

The same holds true for architects who want to move design features around on a CAD model; corporate officers who need to visualize their business model; landscapers who want to see what a stretch of earth will look like once they’ve raised a section of land, changed the drainage, planted trees or bushes, etc.; and military planners who need to direct different units on a battlefield (or in a natural disaster) in real time, responding to changing circumstances more quickly, more effectively, and with far less confusion.

Be sure to check out the demo video below, showing the Thumbles in action – and see more of their work on the Patten Studio website.


Sources: fastcodesign.com, pattenstudio.com

The Future is Here: AirMule’s Autonomous Demo Flight

Vertical Take-Off and Landing (VTOL) craft have been the subject of military development for some time. In addition to being able to deploy from landing strips that are damaged or too small for conventional aircraft, they are able to navigate terrain and land where other craft cannot. Add to that the ability to hover and fly close to the ground, and you have a craft that can also provide support while avoiding IEDs and landmines.

One concept that incorporates all of these features is the AirMule, a compact, unmanned, single-engine vehicle that is being developed by Tactical Robotics in Israel. In January of 2013, the company unveiled the prototype, which they claimed was created for the sake of supporting military personnel, evacuating the wounded, and conducting remote reconnaissance missions.

Now, less than a year later, the prototype aircraft has demonstrated its ability to fly autonomously, bringing it one step closer to carrying out a full mission demo. During the test, which took place in December, the craft autonomously performed a vertical take-off, flew to the end of a runway, then turned around on the spot and flew back to its starting point.

All the while, it maintained altitude using two laser altimeters, while maintaining positioning via a combination of GPS, an inertial navigation system, and optical reference to markers on the ground. These autonomous systems, which allow it to fly on its own, can also be countermanded in favor of remote control, in case a mission seems particularly hairy and requires a human controller.

In its current form, the AirMule possesses many advantages over other VTOL craft, such as helicopters. For starters, it weighs only 770 kg (1,700 lb) – as opposed to the Bell UH-1’s empty weight of 2,365 kg (5,215 lb) – can carry a payload of up to 640 kg (1,400 lb), has a top speed of 180 km/h (112 mph), and can reach a maximum altitude of 3,658 m (12,000 ft).

In short, it has a better weight-to-payload ratio than a helicopter, comparable performance, and can land and take off within an area of 40 square meters (430.5 sq ft), which is significantly smaller than what a manned helicopter requires for a safe landing. The internal rotor blades are reportedly also much quieter than those of a helicopter, giving the matte-black AirMule some added stealth.

Plans now call for “full mission demonstrations” next year, utilizing a second prototype that is currently under construction. When complete, this vehicle and those like it can be expected to be deployed in many areas of the world, assisting Coalition and other forces in dirty, dangerous environments where landmines, IEDs and other man-made and natural hazards are common.

Alongside the Alpha Dog, LS3 and WildCat – machines built by Boston Dynamics (recently acquired by Google) to offer transport and support to infantry in difficult terrain – efforts to “unman the front lines” through the use of autonomous drones and remote-controlled robots continue. Clearly, the future battlefield is a place where robots will be offering a rather big hand!


And be sure to check this video of the AirMule demonstration, showing the vehicle take-off, hover, fly around, and then come in for a landing:


Sources: gizmag.com, tactical-robotics.com

Judgement Day Update: DARPA Robotics Challenge!

For the past two years, the Defense Advanced Research Projects Agency (DARPA) has been holding a series of trials in which robots are tasked with navigating disaster areas and performing tasks with the tools and materials provided. This is known as the Robotics Challenge, which took place from December 20th to 21st and was streamed live from Florida’s Homestead Miami Speedway.

And this year, Google’s Schaft humanoid robot took home the top prize after scoring 27 out of a possible 32 points. IHMC Robotics, based in Florida, grabbed second place, while Carnegie Mellon University’s Team Tartan Rescue placed third. Eight of the top teams that participated in the challenge may receive as much as $1 million in funding from DARPA, ahead of further trials next year with a $2 million prize.

Built by a Japanese start-up – one of Google’s many recent acquisitions – the Schaft is an updated version of the Humanoid Robot Project robot (HRP-2), with hardware and software modifications that include more powerful actuators, a walking/stabilization system, and a capacitor instead of a battery. The robot stands 1.48 m (4.8 ft) tall, weighs in at 95 kg (209 lb), and is generally unappealing to the eye.

However, what it lacks in photogenic quality, it makes up for in performance. Over the course of the trials, the bipedal robot brought stable walking and significant torque power to the fore as it opened doors, wielded hoses, and cut away part of a wall. Team Schaft lost points only when a gust of wind blew a door out of the robot’s hand, and when the robot was unable to exit a vehicle after successfully navigating a driving course.

Check out the video of the Schaft in action:


Initially, over 100 teams applied to compete when the challenge was announced in April of last year. After a series of reviews and virtual challenges, the field was narrowed down to 16 teams competing in four “tracks”. On Track A, Schaft was joined by the RoboSimian, a robot recently built by NASA’s Jet Propulsion Laboratory (JPL). Another primate-like robot was the Tartan Rescue CHIMP, a red headless robot with rollers on its feet.

At the other end of the spectrum was the Johnson Space Center’s Valkyrie, a biped, anthropomorphic robot that honestly looks like something out of anime or Tony Stark’s closet. This latter aspect is due largely to the fact that it has a glowing chest light, though the builders claim that it’s just a bulge to make room in the torso for linear actuators to move the waist.

Officially designated “R5” by NASA, Val was designed to be a high-powered rescue robot, capable of traversing uneven terrain, climbing ladders, using tools, and even driving. According to the designers, the Valkyrie was designed to be human in form because:

a human form makes sense because we’re humans, and these robots will be doing the jobs that we don’t want to be doing because they’re too dangerous.

To that end, Valkyrie has seven-degree-of-freedom arms with actuated wrists and hands, each with three fingers and a thumb. It has a head that can tilt and swivel, a waist that can rotate, and six-degree-of-freedom legs complete with feet equipped with six-axis force-torque sensors.

Unfortunately, the robot failed in its tasks this year, scoring 0 points and placing amongst the last three competitors. I guess NASA has some bugs to work out before this patently badass design can go toe-to-toe with other disaster robots. Or perhaps the anthropomorphic concept is just not up to the task. Only time and further trials will tell. And of course, there’s a video of Val in action too:


The B and C track teams are often difficult to tell apart because they all used Atlas robots. Meanwhile, the D track teams brought several of their own robots to the fore. These included Chiron, a robot that resembles a metallic sea louse; Mojovation, a distinctly minimalist robot; South Korea’s KAIST; and China’s Intelligent Pioneer.

DARPA says that the point of the competition is to provide a baseline from which to develop robotics for disaster response. Events such as the 2011 Fukushima nuclear disaster, which not only damaged the reactors but made it impossible for crews to respond in time, demonstrate that robots have a potential role. DARPA believes that robots that could navigate the ruins and work in radioactive environments would have been of great help.

The problem is that current robots simply aren’t up to the task. Specialized robots can’t be built to deal with the unpredictable, full telepresence control is neither practical nor desirable, and most robots tend to be a bit on the delicate side. What’s needed is a robot that can work on its own, use the tools and vehicles at hand, deal with the unpredictable, and is durable and agile enough to operate in the ruins of a building.

That’s where the DARPA Robotics Challenge comes in. Over the next few years, DARPA will use the results of the competition to draw a baseline that will benefit engineers working on the next generation of robots. For now, the top eight teams go on with DARPA funding to compete in the Robotics Finals event late next year, for a US $2 million prize.

If there’s one thing the current challenge demonstrated, it’s that anthropomorphic designs are not well-suited to the tasks they were given. An ironic outcome, considering that one of the aims of the challenge is to develop robots capable of performing human tasks, but under conditions considered unsafe for humans. As always, the top prize goes to those who can think outside the box!

And in the meantime, enjoy this video of the Robot Challenge, taken on the second day of the trials.


Sources: gizmag.com, news.cnet.com, wired.com, IO9.com, theroboticchallenge.org

Judgement Day Update: The Robotic Security Guard

It’s quite the interesting premise, isn’t it? And one that might make an interesting movie! It’s known as the Knightscope, an “autonomous data machine” currently in development by Silicon Valley startup Knightscope Inc. Ultimately, the purpose of this new breed of robot will be to conduct the important and often monotonous task of keeping watch over property more cost-effectively and comprehensively than a human security guard.

Earlier this month, Knightscope revealed that it had secured beta customers for the first two models – the K5 and K10. The robots, which bear a passing resemblance to R2-D2, collect real-time data via a network of sensors. These include a 360-degree HD video camera, microphones, thermal imaging sensors, infrared sensors, radar, lidar, ultrasonic speed and distance sensors, air quality sensors, and optical character recognition technology for scanning things like license plates.

Depending on the sensor loadout, the units can be used to monitor differences in temperature, calculate the traveling speed and distance of surrounding objects and people, observe nighttime activity using infrared technology, and provide precision 3D mapping of an area. There are also plans to include facial recognition technology, to help identify offenders or wanted persons once the technology has been perfected.

This data would then be fed into a centralized data center that law enforcement agencies could access in real time, giving them a unique vantage point from which to assess a situation before arriving on the scene. As well as providing real-time alerts, Knightscope says companies will be able to analyze historical data collected over time to help predict crime and make better business decisions.

According to William Santana Li, Chairman and CEO of Knightscope, the inspiration behind these security robots came from the terrible tragedy that occurred over a year ago at a Connecticut school:

We founded Knightscope in response to the President and Sandy Hook’s calls to action and with the ultimate goal of providing an avenue for all Americans to join the fight against crime.

The company also says that the K10 model is intended for vast open areas and private roads, while the K5 robot is better suited to more space-constrained environments. In essence, the K10 would be well suited to things like detailed traffic analysis, while the K5 would handle indoor tasks, everything from security to factory inspections.

Personally, I think a fleet of robotic surveillance and security robots is a cost-effective and sensible alternative to bulletproofing classrooms or arming teachers. So far, no provisions have been made for arming the robots, but that’s probably for the best. No sense in arming the machines before they are intelligent enough to turn on their masters with hostile intent!

The K5 Beta prototype was featured at the Plug and Play Winter Expo in Sunnyvale, California, and beta testing is due to commence at the end of this year. And be sure to enjoy the following video, courtesy of the Knightscope company:


Source:
gizmag.com