News in Bionics: Restoring Sensation and Mobility!

It seems like I’ve been writing endlessly about bionic prosthetics lately, thanks to the many breakthroughs that have been happening almost back to back. But I would be remiss if I didn’t share these latest two. In addition to showcasing some of the latest technological innovations, these stories are inspiring and show the immense potential bionic prosthetics have to change lives and help people recover from terrible tragedies.

For instance, on the TED stage this week in Vancouver, which included presentations from astronaut Chris Hadfield, NSA whistleblower Edward Snowden, and anti-corruption activist Charmian Gooch, there was one presentation that really stole the show. It was Adrianne Haslet-Davis, a former dance instructor and a survivor of the Boston Marathon bombing, dancing again for the first time. And it was all thanks to a bionic limb developed by noted bionics researcher Hugh Herr.

As the director of the Biomechatronics Group at the MIT Media Lab, Herr is known for his work on high-tech bionic limbs and for demonstrating new prosthetic technologies on himself. At 17, he lost both his legs in a climbing accident. After discussing the science of bionic limbs, Herr brought out Adrianne, who, for the first time since her amputation, performed a short ballroom dancing routine.

This was made possible thanks to a special kind of bionic limb designed by Herr and his colleagues at MIT specifically for dancing. The design process took over 200 days, during which the researchers studied dance, brought in dancers with biological limbs, studied how they moved, and examined the forces they applied on the dance floor. What resulted was a “dance limb” with 12 sensors, a synthetic motor system that can move the joint, and microprocessors that run the limb’s controllers.
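Just to give a rough sense of how such a limb works under the hood, here’s a toy sketch (in Python) of the kind of sensor-to-motor control loop described above. To be clear, the sensor names, gains and “dance” trajectory are my own inventions for illustration, not the actual controllers Herr’s team built:

    # Toy prosthetic control loop: read the sensors, compute a motor command,
    # repeat. All names, gains and trajectories here are invented.
    import math, random

    def read_sensors():
        # Stand-in for the limb's 12 sensors (joint angle, load, inertial data, ...)
        return {"ankle_angle": random.uniform(-0.2, 0.2),      # radians
                "ankle_velocity": random.uniform(-1.0, 1.0)}   # radians/second

    def desired_angle(t):
        # A made-up "dance mode" trajectory: a gentle periodic ankle motion.
        return 0.15 * math.sin(2 * math.pi * 1.5 * t)

    def control_step(t, sensors, kp=40.0, kd=2.0):
        # Simple PD controller: torque from the angle error, damped by velocity.
        error = desired_angle(t) - sensors["ankle_angle"]
        return kp * error - kd * sensors["ankle_velocity"]

    for step in range(5):                  # a few ticks of a 100 Hz loop
        t = step * 0.01
        torque = control_step(t, read_sensors())
        print(f"t={t:.2f}s  commanded torque={torque:+.2f} N*m")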

The system is programmed so that the motor moves the limb in a way that’s appropriate for dance. As Herr explained in a briefing after his talk:

It was so new. We had never looked at something like dance. I understand her dream and emotionally related to her dream to return to dance. It’s similar to what I went through.

Herr says he’s now able to climb at a more advanced level than when he had biological legs.

Haslet-Davis’s new limb is only intended for dancing; she switches to a different bionic limb for regular walking. And while this might seem like a limitation, it in fact represents a major step in the direction of bionics that can emulate a much wider range of human motion. Eventually, Herr envisions a day when bionic limbs can switch modes for different activities, allowing a person to perform a range of different tasks – walking, running, dancing, athletic activity – without having to change prosthetics.

In the past, Herr’s work has been criticized by advocates who argue that bionic limbs are a waste of time when many people don’t even have access to basic wheelchairs. He argues, however, that bionic limbs – which can cost as much as a nice car – ultimately reduce health care costs. For starters, they allow people to return to their jobs quickly, Herr said, thus avoiding workers’ compensation costs.

They can also prevent injuries resulting from prosthetics that don’t emulate normal function as effectively as high-tech limbs. And given that the technology is becoming more widespread and additive manufacturing is lowering production costs, there may yet come a day when a bionic prosthetic is not beyond the means of the average person. Needless to say, both Adrianne and the crowd were moved to tears by the inspiring display!

Next, there’s the inspiring story of Igor Spetic, a man who lost his right arm three years ago in a workplace accident. Like most people forced to live with the loss of a limb, he quickly came to understand the limitations of prosthetics. While they do restore some degree of ability, the fact that they cannot convey sensation means that wearers are often unaware when they have dropped or crushed something.

Now, Spetic is one of several people taking part in early trials at the Cleveland Veterans Affairs Medical Center, where researchers from Case Western Reserve University are working on prosthetics that offer sensation as well as ability. In a basement lab, the trials consist of connecting his arm to a prosthetic hand, one that is rigged with force sensors plugged into 20 wires protruding from his upper right arm.

These wires lead to three surgically implanted interfaces, seven millimeters long, with as many as eight electrodes apiece encased in a polymer, that surround three major nerves in Spetic’s forearm. Meanwhile, a nondescript white box of custom electronics does the job of translating information from the sensors on Spetic’s prosthesis into a series of electrical pulses that the interfaces can translate into sensations.

According to the trial’s leader, Dustin Tyler – a professor of biomedical engineering at Case Western Reserve University and an expert in neural interfaces – this technology is “20 years in the making”. As of this past February, the implants had been in place and performing well in tests for more than a year and a half. Tyler’s group, drawing on years of neuroscience research on the signaling mechanisms that underlie sensation, has developed a library of patterns of electrical pulses to send to the arm nerves, varied in strength and timing.
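To picture what a “library of patterns” might look like in practice, here’s a minimal, purely hypothetical sketch in Python. The amplitudes and pulse rates are invented placeholders, not the values Tyler’s group actually uses:

    # Hypothetical stimulation-pattern library, plus a translation from a
    # force-sensor reading into pulse parameters for one electrode site.
    PATTERN_LIBRARY = {
        # sensation: (pulse amplitude in milliamps, pulse rate in hertz)
        "light_touch":   (0.4, 20),
        "steady_press":  (0.8, 50),
        "rough_texture": (0.6, 90),
    }

    def force_to_pulses(force_newtons, pattern="steady_press", max_force=10.0):
        """Scale a force reading into a pulse train for one electrode site."""
        amplitude, rate = PATTERN_LIBRARY[pattern]
        level = min(force_newtons / max_force, 1.0)    # normalize to 0..1
        return {"amplitude_mA": amplitude * level, "rate_Hz": rate}

    print(force_to_pulses(3.5))                   # a moderate grip
    print(force_to_pulses(0.5, "light_touch"))    # a light brush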

Spetic says that these different stimulus patterns produce distinct and realistic feelings in 20 spots on his prosthetic hand and fingers. The sensations include pressing on a ball bearing, pressing on the tip of a pen, brushing against a cotton ball, and touching sandpaper. During the first day of tests, Spetic noticed a surprising side effect: his phantom fist felt open, and after several months the phantom pain was “95 percent gone”.

To test the hand’s ability to provide sensory feedback, and hence aid the user in performing complex tasks, Spetic and other trial candidates were tasked with picking up small blocks that were attached to a table with magnets, as well as handling and removing the stems from a bowl of cherries. With sensation restored, he was able to pick up cherries and remove stems 93 percent of the time without crushing them, even blindfolded.

Impressive as these results are, Tyler estimates that completing the pilot study, refining the stimulation methods, and launching full clinical trials is likely to take 10 years. He is also finishing development of an implantable electronic device to deliver stimuli, so that the technology can make it beyond the lab and into a household setting. Lastly, he is working with manufacturers of prostheses to integrate force sensors and force-processing technology directly into future versions of the devices.

As for Spetic, he has drawn quite a bit of inspiration from the trials and claims that they have left him thinking wistfully about what the future might bring. As he put it, he feels:

…blessed to know these people and be a part of this. It would be nice to know I can pick up an object without having to look at it, or I can hold my wife’s hand and walk down the street, knowing I have a hold of her. Maybe all of this will help the next person.

This represents merely one of several successful attempts to merge nerve stimulation with nerve control, leading to bionic limbs that not only obey the user’s commands but also provide sensory feedback at the same time. Given a few more decades of testing and development, we will most certainly be looking at an age when bionic limbs that are virtually indistinguishable from the real thing exist and are readily available.

And in the meantime, enjoy this news story of Adrianne Haslet-Davis performing her ballroom dance routine at TED. I’m sure you’ll find it inspiring!


Sources: fastcoexist.com, technologyreview.com, blog.ted.com

The Future is Here: 4-D Printing

3-D printing has already triggered a revolution in manufacturing by allowing people to determine the length, width and depth of an object that they want to create. But thanks to research being conducted at the University of Colorado, Boulder, a fourth dimension can now be included – time. This might sound like science fiction, until you realize that the new manufacturing process will make it possible to print objects that change their shape at a given time.

Led by Prof. H. Jerry Qi, the scientific team have developed a “4D printing” process in which shape-memory polymer fibers are deposited in key areas of a composite material item as it’s being printed. By carefully controlling factors such as the location and orientation of the fibers, those areas of the item will fold, stretch, curl or twist in a predictable fashion when exposed to a stimulus such as water, heat or mechanical pressure.
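To make the idea a little more concrete, here’s a hedged sketch of how a 4D print job might encode where the shape-memory fibers go and how each region should respond. The data format is entirely my own invention, not Qi’s actual toolchain:

    # Hypothetical spec for a 4D print job: each region of the composite gets
    # fibers at a chosen location and orientation, plus the motion it should
    # perform when stimulated. Illustrative only.
    from dataclasses import dataclass

    @dataclass
    class FiberRegion:
        location_mm: tuple       # (x, y) of the region within the layer
        orientation_deg: float   # fiber direction; sets the fold/curl axis
        stimulus: str            # "heat", "water" or "pressure"
        motion: str              # "fold", "stretch", "curl" or "twist"

    part = [
        FiberRegion((0, 10),  0.0, "heat",     "fold"),
        FiberRegion((0, 30), 90.0, "water",    "curl"),
        FiberRegion((0, 50), 45.0, "pressure", "twist"),
    ]

    def triggered(regions, stimulus):
        """Which regions move when this stimulus is applied?"""
        return [r for r in regions if r.stimulus == stimulus]

    print(triggered(part, "water"))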

The concept was proposed earlier this year by MIT’s Skylar Tibbits, who used his own 4D printing process to create a variety of small self-assembling objects. Martin L. Dunn of the Singapore University of Technology and Design, who collaborated with Qi on the latest research, explained the process:

We advanced this concept by creating composite materials that can morph into several different, complicated shapes based on a different physical mechanism.

This means that one 4D-printed object could change shape in different ways, depending on the type of stimulus to which it was exposed. That functionality could make it possible to print a photovoltaic panel in a flat shape, expose it to water to cause it to fold up for shipping, and then expose it to heat to make it fold out to yet another shape that’s optimal for catching sunlight.
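In code terms, you can think of that solar-panel scenario as a tiny state machine, where each stimulus moves the object from one printed-in shape to the next. A toy sketch, with the shapes and stimuli taken from the example above:

    # Toy state machine for the photovoltaic-panel example: one printed
    # object, different shapes depending on which stimulus it encounters.
    TRANSITIONS = {
        ("flat", "water"): "folded for shipping",
        ("folded for shipping", "heat"): "unfolded, sun-catching shape",
    }

    def apply_stimulus(shape, stimulus):
        # If no rule matches, the object simply keeps its current shape.
        return TRANSITIONS.get((shape, stimulus), shape)

    shape = "flat"
    for stimulus in ["water", "heat"]:
        shape = apply_stimulus(shape, stimulus)
        print(f"after {stimulus}: {shape}")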

This principle may sound familiar, as it is the basis of such sci-fi concepts as polymorphic alloys and objects. It’s also the idea behind the Milli-Motein, the shape-shifting machine invented at MIT’s Media Lab late last year. But ultimately, it all comes back to organic biology, using structural biochemistry and the protein as a blueprint to create machinery made of “smart” materials.

The building blocks of all life, proteins can assume an untold number of shapes to fulfill an organism’s various functions, serving as the universal workforce of life. By combining that concept with the world of robotics and manufactured products, we could be embarking upon an era of matter and products that can assume different shapes as needed and on command.

And if these materials can be scaled to the microscopic level, and equipped with tiny computers, the range of functions they will be able to perform will truly stagger the mind. Imagine furniture made from materials that can automatically respond to changes in pressure and weight distribution. Or paper that is capable of absorbing your pencil scratches, storing them in its memory, and calling up image displays like a laptop computer?

And let’s not forget how intrinsic this is to the field of nanotechnology. Smarter, more independent materials that can change shape and respond to changes in their environment, mainly so they can handle different tasks, are all part of the Fabrication Revolution that is expected to explode this century. Here’s hoping I’m alive to see it all. Sheldon Cooper isn’t the only one waiting on the Technological Singularity!

Source: gizmag.com

The Future is Here: inFORM Tangible Media Interface

The future of computing is tactile. That’s the reasoning behind the inFORM, a revolutionary new interface produced by the MIT Media Lab’s Tangible Media Group. Unveiled earlier this month, the inFORM is basically a surface that changes shape in three dimensions, allowing users to not only interact with digital content, but even make simulated physical contact with other people.

Created by Daniel Leithinger and Sean Follmer and overseen by Professor Hiroshi Ishii, the technology behind the inFORM is actually quite simple. Basically, it functions like a fancy Pinscreen, one of those executive desk toys that lets you create a rough 3-D model of an object by simply pressing it into a bed of flattened pins.

However, with the inFORM, each of those “pins” is connected to a motor controlled by a nearby laptop. This not only moves the pins to render digital content physically, but can also register real-life objects interacting with its surface, thanks to the sensors of a hacked Microsoft Kinect. In short, you can touch hands with someone via Skype, or feel a stretch of terrain through Google Maps.
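For a rough sense of how that works in software, here’s a small sketch of the core idea: take a heightmap of incoming digital content and turn it into target heights for a grid of motorized pins. The grid size, travel range and input data are all made up; the real inFORM is far denser and also reads the surface back through the Kinect:

    # Toy pin-display renderer: convert a heightmap into per-pin targets.
    import math

    ROWS, COLS = 8, 8            # a small toy grid
    MAX_TRAVEL_MM = 100          # how far each pin can rise

    def heightmap(t):
        """Stand-in for incoming digital content, e.g. a depth frame."""
        return [[(math.sin(x / 2 + t) + 1) / 2 for x in range(COLS)]
                for y in range(ROWS)]

    def pin_targets(frame):
        """Convert normalized heights (0..1) into actuator targets in mm."""
        return [[round(h * MAX_TRAVEL_MM) for h in row] for row in frame]

    targets = pin_targets(heightmap(t=0.0))
    print(targets[0])            # target heights for the first row of pins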

Another possible application comes in the form of video conferencing, where remote participants can be displayed physically, allowing for a strong sense of presence and the ability to interact physically at a distance. However, the Tangible Media Group sees the inFORM as merely a step along the long road towards what they refer to as “Tangible Bits”, or a Tangible User Interface (TUI).

This concept is what the group sees as the physical embodiment of digital information and computation. It constitutes a move away from the current paradigm of “Painted Bits”, or Graphical User Interfaces (GUIs), which are based on intangible pixels that do not engage users fully. As TMG states on their website:

Humans have evolved a heightened ability to sense and manipulate the physical world, yet the GUI based on intangible pixels takes little advantage of this capacity. The TUI builds upon our dexterity by embodying digital information in physical space. TUIs expand the affordances of physical objects, surfaces, and spaces so they can support direct engagement with the digital world.

It also represents a step on the long road towards what TMG refers to as “Radical Atoms”. One of the main constraints with TUIs, according to Professor Ishii and his associates, is their limited ability to change the form or properties of physical objects in real time. This constraint can make the physical state of TUIs inconsistent with the underlying digital models.

Radical Atoms, a vision which the group unveiled last year, looks to the far future where materials can change form and appearance dynamically, becoming as reconfigurable as pixels on a screen. By bidirectionally coupling this material with an underlying digital model, dynamic changes in digital states would be reflected in tangible matter in real time, and vice versa.
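Here’s one way to picture that bidirectional coupling in miniature, as a toy Python class. This is entirely hypothetical, since Radical Atoms is a research vision rather than an existing API:

    # Minimal sketch of bidirectional coupling: edits to the digital model are
    # pushed to the material, and sensed changes in the material update the model.
    class CoupledShape:
        def __init__(self, shape="sphere"):
            self.digital = shape
            self.physical = shape

        def edit_digitally(self, new_shape):
            self.digital = new_shape
            self.physical = new_shape     # actuate the material to match

        def sense_physical_change(self, new_shape):
            self.physical = new_shape
            self.digital = new_shape      # reflect the hand-molded change back

    s = CoupledShape()
    s.edit_digitally("cube")              # a designer edits the model
    s.sense_physical_change("cylinder")   # a user reshapes the material by hand
    print(s.digital, s.physical)          # the two views stay in sync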

This futuristic paradigm is something that could be referred to as a “Material User Interface” (MUI). In all likelihood, it would involve polymers or biomaterials embedded with nanoscopic wires that are able to change shape with the application of tiny amounts of current. Or, more boldly, materials composed of utility fogs or swarms of coordinated nanorobots that can alter their shape at will.

Certainly an ambitious concept, but as the inFORM demonstrates, it’s something that is getting closer, and at an ever-increasing rate. And you have to admit, though the full-scale model does look a little bit like a loom, it does make for a pretty impressive show. In the meantime, be sure to enjoy this video of the inFORM in action.


Source: tangible.media.mit.edu

Building the Future: 3D Printing and Silkworms

When it comes to building the homes, apartment blocks and business headquarters of the future, designers and urban planners are forced to contend with a few undeniable realities. Not only are these buildings going to need to be greener and more sustainable, they will need to be built in a way that doesn’t unnecessarily burden the environment.

Currently, the methods for erecting a large city building are criminally inefficient. Between producing the building materials – concrete, steel, wood, granite – and putting it all together, a considerable amount of energy and electricity is expended, emissions are generated, and several tons of waste are produced.

Luckily, there are many concepts currently on the table that will alter this trend. Between using smarter materials, more energy-efficient design concepts, and environmentally-friendly processes, the future of construction and urban planning may someday become sustainable and clean.

At the moment, many such concepts involve advances made in 3-D printing, a technology that has been growing by leaps and bounds in recent years. Between anti-gravity printers and sintering, there seems to be incredible potential for building everything from settlements on the moon to bridges and even buildings here on Earth.

One case in particular comes to us from Spain, where a group of students from the Institute for Advanced Architecture of Catalonia have created a revolutionary 3-D printing robot. It’s known as Stone Spray, a machine that can turn dirt and sand into finished objects such as chairs, walls, and even full-blown bridges.

The brainchild of Anna Kulik, Inder Prakash Singh Shergill, and Petr Novikov, the robot takes sand or soil, adds a special binding agent, then spews out a fully formed architectural object of the designers’ choosing. As Novikov said in an interview with Co.Design:

The shape of the resulting object is created in 3-D CAD software and then transferred to the robot, defining its movements. So the designer has the full control of the shape.
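In other words, the designer’s CAD model is reduced to a path that the robot follows while it deposits material. As a hedged illustration of that workflow (not the actual Stone Spray software), something like this would do the job:

    # Hypothetical sketch: turn (x, y, z) waypoints from a CAD model into
    # movement and spray commands for the robot. Invented command format.
    def cad_path_to_commands(waypoints, spray_rate=1.0):
        commands = []
        for x, y, z in waypoints:
            commands.append(f"MOVE_TO {x:.1f} {y:.1f} {z:.1f}")
            commands.append(f"SPRAY {spray_rate:.1f}")   # deposit sand + binder
        return commands

    # A tiny arch: up one side, across the top, down the other.
    arch = [(0, 0, 0), (0, 0, 20), (10, 0, 25), (20, 0, 20), (20, 0, 0)]
    for cmd in cad_path_to_commands(arch):
        print(cmd)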

So far, all the prototypes – which include miniature stools and sculptures – are just 20 inches long, about the size of a newborn. But the team is actively planning to scale up the objects this robot can produce to architectural size. And they are currently working on their first full-scale engineering model: a bridge (pictured above).

If successful, the robot could represent a big leap forward in the field of sustainable design. Growing a structure from the earth at your feet circumvents one of the most resource-intensive aspects of architecture, which is the construction process.

And speaking of process, check out this video of the Stone Spray in action:


At the same time, however, there are plans to use biohacking to engineer tiny life forms and even bacteria that would be capable of assembling complex structures. In a field that closely resembles “swarm robotics” – where thousands of tiny drones are programmed to build things – “swarm biologics” seeks to use thousands of little creatures for the same purpose.

MIT has taken a bold step in this arena, thanks to a creation by the Mediated Matter Group that has rebooted the entire concept of “printed structures”. It’s called the Silk Pavilion, a beautiful structure whose hexagonal framework was laid by a robot, but whose walls and shell were created by a swarm of 6,500 live silkworms.

It’s what the researchers call a “biological swarm approach to 3-D printing”, but it could also be the most innovative example of biohacking to date. While silkworms have been used for millennia to give us silk, that process has always required a level of harvesting. MIT has discovered how to manipulate the worms to shape silk for us natively.

The most immediate implications may be in the potential for a “templated swarm” approach, which could involve a factory making clothes just by releasing silkworms across a series of worm-hacking mannequins. But the silkworms’ greater potential may be in sheer scale.

As Mediated Matter’s director Neri Oxman told Co.Design, the real bonus to their silkworm swarm is that it embodies everything an additive fabrication system currently lacks:

It’s small in size and mobile in movement, it produces natural material of variable mechanical properties, and it spins a non-homogeneous, non-woven textile-like structure.

What’s more, the sheer scale is something that could come in very handy down the road. By bringing 3-D printing together with artificial intelligence to generate printing swarms operating in architectural scales, we could break beyond the bounds of any 3-D printing device or robot, and build structures in their actual environments.

In addition, consider the fact that the 6,500 silkworms were still viable after they built the pavilion. Eventually, the silkworms could all pupate into moths on the structure, and those moths can produce 1.5 million eggs. That’s enough to theoretically supply what the worms need to create another 250 pavilions.
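For the curious, the back-of-the-envelope arithmetic lands in the same ballpark as that figure:

    # Rough check of the numbers quoted above.
    worms_per_pavilion = 6_500
    eggs_from_moths = 1_500_000
    print(eggs_from_moths / worms_per_pavilion)   # about 230 pavilions, so the
                                                  # "another 250" is a round estimate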

So on top of everything else, this silkworm fabrication process is self-propagating, and unlike plans that would involve nanorobots, no new resources need to be consumed to make it happen. Once again, it seems that when it comes to the future of technology, the line between organic and synthetic is blurred!

And of course, MIT Media Lab was sure to produce a video of their silkworms creating the Silk Pavilion. Check it out:


Sources: fastcodesign.com