The Future of Medicine: New Cancer Tests and Treatments

While a cure for cancer remains beyond medical science, improvements in how we diagnose and treat the disease are being made every day. These range from early detection, which makes all the difference in stopping the disease before it spreads, to less-invasive treatments, which make for a kinder, gentler recovery. And by combining better medicine with cost-saving measures, these advances can also be made more accessible.

When it comes to better diagnostics, the aim is to find ways to detect cancer without harmful and expensive scans or exploratory surgery. One alternative is a litmus-style paper test, like the one invented by Jack Andraka to detect pancreatic cancer. His method, unveiled at the 2012 Intel International Science and Engineering Fair (ISEF), won him the top prize because it is reportedly 90% accurate, 168 times faster than current tests, and 1/26,000th their cost.

Since then, Jack and his research group, Generation Z, have been joined in the effort by institutions like MIT, which recently unveiled a urine test strip to detect cancer. In research published late last month in the Proceedings of the National Academy of Sciences, MIT professor Sangeeta Bhatia reported that she and her team had developed paper test strips, based on the same technology as at-home pregnancy tests, that were able to detect colon tumors in mice.

The test strips work in conjunction with an injection of iron oxide nanoparticles, like those used as MRI contrast agents, which congregate at tumor sites in the body. Once there, enzymes known as matrix metalloproteinases (MMPs), which cancer cells use to invade healthy tissue, break up the nanoparticles, and the fragments then pass out in the patient’s urine. Antibodies on the test strip capture them, causing gold nanoparticles to produce a red color that indicates the presence of a tumor.

According to Bhatia, the technology is likely to make a big splash in developing countries, where complicated and expensive medical tests are a rarity. Closer to home, it is also sure to be of significant use in outpatient clinics and other decentralized health settings. As Bhatia said in a press release:

For the developing world, we thought it would be exciting to adapt (the technology) to a paper test that could be performed on unprocessed samples in a rural setting, without the need for any specialized equipment. The simple readout could even be transmitted to a remote caregiver by a picture on a mobile phone.

To help bring the idea to fruition, MIT has awarded Bhatia and her team a grant from the university’s Deshpande Center for Technological Innovation. The grant is meant to help the researchers develop a startup that could run the necessary clinical trials and bring the technology to market. Bhatia and her team are now working on expanding the test to detect breast, prostate, and other types of cancer.

In a separate but related story, researchers are also working toward diagnostic methods that do not rely on radiation. While traditional scanners like PET and CT are good at finding cancer, they expose patients to radiation that can itself induce cancer later in life, a catch-22 that is especially troubling for younger patients, who may then need still more radiation-based scans down the line.

The good news is that scientists have managed to reduce radiation exposure over the past several years without sacrificing image quality. And thanks to ongoing work at the Children’s Hospital of Michigan, the Stanford School of Medicine, and Vanderbilt Children’s Hospital, there’s a potential alternative that combines MRI scans with a contrast agent, similar to the one Prof. Bhatia and her MIT group use in their urine test.

According to a report published in the journal The Lancet Oncology, the new MRI approach found 158 tumors in twenty-two patients aged 8 to 33, compared with 163 found using the traditional PET/CT combination. And since MRI uses magnetic fields and radio waves rather than ionizing radiation, the scans themselves carry no radiation risk. While the study is small, the positive findings are a step toward wider testing to determine the effectiveness and safety of the new method.
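Treating the PET/CT count as a simple reference standard (a back-of-the-envelope simplification, not the study’s actual methodology), the relative detection rate of the MRI approach works out as follows:

```python
# Tumor counts reported in the Lancet Oncology study, as summarized above
mri_tumors = 158     # found by the radiation-free MRI approach
pet_ct_tumors = 163  # found by the traditional PET/CT scan combo

# Relative detection rate, with PET/CT as the (simplified) reference
relative_rate = mri_tumors / pet_ct_tumors
print(f"MRI found {relative_rate:.1%} of the tumors PET/CT found")
```

In other words, the radiation-free approach caught roughly 97% of what the conventional scans did in this small cohort.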

The next step in testing this method will be to study the approach on more children and investigate how it might work in adults. The researchers say physicians are already launching a study of the technique in at least six major children’s hospitals throughout the country. And because the cost of each method could be roughly the same, if the MRI approach proves just as effective yet safer, radiation-free cancer scans are likely to be the way of the future.

And last, but not least, there’s a revolutionary new treatment pioneered by researchers at Georgia Tech that relies on engineered artificial pathways to lure malignant cells to their death. The treatment is designed to address brain tumors, specifically glioblastoma multiforme (GBM), which are particularly insidious because they spread through the brain by sliding along blood vessels and nerve passageways (of which the brain has no shortage!)

This capacity for expansion means that tumors sometimes develop in parts of the brain where surgery is extremely difficult, if not impossible, or that even if the bulk of a tumor can be removed, chances are good its tendrils will remain throughout the brain. That is where the Georgia Tech technique comes in: it involves creating artificial pathways along which the cancer can travel, either to more operable areas or to a deadly drug contained in a gel outside the body.

According to Ravi Bellamkonda, lead investigator and chair of the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University:

[T]he cancer cells normally latch onto … natural structures and ride them like a monorail to other parts of the brain. By providing an attractive alternative fiber, we can efficiently move the tumors along a different path to a destination that we choose.

The procedure was reported in a recent issue of the journal Nature Materials. Bellamkonda and his team implanted nanofibers about half the diameter of a human hair into rat brains where GBMs were growing. The fibers were made from a polycaprolactone (PCL) polymer surrounded by a polyurethane carrier, and mimicked the contours of the nerves and blood vessels cancer cells like to use as a biological route.

One end of a fiber was implanted into the tumor inside the brain, and the other into a gel containing the cancer-killing drug cyclopamine outside the brain. After 18 days, enough tumor cells had migrated along the fiber into the gel to shrink the tumor by 93 percent. Not only does Bellamkonda think his technique could be used to relocate and/or destroy cancers, he believes it could also help people live with certain inoperable cancers as a chronic condition.

In a recent statement, Bellamkonda had this to say about the new method and the benefits it offers patients:

If we can provide cancer an escape valve of these fibers, that may provide a way of maintaining slow-growing tumors such that, while they may be inoperable, people could live with the cancers because they are not growing. Perhaps with ideas like this, we may be able to live with cancer just as we live with diabetes or high blood pressure.

Many of today’s methods for treating cancer focus on using drugs to kill tumors. The Georgia Tech team’s approach, by contrast, is engineering-driven: it allows cancer to be treated with a device rather than with chemicals, potentially sparing the patient many debilitating side effects. Part of the innovation is that it’s actually easier for tumors to move along the nanofibers than to take their normal routes, which require significant enzyme secretion as they invade healthy tissue.

Anjana Jain, the study’s primary author, was also principally responsible for the design of the nanofiber technique. After doing her graduate work on biomaterials for spinal cord regeneration, she joined Bellamkonda’s lab as a postdoctoral fellow, where she came up with the idea of routing tumor cells using engineered materials. In a recent statement, she said the following of her idea:

Our idea was to give the tumor cells a path of least resistance, one that resembles the natural structures in the brain, but is attractive because it does not require the cancer cells to expend any more energy.

Extensive testing, which could take up to 10 years, still needs to be conducted before the technology can be approved for use in human patients. In the meantime, Bellamkonda and his team will be working on using it to lure other cancers that like to travel along nerves and blood vessels. With all the advances being made in diagnostics and treatment, and the growing likelihood of a cure being found, the 21st century is likely to be the era in which cancer becomes history.

Sources: news.cnet.com, (2), (3)

The Future of 3D Printing: Exoskeletons and Limbs

3-D printing is leading to a revolution in manufacturing, and the list of applications grows with each passing day. But more important is the way it is coming together with other fields of research to make breakthroughs more affordable and accessible. Nowhere is this more true than in the fields of robotics and medicine, where printing techniques are producing a new generation of bionic and mind-controlled prosthetics.

For example, 3D Systems (an additive manufacturing company) and EksoBionics (a company specializing in bionic prosthetic devices) recently partnered to produce a new “bespoke” exoskeleton that can restore ambulatory ability to paraplegics. The prototype was custom-made for a woman named Amanda Boxtel, who was paralyzed in a tragic 1992 skiing accident.

Designers from 3D Systems began by scanning her body, digitizing the contours of her spine, thighs, and shins, a process that helped them mold the robotic suit to her needs and specifications. They then combined the suit with a set of mechanical actuators and controls made by EksoBionics. The result, says 3D Systems, is the first-ever “bespoke” exoskeleton.

Intrinsic to the partnership between 3D Systems and EksoBionics was the common goal of fitting the exoskeleton comfortably to Boxtel’s body. One of the greatest challenges with exosuits and prosthetic devices is keeping the hard parts from bumping into “bony prominences,” such as the knobs on the wrists and ankles. These areas are not only sensitive; prolonged contact with hard surfaces can lead to a slew of health problems over time.

As Scott Summit, the senior director for functional design at 3D Systems, explained it:

[Such body parts] don’t want a hard surface touching them. We had to be very specific with the design so we never had 3D-printed parts bumping into bony prominences, which can lead to abrasions [and bruising].

One problem the designers faced in this case was that a paralyzed person like Boxtel often can’t tell that bruising is happening, because she can’t feel it. This is dangerous, because undetected bruises or abrasions can become infected. In addition, because 3D printing allows the creation of very fine details, Boxtel’s suit was designed to let her skin breathe, meaning she can walk around without sweating too much.

The process of creating the 3D-printed robotic suit lasted about three months, starting when Summit and 3D Systems CEO Avi Reichenthal met Boxtel during a visit to EksoBionics. Boxtel is one of ten EksoBionics “test pilots”, and the exoskeleton was already designed to attach to the body loosely with adjustable Velcro straps. But it wasn’t yet tailored to fit her alone.

That’s where 3D Systems came into play, by using a special 3D scanning system to create the custom underlying geometry that would be used to make the parts that attach to the exoskeleton. As Boxtel put it:

When the robot becomes the enabling device to take every step for the rest of your life, the connection between the body and the robot is everything. So our goal is to enhance the quality of that connection so the robot becomes more symbiotic.

And human beings aren’t the only ones able to take advantage of this marriage between 3-D printing and biomedicine. Not surprisingly, animals are reaping the benefits of the latest technological breakthroughs in these fields as well, as evidenced by a little duck named Dudley from the K911 animal rescue service in Sicamous, Canada.

Not too long ago, Dudley lost a leg when a chicken in the same pen mauled him. But thanks to a 3-D printed leg made especially for him, he can now walk again. It was created by Terence Loring of 3 Pillar Designs, a company that specializes in 3D-printed architectural prototypes. After hearing of Dudley’s plight through a friend, Loring decided to see what he could do to help.

Unlike a previous printed limb, the foot fashioned for Buttercup the duck, Loring sought to create an entire limb that could move. The first limb he designed had a jointed construction and was fully 3D-printed in plastic. Unfortunately, the leg broke the moment Dudley put it on, forcing Loring back to the drawing board for a one-piece design printed from a softer plastic.

The subsequent leg he created had no joints but could bend on its own. And when Dudley put it on, he started walking straight away and without hesitation. Issues remain to be solved, like how to prevent friction sores, a problem that Mike Garey (who designed Buttercup’s new foot) solved with a silicone sock and a prosthetic gel liner.

Nevertheless, Dudley is nothing if not as happy as a duck in a pond, and it seems very likely that any remaining issues will be ironed out in time. In fact, one can expect that veterinary medicine will fully benefit from the wide range of 3D-printed prosthetic devices and even bionic limbs as research continues to produce new and exciting possibilities.

And in the meantime, enjoy the following videos which show both Amanda Boxtel and Dudley the duck enjoying their new devices and the ways in which they help bring mobility back to their worlds:

 

Amanda Boxtel taking her first steps in 22 years:

 


Dudley the duck walking again:


Sources: news.cnet.com, (2), (3), 3dsystems.com, 3pillardesigns.com

The Future is Here: Google Glass for the Battlefield

Wearing a Google Glass headset in public may get you called a “hipster”, a “poser”, or (my personal favorite) a “glasshole”. But not surprisingly, armies around the world are looking to turn portable displays into battlefield reality. Combined with powered armor and computer-assisted aiming, display glasses are part of just about every advanced nation’s Future Soldier program.

Q-Warrior is one such example: the latest version of helmet-mounted display technology from BAE Systems’ Q-Sight line. The 3D heads-up display provides full-color, high-resolution images and overlays data and a video stream over the soldier’s view of the real world. In short, it is designed to provide soldiers in the field with rapid, real-time “situational awareness”.

The Q-Warrior also includes enhanced night vision, waypoint and routing information, and the ability to identify hostile and non-hostile forces, track personnel and assets, and coordinate small-unit actions. As Paul Wright, the soldier systems business development lead at BAE Systems’ Electronic Systems, said in a recent statement:

Q-Warrior increases the user’s situational awareness by providing the potential to display ‘eyes-out’ information to the user, including textual information, warnings and threats. The biggest demand, in the short term at least, will be in roles where the early adoption of situational awareness technology offers a defined advantage.

The display is being considered for use as part of the Army’s Tactical Assault Light Operator Suit (TALOS), a powered exoskeleton currently under development that features liquid armor capable of stopping bullets and the ability to apply wound-sealing foam.

As Lt. Col. Karl Borjes, a U.S. Army Research, Development and Engineering Command (RDECOM) science adviser, said in a statement:

[The] requirement is a comprehensive family of systems in a combat armor suit where we bring together an exoskeleton with innovative armor, displays for power monitoring, health monitoring, and integrating a weapon into that — a whole bunch of stuff that RDECOM is playing heavily in.

The device is likely to be used by non-traditional military units with reconnaissance roles, such as Forward Air Controllers/Joint Tactical Aircraft Controllers (JTACs), or by Special Forces during counter-terrorist tasks. The next level of adoption could be light-role troops such as airborne forces or marines, where technical systems and aggression help to offset their lighter equipment.

More and more, life in the military is beginning to imitate art – in this case, Iron Man or Starship Troopers (the novel, not the movie). In addition to powered exoskeletons and heads-up displays, concepts currently in development include battlefield robots, autonomous aircraft and ships, and even directed-energy weapons.

And of course, BAE Systems was sure to make a promotional video showcasing the concept and the technology behind it. For additional footage, photos, and descriptions of the Q-Warrior system, drop by the company’s website. Check it out below:


Sources: wired.com, baesystems.com

The Future of Medicine: 3D Printing and Bionic Organs!

There’s just no shortage of breakthroughs in the field of biomedicine these days. Whether it’s 3D bioprinting, bionics, nanotechnology, or mind-controlled prosthetics, every passing week seems to bring more in the way of amazing developments. And given the rate of progress, it’s likely to be just a few years before mortality itself is considered a treatable condition.

Consider the most recent breakthrough in 3D printing technology, which comes to us from the J.B. Speed School of Engineering at the University of Louisville, where researchers used a printed model of a child’s heart to help a team of doctors prepare for open-heart surgery. Thanks to these printer-assisted measures, the doctors were able to save the life of a 14-year-old boy.

Philip Dydysnki, Chief of Radiology at Kosair Children’s Hospital, decided to approach the school when he and his medical team were looking at ways of treating Roland Lian Cung Bawi, a boy born with four heart defects. Using images taken from a CT scan, researchers from the school’s Rapid Prototyping Center were able to create and print a 3D model of Roland’s heart at 1.5 times its actual size.

Built in three pieces using a flexible filament, the print reportedly took around 20 hours and cost US$600. Cardiothoracic surgeon Erle Austin III then used the model to devise a surgical plan, ultimately repairing the heart’s defects in just one operation. As Austin said, “I found the model to be a game changer in planning to do surgery on a complex congenital heart defect.”

Roland has since been released from hospital and is said to be in good health. In the future, this type of rapid prototyping could become a mainstay of medical training and practice surgery, giving surgeons the option of testing their strategies beforehand. And be sure to check out this video of the procedure from the University of Louisville:


And in another story, improvements in the field of bionics are making a big difference for people with diabetes. For those living with type 1 diabetes, the constant need to extract and monitor blood can be quite the hassle, which is why medical researchers are looking for new, non-invasive ways to monitor and adjust sugar levels.

Solutions range from laser blood monitors to glucose-sensitive nanodust, but the field of bionics also offers solutions. Consider the bionic pancreas that was recently trialled among 30 adults, and has since been approved by the US Food and Drug Administration (FDA) for three transitional outpatient studies over the next 18 months.

The device comprises a sensor inserted under the skin that relays hormone-level data to a monitoring device, which in turn sends the information wirelessly to an app on the user’s smartphone. Based on the data, which is provided every five minutes, the app calculates the required dosage of insulin or glucagon and communicates it to two hormone infusion pumps worn by the patient.
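The sense-decide-dose loop described above can be sketched very roughly in code. The thresholds and doses below are purely illustrative placeholders, not the device’s actual algorithm, which uses adaptive dosing models rather than fixed cutoffs:

```python
def dose_recommendation(glucose_mg_dl):
    """Toy dual-hormone decision rule: insulin when high, glucagon when low.

    All numbers are illustrative assumptions, not the bionic pancreas's
    real control logic.
    """
    if glucose_mg_dl > 180:      # hyperglycemia: deliver insulin
        return ("insulin", 1.0)
    elif glucose_mg_dl < 70:     # hypoglycemia: deliver glucagon
        return ("glucagon", 0.5)
    return (None, 0.0)           # in range: no dose needed

# The app re-evaluates every five minutes as each new sensor reading arrives
for reading in [95, 210, 64]:
    hormone, units = dose_recommendation(reading)
    print(reading, hormone, units)
```

The key design point is the dual-hormone loop: unlike insulin-only pumps, the device can push glucose in either direction, which is what lets it run with minimal patient intervention.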

The bionic pancreas has been developed by associate professor of biomedical engineering at Boston University Dr. Edward Damiano, and assistant professor at Harvard Medical School Dr. Steven Russell. To date, it has been trialled with diabetic pigs and in three hospital-based feasibility studies amongst adults and adolescents over 24-48 hour periods.

The upcoming studies will allow the device to be tested by participants in real-world scenarios with decreasing amounts of supervision. The first will test the device’s performance over five continuous days in twenty adults with type 1 diabetes. The results will then be compared to a corresponding five-day period during which the participants are at home, under their own care and without the device.

A second study will be carried out using 16 boys and 16 girls with type 1 diabetes, testing the device’s performance for six days against a further six days of the participants’ usual care routine. The third and final study will be carried out amongst 50 to 60 further participants with type 1 diabetes who are also medical professionals.

Should the transitional trials be successful, a more developed version of the bionic pancreas, incorporating results and feedback from the previous trials, will be put through further trials in 2015. If all goes well, Prof. Damiano hopes the bionic pancreas will gain FDA approval and be rolled out by 2017, when his son, who has type 1 diabetes, is expected to start higher education.

With this latest development, we are seeing how smart technology and non-invasive methods are merging to assist people living with chronic health conditions. Alongside “smart tattoos” and embedded monitors, such devices are leading to an age in which our health is increasingly in our own hands, and preventative medicine takes precedence over corrective.

Sources: gizmag.com, (2)

Drone Wars: Bigger, Badder, and Deadlier

In their quest to “unman the front lines” and maintain drone superiority over other states, the US armed forces have been working on a series of designs that will one day replace their air fleet of Raptors and Predators. The fact that potential rivals like Iran and China are actively imitating aspects of these designs is an added incentive, forcing military planners to think bigger and bolder.

Consider the MQ-4C Triton Unmanned Aerial System (UAS), a jet-powered drone the size of a Boeing 757 passenger jet. Developed by Northrop Grumman and measuring some 40 meters (130 feet) from wingtip to wingtip, this “super drone” is intended to replace the US Navy’s fleet of RQ-4 Global Hawks, a series of unmanned aerial vehicles in service since the late 1990s.

Thanks to a sensor suite that supplies a 360-degree view at a radius of over 3,700 km (2,300 miles), the Triton can provide high-altitude, real-time intelligence, surveillance and reconnaissance (ISR) at heights and distances in excess of any of its competitors. In addition, the drone possesses unique de-icing and lightning-protection capabilities, allowing it to plunge through the clouds to get a closer view of surface ships.

And although the Triton has a high degree of autonomy, operators on the ground are still relied upon to obtain high-resolution imagery, use radar for target detection, and provide information-sharing capabilities to other military units. Thus far, the Triton has completed flights of up to 9.4 hours at altitudes of 15,250 meters (50,000 feet) at the company’s manufacturing facility in Palmdale, California.

Mike Mackey, Northrop Grumman’s Triton UAS program director, had the following to say in a statement:

During surveillance missions using Triton, Navy operators may spot a target of interest and order the aircraft to a lower altitude to make positive identification. The wing’s strength allows the aircraft to safely descend, sometimes through weather patterns, to complete this maneuver.

Under an initial contract of $1.16 billion in 2008, the Navy has ordered 68 of the MQ-4C Triton drones with expected delivery in 2017. Check out the video of the Triton during its most recent test flight below:


But of course, this jetliner-sized craft is just one of many enhancements the US armed forces are planning for their drone fleet. Another is a jet-powered, long-range attack drone planned as a replacement for the aging MQ-1 Predator system. Known as the Avenger (alternately the MQ-1 Predator C), this next-generation unmanned aerial vehicle has a range of close to 3,000 km (1,800 miles).

Designed by General Atomics, the Avenger was conceived with Afghanistan in mind; or rather, the planned US withdrawal by the end of 2014. Given that ongoing CIA anti-terrorism operations in neighboring Pakistan are expected to continue, and that airstrips in Afghanistan will no longer be available, the drones used will need significant range.


The Avenger prototype made its first test flight in 2009 and, after a new round of tests completed last month, is now operationally ready. Based on the company’s better-known MQ-9 Reaper drone, the Avenger is designed to perform high-speed, long-endurance surveillance or strike missions, flying at up to 800 km/h (500 mph) at a maximum altitude of 15,250 meters (50,000 feet) for as long as 18 hours.

Compared to the earlier prototype, the Avenger’s fuselage has been lengthened by four feet to accommodate larger payloads and more fuel, allowing for extended missions. It can carry up to 1,590 kilograms (3,500 pounds) internally, and its wings can carry weapons as large as a 2,000-pound Joint Direct Attack Munition (JDAM) alongside a full complement of Hellfire missiles.
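Since these specs mix metric and imperial figures, a few lines of code make it easy to sanity-check the quoted conversions (the constants are the standard conversion factors):

```python
# Standard unit-conversion factors
KMH_PER_MPH = 1.609344   # kilometers per hour in one mile per hour
M_PER_FT = 0.3048        # meters in one foot
KG_PER_LB = 0.45359237   # kilograms in one pound

print(round(500 * KMH_PER_MPH))   # 500 mph in km/h
print(round(50_000 * M_PER_FT))   # 50,000 ft ceiling in meters
print(round(3_500 * KG_PER_LB))   # 3,500 lb payload in kilograms
```

Running this confirms that 500 mph is about 805 km/h, 50,000 feet is 15,240 meters, and a 3,500-pound payload is roughly 1,590 kilograms.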

Switching from propeller-driven drones to jets will allow the CIA to continue its Pakistan strikes from a more distant base if the U.S. is forced to withdraw entirely from neighboring Afghanistan. And according to a recent Los Angeles Times report, the Obama administration is actively making contingency plans to maintain surveillance and attacks in northwest Pakistan as part of its security agreement with Afghanistan.

The opportunity technology offers to close the gap between the need to act quickly and the need to operate from a greater distance isn’t lost on the US military, or on the company behind the Avenger. As Frank Pace, president of the Aircraft Systems Group at General Atomics, said in a recent statement:

Avenger provides the right capabilities for the right cost at the right time and is operationally ready today. This aircraft offers unique advantages in terms of performance, cost, timescale, and adaptability that are unmatched by any other UAS in its class.

What’s more, one can tell simply by looking at the streamlined fuselage and softer contours that stealth is part of the package. By reducing the drone’s radar cross-section (RCS) and applying radar-absorbing materials, next-generation drone fleets will be mimicking fifth-generation fighter craft. Perhaps we can expect aerial duels between remotely-controlled fighters to follow not long after…

And of course, there’s General Atomics’ Avenger concept video to enjoy:


Sources:
wired.com, (2)

The Future is Here: VR Body-Swapping

One of the most interesting and speculative concepts to come out of William Gibson’s cyberpunk series, The Sprawl Trilogy, was Simstim. Short for “simulated stimulation”, the technology involved stimulating one person’s nervous system so that they could experience another’s consciousness. As is so often the case, science fiction is proving to be the basis for science fact.

This latest case of science imitating sci-fi comes from Barcelona, where a group of interdisciplinary students has created a system that uses virtual reality and neuroscience to let people see, hear, and even feel what it’s like to be in another person’s body. The focus, though, is on letting men and women undergo a sort of high-tech “gender swap”, experiencing what it’s like to be in each other’s shoes.

Be Another Lab is made up of Philippe Bertrand, Daniel Gonzalez Franco, Christian Cherene, and Arthur Pointea, a collection of interdisciplinary artists whose fields range from programming and electronic engineering to interactive system design and neuro-rehabilitation. Together, they aim to explore the concept of empathy through technology, science, and art.

In most neuroscience experiments that examine issues of empathy and bias, participants “trade places” with others using digital avatars. If a study wants to explore empathy for the handicapped, for example, scientists might sit subjects down in front of a computer and make them play a video game in which they are confined to a wheelchair, then ask them a series of questions about how the experience made them feel.

Be Another Lab, however, takes a different, more visceral approach to exploring empathy. Instead of using digital avatars, the group uses performers to copy the movements of a subject. For example, racial bias is studied by having a subject’s actions mirrored by a performer of color. And for something like gender bias, men and women take a turn at living inside the body of one another.

Bertrand and company have taken this approach to the next level by leveraging a pair of Oculus Rift virtual reality headsets in a project they call the Machine To Be Another. Two participants stand in front of one another, put on their headsets, and effectively see out of each other’s eyes. When they look at each other, they see themselves. When they speak, they hear the other person’s voice in their ears.

But things don’t end there! Working together, the two participants are encouraged to sync their movements, touching objects in the room, looking at things, and exploring their ‘own’ bodies simultaneously. Bertrand explains the experience as follows:

The brain integrates different senses to create your experience of the world. In turn, the information from each of these senses influences how the other senses are processed. We use these techniques from neuroscience to actually affect the psychophysical sensation of being in your body.

In other words, by feeding each participant video and sound from their partner’s headset while both move and touch things in unison, the Machine To Be Another can actually convince people that they are in someone else’s body, so long as the two partners remain in sync.

It’s a radical idea that Be Another Lab is only beginning to explore. Right now, their experiments have mostly focused on gender swapping, but the team hopes to expand and tackle issues surrounding transgender identity and sexual orientation. The group is currently looking to partner with various organizations, experts, and activists to help further refine their techniques.

It’s a unique idea: giving people the ability to not only walk a mile in another’s shoes, but to know what that actually feels like physically. I can foresee this sort of technology becoming a part of sensitivity training in the future, and even of rehabilitation programs for sex offenders and perpetrators of hate crimes. Currently, such training focuses on getting offenders to empathize with their victims.

What better way to do that than making them see exactly what it’s like to be them? And in the meantime, enjoy this video of the Machine To Be Another in action:


Source: fastcodesign.com

Powered by Wind: World’s Tiniest Windmills

Wind power is one of the fastest-growing industries thanks to turbines’ ability to provide clean, renewable energy. And while most designs are trending toward ever larger sizes and power yields, some researchers are looking in the opposite direction. By equipping everyday objects with tiny windmills, we just might find our way toward a future where batteries are unnecessary.

Professor J.C. Chiao and his postdoc Dr. Smitha Rao of the University of Texas at Arlington are two individuals who are making this idea a reality. Their new MEMS-based nickel alloy windmill is so small that ten could be mounted on a single grain of rice. Aimed at very-small-scale energy harvesting applications, these windmills could recharge batteries for smartphones and directly power ultra-low-power electronic devices.

These micro-windmills, technically horizontal-axis wind turbines, have a three-bladed rotor that is 1.8 mm in diameter and 100 microns thick, mounted on a tower about 2 mm tall. Despite their tiny size, the micro-windmills can endure strong winds, owing to a smart aerodynamic design and to being constructed of a tough nickel alloy rather than the silicon typical of most microelectromechanical systems (MEMS).

According to Dr. Rao, the problem with most MEMS designs is that they are too fragile, owing to the brittle nature of silicon and silicon oxide. Nickel alloy, by contrast, is very durable, and the clever design and size of the windmill mean that several thousand of them can be fabricated on a single 200 mm (8 inch) silicon wafer, which in turn makes for a very low per-unit cost.
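To get a feel for the scales involved, here’s a back-of-envelope estimate in Python. The 1.8 mm rotor diameter and 200 mm wafer come from the article; the wind speed, power coefficient, and per-die footprint are illustrative assumptions of mine, not figures from the UT Arlington team:

```python
import math

# Back-of-envelope power output for one micro-windmill, using the
# stated 1.8 mm rotor diameter. Wind speed (10 m/s), air density
# (1.2 kg/m^3) and power coefficient (0.2) are assumed values.
rotor_d = 1.8e-3                      # rotor diameter, m
area = math.pi * (rotor_d / 2) ** 2   # swept area, m^2
rho, v, cp = 1.2, 10.0, 0.2
power = 0.5 * rho * area * v ** 3 * cp   # watts
print(f"{power * 1e6:.0f} microwatts per windmill")   # -> 305 microwatts

# Rough die count on a 200 mm wafer, assuming each windmill occupies
# about a 2 mm x 2 mm footprint (again, my assumption):
wafer_area = math.pi * 100 ** 2       # mm^2
dies = int(wafer_area / 4)
print(f"~{dies} windmills per wafer")                 # -> ~7853
```

Even at a few hundred microwatts apiece, that die count squares with the “several thousand per wafer” figure, and ganged panels of them would add up to meaningful power for ultra-low-draw electronics.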

The windmills were crafted using origami-inspired techniques that allow two-dimensional shapes to be electroplated on a flat plane, then self-assembled into 3D moving mechanical structures. Rao and Chiao created the windmill for a Taiwanese semiconductor company called WinMEMS, which developed the fabrication technique. And as Rao states, they were interested in her work in micro-robotics:

It’s very gratifying to first be noticed by an international company and second to work on something like this where you can see immediately how it might be used. However, I think we’ve only scratched the surface on how these micro-windmills might be used.

Chiao claims that the windmills could perhaps be crafted into panels of thousands, which could then be attached to the sides of buildings to harvest wind energy for lighting, security, or wireless communication. So in addition to wind tunnels, large turbines, and piezoelectric fronds, literally every surface on a building could be turned into a micro-generator.

Powered by the wind indeed! And in the meantime, check out this video from WinMEMS, showcasing one of the micro-windmills in action:


Sources: news.cnet.com, gizmag.com

The Future is Here: VR Taste Buds and Google Nose

One of the most intriguing and fastest-growing aspects of digital media is the possibilities it offers for augmenting reality. Currently, that means overlaying images or text on top of the real world through the use of display glasses or projectors. But in time, the range of possibilities might expand far beyond the visual range, incorporating the senses of taste and smell.

That’s where devices like the Digital Taste Interface come into play. Developed by Nimesha Ranasinghe, an electrical engineer and the lead researcher of the team at the National University of Singapore, this new technology seeks to combine the worlds of virtual reality and gustation. As Ranasinghe explained in a recent interview with fastcompany.com:

Gustation is one of the fundamental and essential senses, [yet] it is almost unheard of in Internet communication, mainly due to the absence of digital controllability over the sense of taste. To simulate the sensation of taste digitally, we explored a new methodology which delivers and controls primary taste sensations electronically on the human tongue.

The method involves two main modules, the first being a control system which formulates different properties of stimuli – basically, levels of current, frequency, and temperature. These combine to provide thermal changes and electrical stimulation that simulate taste sensations, which are in turn delivered by the second module. This is the tongue interface, which consists of two thin, metal electrodes.

According to Ranasinghe, during the course of clinical trials, subjects reported a range of taste experiences, from sour, salty and bitter sensations to minty, spicy, and sweet. But to successfully communicate between the control systems and sensors, Ranasinghe and the team created a new language format. Known as TasteXML (TXML), this format specifies the structure of specific taste messages.
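The actual TXML schema isn’t reproduced anywhere I’ve seen, so the element and attribute names below are my own invention; the sketch just illustrates how a taste message might encode the three stimulus properties named above (current, frequency, temperature), using Python’s standard library:

```python
import xml.etree.ElementTree as ET

# Hypothetical TasteXML-style message. The real TXML schema is not
# public, so every element and attribute name here is invented purely
# for illustration; only the three stimulus parameters (current,
# frequency, temperature) come from the description of the system.
def taste_message(sensation, current_ua, freq_hz, temp_c):
    msg = ET.Element("taste")
    msg.set("sensation", sensation)
    ET.SubElement(msg, "current", unit="uA").text = str(current_ua)
    ET.SubElement(msg, "frequency", unit="Hz").text = str(freq_hz)
    ET.SubElement(msg, "temperature", unit="C").text = str(temp_c)
    return ET.tostring(msg, encoding="unicode")

print(taste_message("sour", 120, 800, 30))
```

The point of a format like this is simply to give the control system and the tongue interface a shared, machine-readable vocabulary, the same role HTML plays between a server and a browser.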

While the team is currently in negotiations to make the technology commercially available, a few pressing updates to the Digital Taste Interface are in the works. The first is a more appealing way to wear the tongue sensors, which currently must be attached while the mouth is open. To that end, the team wants an interface that can be held in the mouth, dubbed the “digital lollipop” because it looks like the candy.

In addition to making the system more aesthetically pleasing and appetizing, this will allow for a deeper understanding of how electrical stimulation affects taste sensors on different parts of the tongue. The team also wants to incorporate smell and texture into the experience, to further extend the range of sensations and create a truly immersive virtual experience.

Ultimately, the Digital Taste Interface has many potential benefits and applications, ranging from medical advances to diet regimens and video games. As Ranasinghe explains:

We are exploring different domains such as entertainment (taste changing drink-ware and accessories) and medical (for patients who lost the sense of taste or have a diminished sense of taste). However, our main focus is to introduce the sensation of taste as a digitally controllable media, especially to facilitate virtual and augmented reality domains.

So in the coming years, do not be surprised if virtual simulations come augmented with a full range of sensory experiences. In addition to being able to interact with simulated environments (i.e. blowing shit up), you may also be able to smell the air, taste the food, and feel like you’re really and truly there. I imagine they won’t even call it virtual reality anymore. More like “alternate reality”!

And of course, there’s a video:


Source: fastcompany.com

The Future is Here: Laser 3D Printing

3D printing has really come into its own in recent years, with the range of applications constantly increasing. However, not all 3D printers or printing methods are the same: they range from ones that use layered melted plastic to ones that print layers of metal dust, then fuse them with microwave radiation. These differences also mean that some printers are faster, more accurate, and more expensive than others.

Take the Pegasus Touch as an example. Built by Las Vegas-based company Full Spectrum Laser (FSL), this desktop 3D printer uses lasers to create objects faster and in finer detail than most other printers in its price range. Available for as little as US$2,000 via a Kickstarter campaign, its performance is claimed to be comparable to machines costing 50 times more.


Instead of building up an object by melting plastic filament and depositing the liquid like ink from a nozzle, the Pegasus Touch uses what’s called laser-based stereolithography (SLA). This consists of using a 500 kHz ultraviolet laser moving at 3,000 mm/sec to solidify curable photopolymer resin. As the object rises out of a vat of resin, the laser focuses on the surface, building up layer after layer with high precision.

To be fair, the technology has been around for many years. What is different with the Pegasus Touch is that FSL has shrunk the printer down and made it more economical. Normally, SLA machines are huge and cost on the order of hundreds of thousands of dollars. The Pegasus Touch, on the other hand, measures just 28 x 36 x 57 cm (11 x 14 x 22.5 inches) and costs only a few thousand dollars.

This affordability is due in part to the fact that the wide availability of Blu-ray players has made UV laser diodes much more affordable. In addition, FSL is already adept at making laser cutting and engraving machines, which has allowed the company to base the Pegasus Touch on modelling software and electronics already developed for those machines. This allows the device to operate at tolerances equivalent to a $100,000 machine.

The device also has an on-board 1GHz Linux computer with 512 MB memory that can do much of the 3D processing computation itself, making a connected PC all but unnecessary. There’s also an internet-connected 4.3-in color touchscreen, which allows the user to access open-source models that are printer-ready, plus the machine comes with multi-touch-capable desktop software.

It also has a relatively large build area of approximately 18 x 18 x 23 cm (7 x 7 x 9 inches), one of the largest in the consumer 3D printer market. The company also says that the Pegasus Touch is ten times faster than a fused deposition modelling (FDM) printer and up to six times faster than other SLA printers, has finer control, and produces a better, more detailed finish.
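For a sense of what layer-based printing implies in practice, here’s a rough estimate. Only the 23 cm build height comes from the Pegasus Touch’s stated build area; the layer thickness and per-layer time are illustrative assumptions of mine, not FSL’s specs:

```python
# Rough layer count and print time for a full-height SLA print.
build_height_mm = 230               # from the stated build area
layer_um = 100                      # assumed layer thickness, microns
layers = build_height_mm * 1000 // layer_um
secs_per_layer = 10                 # assumed laser scan + resin settle time
hours = layers * secs_per_layer / 3600
print(f"{layers} layers, ~{hours:.1f} hours for a full-height print")
# -> 2300 layers, ~6.4 hours for a full-height print
```

Since an SLA laser only traces each cross-section rather than extruding it, per-layer time stays low even at fine resolutions, which is where the claimed speed advantage over filament printers would come from.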

The Pegasus Touch’s Kickstarter campaign wrapped up earlier this month and raised a total of $819,535, putting them well above their original goal of $100,000. For those who pledged $2000 or more, the printer was made available for pre-order. When and if it goes on sale, the asking price will be $3,499. Given time, I imagine the technology will improve to use metal and other materials instead of resin.

And of course, there’s a promotional video, showcasing the device at work:


Sources: gizmag.com, kickstarter.com, fsl3d.com

Making Tech Accessible: Helping Amputees in War-Torn Sudan

The new year is flying by pretty quickly, and many relevant stories involving life-changing tech developments are flying by even faster. And in my busyness and haste to deal with my own writing, I’ve sadly let a lot of stories slip through my fingers. Lucky for me, there’s no statute of limitations when it comes to blogging. Even if you cover something late, it’s not like someone’s going to fire you!

That said, here is one news item I’m rather ashamed of not having gotten to sooner. It’s no secret that 3D printing is offering new possibilities for amputees and prosthetic devices, in part because the technology offers greater accessibility and lower costs to those who need them. And one area in serious need is the developing, war-torn nation of Sudan.

And thanks to Mick Ebeling, co-founder and CEO of Not Impossible Labs, 3D-printed prosthetics are now being offered to victims of the ongoing war. After learning of a 14-year-old boy named Daniel who lost both arms in a government air raid, Ebeling traveled to the Nuba Mountains to meet him in person. Having already worked on a similar project in South Africa, he decided to bring 3D-printed prosthetics to the area.

Ebeling was so moved by Daniel’s plight that he turned to a world-class team of thinkers and doers – including the inventor of the Robohand, an MIT neuroscientist, a 3D printing company in California, and funding from Intel and Precipart – to see how they could help Daniel and kids like him. Fittingly, he decided to name it “Project Daniel”.

And now, just a year later, Not Impossible Labs has its own little lab at a hospital in the region, where it is able to print prosthetic arms for $100 a pop in less than six hours. Meanwhile, Daniel not only got his left-arm prosthetic in November, but is currently employed at the hospital helping to print prosthetics for other children who have suffered the same fate as him.

Ebeling says the printed arm isn’t as sophisticated as others out there, but it did allow Daniel to feed himself for the first time in two years. And while Daniel won’t be able to lift heavy objects or control his fingers with great precision, the prosthetic is affordable and produced locally, so it also serves as an economically viable stand-in until the tech for 3D-printed prosthetics improves and comes down in cost.

Not Impossible Labs, which has already fitted others with arms, says it hopes to extend its campaign to thousands like Daniel. It’s even made the design open source in the hopes that others around the world will be able to replicate the project, setting up similar labs to provide low-cost prosthetics to those in need. After all, there are plenty of war-torn regions in the developing world today, and no shortage of victims.

In the coming years, it would be incredibly encouraging to see similar labs set up in developing nations in order to address the needs of local amputees. In addition to war, landmines, terrorism, and even lack of proper medical facilities give rise to the need for cheap, accessible prosthetics. All that’s really needed is an internet connection, a 3D printer, and some ABS plastic for raw material.

None of this is beyond the budgets of most governments or NGOs, so such partnerships are not only possible but entirely feasible. For the sake of kids like Daniel, it’s something that we should make happen! And in the meantime, check out this video below, courtesy of Not Impossible Labs, which showcases the printing technology used by Project Daniel and the inspiring story behind it.

And be sure to check out their website for more information and details on how you can help!



Sources: news.cnet.com, notimpossiblelabs.com