The Future is Here: The Soft Robotic Exosuit

Robotic exoskeletons have come a long way, and are even breaking the mold. When one utters the term, it tends to conjure up images of a heavy suit with a metal frame that bestows the wearer super-human strength – as exemplified by Daewoo’s robot worker suits. And while those are certainly making an impact, there is a burgeoning market for flexible exoskeletons that would assist with everyday living.

Researchers at Harvard’s Wyss Institute for Biologically Inspired Engineering have developed just such a device, a flexible fabric exoskeleton that earned them a $2.9 million grant from DARPA to continue developing the technology. Unlike the traditional exoskeleton concept, Harvard’s so-called “Soft Exosuit” is not designed to give the wearer vastly increased lifting capacity.

Instead, the Soft Exosuit works with the musculature to reduce injuries, improve stamina, and enhance balance even for those with weakened muscles. In some ways, this approach to wearable robotics is the opposite of past exoskeletons. Rather than the human working within the abilities and constraints of the exoskeleton, the exoskeleton works with the natural movements of the human wearer.

The big challenge of this concept is designing a wearable machine that doesn’t get in the way. In order to address this, the Wyss Institute researchers went beyond the usual network of fabric straps that hold the suit in place around the user’s limbs. In addition, they carefully studied the way people walk and determined which muscles would benefit from the added forces offered by the Exosuit.

With a better understanding of the biomechanics involved, the team decided to go with a network of cables to transmit forces to the joints. Batteries and motors are mounted at the waist to avoid having any rigid components interfering with natural joint movement. This allows the wearer the freedom to move without having to manually control how the forces are applied.

Basically, the wearer does not have to push on a joystick, pull against restraints, or stick to a certain pace when walking with the Exosuit. The machine is supposed to work with the wearer, not the other way around. The designers integrated a network of strain sensors throughout the straps that transmit data back to the on-board microcomputer to interpret and apply supportive force with the cables.
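To make that control loop a little more concrete, here is a minimal sketch of the sense-interpret-actuate cycle described above. It is purely illustrative – the sensor names, thresholds, and gains are my own assumptions, not the Wyss team’s actual control code:

```python
# Minimal sketch of a sense-interpret-actuate loop for a soft exosuit.
# All sensor names, thresholds, and gains are illustrative assumptions,
# not values published by the Wyss Institute.

def read_strain_sensors():
    """Stand-in for sampling the strain gauges woven into the suit's straps."""
    # A real suit would poll an ADC here; we return dummy values.
    return {"hip": 0.12, "ankle": 0.30}

def estimate_gait_phase(strain):
    """Guess where the wearer is in the gait cycle from strap strain."""
    # Toy heuristic: high ankle strain ~ push-off phase.
    return "push_off" if strain["ankle"] > 0.25 else "swing"

def command_cable_tension(phase, gain=40.0):
    """Map gait phase to a cable-tension command (newtons, illustrative)."""
    return gain if phase == "push_off" else 0.0

def control_step():
    strain = read_strain_sensors()
    phase = estimate_gait_phase(strain)
    tension = command_cable_tension(phase)
    # A real controller would stream this command to the waist-mounted motors.
    print(f"phase={phase}, cable tension={tension:.1f} N")

if __name__ == "__main__":
    control_step()
```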

DARPA is funding this project as part of the Warrior Web program, which seeks to reduce musculoskeletal injuries for military personnel. However, Harvard expects this technology to be useful in civilian applications as well. Anyone who needs to walk for long periods of time at work could benefit from the Soft Exosuit, which is less expensive and more comfortable than conventional exoskeletons; and with a little rescaling, could even be worn under clothing.

But the greatest impact of the Soft Exosuit is likely to be for those who suffer from a physical impairment and/or injuries. Someone who has trouble standing or walking could possibly attain normal mobility with the aid of this wearable robot. And people working their way through physiotherapy would find it very useful in restoring their muscles and joints to their usual strength.

The team plans to collaborate with clinical partners to create a version of the exosuit for just this purpose. What the Wyss Institute has demonstrated so far has just been the general proof-of-concept for the Soft Exosuit. In time, and with further refinements, we could see all sorts of versions becoming available – from the militarized to the medical, from mobility assistance for seniors, to even astronauts looking to prevent atrophy.

And as always, technology that is initially designed to assist and address mobility issues is likely to give way to enhancement and augmentation. It’s therefore not hard to imagine a future where soft robotic exosuits are produced for every possible use, including recreation and transhumanism. Hell, it may even be foreseeable that an endoskeleton will be possible in the not-too-distant future, something implantable that can do the same job but be permanent…

Cool and scary! And be sure to check out this video from the Wyss Institute of the Soft Exosuit being tested:

 

 


Source: extremetech.com, wyss.harvard.edu, darpa.mil

The Future of Medicine: The “Human Body-on-a-Chip”

One of the aims of modern medicine is perfecting the way we test treatments and drugs, so that the lengthy guess-work and clinical trials can be shortened or even cut out of the equation. Not only would this ensure the speedier delivery of drugs to market, it would also eliminate the need for animal testing, something which has become increasingly common and controversial in recent years.

Over the last century, animal testing has expanded from biomedical research to include things like drug, chemical, and cosmetic testing. One 2008 study reported by The Guardian estimated that 115 million animals are used each year for scientific research alone. It is therefore no surprise that opposition is growing, and that researchers, regulators and even military developers are looking for more accurate, efficient, and cruelty-free alternatives.

Enter the National Institutes of Health in Bethesda, Maryland, where researchers have teamed up with the FDA and even DARPA to produce a major alternative. Known as the “Human Body-on-a-Chip”, this device is similar to other “organs-on-a-chip” in that it is basically a small, flexible piece of plastic with hollow micro-fluidic channels lined with human cells that can mimic human systems far more effectively than simple petri dish cell cultures.

Dan Tagle, the associate director of the NIH’s National Center for Advancing Translational Sciences, explained the benefits of this technology as follows:

If our goal is to create better drugs, in a way that is much more efficient, time and cost-wise, I think it’s almost inevitable that we will have to either minimize or do away with animal testing.

What’s more, chips like this one could do away with animal testing entirely, which is not only good news for animals and activists, but for drug companies themselves. As it stands, pharmaceutical companies have hit a wall in developing new drugs, with roughly 90% failing in human clinical trials based on safety and effectiveness. One reason for this high rate of failure is that drugs that first seem promising in rodents often don’t have the same response in people.

In fact, so-called “animal models” are only typically 30% to 60% predictive of human responses, and there are potentially life-saving drug therapies that never make it to human clinical trials because they’re toxic to mice. In these cases, there’s no way to measure the lost opportunity when animals predict the wrong response. And all told, it takes an average of 14 years and often billions of dollars to actually deliver a new drug to the market.

According to Geraldine Hamilton, a senior staff scientist at Harvard University’s Wyss Institute for Biologically Inspired Engineering, it all began five years ago with the “lung-on-a-chip”:

We’ve also got the lung, gut, liver and kidney. We’re working on skin. The goal is really to do the whole human body, and then we can fluidically link multiple chips to capture interactions between different organs and eventually recreate a body on a chip.

This has led to further developments in the technology, and Hamilton is now launching a new startup company to bring it to the commercial market. Emulate, the new startup that will license Wyss’s technology, isn’t looking to literally create a human body but rather to represent its “essential functions” and develop a platform that’s easy for all scientists and doctors to use, says Hamilton, who will become Emulate’s president and chief scientific officer.

Borrowing microfabrication techniques from the semiconductor industry, each organ-on-a-chip is built with small features – such as channels, vessels, and flexible membranes – designed to recreate the flow and forces that cells experience inside a human body. All that’s needed are different chips with different cultures of human cells; then researchers can perform tests to see how drugs work in one region of the body before being metabolized by the liver.

This might one day help the military to test treatments for biological or chemical weapons, a process that is unethical (and illegal) with humans, and cruel and often inaccurate with animals. Hospitals may also be able to use a patient’s own stem cells to develop and test “personalized” treatments for their disease, and drug companies could more quickly screen promising new drugs to see if they are effective and what (if any) side effects they have on the body’s organs.

It’s a process that promises speedier tests, quicker delivery, a more cost-effective medical system, and the elimination of cruel and often inaccurate animal testing. Can you say win-win-win?

Source: fastcoexist.com, ncats.nih.gov, wyss.harvard.edu, theguardian.com

Powered by the Sun: Boosting Solar Efficiency

Improving the efficiency of solar power – which is currently the most promising alternative energy source – is central to ensuring that it becomes an economically viable replacement for fossil fuels, coal, and other “dirty” sources. And while many solutions have emerged in recent years that have led to improvements in solar panel efficiency, many developments are also aimed at the other end of things – i.e. improving the storage capacity of solar batteries.

In the former case, a group of scientists working with the University of Utah believe they’ve discovered a method of substantially boosting solar cell efficiencies. By adding a polychromat layer that separates and sorts incoming light, redirecting it to strike particular layers in a multijunction cell, they hope to create a commercial cell that can absorb more wavelengths of light, and therefore generate more energy per volume than conventional cells.

Traditionally, solar cell technology has struggled to overcome a significant efficiency problem. The type of substrate used dictates how much energy can be absorbed from sunlight — but each type of substrate (silicon, gallium arsenide, indium gallium arsenide, and many others) corresponds to capturing a particular wavelength of energy. Cheap solar cells built on inexpensive silicon have a maximum theoretical efficiency of 34% and a practical (real-world) efficiency of around 22%.

At the other end of things, there are multijunction cells. These use multiple layers of substrates to capture a larger section of the sun’s spectrum and can reach up to 87% efficiency in theory – but are currently limited to 43% in practice. What’s more, these types of multijunction cells are extremely expensive and have intricate wiring and precise structures, all of which leads to increased production and installation costs.

In contrast, the cell created by the University of Utah used two layers — indium gallium phosphide (for visible light) and gallium arsenide (for infrared light). According to the research team, when their polychromat was added, the power efficiency increased by 16 percent. The team also ran simulations of a polychromat layer with up to eight different absorption layers and claim that it could potentially yield an efficiency increase of up to 50%.

However, there were some footnotes to their report which temper the good news. For one, the potential gain has not been tested yet, so any major increases in solar efficiency remain theoretical at this time. Second, the report states that the reported gain was a percentage of a percentage, meaning that if the original cell efficiency was 30%, then a gain of 16 percent means that the new efficiency is 34.8%. That’s still a huge gain for a polychromat layer that is easily produced, but not as impressive as it originally sounded.
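For those keeping score at home, here is that relative-versus-absolute distinction as a quick calculation (the 30% baseline is simply the example figure used above):

```python
# The reported 16% improvement is relative, not an extra 16 percentage points.
base_efficiency = 0.30   # example baseline cell efficiency from the paragraph above
relative_gain = 0.16     # improvement attributed to the polychromat layer
new_efficiency = base_efficiency * (1 + relative_gain)
print(f"{new_efficiency:.1%}")   # -> 34.8%, not 46.0%
```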

However, given that the biggest barrier to multi-junction solar cell technology is manufacturing complexity and associated cost, anything that boosts cell efficiency on the front end without requiring any major changes to the manufacturing process is going to help with the long-term commercialization of the technology. Advances like this could help make technologies cost effective for personal deployment and allow them to scale in a similar fashion to cheaper devices.

In the latter case, where energy storage is concerned, a California-based startup called Enervault recently unveiled battery technology that could increase the amount of renewable energy utilities can use. The technology is based on inexpensive materials that researchers had largely given up on because batteries made from them didn’t last long enough to be practical. But the company says it has figured out how to make the batteries last for decades.

The technology is being demonstrated in a large battery at a facility in the California desert near Modesto, one that stores one megawatt-hour of electricity, enough to run 10,000 100-watt light bulbs for an hour. The company has been testing a similar, though much smaller, version of the technology for about two years with good results. It has also raised $30 million in funding, including a $5 million grant from the U.S. Department of Energy.

The technology is a type of flow battery, so called because the energy storage materials are in liquid form. They are stored in big tanks until they’re needed and then pumped through a relatively small device (called a stack) where they interact to generate electricity. Building bigger tanks is relatively cheap, so the more energy storage is needed, the better the economics become. That means the batteries are best suited for storing hours’ or days’ worth of electricity, and not delivering quick bursts.
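The economics here are worth spelling out: in a flow battery the stack sets the power rating while the tanks set the energy capacity, so adding hours of storage is comparatively cheap. Here is a rough sketch of that scaling – the dollar figures are placeholder assumptions, not Enervault’s actual costs:

```python
# Rough sketch of why flow batteries favour long-duration storage:
# energy capacity scales with (cheap) tank volume, while power scales
# with the (more expensive) stack. All prices below are placeholders.

TANK_COST_PER_KWH = 50      # $/kWh of electrolyte + tank (assumed)
STACK_COST_PER_KW = 1200    # $/kW of stack (assumed)

def system_cost(power_kw, hours):
    energy_kwh = power_kw * hours
    return power_kw * STACK_COST_PER_KW + energy_kwh * TANK_COST_PER_KWH

# Doubling the storage duration only adds tank cost, so cost per kWh falls:
for hours in (2, 4, 8):
    cost = system_cost(power_kw=1000, hours=hours)
    print(f"{hours} h system: ${cost:,.0f}  (${cost / (1000 * hours):.0f}/kWh)")
```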

This is especially good news for solar and wind companies, which have remained plagued by problems of energy storage despite improvements in both yield and efficiency. Enervault says that when the batteries are produced commercially at even larger sizes, they will cost just a fifth as much as vanadium redox flow batteries, which have been demonstrated at large scales and are probably the type of flow battery closest to market right now.

And the idea is not reserved to just startups. Researchers at Harvard recently made a flow battery that could prove cheaper than Enervault’s, but the prototype is small and could take many years to turn into a marketable version. An MIT spinoff, Sun Catalytix, is also developing an advanced flow battery, but its prototype is also small. And other types of inexpensive, long-duration batteries are being developed, using materials such as molten metals.

One significant drawback to the technology is that it’s less than 70 percent efficient, which falls short of the 90 percent efficiency of many batteries. The company says the economics still work out, but such a wasteful battery might not be ideal for large-scale renewable energy. More solar panels would have to be installed to make up for the waste. What’s more, the market for batteries designed to store hours of electricity is still uncertain.
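To put numbers on that waste: the extra generation needed scales with the inverse of the round-trip efficiency, which makes for a quick back-of-the-envelope check:

```python
# Energy that must be generated for every kWh delivered out of storage.
for round_trip in (0.90, 0.70):
    needed = 1 / round_trip
    print(f"{round_trip:.0%} round-trip: {needed:.2f} kWh generated per kWh delivered")
# ~11% extra generation at 90% efficiency vs. ~43% extra at 70%.
```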

A combination of advanced weather forecasts, responsive fossil-fuel power plants, better transmission networks, and smart controls for wind and solar power could delay the need for them. California is requiring its utilities to invest in energy storage but hasn’t specified what kind, and it’s not clear what types of batteries will prove most valuable in the near term, slow-charging ones like Enervault’s or those that deliver quicker bursts of power to make up for short-term variations in energy supply.

Tesla Motors, one company developing the latter type, hopes to make them affordable by producing them at a huge factory. And developments and new materials are being considered all the time (e.g. graphene) that are improving both the efficiency and storage capacity of batteries. And with solar panels and wind becoming increasingly cost-effective, storage methods catching up seems all but inevitable.

Sources: extremetech.com, technologyreview.com

 

The Future of Medicine: Elastic Superglue and DNA Clamps

If there’s one thing medical science is looking to achieve, it’s ways of dealing with sickness and injuries that are less invasive. And now more than ever, researchers are looking to the natural world for solutions. Whether it is working with the body’s own components to promote healing, or using technologies that imitate living organisms, the future of medicine is all about engineered-natural solutions.

Consider the elastic glue developed by associate professor Jeffrey Karp, a Canadian-born medical researcher working at Harvard University. Created for heart surgery, this medical adhesive is designed to replace sutures and staples as the principal means of sealing incisions and defects in heart tissue. But the real kicker? The glue was inspired by the sticky natural secretions of slugs.

Officially known as hydrophobic light-activated adhesive (HLAA), the glue was developed in a collaboration between Boston Children’s Hospital, MIT, and Harvard-affiliated Brigham and Women’s Hospital. And in addition to being biocompatible and biodegradable (a major plus in surgery), it’s both water-resistant and elastic, allowing it to stretch as a beating heart expands and contracts.

All of this adds up to a medical invention that is far more user-friendly than stitches and staples, does not have to be removed, and will not cause complications. On top of all that, it won’t complicate healing by restricting the heart’s movements, and only becomes active when an ultraviolet light shines on it, so surgeons can more accurately bind the adhesive exactly where needed.

The technology could potentially be applied not just to congenital heart defects, but to a wide variety of organs and other body parts. In a recent interview with CBC Radio’s Quirks & Quarks, Karp explained the advantages of the glue:

Sutures and staples really are not mechanically similar to the tissues in the body, so they can induce stress on the tissue over time. This is a material that’s made from glycerol and sebacic acid, both of which exist in the body and can be readily metabolized. What happens over time is that this material will degrade. Cells will invade into it and on top of it, and ideally the hole will remain closed and the patient won’t require further operations.

In lab tests, biodegradable patches coated with HLAA were applied to holes in the hearts of live pigs. Despite the high pressure of the blood flowing through the organs, the patches maintained a leakproof seal for the 24-hour test period. HLAA is now being commercially developed by Paris-based start-up Gecko Biomedical, which hopes to have it on the market within two to three years.

In another recent development, scientists at the Université de Montréal have created a new DNA clamp capable of detecting the genetic mutations responsible for causing cancers, hemophilia, sickle cell anemia and other diseases. This clamp is not only able to detect mutations more efficiently than existing techniques, it could lead to more advanced screening tests and more efficient DNA-based nanomachines for targeted drug delivery.

To catch diseases at their earliest stages, researchers have begun looking into creating quick screening tests for specific genetic mutations that pose the greatest risk of developing into life-threatening illnesses. When the nucleotide sequence that makes up a DNA strand is altered, the result is a mutation, and specific mutations are known to give rise to specific types of cancer.

To detect this type of mutation and others, researchers typically use molecular beacons or probes, which are DNA sequences that become fluorescent on detecting mutations in DNA strands. The team of international researchers that developed the DNA clamp state that their diagnostic nanomachine allows them to more accurately differentiate between mutant and non-mutant DNA.

According to the research team, the DNA clamp is designed to recognize complementary DNA target sequences, bind with them, and form a stable triple helix structure, while fluorescing at the same time. Being able to identify single point mutations more easily this way is expected to help doctors identify different types of cancer risks and inform patients about the specific cancers they are likely to develop.
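The core idea – recognizing a target by how well it pairs with a probe sequence – can be illustrated in a few lines of code. This is only a toy sketch of base-pair matching; the actual clamp relies on triplex chemistry and fluorescence, and the sequences below are made up:

```python
# Toy illustration of single-point-mutation detection by sequence matching.
# Real DNA-clamp chemistry (triplex formation, fluorescence readout) is far
# richer; this only shows the idea of comparing a target against the
# complement of a probe. Sequences are invented for the example.

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(seq):
    return "".join(COMPLEMENT[base] for base in seq)

def mismatches(probe, target):
    """Count positions where the target fails to pair with the probe."""
    expected = complement(probe)
    return sum(1 for e, t in zip(expected, target) if e != t)

probe       = "ACGTGCTA"
healthy_dna = "TGCACGAT"   # perfect complement of the probe
mutant_dna  = "TGCACGTT"   # single point mutation near the end

print(mismatches(probe, healthy_dna))  # 0 -> clamp binds, fluoresces
print(mismatches(probe, mutant_dna))   # 1 -> weaker binding / different signal
```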

Diagnosing cancer at a genetic level could potentially help arrest the disease before it even develops properly. Alexis Vallée-Bélisle, a Chemistry Professor at the Université de Montréal, explained the long-term benefits of this breakthrough in a recent interview:

Cancer is a very complex disease that is caused by many factors. However, most of these factors are written in DNA. We only envisage identifying the cancers or potential of cancer. As our understanding of the effect of mutations in various cancer will progress, early diagnosis of many forms of cancer will become more and more possible.

Currently the team has only tested the probe on artificial DNA, and plans are in the works to undertake testing on human samples. But the team also believes that the DNA clamp will have nanotechnological applications, specifically in the development of machines that can do targeted drug-delivery.

For instance, in the future, DNA-based nanomachines could be assembled using many different small DNA sequences to create a 3D structure (like a box). When it encounters a disease marker, the box could then open up and deliver the anti-cancer drug, enabling smart drug delivery. What’s more, this new DNA clamp could prove instrumental in that assembly process.

Professor Francesco Ricci of the University of Rome, who collaborated on the project, explained the potential in a recent interview:

The clamp switches that we have designed and optimized can recognize a DNA sequence with high precision and high affinity. This means that our clamp switches can be used, for example, as super-glue to assemble these nano machines and create a better and more precise 3D structure that can, for example, open in the presence of a disease marker and release a drug.

Hmm, glues inspired by mollusc secretions, machines made from DNA. Medical technology is looking less like technology and more like biology every day now!

Sources: cbc.ca, gizmag.com, (2)

The Future of Computing: Brain-Like Computers

It’s no secret that computer scientists and engineers are looking to the human brain as a means of achieving the next great leap in computer evolution. Already, machines are being developed that rely on machine blood, can continue working despite being damaged, and recognize images and speech. And soon, a computer chip that is capable of learning from its mistakes will also be available.

The new computing approach, already in use by some large technology companies, is based on the biological nervous system – specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.

The first commercial version of the new kind of computer chip is scheduled to be released in 2014, and was the result of a collaborative effort between I.B.M. and Qualcomm, as well as a Stanford research team. This “neuromorphic processor” can not only automate tasks that once required painstaking programming, but can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.

In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That can hold enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.

For example, computer vision systems only “recognize” objects that can be identified by the statistics-oriented algorithms programmed into them. An algorithm is like a recipe, a set of step-by-step instructions to perform a calculation. But last year, Google researchers were able to get a machine-learning algorithm, known as a “Google Neural Network”, to perform an identification task (involving cats) without supervision.

And this past June, the company said it had used those neural network techniques to develop a new search service to help customers find specific photos more accurately. And this past November, researchers at Stanford University came up with a new algorithm that could give computers the power to more reliably interpret language. It’s known as the Neural Analysis of Sentiment (NaSent).

A similar concept known as Deep Learning is also looking to endow software with a measure of common sense. Google is using this technique with their voice recognition technology to aid in performing searches. In addition, the social media giant Facebook is looking to use deep learning to help them improve Graph Search, an engine that allows users to search activity on their network.

Until now, the design of computers was dictated by ideas originated by the mathematician John von Neumann about 65 years ago. Microprocessors perform operations at lightning speed, following instructions programmed using long strings of binary code (0s and 1s). The information is stored separately in what is known as memory, either in the processor itself, in adjacent storage chips or in higher capacity magnetic disk drives.

By contrast, the new processors consist of electronic components that can be connected by wires that mimic biological synapses. Because they are based on large groups of neuron-like elements, they are known as neuromorphic processors, a term credited to the California Institute of Technology physicist Carver Mead, who pioneered the concept in the late 1980s.

These processors are not “programmed” in the conventional sense. Instead, the connections between the circuits are “weighted” according to correlations in data that the processor has already “learned.” Those weights are then altered as data flows into the chip, causing them to change their values and to “spike.” This, in turn, strengthens some connections and weakens others, reacting much the same way the human brain does.
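A toy example can make the “weights plus spikes” idea more tangible. The following sketch is a bare-bones leaky integrate-and-fire neuron with a Hebbian-style weight update – an illustration of the general principle, not code for any actual neuromorphic chip, and every parameter is invented:

```python
# Minimal sketch of the "weighted connections + spikes" idea behind
# neuromorphic chips: a leaky integrate-and-fire neuron whose input
# weights are nudged up whenever an input coincides with an output spike.
# Parameters are illustrative only.

import random

weights = [0.3, 0.3, 0.3]   # connection strengths ("synapses")
potential = 0.0             # membrane potential
LEAK, THRESHOLD, LEARN_RATE = 0.9, 1.0, 0.05

for step in range(20):
    inputs = [random.choice([0, 1]) for _ in weights]      # incoming spikes
    potential = potential * LEAK + sum(w * x for w, x in zip(weights, inputs))
    if potential >= THRESHOLD:                              # output spike
        potential = 0.0
        # Hebbian-style update: strengthen synapses that just fired.
        weights = [w + LEARN_RATE * x for w, x in zip(weights, inputs)]
        print(f"step {step}: spike, weights={[f'{w:.2f}' for w in weights]}")
```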

In the words of Dharmendra Modha, an I.B.M. computer scientist who leads the company’s cognitive computing research effort:

Instead of bringing data to computation as we do today, we can now bring computation to data. Sensors become the computer, and it opens up a new way to use computer chips that can be everywhere.

One great advantage of the new approach is its ability to tolerate glitches, whereas traditional computers cannot work around the failure of even a single transistor. With the biological designs, the algorithms are ever changing, allowing the system to continuously adapt and work around failures to complete tasks. Another benefit is energy efficiency, an inspiration also drawn from the human brain.

The new computers, which are still based on silicon chips, will not replace today’s computers, but augment them; at least for the foreseeable future. Many computer designers see them as coprocessors, meaning they can work in tandem with other circuits that can be embedded in smartphones and the centralized computers that run computing clouds.

However, the new approach is still limited, thanks to the fact that scientists still do not fully understand how the human brain functions. As Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, put it:

We have no clue. I’m an engineer, and I build things. There are these highfalutin theories, but give me one that will let me build something.

Luckily, there are efforts underway that are designed to remedy this, with the specific intention of directing that knowledge towards the creation of better computers and AIs. One such effort comes from the National Science Foundation-financed Center for Brains, Minds and Machines, a new research center based at the Massachusetts Institute of Technology, with Harvard and Cornell.

Another is the California Institute for Telecommunications and Information Technology (aka. Calit2) – a center dedicated to innovation in nanotechnology, life sciences, information technology, and telecommunications. As Larry Smarr, an astrophysicist and director of the Institute, put it:

We’re moving from engineering computing systems to something that has many of the characteristics of biological computing.

And last, but certainly not least, is the Human Brain Project, an international group of 200 scientists from 80 different research institutions and based in Lausanne, Switzerland. Having secured the $1.6 billion they need to fund their efforts, these researchers will spend the next ten years conducting research that cuts across multiple disciplines.

This initiative, which has been compared to the Large Hadron Collider, will attempt to reconstruct the human brain piece-by-piece and gradually bring these cognitive components into an overarching supercomputer. The expected result of this research will be new platforms for “neuromorphic computing” and “neurorobotics,” allowing for the creation of computing and robotic architectures that mimic the functions of the human brain.

When future generations look back on this decade, no doubt they will refer to it as the birth of the neuromorphic computing revolution. Or maybe just the Neuromorphic Revolution for short, but that sort of depends on the outcome. With so many technological revolutions well underway, it is difficult to imagine how the future will look back and characterize this time.

Perhaps, as Charles Stross suggests, it will simply be known as “the teens”, that time in pre-Singularity history where it was all starting to come together, but had yet to explode and violently change everything we know. I for one am looking forward to being around to witness it all!

Sources: nytimes.com, technologyreview.com, calit2.net, humanbrainproject.eu

Year-End Tech News: Stanene and Nanoparticle Ink

The year 2013 was also a boon for the high-tech industry, especially where electronics and additive manufacturing were concerned. In fact, several key developments took place last year that may help scientists and researchers to move beyond Moore’s Law, as well as ring in a new era of manufacturing and production.

In terms of computing, developers have long feared that Moore’s Law – which states that the number of transistors on integrated circuits doubles approximately every two years – could be reaching a bottleneck. While the law (really it’s more of an observation) has certainly held true for the past forty years, it has been understood for some time that the use of silicon and copper wiring would eventually impose limits.
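For reference, the law’s compounding is easy to express as a formula: if the count doubles every two years, then after t years it grows by a factor of 2^(t/2). A quick worked example:

```python
# Moore's law as an explicit formula: transistor count doubles every ~2 years.
def transistors(start_count, years, doubling_period=2):
    return start_count * 2 ** (years / doubling_period)

# Starting from ~1 billion transistors (the figure cited below), two more
# decades of doubling would imply roughly a trillion per chip.
print(f"{transistors(1e9, 20):.2e}")   # ~1.02e+12
```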

Basically, one can only miniaturize circuits made from these materials so much before resistance occurs and they become too fragile to be effective. Because of this, researchers have been looking for materials to replace the silicon that makes up the 1 billion transistors, and the one hundred or so kilometers of copper wire, that currently make up an integrated circuit.

Various materials have been proposed, such as graphene, carbyne, and even carbon nanotubes. But now, a group of researchers from Stanford University and the SLAC National Accelerator Laboratory in California are proposing another material. It’s known as Stanene, a theorized material fabricated from a single layer of tin atoms that is theoretically extremely efficient, even at high temperatures.

Whereas graphene is stupendously conductive, the researchers at Stanford and SLAC claim that stanene should be a topological insulator. Topological insulators, due to their arrangement of electrons/nuclei, are insulators on their interior, but conductive along their edge and/or surface. Being only a single atom in thickness along its edges, this topological insulator can conduct electricity with 100% efficiency.

The Stanford and SLAC researchers also say that stanene would not only have 100%-efficiency edges at room temperature, but with a bit of fluorine, would also have 100% efficiency at temperatures of up to 100 degrees Celsius (212 Fahrenheit). This is very important if stanene is ever to be used in computer chips, which have operational temps of between 40 and 90 C (104 and 194 F).

Though the claim of perfect efficiency seems outlandish to some, others admit that near-perfect efficiency is possible. And while no stanene has been fabricated yet, it is unlikely that it would be hard to fashion some on a small scale, as the technology currently exists. However, it will likely be a very, very long time until stanene is used in the production of computer chips.

In the realm of additive manufacturing (aka. 3-D printing), several major developments were made during the year of 2013. This one came from Harvard University, where a materials scientist named Jennifer Lewis – using current technology – has developed new “inks” that can be used to print batteries and other electronic components.

3-D printing is already at work in the field of consumer electronics with casings and some smaller components being made on industrial 3D printers. However, the need for traditionally produced circuit boards and batteries limits the usefulness of 3D printing. If the work being done by Lewis proves fruitful, it could make fabrication of a finished product considerably faster and easier.

The Harvard team is calling the material “ink,” but in fact, it’s a suspension of nanoparticles in a dense liquid medium. In the case of the battery printing ink, the team starts with a vial of deionized water and ethylene glycol and adds nanoparticles of lithium titanium oxide. The mixture is homogenized, then centrifuged to separate out any larger particles, and the battery ink is formed.

This process is possible because of the unique properties of the nanoparticle suspension. It is mostly solid as it sits in the printer ready to be applied, then begins to flow like liquid when pressure is increased. Once it leaves the custom printer nozzle, it returns to a solid state. From this, Lewis’ team was able to lay down multiple layers of this ink with extreme precision at 100-nanometer accuracy.
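That “solid at rest, liquid under pressure” behaviour is the hallmark of a shear-thinning fluid, which is commonly described with a power-law viscosity model. The sketch below uses that generic textbook model with made-up parameters – it is not the Harvard team’s published rheology data:

```python
# Generic power-law (shear-thinning) fluid model, for illustration only.
# K and n below are made-up values, not measurements of the battery ink.

def apparent_viscosity(shear_rate, k=500.0, n=0.3):
    """Power-law fluid: viscosity = K * shear_rate^(n - 1); n < 1 means thinning."""
    return k * shear_rate ** (n - 1)

for rate in (0.01, 1.0, 100.0):   # 1/s, from nearly at rest to nozzle flow
    print(f"shear rate {rate:>6} 1/s -> viscosity {apparent_viscosity(rate):10.1f} Pa*s")
# Viscosity drops by orders of magnitude as the ink is forced through the nozzle,
# then recovers once the ink is at rest on the substrate.
```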

The tiny batteries being printed are about 1mm square, and could pack even higher energy density than conventional cells thanks to their intricate construction. This approach is much more realistic than other metal printing technologies because it happens at room temperature, with no need for microwaves, lasers or high temperatures at all.

More importantly, it works with existing industrial 3D printers that were built to work with plastics. Because of this, battery production can be done cheaply using printers that cost on the order of a few hundred dollars, and not industrial-sized ones that can cost upwards of $1 million.

Smaller computers, and smaller, more efficient batteries. It seems that miniaturization, which some feared would be plateauing this decade, is safe for the foreseeable future! So I guess we can keep counting on our electronics getting smaller, harder to use, and easier to lose for the next few years. Yay for us!

Sources: extremetech.com, (2)

Judgement Day Update: Banning Autonomous Killing Machines

Drone warfare is one of the most controversial issues facing the world today. In addition to questions about lack of transparency and who’s making the life-and-death decisions, there have also been serious and ongoing concerns about the cost in civilian lives, and the efforts of both the Pentagon and the US government to keep this information from the public.

This past October, the testimonial of a Pakistani family to Congress helped to put a human face on the issue. Rafiq ur Rehman, a Pakistani primary school teacher, described how his mother, Momina Bibi, had been killed by a drone strike. His two children – Zubair and Nabila, aged 13 and 9 – were also injured in the attack that took place on October 24th of this year.

This testimony occurred shortly after the publication of an Amnesty International report, which listed Bibi among 900 other civilians they say have been killed by drone strikes since 2001. Not only is this number far higher than previously reported, the report claims that the US may have committed war crimes and should stand trial for its actions.

Already, efforts have been mounted to put limitations on drone use and development within the US. Last year, Human Rights Watch and Harvard University released a joint report calling for the preemptive ban of “killer robots”. Shortly thereafter, Deputy Defense Secretary Ashton Carter signed a series of instructions to “minimize the probability and consequences of failures that could lead to unintended engagements.”

However, these efforts officially became international in scope when, on Monday October 21st, a growing number of human rights activists, ethicists, and technologists converged on the United Nations Headquarters in New York City to call for an international agreement that would ban the development and use of fully autonomous weapons technology.

Known as the “Campaign To Stop Killer Robots,” an international coalition formed this past April, this group has demanded that autonomous killing machines should be treated like other tactics and tools of war that have been banned under the Geneva Convention – such as chemical weapons or anti-personnel landmines.

As Jody Williams, a Nobel Peace Prize winner and a founding member of the group, said:

If these weapons move forward, it will transform the face of war forever. At some point in time, today’s drones may be like the ‘Model T’ of autonomous weaponry.

According to Noel Sharkey, an Irish computer scientist who is chair of the International Committee for Robot Arms Control, the list of challenges in developing autonomous robots is enormous. They range from the purely technological, such as the ability to properly identify a target using grainy computer vision, to ones that involve fundamental ethical, legal, and humanitarian questions.

As the current drone campaign has shown repeatedly, a teenage insurgent is often hard to distinguish from a child playing with a toy. What’s more, in all engagements in war, there is what is called the “proportionality test” – whether the civilian risks outweigh the military advantage of an attack. At present, no machine exists that would be capable of making these distinctions and judgement calls.

Despite these challenges, militaries around the world – including China, Israel, Russia, and especially the U.S. – are enthusiastic about developing and adopting technologies that will take humans entirely out of the equation, often citing the potential to save soldiers’ lives as a justification. According to Williams, without preventative action, the writing is on the wall.

Consider the U.S. military’s X-47 aircraft, which can take off, land, and refuel on its own and has weapons bays, as evidence of the trend towards greater levels of autonomy in weapons systems. Similarly, the U.K. military is collaborating with B.A.E. Systems to develop a drone called the Taranis, or “God of Thunder,” which can fly faster than the speed of sound and select its own targets.

The Campaign to Stop Killer Robots, a coalition of international and national NGOs, may have only launched recently, but individual groups have been working to raise awareness for the last few years. Earlier this month, 272 engineers, computer scientists and roboticists signed onto the coalition’s letter calling for a ban. In addition, the U.N. has already expressed concern about the issue.

For example, the U.N. Special Rapporteur issued a report to the General Assembly back in April that recommended states establish national moratoriums on the development of such weapons. The coalition is hoping to follow up on this by asking other nations to join those already seeking to start early talks on the issue at the U.N. General Assembly First Committee on Disarmament and International Security meeting in New York later this month.

On the plus side, there is a precedent for a “preventative ban”: blinding lasers were never used in war, because they were preemptively included in a treaty. On the downside, autonomous weapons technology is not an easily-defined system, which makes it more difficult to legislate. If a ban is to be applied, knowing where it begins and ends, and what loopholes exist, is something that will have to be ironed out in advance.

What’s more, there are alternatives to a ban, such as regulation and limitations. Allowing states to develop machinery that is capable of handling itself in non-combat situations, but which requires a human operator to green-light the use of weapons, is something the US military has already claimed it is committed to. As far as international law is concerned, this represents a viable alternative to putting a stop to all research.

Overall, it is estimated that we are at least a decade away from a truly autonomous machine of war, so there is time for the law to evolve and prepare a proper response. In the meantime, there is also plenty of time to address the current use of drones and all its consequences. I’m sure I speak for more than myself when I say that I hope it gets better before it gets worse.

And in the meantime, be sure to enjoy this video produced by Human Rights Watch:


Sources: fastcoexist.com, theguardian.com, stopkillerrobots.org

The Amplituhedron: Quantum Physics Decoded

Scientists recently made a major breakthrough that may completely alter our perceptions of quantum physics, and the nature of the universe itself. After many decades of trying to reformulate quantum field theory, scientists at Harvard University discovered a jewel-like geometric object that they believe will not only simplify quantum science, but even challenge the notion that space and time are fundamental components of reality.

This jewel has been named the “amplituhedron”, and it is radically simplifying how physicists calculate particle interactions. Previously, these interactions were calculated using quantum field theory – mathematical formulas that were thousands of terms long. Now, these interactions can be described by computing the volume of the corresponding amplituhedron, which yields an equivalent one-term expression.

Jacob Bourjaily, a theoretical physicist at Harvard University and one of the researchers who developed the new idea, had this to say about the discovery:

The degree of efficiency is mind-boggling. You can easily do, on paper, computations that were infeasible even with a computer before.

This is exciting news, in part because it could help facilitate the search for a Grand Unifying Theory (aka. Theory of Everything) that manages to unify all the fundamental forces of the universe. These forces are electromagnetism, weak nuclear forces, strong nuclear forces, and gravity. Thus far, attempts at resolving these forces have run into infinities and deep paradoxes.

Whereas the field of quantum physics has been able to account for the first three, gravity has remained explainable only in terms of General Relativity (Einstein’s baby). As a result, scientists have been unable to see how the basic forces of the universe interact on a grand scale, and all attempts to reconcile them have run into the same infinities and paradoxes.

The amplituhedron, or a similar geometric object, could help by removing two deeply rooted principles of physics: locality and unitarity. Locality is the notion that particles can interact only from adjoining positions in space and time, while unitarity holds that the probabilities of all possible outcomes of a quantum mechanical interaction must add up to one.
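For readers who like their definitions in symbols, unitarity is just the statement that the probabilities of every possible outcome of an interaction sum to one. In standard quantum-mechanical notation (the textbook condition, stated here only for reference):

```latex
% Unitarity: the scattering operator S preserves total probability.
% Summing over every possible final state f of an interaction that began in state i:
\sum_{f} \bigl| \langle f \,|\, S \,|\, i \rangle \bigr|^{2} = 1
\qquad \Longleftrightarrow \qquad S^{\dagger} S = \mathbb{1}.
```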

The concepts are the central pillars of quantum field theory in its original form, but in certain situations involving gravity, both break down, suggesting neither is a fundamental aspect of nature. As Nima Arkani-Hamed – a professor of physics at the Institute for Advanced Study in Princeton, N.J. and the lead author of the new work – put it: “Both are hard-wired in the usual way we think about things. Both are suspect.”

In keeping with this idea, the new geometric approach to particle interactions removes locality and unitarity from its starting assumptions. The amplituhedron is not built out of space-time and probabilities; these properties merely arise as consequences of the jewel’s geometry. The usual picture of space and time, and particles moving around in them, is a construct.

And while the amplituhedron itself does not describe gravity, Arkani-Hamed and his collaborators think there might be a related geometric object that does. Its properties would make it clear why particles appear to exist, and why they appear to move in three dimensions of space and to change over time. This is because, as Bourjaily put it:

[W]e know that ultimately, we need to find a theory that doesn’t have [unitarity and locality]. It’s a starting point to ultimately describing a quantum theory of gravity.

Imagine that. After decades of mind-boggling research and attempts at resolving the theoretical issues, all existence comes down to a small jewel-shaped structure. I imagine the Intelligent Design people will have a field day with this, and I can foresee it making it into the new season of Big Bang Theory as well. Breakthroughs like this always do seem to have a ripple effect…

Source: simonsfoundation.org

The World’s First Brain-to-Brain Interface!

It finally happened! It seems like only yesterday that I was talking about the limitations of Brain to Brain Interfacing (BBI), and how it was still limited to taking place between rats and between a human and a rat. Actually, it was two days ago, but the point remains. In spite of that, after only a few months of ongoing research, scientists have finally performed the first human-to-human interface.

Using a Skype connection, Rajesh Rao, who studies computational neuroscience at the University of Washington, successfully used his mind to control the hand of his colleague, Andrea Stocco. The experiment was conducted on Aug. 12th, less than a month after researchers at Harvard used a non-invasive technique and a thought to control the movement of a rat’s tail.

This operation was quite simple: In his laboratory, Rao put on a skull cap containing electrodes which was connected to an electroencephalography (EEG) machine. These electrodes read his brainwaves and transmitted them across campus to Stocco who, seated in a separate lab, was equipped with a cap that was hooked up to a transcranial magnetic stimulation (TMS) machine.

This machine activated a magnetic stimulation coil that was integrated into the cap directly above Stocco’s left motor cortex, the part of the brain that controls movements of the hands. Back in Rao’s lab, he watched a screen displaying a video game, in which the player must tap the spacebar in order to shoot down a rocket; while in Stocco’s lab, the computer was linked to that same game.

Instead of tapping the bar, however, Rao merely visualized himself doing so. The EEG detected the electrical impulse associated with that imagined movement, and proceeded to send a signal – via the Skype connection – to the TMS in Stocco’s lab. This caused the coil in Stocco’s cap to stimulate his left motor cortex, which in turn made his right hand move.
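In software terms, that one-way pipeline boils down to “classify an EEG window, then send a stimulation command to the remote machine.” Here is a schematic sketch of that flow – the threshold, hostname, port, and message format are all invented for illustration, since the UW setup isn’t published at this level of detail:

```python
# Schematic of the one-way pipeline: detect an imagined keypress in an EEG
# stream, then forward a "pulse" command to the remote machine driving the
# TMS coil. Everything specific (threshold, host, message format) is assumed.

import json
import socket

MOTOR_IMAGERY_THRESHOLD = 0.8   # assumed detector confidence cutoff

def detect_imagined_keypress(eeg_window):
    """Stand-in for the EEG classifier: returns a confidence in [0, 1]."""
    return sum(eeg_window) / len(eeg_window)   # placeholder "feature"

def send_stimulation_command(host="stocco-lab.example.edu", port=9000):
    """Tell the remote TMS controller to deliver a single pulse (hypothetical API)."""
    msg = json.dumps({"command": "tms_pulse", "target": "left_motor_cortex"})
    with socket.create_connection((host, port), timeout=2) as conn:
        conn.sendall(msg.encode())

def process_window(eeg_window):
    if detect_imagined_keypress(eeg_window) >= MOTOR_IMAGERY_THRESHOLD:
        send_stimulation_command()
```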

Given that his finger was already resting over the spacebar on his computer, this caused a cannon to fire in the game, successfully shooting down the rocket. He compared the feeling to that of a nervous tic. And to ensure that there was no chance of any outside influence, the Skype feeds were not visible to each other, and Stocco wore noise-cancelling headphones and earbuds.

In the course of being interviewed, Rao was also quick to state that the technology couldn’t be used to read another person’s mind, or to make them do things without their willing participation. The researchers now hope to establish two-way communications between participants’ brains, as the video game experiment just utilized one-way communication.

Additionally, they would like to transmit more complex packets of information between brains, things beyond simple gestures. Ultimately, they hope that the technology could be used for things like allowing non-pilots to land planes in emergency situations, or letting disabled people transmit their needs to caregivers. And in time, the technology might even be upgraded to involve wireless implants.

One thing that should be emphasized here is the issue of consent. In this study, both men were willing participants, and it is certain that any future experimentation will involve people willingly accepting information back and forth. The same goes for commands, which theoretically could only occur between people willing to be linked to one another.

However, that doesn’t mean such links couldn’t one day be hacked, which would necessitate that anyone who chose to equip themselves with neural implants and uplinks also get their hands on protection and anti-hacking software. But that’s an issue for another age, and no doubt some future crime drama! Dick Wolf, you should be paying me for all the suggestions I’m giving you!

And of course, there’s a video of the experiment, courtesy of the University of Washington. Behold and be impressed, and maybe even a little afraid for the future:


Source: gizmag.com

The Future is Here: Brain to Brain Interfaces (Cont’d)

This year is shaping up to be an exciting time for technology that enables people to communicate their thoughts via an electronic link. For the most part, this has involved the use of machinery to communicate a person’s thoughts to a machine – such as a prosthetic device. However, some researchers have gone beyond the field of brain-computer interfaces (BCIs) and have been making strides with brain-to-brain interfacing (BBI) instead.

Back in February, a research team in Natal, Brazil, led by Miguel Nicolelis of Duke University, managed to create a link between the brains of two laboratory rats. In the experiment, an “encoder” rat in Natal was placed inside a “Skinner Box” where it would press a lever with an expectation of getting a treat in return.

The brain activity was then recorded and sent as an electrical signal to a second “decoder” rat which, though it was thousands of kilometers away, interpreted the signal and pressed a similar lever with a similar expectation of reward. This developmental milestone was certainly big news, and has led to some even more impressive experiments since.

One of these comes from Harvard University, where scientists have developed a new, non-invasive interface that allowed a similar thought transfer to take place. Led by Seung-Schik Yoo, an assistant professor of radiology, the research team created a brain-to-brain interface (BBI) that allows a human controller to move a portion of a rat’s body just by thinking about it, all without invasive surgical implants.

The new technique takes advantage of a few advances being made in the field. These include focused ultrasound (FUS) technology, which delivers focused acoustic energy to a specific point. Ordinarily, this technology has used heat to destroy tumors and other diseased tissue in the deeper reaches of the brain. Yoo’s team, however, has found that a lower-intensity blast can be used to stimulate brain tissue without damaging it.

In terms of the interface, a human controller was hooked up to an EEG-based BCI while the rat is hooked up to an FUS-based computer-to-brain interface (CBI). The human subject then viewed an image of a circle flashing in a specific pattern which generated electrical brain activity in the same frequency. When the BCI detected this activity, it sent a command to the CBI, which in turn sends FUS into the region of the rat’s brain that controls its tail, causing it to move.
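The detection side of this setup can be sketched very simply: because the circle flashes at a fixed rate, the controller’s EEG shows a matching peak in its frequency spectrum, and the BCI only fires a command when that peak stands out. The snippet below is a toy illustration of that idea – the flicker rate, sampling rate, and threshold are assumptions, not the values used in Yoo’s experiment:

```python
# Toy illustration of detecting a flicker-locked EEG response:
# look for a peak in the spectrum at the flicker frequency, then trigger.
# The 15 Hz flicker, 256 Hz sampling rate, and 5x threshold are assumed.

import numpy as np

FLICKER_HZ, SAMPLE_HZ, SECONDS = 15.0, 256, 2.0

def power_at(freq_hz, signal, sample_hz=SAMPLE_HZ):
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_hz)
    return spectrum[np.argmin(np.abs(freqs - freq_hz))]

t = np.arange(int(SAMPLE_HZ * SECONDS)) / SAMPLE_HZ
eeg = 2.0 * np.sin(2 * np.pi * FLICKER_HZ * t) + np.random.randn(t.size)  # toy EEG

# Fire only when the flicker frequency clearly dominates its neighbours.
baseline = np.mean([power_at(f, eeg) for f in (12.0, 18.0)])
if power_at(FLICKER_HZ, eeg) > 5 * baseline:
    print("flicker response detected -> trigger FUS pulse to the rat's motor area")
```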

Using six different human subjects and six different rat subjects, the team achieved a success rate of 94 percent, with a time delay of 1.59 ± 1.07 seconds between user intention and the rat’s response. Granted, it might not be quite the pinnacle of machine-powered telepathy, and the range of control over the animal test subject was quite limited. Still, the fact that two brains could be interfaced, and without the need for electrodes, is still a very impressive feat.

And of course, it raises quite a few possibilities. If brain-to-brain interfaces between humans and animals are possible, just what could it mean for the helper animal industry? Could seeing eye dogs be telepathically linked to their owners, able to send and receive signals instantaneously? What about butler monkeys? Could a single thought send them scurrying to the kitchen to fetch a fresh drink?

Who knows? But the fact that it could one day be possible is both inspiring and frightening!

Source: news.cnet.com