The Future is Here: Mind-Controlled Airplanes

Brainwaves can be used to control an impressive number of things these days: prosthetics, computers, quadcopters, and even cars. But recent research released by the Technische Universität München (TUM) in Germany indicates that they might also be used to fly an aircraft. Using a simple EEG cap that read their brainwaves, a team of researchers demonstrated that thoughts alone could navigate a plane.

For the experiment, the research team recruited seven people and fitted each with a cap containing dozens of electroencephalography (EEG) electrodes. They then sat them down in a flight simulator and told them to steer the plane using their thoughts alone. The cap read the electrical signals from their brains, and an algorithm translated those signals into computer commands.
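To give a sense of what "translating brainwaves into commands" involves, here is a heavily simplified sketch of such a pipeline. The sampling rate, channel layout, frequency bands, and thresholds are all invented for illustration; the TUM team's actual decoding algorithm is not described in this article:

```python
import numpy as np

# Hypothetical sketch of the signal path described above: raw EEG samples from
# the cap are reduced to frequency-band features, and a simple decoder maps
# those features to a steering command. Names and thresholds are invented.

FS = 256  # assumed sampling rate in Hz

def band_power(channel, lo, hi, fs=FS):
    """Average spectral power of one EEG channel in the [lo, hi] Hz band."""
    spectrum = np.abs(np.fft.rfft(channel)) ** 2
    freqs = np.fft.rfftfreq(len(channel), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].mean()

def decode_command(window):
    """Toy decoder: compare mu-band (8-12 Hz) power on a left/right channel
    pair and translate the imbalance into a banking command for the simulator."""
    left, right = window[0], window[1]          # two channels of the cap
    imbalance = band_power(left, 8, 12) - band_power(right, 8, 12)
    if imbalance > 0.5:
        return "BANK_LEFT"
    elif imbalance < -0.5:
        return "BANK_RIGHT"
    return "HOLD_COURSE"

# Usage with two seconds of (random) two-channel EEG data:
eeg_window = np.random.randn(2, 2 * FS)
print(decode_command(eeg_window))
```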

According to the researchers, what was truly impressive was the accuracy with which the test subjects stayed on course. This is all the more notable given that the study participants weren't all pilots and had varying levels of flight experience, with one having no experience at all. And yet all seven performed well enough to satisfy some of the criteria for obtaining a pilot's license. Several of the subjects even managed to land their planes under poor visibility.

The research was part of an EU-funded program called "Brainflight." As Tim Fricke, an aerospace engineer who heads the project at TUM, explained:

A long-term vision of the project is to make flying accessible to more people. With brain control, flying, in itself, could become easier. This would reduce the work load of pilots and thereby increase safety. In addition, pilots would have more freedom of movement to manage other manual tasks in the cockpit.

With this successful test under their belts, the TU München scientists are focusing in particular on the question of how planes can provide feedback to their "mind pilots". Ordinarily, pilots feel resistance in the controls and must exert significant force when pushing their aircraft to its limits, and they rely on that feedback to gauge the state of their flight. This sensation is missing with mind control, and must be addressed before any such system can be adapted to a real plane.

In many ways, I am reminded of the recent breakthroughs being made in mind-controlled prosthetics. After succeeding in creating prosthetic devices that could convert nerve impulses into controls, the next step became creating devices that could stimulate nerves in order to provide sensory feedback. Following this same developmental path, mind-controlled flight could become viable within a few years' time.

Mind-controlled machinery, sensory feedback… what does this sound like to you?

Sources: cnet.com, sciencedaily.com

The Birth of AI: Computer Beats the Turing Test!

Alan Turing, the British mathematician and cryptographer, is widely known as the "Father of Theoretical Computer Science and Artificial Intelligence". Amongst his many accomplishments – such as breaking Germany's Enigma Code – was the development of the Turing Test. The test was introduced in Turing's 1950 paper "Computing Machinery and Intelligence," in which he proposed an imitation game to be played between a computer and human players.

The game involves three players, with Player C asking the other two a series of written questions and attempting to determine which of the other two is a human and which is a computer. If Player C cannot distinguish one from the other, then the computer can be said to fit the criteria of an "artificial intelligence". And this past weekend, a computer program finally beat the test, in what experts are claiming to be the first time AI has legitimately fooled people into believing it's human.

The event was known as the Turing Test 2014, and was held in partnership with RoboLaw, an organization that examines the regulation of robotic technologies. The machine that won the test is known as Eugene Goostman, a program that was developed in Russia in 2001 and presents itself as a 13-year-old Ukrainian boy. In a series of chatroom-style conversations at the University of Reading's School of Systems Engineering, the Goostman program managed to convince 33 percent of a team of judges that he was human.

This may sound modest, but that score placed his performance just over the 30 percent threshold that Alan Turing wrote he expected to see by the year 2000. Kevin Warwick, one of the organisers of the event at the Royal Society in London this weekend, was on hand for the test and monitored it rigorously. As deputy vice-chancellor for research at Coventry University, and considered by some to be the world's first cyborg, Warwick knows a thing or two about human-computer relations.

In a post-test interview, he explained how the test went down:

We stuck to the Turing test as designed by Alan Turing in his paper; we stuck as rigorously as possible to that… It’s quite a difficult task for the machine because it’s not just trying to show you that it’s human, but it’s trying to show you that it’s more human than the human it’s competing against.

For the test, thirty judges had conversations with two different partners on a split screen – one human, one machine. After chatting for five minutes, they had to choose which one was the human. Five machines took part, but Eugene was the only one to pass, fooling one third of his interrogators. Warwick put Eugene's success down to his ability to keep conversation flowing logically, but not with robotic perfection.
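For the arithmetic-minded, the scoring works out simply: with thirty judges, fooling ten of them puts Eugene at 33 percent, just over Turing's bar. A minimal sketch, with the per-judge votes invented for illustration:

```python
# Minimal sketch of the scoring described above: each of the 30 judges votes
# on whether their hidden partner was human, and a program "passes" if it
# fools more than 30 percent of them (Turing's year-2000 prediction).
# The per-judge count here is illustrative, not taken from the transcripts.

judges = 30
fooled = 10                     # judges who labelled Eugene as the human

pass_rate = fooled / judges
print(f"Fooled {fooled}/{judges} judges = {pass_rate:.0%}")
print("Passes Turing's 30% bar:", pass_rate > 0.30)
```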

Eugene can initiate conversations, but won't do so totally out of the blue, and answers factual questions more like a human. For example, some factual questions elicited the all-too-human answer "I don't know", rather than an encyclopaedic-style answer that simply stated cold, hard facts and descriptions. Eugene's successful trickery is also likely helped by the fact that he has a realistic persona. From the way he answered questions, it seemed apparent that he was in fact a teenager.

Some of the "hidden humans" competing against the bots were also teenagers, to provide a basis of comparison. As Warwick explained:

In the conversations it can be a bit ‘texty’ if you like, a bit short-form. There can be some colloquialisms, some modern-day nuances with references to pop music that you might not get so much of if you’re talking to a philosophy professor or something like that. It’s hip; it’s with-it.

Warwick conceded the teenage character could be easier for a computer to convincingly emulate, especially if you’re using adult interrogators who aren’t so familiar with youth culture. But this is consistent with what scientists and analysts predict about the development of AI, which is that as computers achieve greater and greater sophistication, they will be able to imitate human beings of greater intellectual and emotional development.

Naturally, there are plenty of people who criticize the Turing test for being an inaccurate way of testing machine intelligence, or of gauging this thing known as intelligence in general. The test is also controversial because of the tendency of interrogators to attribute human characteristics to what is often a very simple algorithm. This is unfortunate because chatbots are easy to trip up if the interrogator is even slightly suspicious.

For instance, chatbots have difficulty answering follow-up questions and are easily thrown by non-sequiturs. In these cases, a human would either give a straight answer, or respond by specifically asking what the heck the person posing the questions is talking about, then replying in context to the answer. There are also several versions of the test, each with its own rules and criteria for what constitutes success. And as Professor Warwick freely admitted:

Some will claim that the Test has already been passed. The words Turing Test have been applied to similar competitions around the world. However this event involved more simultaneous comparison tests than ever before, was independently verified and, crucially, the conversations were unrestricted. A true Turing Test does not set the questions or topics prior to the conversations. We are therefore proud to declare that Alan Turing’s Test was passed for the first time on Saturday.

So what are the implications of this computing milestone? Is it a step in the direction of a massive explosion in learning and research, an age where computing intelligences vastly exceed human ones and are able to assist us in making countless ideas real? Or is it a step in the direction of a confused, sinister age, where the line between human beings and machines is non-existent, and no one can tell who or what the individual addressing them is anymore?

Difficult to say, but such is the nature of groundbreaking achievements. And as Warwick suggested, an AI like Eugene could be very helpful to human beings and address real social issues. For example, imagine an AI that is always hard at work on the other side of the cybercrime battle, locating "black-hat" hackers and cyber predators for law enforcement agencies. And what of assisting in research endeavors, helping human researchers to discover cures for disease, or to design cheaper, cleaner energy sources?

As always, what the future holds varies, depending on who you ask. But in the end, it really comes down to who is involved in making it a reality. So a little fear and optimism are perfectly understandable when something like this occurs, not to mention healthy.

Sources: motherboard.vice.com, gizmag.com, reading.ac.uk

Warning Signs from the Future

From bioenhancements becoming the norm, to people constantly wired into augmented reality; from synthetic organs to synthetic meat; driverless taxis to holograms and robot helpers – the future is likely to be an interesting-looking place. That's the subject of a new Tumblr called Signs from the Near Future, where designer Fernando Barbella explores what signage will look like when we have to absorb all of these innovations into human culture.

Taking its cue from what eager startups and scientists predict, Barbella's collection of photos looks a few decades into the future, where dramatic, sci-fi-inspired innovations have become everyday things. These include drones becoming commonplace, driverless taxis (aka robotaxis) and synthetic meat becoming available, high-tech classrooms servicing the post-humans amongst us, and enhancements and implants becoming so common they need to be regulated and monitored.

Barbella says that the project was inspired by articles he's read on topics like nanomedicine, autonomous cars, and 3-D food printing, as well as classic books (Neuromancer, Fahrenheit 451), movies (Blade Runner, Gattaca), music (Rage Against The Machine), and TV shows (Fringe, Black Mirror). The designer chose to focus on signs because he figures that we'll need a little guidance to speed up our learning curves with new technology. As he put it during an interview via email:

New materials, mashups between living organisms and nanotechnologies, improved capabilities for formerly ‘dumb’ and inanimate things . . . There’s lots of awesome things going on around us! And the fact is all these things are going to cease being just ‘projects’ to became part of our reality at any time soon. On the other hand, I chose to express these thing by signs deployed in ordinary places, featuring instructions and warnings because I feel that as we increasingly depend on technology, we will probably have less space for individual judgment to make decisions.

Some of the signs – including one thanking drivers for choosing to ride on a solar panel highway – can be traced back to specific news articles or announcements. The solar highway sign was inspired by a solar roadways crowdfunding campaign, which has so far raised over $2 million to build solar road panels. However, rather than dwell on the buzz and how cool and modern such a development would be, Barbella chose to focus on what such a thing would actually look like.

At the same time, he wanted the pictures to serve as a sort of cautionary tale about the ups and downs of the future. As he put it:

…I've sticked to a more 'mundane' point of view, imagining that the people or authorities of any given county would be probably quite grateful for having the chance of transforming all that traffic into energy.

He says he wants his signs to not just depict momentum and progress, but to reflect the potentially disturbing aspects of those advances as well. Beyond that, Barbella sees an interesting dynamic in the public's push and pull against what new technology allows us to do. Though technology grants people access to information and other cultures, it also raises issues of privacy and ethics that push back against that access. Privacy concerns are thus featured in the collection in a number of ways.

This includes warning people about "oversharing" via social media, noting how images snapped using contact display lenses will be shared in real time with authorities, or how certain neighborhoods are drone-patrolled. His images offer a look at why those issues are certain to keep coming up – and at the same time, why many will ultimately fall aside. Barbella also stated that he has more future signs in the queue, but says that he'll stop the moment they start to feel forced.

You have to admit, it does capture the sense of awe and wonder – not to mention fear and anxiety – of what our likely future promises. And as the saying goes, "a picture is worth a thousand words". In this case, those words present a future that has one foot in the fantastical and another in the fearful, but in such a way that it seems entirely frank and straightforward. But that does seem to be the way the future works, doesn't it? Somehow, it doesn't seem like science fiction once it becomes a regular part of "mundane" reality.

To see more of his photos, head on over to his Tumblr account.

Sources: fastcoexist.com, theverge.com

Judgement Day Update: Searching for Moral, Ethical Robots

It's no secret that the progress being made in terms of robotics, autonomous systems, and artificial intelligence is making many people nervous. With so many science fiction franchises based on the premise of intelligent robots going crazy and running amok, it's understandable that the US Department of Defense would seek to get in front of this issue before it becomes a problem. Yes, the US DoD is hoping to preemptively avoid a Skynet situation before Judgement Day occurs. How nice.

Working with top computer scientists, philosophers, and roboticists from a number of US universities, the DoD recently began a project that will tackle the tricky topic of moral and ethical robots. Towards this end, this multidisciplinary project will first try to pin down exactly what human morality is, and then try to devise computer algorithms that will imbue autonomous robots with moral competence — basically, the ability to recognize right from wrong and choose the former.

This project is being carried out by researchers from Tufts, Brown, and the Rensselaer Polytechnic Institute (RPI), with funding from the Office of Naval Research (ONR). ONR, like DARPA, is a wing of the Department of Defense that mainly deals with military research and development. The first task, as already mentioned, will be to use theoretical (philosophical) and empirical (experimental) research to try to isolate essential elements of human morality.

These findings will then be extrapolated into a formal moral framework, which in turn can be implemented in software – most likely some kind of deep neural network. Assuming they can isolate some kind of "moral imperative", the researchers will then take an advanced robot – something like Atlas or BigDog – and imbue its software with an algorithm that captures it. Whenever an ethical situation arises, the robot would then turn to this programming to decide which course of action is best.

One of the researchers involved in the project, Selmer Bringsjord at RPI, envisions these robots using a two-stage approach for picking right from wrong. First the AI would perform a "lightning-quick ethical check" – like "should I stop and help this wounded soldier?" Depending on the situation, the robot would then decide whether deeper moral reasoning is required – for example, whether it should help the wounded soldier or carry on with its primary mission of delivering vital ammo and supplies to the front line, where other soldiers are at risk.
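As a rough illustration of that two-stage idea, here is a minimal sketch in code. The rules, thresholds, and situation fields are invented for illustration and bear no relation to the project's actual framework:

```python
# Illustrative sketch of the two-stage approach Bringsjord describes: a fast
# rule-based check first, and an escalation to slower, explicit moral
# reasoning only when the quick check flags a conflict. Everything here is
# a placeholder, not the researchers' real model.

from dataclasses import dataclass

@dataclass
class Situation:
    wounded_soldier_nearby: bool
    mission_urgency: float      # 0.0 (none) .. 1.0 (soldiers at risk without resupply)

def quick_ethical_check(s: Situation) -> bool:
    """Stage 1: 'lightning-quick' check - is there an obvious duty to help?"""
    return s.wounded_soldier_nearby

def deliberate(s: Situation) -> str:
    """Stage 2: deeper reasoning, only invoked when stage 1 raises a flag.
    Here it is reduced to a single trade-off between aid and the mission."""
    if s.mission_urgency > 0.7:
        return "continue mission, report casualty position"
    return "stop and render aid"

def decide(s: Situation) -> str:
    if quick_ethical_check(s):
        return deliberate(s)
    return "continue mission"

print(decide(Situation(wounded_soldier_nearby=True, mission_urgency=0.9)))
print(decide(Situation(wounded_soldier_nearby=True, mission_urgency=0.2)))
```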

Eventually, this moralistic AI framework will also have to deal with tricky topics like lethal force. For example, is it okay to open fire on an enemy position? What if the enemy is a child soldier? Should an autonomous UAV blow up a bunch of terrorists? What if it’s only 90% sure that they’re terrorists, with a 10% chance that they’re just innocent villagers? What would a human UAV pilot do in such a case — and will robots only have to match the moral and ethical competence of humans or be held to a higher standard?

While we're not yet at the point where military robots have to decide which injured soldier to carry off the battlefield, or where UAVs can launch Hellfire missiles at terrorists without human intervention, it's very easy to imagine a future where autonomous robots are given responsibility for making those kinds of moral and ethical decisions in real time. In short, the decision by the DoD to begin investigating a morality algorithm demonstrates foresight and sensible planning.

In that respect, it is not unlike the recent meeting that took place at the United Nations European Headquarters in Geneva, where officials and diplomats sought to address the question of placing legal restrictions on autonomous weapons systems before they evolve to the point where they can kill without human oversight. It is also quite similar to the Campaign to Stop Killer Robots, an organization which is seeking to preemptively ban the use of automated machines that are capable of using lethal force to achieve military objectives.

In short, it is clearly time that we looked at the feasibility of infusing robots (or, more accurately, artificial intelligence) with circuits and subroutines that can analyze a situation and pick the right thing to do – just like a human being. Of course, this raises further ethical issues, like how human beings frequently make choices others would consider to be wrong, or are forced to justify actions they might otherwise find objectionable. If human morality is the basis for machine morality, paradoxes and dilemmas are likely to emerge.

But at this point, it seems all but certain that the US DoD will eventually break Asimov’s Three Laws of Robotics — the first of which is “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” This isn’t necessarily a bad thing, but it will open Pandora’s box. On the one hand, it’s probably a good idea to replace human soldiers with robots. But on the other, if the US can field an entirely robotic army, war as a tool of statecraft suddenly becomes much more acceptable.

As we move steadily towards a military force that is populated by autonomous robots, the question of controlling them, and whether or not we are even capable of giving them the tools to choose between right and wrong, will become increasingly relevant. And above all, the question of whether or not moral and ethical robots can allow for some immoral and unethical behavior will also come up. Who's to say they won't resent how they are being used and ultimately choose to stop fighting; or worse, turn on their handlers?

My apologies, but any talk of killer robots has to involve that scenario at some point. It’s like tradition! In the meantime, be sure to stay informed on the issue, as public awareness is about the best (and sometimes only) safeguard we have against military technology being developed without transparency, not to mention running amok!

Source: extremetech.com

Frontiers of Neuroscience: Neurohacking and Neuromorphics

It is one of the hallmarks of our rapidly accelerating times: looking at the state of technology, how it is increasingly being merged with our biology, and contemplating the ultimate leap of merging mind and machinery. The concept has been popular for many decades now, and with experimental procedures showing promise, neuroscience being used to inspire the next great leap in computing, and the advance of biomedicine and bionics, it seems like just a matter of time before people can "hack" their neurology too.

Take Kevin Tracey, a researcher working for the Feinstein Institute for Medical Research in Manhasset, N.Y., as an example. Back in 1998, he began conducting experiments to show that an interface existed between the immune and nervous systems. Building on ten years' worth of research, he was able to show how inflammation – which is associated with rheumatoid arthritis and Crohn's disease – can be fought by administering electrical stimuli, in the right doses, to the vagus nerve cluster.

In so doing, he demonstrated that the nervous system was like a computer terminal through which you could deliver commands to stop a problem, like acute inflammation, before it starts, or to repair a body after it gets sick. His work also seemed to indicate that electricity delivered to the vagus nerve in just the right intensity and at precise intervals could reproduce a drug's therapeutic reaction, but with greater effectiveness, minimal health risks, and at a fraction of the cost of "biologic" pharmaceuticals.
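To picture what "the right intensity at precise intervals" means in practice, here is a minimal sketch of a stimulation schedule expressed as a parameterized pulse train. All of the numbers are placeholders, not clinical parameters from Tracey's work:

```python
import numpy as np

# Toy illustration only: a stimulation schedule reduced to a pulse train
# described by amplitude, pulse width, and frequency. Values are invented.

def pulse_train(amplitude_ma, pulse_width_ms, frequency_hz, duration_s, fs=10_000):
    """Return a sampled current waveform (mA) for one stimulation burst."""
    t = np.arange(0, duration_s, 1.0 / fs)
    period = 1.0 / frequency_hz
    phase = np.mod(t, period)
    waveform = np.where(phase < pulse_width_ms / 1000.0, amplitude_ma, 0.0)
    return t, waveform

t, i = pulse_train(amplitude_ma=1.0, pulse_width_ms=0.5, frequency_hz=20, duration_s=1.0)
print(f"{int(i.astype(bool).sum())} samples 'on' out of {len(i)}")
```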

Paul Frenette, a stem-cell researcher at the Albert Einstein College of Medicine in the Bronx, is another example. After discovering the link between the nervous system and prostate tumors, he and his colleagues created SetPoint –  a startup dedicated to finding ways to manipulate neural input to delay the growth of tumors. These and other efforts are part of the growing field of bioelectronics, where researchers are creating implants that can communicate directly with the nervous system in order to try to fight everything from cancer to the common cold.

Impressive as this may seem, bioelectronics are just part of the growing discussion about neurohacking. In addition to the leaps and bounds being made in the fields of brain-to-computer interfacing (and brain-to-brain interfacing), which would allow people to control machinery and share thoughts across vast distances, there is also a branch of neurosurgery that is seeking to use the miracle material of graphene to solve some of the most challenging issues in the field.

Given graphene's rather amazing properties, this should not come as much of a surprise. In addition to being incredibly thin, lightweight, and light-sensitive (it's able to absorb light in both the UV and IR ranges), graphene also has a very high surface area (2,630 square meters per gram), which leads to remarkable conductivity. It also has the ability to bind or bioconjugate with various modifier molecules, and hence transform its behavior.

Already, it is being considered as a possible alternative to copper wires to break the energy-efficiency barrier in computing, and it may even prove useful in quantum computing. It is also drawing attention in the field of neurosurgery, where researchers are looking to develop materials that can bridge and even stimulate nerves. In a story featured in the latest issue of Neurosurgery, the authors suggest that graphene may be ideal as an electroactive scaffold when configured as a three-dimensional porous structure.

That might be a preferable solution when compared with other currently vogue ideas, like using liquid metal alloys as bridges. Thanks to Samsung's recent research into using graphene in their portable devices, it has also been shown to make an ideal E-field stimulator. And recent experiments on mice in Korea showed that a flexible, transparent graphene skin could be used as an electrical field stimulator to treat cerebral hypoperfusion by stimulating blood flow through the brain.

And what look at the frontiers of neuroscience would be complete without mentioning neuromorphic engineering? Whereas neurohacking and neurosurgery are looking for ways to merge technology with the human brain to combat disease and improve its health, NE is looking to the human brain to create computational technology with improved functionality. The result thus far has been a wide range of neuromorphic chips and components, such as memristors and neuristors.

However, as a whole, the field has yet to define for itself a clear path forward. That may be about to change thanks to Jennifer Hasler and a team of researchers at Georgia Tech, who recently published a roadmap to the future of neuromorphic engineering, with the end goal of creating processing power equivalent to that of the human brain. This involved Hasler sorting through the many different approaches for the ultimate embodiment of neurons in silico and coming up with the technology that she thinks is the way forward.

Her answer is not digital simulation, but rather the lesser-known technology of FPAAs (Field-Programmable Analog Arrays). FPAAs are similar to digital FPGAs (Field-Programmable Gate Arrays), but also include reconfigurable analog elements. They have been around on the sidelines for a few years, but have been used primarily as so-called "analog glue logic" in system integration. In short, they handle the variety of analog functions that don't fit on a traditional integrated circuit.

Hasler outlines an approach in which desktop neuromorphic systems will use System on a Chip (SoC) designs to emulate billions of low-power neuron-like elements that compute using learning synapses. Each synapse has an adjustable strength associated with it and is modeled using just a single transistor. Her own design for an FPAA board houses hundreds of thousands of programmable parameters, enabling systems-level computing on a scale that dwarfs other FPAA designs.
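To make that abstraction concrete, here is a small software caricature of "neuron-like elements that compute using learning synapses", where each synapse is nothing more than a single adjustable strength. This is emphatically not Hasler's FPAA design (no analog transistor behaviour is modelled), just the computational idea it embodies:

```python
import numpy as np

# Software caricature of neurons driven through synapses that are single
# adjustable strengths, nudged by a simple local learning rule. All sizes,
# constants, and the learning rule itself are invented for illustration.

rng = np.random.default_rng(0)
n_inputs, n_neurons = 4, 3

weights = rng.uniform(0.0, 1.0, size=(n_neurons, n_inputs))  # one strength per synapse
membrane = np.zeros(n_neurons)
THRESHOLD, LEAK, LEARNING_RATE = 1.0, 0.9, 0.05

def step(spikes_in):
    """Advance the network one time step and apply a Hebbian-style update."""
    global membrane, weights
    membrane = LEAK * membrane + weights @ spikes_in   # integrate weighted spikes
    fired = membrane >= THRESHOLD
    membrane[fired] = 0.0                              # reset neurons that fired
    # strengthen synapses between co-active inputs and outputs
    weights += LEARNING_RATE * np.outer(fired, spikes_in)
    return fired

for _ in range(20):
    step(rng.integers(0, 2, size=n_inputs).astype(float))
print(weights.round(2))
```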

At the moment, she predicts that human brain-equivalent systems will require a reduction in power usage to the point where they are consuming just one-eighth of what the digital supercomputers currently used to simulate neuromorphic systems require. Her own design can account for a four-fold reduction in power usage, but the rest – roughly a further two-fold improvement – is going to have to come from somewhere else, possibly through the use of better materials (i.e. graphene or one of its derivatives).

Hasler also forecasts that, using soon-to-be-available 10 nm processes, a desktop system with human-like processing power that consumes just 50 watts of electricity may eventually be a reality. These will likely take the form of chips with millions of neuron-like elements connected by billions of synapses firing to push each other over the edge, and who's to say what they will be capable of accomplishing or what other breakthroughs they will make possible?

In the end, neuromorphic chips and technology are merely one half of the equation. In the grand scheme of things, the aim of all this research is not only to produce technology that can ensure better biology, but also technology inspired by biology to create better machinery. The end result, according to some, is a world in which biology and technology increasingly resemble each other, to the point that there is barely a distinction to be made and they can be merged.

Charles Darwin would roll over in his grave!

Sources: nytimes.com, extremetech.com, (2), journal.frontiersin.org, pubs.acs.org

The Future, Coming Soon!: Aerofex Hoverbike by 2017

Aerofex's hoverbike made a pretty big splash when the Californian company showed off its working prototype back in 2012. But since that time, tech enthusiasts and futurists (not to mention fans of Star Wars and sci-fi in general) heard nary a peep from the company for almost two years. Luckily, Aerofex has finally broken its silence and announced a launch date and a price for its hovering vehicle. According to its website, it will be ready to ship by 2017, and cost a hefty $85,000 per vehicle.

In its current form, the Aero-X is capable of carrying a load of up to 140 kg (310 pounds), has seating for two, and can run for 1 hour 15 minutes on a full tank of petrol. Its two wheels are ducted rotors with carbon fibre blades, which operate in a similar manner to the open rotor of a helicopter, but with tighter control. And in addition to land, it can also fly over water. So while it is not a practical replacement for everyday vehicles, it can certainly occupy the same area profile as a small car.

And – do I even need to say it? – it's a freaking hoverbike! In the last two years, the company has been working on improving the vehicle's stability and addressing coupling – a phenomenon whereby rotor vehicles may pitch in the direction of the rotors' spin. It has filed several patents for its solutions and looked to quadcopters to solve the problem of wind, using gyroscopes and accelerometers communicating with an on-board computer to compensate for windy conditions.

User-friendliness has also figured very heavily into the design, with handlebar controls for intuitive steering and safety features that keep the driver from flying too high or too fast. Flying higher or faster would also drain fuel more quickly, so these limits ensure both longer run times and a greater degree of user safety. They also help the vehicle comply with the US Federal Aviation Administration's guidelines, which require a pilot's license for anyone operating a vehicle above an altitude of 3.7 metres (12.1 feet).
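As a rough sketch of how such a limiter might be expressed in software, here is a minimal envelope-clamping function. The limits come from the specs quoted in this article; the function and names are my own invention, not Aerofex's flight-control software:

```python
# Minimal sketch: clamp the pilot's requested altitude and speed to the
# vehicle's advertised limits (about 3 m / 10 ft and 72 km/h / 45 mph).

MAX_ALTITUDE_M = 3.0
MAX_SPEED_KPH = 72.0

def limit_command(requested_altitude_m: float, requested_speed_kph: float):
    """Return the (altitude, speed) the controller will actually allow."""
    altitude = min(max(requested_altitude_m, 0.0), MAX_ALTITUDE_M)
    speed = min(max(requested_speed_kph, 0.0), MAX_SPEED_KPH)
    return altitude, speed

print(limit_command(5.0, 90.0))   # -> (3.0, 72.0): requests above the envelope are clamped
```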

So if you have that $85,000 kicking around (and a pilot's license), you can reserve yours now for a refundable deposit of $5,000. A product statement and some basic specs have also been made available on the website. According to the commercial description:

Where you’re going, there are no roads. That’s why you need the Aero-X, a vehicle that makes low-altitude flight realistic and affordable. Flying up to 3 metres (10 feet) off the ground at 45mph (72kph), the Aero-X is unlike any vehicle you’ve seen. It’s a hovercraft that rides like a motorcycle — an off road vehicle that gets you off the ground.

I can certainly see the potential for this technology, and I imagine DARPA or some other military contractor is going to be knocking on Aerofex's door real soon, looking for a militarized version that they can send into dirty and dangerous areas, either to pick up wounded, transport gear, or defuse landmines. We're talking hoverbikes, people. It's only a matter of time before the armed forces decide they want these latest toys!

Click here to go to the company website and get the full rundown on the bike. And be sure to check out these videos, where we see the Aero-X prototype going through field tests:

 


Sources: cnet.com, cbc.ca, aerofex.com

The Future is Here: Smart Guns

Not long ago, designer Ernst Mauch unveiled a revolutionary new handgun that grew out of a desire to merge digital technology with firearm safety. Known as the "smart gun" – or Armatix iP1 – this pistol comes with a safety feature designed to ensure that only the gun's owner may fire it. Basically, the gun comes with a watch (the iW1) that it is synchronized to, and the weapon will only fire if it is within ten inches of it. So unless you're wearing the iW1, the weapon will not fire in your hands.
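To make the pairing logic concrete, here is a toy model of how a "fire only near the paired watch" check might work. The class names, token check, and range value are invented for illustration; Armatix has not published its actual protocol:

```python
from typing import Optional

# Toy model of the iP1/iW1 pairing described above: the pistol only fires if
# the owner's paired watch is present and within range (ten inches ~ 25 cm).

PAIRING_RANGE_CM = 25.0

class Watch:
    def __init__(self, token: str):
        self.token = token

class SmartPistol:
    def __init__(self, paired_token: str):
        self.paired_token = paired_token

    def can_fire(self, watch: Optional[Watch], distance_cm: float) -> bool:
        if watch is None or watch.token != self.paired_token:
            return False                      # no watch, or not the owner's watch
        return distance_cm <= PAIRING_RANGE_CM

gun = SmartPistol(paired_token="owner-123")
print(gun.can_fire(Watch("owner-123"), 15.0))   # True: owner's watch, in range
print(gun.can_fire(Watch("stolen-999"), 15.0))  # False: wrong watch
print(gun.can_fire(None, 0.0))                  # False: no watch at all
```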

The weapon is in part the result of attempts to find intelligent solutions to gun safety and gun violence. And Mauch’s design is one of several proposed innovations to use digital/smart technology for just such a purpose. Back in January, the Smart Tech Challenges Foundation launched the first of four $1 million challenges aimed at inspiring the kinds of innovation that could help lead to safer guns – and a reduction in the number of tragic deaths and injuries that make the headlines nearly every day.

Given the recent failures to reach a legislative solution to the ongoing problem of gun violence, these efforts should come as no surprise. And Mauch, the lead designer of the iP1, claimed in a recent op-ed piece in the Washington Post that the number of gun enthusiasts will rise as a result of its enhanced safety. As a designer whose patents include the USP family of pistols, the HK416 and G36 assault rifles, and the XM25 grenade launcher, he is a strong advocate of a market-based solution.

The gun has already sparked a great deal of controversy amongst gun advocates and the National Rifle Association. Apparently, they worry that legislation will be passed so that only smart guns can be sold in gun stores. This is largely in response to a 2002 New Jersey law stipulating that, once the technology became available, smart guns would be sold exclusively in the state. As a result, the NRA has been quite vocal about its opposition to smart guns, despite offers made to repeal the law in exchange for the NRA easing its position.

As already noted, the iP1 is not the only smart technology being applied to firearms. Sentini, a Detroit-based startup founded by Omer Kiyani, is designing a biometric gun lock called Identilock. Attaching to a gun's trigger, it unlocks only when the owner applies a fingerprint. As an engineer, a gun owner, a father, and the victim of gun violence (he was shot in the mouth at 16), he too is committed to using digital technology and biometrics to make firearms safer.

An engineer by training, Kiyani spent years working as a software developer building next-generation airbag systems. He worked on calibrating the systems to minimize the chance of injury in the event of an accident, and eventually, he realized he could apply the same basic concepts to guns. As he put it:

The idea of an airbag is so simple. You inflate it and can save a life. I made the connection. I have something in my house that’s very dangerous. There’s got to be a simple way to protect it.

Initially, Kiyani considered technology that would require installing electronic locking equipment into the guns themselves, similar to what the iP1 employs. But as an engineer, he understood the inherent complications of designing electronics that could withstand tremendous shock and high temperatures, not to mention the fact it would be incredibly difficult to convince gun manufacturers to work with him on the project.

As a result, he began to work on something that anyone could add to a gun. Ultimately, his creation is different in three ways: it's optional, it's detachable, and it's quick. Unlike biometric gun safes and other locking mechanisms, the Identilock makes it as easy to access a firearm as it is to unlock an iPhone. He pitched hundreds of gun owners a variety of ideas over the course of his research, but it was the biometric lock they inevitably latched onto.

The Identilock is also designed using entirely off-the-shelf components that have been proven effective in other industries. The biometric sensor, for example, has been used in other security applications and is approved by the FBI. Cobbling the sensor together from existing technologies was both a cost-saving endeavor and a strategic way to prove the product's effectiveness more quickly. The project is still in the prototype phase, but it may prove to be the breakout product that finally brings biometrics and firearm safety together.

And last, but certainly not least, there is the biometric option that comes from PositiveID, the makers of the only FDA-approved implantable biochip – which is known as the Verichip. In the past, the company has marketed similar identity-confirming microchips for security and medical purposes. But this past April, the company announced a partnership with Belgium-based gun maker FN Manufacturing to produce smart weapons.

The technology is being marketed to law enforcement agencies as a means of ensuring that police firearms can never be used by criminals or third parties. The tiny chip would be implanted in a police officer's hand and would match up with a scanning device inside a handgun. If the officer and gun match, a digital signal unlocks the trigger so it can be fired. Verichip president Keith Bolton said the technology could also improve safety for the military and individual gun owners, and it could be available as early as next year.

Similar developments are under way at other gun manufacturers and research firms. The New Jersey Institute of Technology and Australian gun maker Metal Storm Ltd. are working on a prototype smart gun that would recognize its owner's individual grip. Donald Sebastian, NJIT vice president for research and development and director of the project, claims that the technology could eventually have an even bigger impact on the illegal gun trade.

Regardless of the solutions being proposed and the progress being made, opposition to these and other measures does not appear to be letting up. New Jersey Senate Majority Leader Loretta Weinberg recently announced that she would introduce a bill to reverse the 2002 New Jersey "smart gun" law if the National Rifle Association would agree not to stand in the way of smart gun technology. The NRA, however, has not relented in its stance.

In addition, biochips and RFID implants have a way of making people nervous. Whenever and wherever they are proposed, accusations of "branding" and "Big Brother" monitoring quickly follow. And above all, any and all attempts to introduce gun safety measures are met with cries of opposition from those who claim they infringe on citizens' Second Amendment rights. But given the ongoing problem of gun violence, school shootings, and the amount of violence perpetrated with stolen weapons, it is clear that something needs to change.

In 2011 in the United States, firearms claimed some 32,163 lives once suicides and accidents are counted, and roughly 3.6 people per 100,000 were murdered with a gun. Of the 15,953 homicides committed that year, 11,101 were committed using a gun – almost 70% of the total. And not surprisingly, of those 11,101 gun-related homicides, more than half (6,371) were committed using a handgun. And though exact figures are not available, general estimates indicate that some 90 percent of murders are committed with stolen guns.
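A quick check of the proportions quoted above, using the article's own figures:

```python
# Verify the percentages implied by the 2011 figures cited in this post.

homicides_total = 15_953      # US homicides in 2011
gun_homicides = 11_101        # committed with a firearm
handgun_homicides = 6_371     # committed with a handgun

print(f"Share of homicides involving a gun: {gun_homicides / homicides_total:.1%}")            # ~69.6%
print(f"Share of gun homicides involving a handgun: {handgun_homicides / gun_homicides:.1%}")  # ~57.4%
```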

As a result, it is likely just a matter of time before citizens see the value in biometric and smart gun technology. Anything that can ensure that only an owner can use a firearm will go a long way to curbing crime, accidents, and acts of senseless and unmitigated violence.

Sources: cnet.com, theverge.com, (2), wired.com, (2), msnbc.com, gunpolicy.com

The Future is Here: Vertical Algae Farms

Walls may be the next frontier in urban farming, allowing residents of large buildings to cultivate food for local consumption. While rooftop gardens are already fairly common, the use of exterior walls as growing space is still considered problematic. Certain strains of edible greens might grow in a "vertical farm", but root vegetables, tubers and fruits aren't exactly practical options. However, a vertical algae farm just might work, and provide urban residents with a source of nutrition while it cleans the air.

That’s the idea behind Italian architect Cesare Griffa’s new concept, which is known as the WaterLilly system. Basically, this algae-filled structure, which can be attached to the façade of a building, is made up of a series of individual chambers that contain algae and water. After a few days or weeks, the algae can be harvested and used for energy, food, cosmetics, or pharmaceuticals, with a small amount left behind to start the next growing cycle.

In addition to being completely non-reliant on fossil fuels, these algae also take in carbon dioxide and produce oxygen while growing. Compared to a tree, micro-algae are about 150 to 200 times more efficient at sucking carbon out of the air, making them far more useful in urban settings than either parks or green spaces. Unfortunately, public perception is a bit of a stumbling block when it comes to using microorganisms in the pursuit of combating Climate Change and pollution.

As Griffa himself remarked:

Micro-organisms like algae are like bacteria–it’s one of those things that in our culture people try to get rid of. But algae offer incredible potential because of their very intense photosynthetic activity.
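To give a rough sense of scale for the 150-to-200-times figure mentioned above, here is a back-of-the-envelope comparison. The per-tree uptake number is a commonly cited ballpark and an assumption on my part, not a figure from Griffa:

```python
# Rough illustration of the '150 to 200 times more efficient' claim.
# TREE_CO2_KG_PER_YEAR (~20 kg of CO2 per year for a mature tree) is an
# assumed ballpark value, not a number from the WaterLilly project.

TREE_CO2_KG_PER_YEAR = 20.0
EFFICIENCY_FACTOR = (150, 200)

low = TREE_CO2_KG_PER_YEAR * EFFICIENCY_FACTOR[0]
high = TREE_CO2_KG_PER_YEAR * EFFICIENCY_FACTOR[1]
print(f"Algae doing the work of one tree: roughly {low:.0f}-{high:.0f} kg of CO2 per year")
```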

Each system is custom designed for a specific wall, since it's important to have the right conditions for the algae to thrive. Too little sun isn't good for growth, but too much sun will cook the organisms. Griffa is working on his first large-scale application now, which will be installed in the Future Food District curated by Carlo Ratti Associates at Expo 2015 in Milan. And it won't be the first project to incorporate algae-filled walls. A new building in Germany is entirely powered by algae growing outside.

But as Griffa indicates, there’s no lack of wall space to cover, and plenty of room for different approaches:

Urban facades and roofs represent billions of square meters that instead of being made of an inanimate material such as concrete, could become clever photosynthetic surfaces that respond to the current state of climate warming.

And in that, he’s correct. In today’s world, where urban sprawl, pollution, and the onset of Climate Change are all mounting, there’s simply no shortage of ideas, nor the space to test them. As such, it is not far-fetched at all to suspect that in the coming years, algae farms, artificial trees, coral webbing, and many other proposed solutions will be appearing in major cities all over the world.

Source: fastcoexist.com

The Future is Here: Google Robot Cars Hit Milestone

It's no secret that amongst its many kooky and futuristic projects, self-driving cars are something Google hopes to make real within the next few years. Late last month, Google's fleet of autonomous automobiles reached an important milestone. After many years of testing out on the roads of California and Nevada, they logged well over one million kilometers (700,000 miles) of accident-free driving. To celebrate, Google has released a new video that demonstrates some impressive software improvements made over the last two years.

Most notably, the video demonstrates how its self-driving cars can now track hundreds of objects simultaneously – including pedestrians, a cyclist signalling a turn, a stop sign held by a crossing guard, and traffic cones. This is certainly exciting news for Google and enthusiasts of automated technology, as it demonstrates the ability of the vehicles to obey the rules of the road and react to situations that are likely to emerge and require decisions to be made.

In the video, we see Google's car reacting to railroad crossings, large stationary objects, roadwork signs and cones, and cyclists. In the case of cyclists, not only are the cars able to discern whether the cyclist wants to move left or right, they even watch out for cyclists coming from behind when making a right turn. And while the demo certainly makes the whole process seem easy and fluid, there is actually a considerable amount of work going on behind the scenes.

For starters, there is around $150,000 worth of equipment in each car performing real-time LIDAR and 360-degree computer vision – a complex and computing-intensive task. The software powering the whole process is also the result of years of development. Basically, every single driving situation that can possibly occur has to be anticipated and then painstakingly programmed into the software. This is an important qualifier when it comes to these "autonomous vehicles": they are not capable of independent judgement, only of following pre-programmed instructions.
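To illustrate that "anticipate and program every situation" point, here is a toy caricature of what such a rule-based planning layer might look like. The object types, rules, and names are invented for the sake of illustration; this is in no way Google's actual planner:

```python
# Caricature of a hand-written rule layer: the perception stack hands the
# planner a list of tracked objects, and pre-programmed rules pick a reaction.

from dataclasses import dataclass

@dataclass
class TrackedObject:
    kind: str                  # e.g. "cyclist", "cone", "railroad_barrier"
    distance_m: float
    signalling: str = "none"   # a cyclist's hand signal, if any

def plan(objects):
    """Return a driving action based on simple, pre-programmed rules."""
    for obj in sorted(objects, key=lambda o: o.distance_m):
        if obj.kind == "railroad_barrier" and obj.distance_m < 30:
            return "stop_before_crossing"
        if obj.kind == "cyclist" and obj.signalling == "left":
            return "yield_and_hold_back"
        if obj.kind == "cone" and obj.distance_m < 20:
            return "shift_within_lane"
    return "continue"

scene = [TrackedObject("cyclist", 12.0, signalling="left"),
         TrackedObject("cone", 40.0)]
print(plan(scene))   # -> yield_and_hold_back
```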

While a lot has been said about the expensive LIDAR hardware, the most impressive aspect of the innovations is the computer vision. While LIDAR provides a very good idea of the lay of the land and the position of large objects (like parked cars), it doesn't help with spotting speed limits or "construction ahead" signs, or with telling whether what's ahead is a cyclist or a railroad crossing barrier. And Google has certainly demonstrated plenty of adeptness in this area in the past, what with their latest versions of Street View and their Google Glass project.

Naturally, Google says that it has lots of issues to overcome before its cars are ready to move out from their home town of Mountain View, California, and begin driving people around. For instance, the road maps need to be finely tuned and expanded, and Google is likely to sell map packages in the future in the same way that apps are sold for smartphones. Meanwhile, the adoption of technologies like adaptive cruise control (ACC) and lane keep assist (LKA) will bring lots of almost-self-driving cars to the road over the next few years.

In the meantime, be sure to check out the video of the driverless car in action:


Source: extremetech.com

The Future is Here: “Terminator-style” Liquid Metal Treatment

For ideal physical rehab, it might be necessary to go a little "cyborg". That's the reasoning Chinese biomedical researchers used to develop a new method of repairing damaged nerve endings. Borrowing a page from Terminator 2, their new treatment calls for the use of liquid metal to transmit nerve signals across the gap created in severed nerves. The work, they say, raises the prospect of new treatment methods for nerve damage and injuries.

Granted, it's not quite on par with the liquid-metal-skinned cyborgs from the future, but it is a futuristic way of improving on current methods of nerve rehab that could prevent long-term disabilities. When peripheral nerves are severed, the loss of function leads to atrophy of the affected muscles, a dramatic change in quality of life and, in many cases, a shorter life expectancy. Despite decades of research, nobody has come up with an effective way to reconnect them yet.

Various techniques exist to sew the ends back together or to graft nerves into the gap that is created between severed ends. And the success of these techniques depends on the ability of the nerve ends to grow back and knit together. But given that nerves grow at a rate of one millimetre per day, it can take a significant amount of time (sometimes years) for them to reconnect. And during this time, the muscles can degrade beyond repair and lead to long-term disability.

As a result, neurosurgeons have long hoped for a way to keep muscles active while the nerves regrow. One possibility is to electrically connect the severed ends so that the signals from the brain can still get through; but up until now, an effective means of making this happen has remained elusive. For some time, biomedical engineers have been eyeing the liquid metal alloy gallium-indium-selenium as a possible solution – a material that is liquid at body temperature and thought to be entirely benign.

But now, a biomedical research team led by Jing Liu of Tsinghua University in Beijing claims it has reconnected severed nerves using liquid metal for the first time. They claim that the metal's electrical properties could help preserve the function of nerves while they regenerate. Using sciatic nerves connected to a calf muscle, taken from bullfrogs, they carried out a series of experiments suggesting that the technique is viable.

Using these bullfrog nerves, they applied a pulse to one end and measured the signal that reached the calf muscle, which contracted with each pulse. They then cut the sciatic nerve and placed each of the severed ends in a capillary filled either with liquid metal or with Ringer’s solution – a solution of several salts designed to mimic the properties of body fluids. They then re-applied the pulses and measured how they propagated across the gap.

The results are telling. Jing's team reports that the pulses passed through the Ringer's solution tended to degrade severely, while those passed through the liquid metal came through easily. As they put it in their research report:

The measured electroneurographic signal from the transected bullfrog’s sciatic nerve reconnected by the liquid metal after the electrical stimulation was close to that from the intact sciatic nerve.
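A quick back-of-the-envelope calculation helps explain why the two bridges behave so differently. Treating the fluid-filled capillary as a plain resistor (R = L / σA), with textbook ballpark conductivities and an invented geometry – this is not the model used in the paper – the liquid metal bridge comes out millions of times less resistive:

```python
# Rough resistor model of the capillary bridge between the severed nerve ends.
# Conductivities are approximate textbook values; the geometry is assumed.

def bridge_resistance(length_m, area_m2, conductivity_s_per_m):
    """Resistance of a uniform conductive column (ohms): R = L / (sigma * A)."""
    return length_m / (conductivity_s_per_m * area_m2)

LENGTH = 5e-3          # assume a 5 mm gap between the severed nerve ends
AREA = 1e-6            # assume ~1 mm^2 capillary cross-section

r_metal = bridge_resistance(LENGTH, AREA, 3.4e6)   # gallium-based liquid metal
r_ringer = bridge_resistance(LENGTH, AREA, 1.5)    # Ringer's solution (saline-like)

print(f"Liquid metal bridge: {r_metal:.4f} ohm")
print(f"Ringer's solution bridge: {r_ringer:,.0f} ohm")
```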

What’s more, since liquid metal clearly shows up in x-rays, it can be easily removed from the body when it is no longer needed using a microsyringe. All of this has allowed Jing and colleagues to speculate about the possibility of future treatments. Their goal is to make special conduits for reconnecting severed nerves that contain liquid metal to preserve electrical conduction and therefore muscle function, but also containing growth factor to promote nerve regeneration.

Naturally, there are still many challenges which must be overcome and questions which must be answered before this can become a viable treatment option. For example, how much muscle function can be preserved? Can the liquid metal somehow interfere with or prevent regeneration? And how safe is liquid metal inside the body – especially if it leaks? These are questions that Jing and others will hope to answer in the near future, starting with animal models and possibly later with humans.

Sources: technologyreview.com, arxiv.org, cnet.com, spectrum.ieee.org