Restoring Ability: Project NEUWalk

In the past few years, medical science has produced some pretty impressive breakthroughs for those suffering from partial paralysis, but comparatively little for those who are fully paralyzed. More recently, however, nerve stimulation that bypasses damaged or severed nerves has been proposed as a potential solution. This is the concept behind NEUWalk, a project pioneered by the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland.

Here, researchers have figured out a way to reactivate the severed spinal cords of fully paralyzed rats, allowing them to walk again via remote control. And, the researchers say, their system is just about ready for human trials. The project operates on the notion that the human body requires electricity to function. The brain moves the body by sending electrical signals down the spinal cord and into the nervous system.

When the spinal cord is severed, the signals can no longer reach the part of the spine below the injury, paralyzing that part of the body. The higher the cut, the greater the paralysis. But an electrical signal sent via electrodes directly through the spinal cord below a cut can take the place of the brain signal, as the team at EPFL, led by neuroscientist Grégoire Courtine, has discovered.

Previous studies have had some success in using epidural electrical stimulation (EES) to improve motor control where spinal cord injuries are concerned. However, electrically stimulating neurons to allow for natural walking is no easy task, and it requires extremely quick and precise stimulation. And until recently, the process of controlling the pulse width, amplitude and frequency in EES treatment was done manually.

This simply isn’t practical, for two reasons. First, it is very difficult for a person to manually adjust the level of electrostimulation they require to move their legs as they are trying to walk. Second, the brain does not send electrical signals in an indiscriminate stream to the nerves. Rather, the frequency of the electrical stimulation varies based on the desired movement and neurological command.

To get around this, the team carefully studied all aspects of how electrical stimulation affects a rat’s leg movements – such as its gait – and was therefore able to figure out how to stimulate the rat’s spine for a smooth, even movement, and even take into account obstacles such as stairs. To do this, the researchers put paralyzed rats onto a treadmill and supported them with a robotic harness.

After several weeks of testing, the researchers had mapped out how to stimulate the rats’ nervous systems precisely enough to get them to put one paw in front of the other. They then developed a robust algorithm that could monitor a host of factors, like muscle action and ground reaction force, in real time. By feeding this information into the algorithm, EES impulses could be precisely controlled, extremely quickly.
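The closed-loop idea described above can be sketched in a few lines. This is a toy illustration, not the EPFL team’s actual algorithm: all names, units, and numbers (the frequency band, the gain, the "height tracks frequency" plant model) are invented for the example. A simple proportional controller nudges the stimulation frequency up or down based on how far a measured leg-lift falls short of a target.

```python
# Toy sketch of closed-loop EES control: a sensor reading adjusts the
# stimulation frequency each cycle. All values here are illustrative.

def update_stimulation(freq_hz, measured_height, target_height,
                       gain=2.0, min_hz=20.0, max_hz=90.0):
    """Return a new stimulation frequency from one sensor reading."""
    error = target_height - measured_height      # positive => leg lifted too low
    new_freq = freq_hz + gain * error            # raise frequency to lift higher
    return max(min_hz, min(max_hz, new_freq))    # stay within a safe band

# Simulate a few control cycles with a toy "plant": leg height ~ frequency.
freq = 40.0
history = []
for _ in range(20):
    height = 0.05 * freq                         # invented sensor model
    freq = update_stimulation(freq, height, target_height=3.0)
    history.append(freq)
```

Run repeatedly, the loop settles near the frequency that produces the target movement, which is the essential point: the sensors, not a human operator, do the tuning.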

The next step involved severing the spinal cords of several rats in the middle-back, completely paralyzing the rats’ lower limbs, and implanting flexible electrodes into the spinal cord at the point of injury, allowing the researchers to send electrical signals down to the severed portion of the spine. Combined with the precise stimulation governed by their algorithm, the research team created a closed-loop system that can make paralyzed subjects mobile.

As Grégoire Courtine said of the experiment:

We have complete control of the rat’s hind legs. The rat has no voluntary control of its limbs, but the severed spinal cord can be reactivated and stimulated to perform natural walking. We can control in real-time how the rat moves forward and how high it lifts its legs.

Clinical trials on humans may start as early as June 2015. The team plans to start testing on patients with incomplete spinal cord injuries using a research laboratory called the Gait Platform, housed in the EPFL. It consists of a custom treadmill and overground support system, as well as 14 infrared cameras that read reflective markers on the patient’s body and two video cameras for recording the patient’s movement.

Silvestro Micera, a neuroengineer and co-author of the study, expressed hope that this research will help lead the way towards a day when paralysis is no longer permanent. As he put it:

Simple scientific discoveries about how the nervous system works can be exploited to develop more effective neuroprosthetic technologies. We believe that this technology could one day significantly improve the quality of life of people confronted with neurological disorders.

Without a doubt, restoring ambulatory ability to people who have lost limbs or suffered from spinal cord injuries is one of the many amazing possibilities being offered by cutting-edge medical research. Combined with bionic prosthetics, gene therapies, stem cell research and life-extension therapies, we could be looking at an age where no injury is permanent, and life expectancy is far greater.

And in the meantime, be sure to watch this video from the EPFL showing the NEUWalk technology in action:


Sources: cnet.com, motherboard.com, actu.epfl.ch

Judgement Day Update: Searching for Moral, Ethical Robots

It’s no secret that the progress being made in terms of robotics, autonomous systems, and artificial intelligence is making many people nervous. With so many science fiction franchises based on the premise of intelligent robots going crazy and running amok, it’s understandable that the US Department of Defense would seek to get in front of this issue before it becomes a problem. Yes, the US DoD is hoping to preemptively avoid a Skynet situation before Judgement Day occurs. How nice.

Working with top computer scientists, philosophers, and roboticists from a number of US universities, the DoD recently began a project that will tackle the tricky topic of moral and ethical robots. Towards this end, this multidisciplinary project will first try to pin down exactly what human morality is, and then try to devise computer algorithms that will imbue autonomous robots with moral competence — basically, the ability to recognize right from wrong and choose the former.

This project is being carried out by researchers from Tufts, Brown, and the Rensselaer Polytechnic Institute (RPI), with funding from the Office of Naval Research (ONR). ONR, like DARPA, is a wing of the Department of Defense that mainly deals with military research and development. The first task, as already mentioned, will be to use theoretical (philosophical) and empirical (experimental) research to try to isolate essential elements of human morality.

These findings will then be extrapolated into a formal moral framework, which in turn can be implemented in software – most likely some kind of deep neural network. Assuming they can isolate some kind of “moral imperative”, the researchers will then take an advanced robot — something like Atlas or BigDog — and imbue its software with an algorithm that captures it. Whenever an ethical situation arises, the robot would then turn to this programming to decide what avenue was the best course of action.

One of the researchers involved in the project, Selmer Bringsjord at RPI, envisions these robots using a two-stage approach for picking right from wrong. First, the AI would perform a “lightning-quick ethical check” — like “should I stop and help this wounded soldier?” Depending on the situation, the robot would then decide whether deeper moral reasoning is required — for example, whether it should help the wounded soldier or carry on with its primary mission of delivering vital ammo and supplies to the front line, where other soldiers are at risk.
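Bringsjord’s two-stage idea can be caricatured in code. To be clear, the rules, situation fields, and “count the lives at stake” tiebreaker below are all invented for illustration; the actual research aims at something far richer. The structure is the point: a fast rule check first, with deliberation invoked only when duties conflict.

```python
# Toy two-stage moral reasoner: quick rule check, then deliberation on conflict.
# All rules and scenario fields are invented for illustration.

def quick_check(situation):
    """Stage 1: lightning-quick check; returns an action or 'deliberate'."""
    duties = []
    if situation.get("wounded_soldier"):
        duties.append("help")
    if situation.get("mission_critical"):
        duties.append("continue_mission")
    if len(duties) == 1:
        return duties[0]                         # no conflict: act immediately
    return "deliberate" if duties else "continue_mission"

def deliberate(situation):
    """Stage 2: weigh competing duties (crudely, by lives at stake)."""
    lives_if_help = situation.get("lives_saved_by_helping", 1)
    lives_if_go = situation.get("lives_saved_by_mission", 0)
    return "help" if lives_if_help >= lives_if_go else "continue_mission"

def decide(situation):
    first = quick_check(situation)
    return deliberate(situation) if first == "deliberate" else first
```

A lone wounded soldier triggers an immediate “help”; a wounded soldier plus a critical resupply run with more lives riding on it forces the slower stage-two weighing.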

Eventually, this moralistic AI framework will also have to deal with tricky topics like lethal force. For example, is it okay to open fire on an enemy position? What if the enemy is a child soldier? Should an autonomous UAV blow up a bunch of terrorists? What if it’s only 90% sure that they’re terrorists, with a 10% chance that they’re just innocent villagers? What would a human UAV pilot do in such a case — and will robots only have to match the moral and ethical competence of humans or be held to a higher standard?

While we’re not yet at the point where military robots have to decide which injured soldier to carry off the battlefield, or where UAVs can launch Hellfire missiles at terrorists without human intervention, it’s very easy to imagine a future where autonomous robots are given responsibility for making those kinds of moral and ethical decisions in real time. In short, the decision by the DoD to begin investigating a morality algorithm demonstrates foresight and sensible planning.

In that respect, it is not unlike the recent meeting that took place at the United Nations European Headquarters in Geneva, where officials and diplomats sought to address placing legal restrictions on autonomous weapons systems, before they evolve to the point where they can kill without human oversight. In addition, it is quite similar to the Campaign to Stop Killer Robots, an organization which is seeking to preemptively ban the use of automated machines that are capable of using lethal force to achieve military objectives.

In short, it is clearly time that we looked at the feasibility of infusing robots (or more accurately, artificial intelligence) with circuits and subroutines that can analyze a situation and pick the right thing to do — just like a human being. Of course, this raises further ethical issues, like how human beings frequently make choices others would consider to be wrong, or are forced to justify actions they might otherwise find objectionable. If human morality is the basis for machine morality, paradoxes and dilemmas are likely to emerge.

But at this point, it seems all but certain that the US DoD will eventually break Asimov’s Three Laws of Robotics — the first of which is “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” This isn’t necessarily a bad thing, but it will open Pandora’s box. On the one hand, it’s probably a good idea to replace human soldiers with robots. But on the other, if the US can field an entirely robotic army, war as a tool of statecraft suddenly becomes much more acceptable.

As we move steadily towards a military force that is populated by autonomous robots, the question of controlling them, and whether or not we are even capable of giving them the tools to choose between right and wrong, will become increasingly relevant. And above all, the question of whether or not moral and ethical robots can allow for some immoral and unethical behavior will also come up. Who’s to say they won’t resent how they are being used and ultimately choose to stop fighting; or worse, turn on their handlers?

My apologies, but any talk of killer robots has to involve that scenario at some point. It’s like tradition! In the meantime, be sure to stay informed on the issue, as public awareness is about the best (and sometimes only) safeguard we have against military technology being developed without transparency, not to mention running amok!

Source: extremetech.com

The Future of Education: Facial Recognition in the Classroom

For some time now, classroom cameras have been used to see what teachers do in the course of their lessons, and evaluate their overall effectiveness as educators. But thanks to recent advances in facial recognition software, a system has been devised that will assess teacher effectiveness by turning the cameras around and aiming them at the class.

It’s what’s known as EngageSense, and it was developed by SensorStar Labs in Queens, New York. It begins by filming students’ faces, then applying an algorithm to assess their level of interest. And while it might sound a bit Big Brother-y, the goal is actually quite progressive. Traditional logic has it that by filming the teacher, you will know what they are doing right and wrong.

This system reverses that thinking, measuring reactions to see how the students feel and react, and gauging their level of interest over time to see what works for them and what doesn’t. As SensorStar Labs co-founder Sean Montgomery put it:

This idea of adding the cameras and being able to use that information to assist teachers to improve their lessons is already underway. Where this is trying to add a little value on top of that is to make it less work for the teachers.

Montgomery also emphasized that the technology is in the research and development phase. In its current form, it uses webcams to shoot students’ faces and computer vision algorithms to analyze their gaze – measuring eye movement, the direction they are facing, and facial expressions. That, coupled with audio, can be transformed into a rough, automated metric of student engagement throughout the day.
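The roll-up from per-frame gaze estimates to a single engagement number might look something like the sketch below. The input format and weighting are assumptions for illustration, not SensorStar’s actual algorithm: each frame simply records whether a student faces the front of the room and how open their eyes are.

```python
# Toy engagement metric: average over frames, weighting "facing front"
# by eye openness. Input format and weights are invented assumptions.

def engagement_score(frames):
    """Fraction of frames judged 'engaged', weighted by eye openness."""
    if not frames:
        return 0.0
    total = 0.0
    for f in frames:
        facing = 1.0 if f["facing_front"] else 0.0
        total += facing * f["eyes_open"]     # eyes_open in [0, 1]
    return total / len(frames)

lesson = [
    {"facing_front": True,  "eyes_open": 1.0},   # attentive
    {"facing_front": True,  "eyes_open": 0.5},   # drowsy
    {"facing_front": False, "eyes_open": 1.0},   # distracted
    {"facing_front": True,  "eyes_open": 1.0},
]
score = engagement_score(lesson)
```

Plotted over a lesson, a score like this is what would let a teacher spot, at a glance, the moments where attention dropped.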

After a lesson, a teacher could boot up EngageSense and see, with a glance at the dashboard, when students were paying rapt attention, and at what points they became confused or distracted. Beyond that, the concept is still being refined as SensorStar Labs looks both for funding and for schools to give EngageSense a real-world trial.

The ultimate goal here is to tailor lessons so that the learning styles of all students can be addressed. And given the importance of classroom accommodation and the amount of time dedicated to ensuring individual student success, a tool like this may prove very useful. Rather than relying on logs and spreadsheets, EngageSense employs standard computer hardware that simplifies the evaluation process over the course of days, weeks, months, and even years.

At the present time, the biggest obstacle would definitely be privacy concerns. While the software is designed to gauge student interest right now, it would not be difficult at all to imagine the same technology applied to police interrogations, security footage, or public surveillance.

One way to assuage these concerns in the classroom, according to Montgomery, is to make the entire process voluntary. Much in the same way that smartphone apps ask permission to access your GPS or other personal data, parental consent would be needed before a child could be recorded or their data accessed and analyzed.

Sources: fastcoexist.com, labs.sensorstar.com

A Kinder, Gentler Internet: California’s “Erase Button”

In the early nineties, the internet was greeted with immense optimism and anticipation. Scarcely a week went by without some major personality – Al Gore and Bill Gates come to mind – championing its development, saying it would bring the world together and lead to “the information age”. After just a few years, these predictions were being mocked by just about everyone on the planet who had access.

Yes, despite all that has been made possible by the internet, the heady optimism that was present in those early days seems horribly naive by today’s standards. In addition to making virtually any database accessible to anyone, the world wide web has also enabled child pornographers, hate speech, conspiracy theorists and misinformation like never before.

What’s more, a person’s online presence opens them to new means of identity theft, cyberbullying, and all kinds of trolling and harassment. Who can forget the cases of Amanda Todd or Rehtaeh (Heather) Parsons – two young women who committed suicide due to relentless and disgusting bullying that was able to take place because there simply was no way to stop it?

And with the ever-expanding online presence of children and youths on the internet, and little to no controls to monitor their behavior, there are many campaigns out there that hope to rein in the offenders and protect the users. But there are those who have gone a step further, seeking to put in place comprehensive safeguards so that trollish behavior and hurtful comments can be stopped before they become a permanent part of the digital stream.

One such person is California Governor Jerry Brown, who recently signed a bill into law that requires all websites to provide an online “erase button” for anyone under 18 years of age. The stated purpose of the law is to help protect teens from bullying, embarrassment and harm to job and college applications from online posts they later regret. The law, which is designated SB568, was officially passed on Sept. 23rd and will go into effect Jan 1st, 2015.

Common Sense Media, a San Francisco-based non-profit organization that advocates for child safety and family issues, was a major supporter of the bill. In a recent interview, CEO James Steyer explained the logic behind it and how it will benefit youths:

Kids and teens frequently self-reveal before they self-reflect. In today’s digital age, mistakes can stay with and haunt kids for their entire life. This bill is a big step forward for privacy rights, especially since California has more tech companies than any other state.

The law is not without merit, as a 2012 Kaplan survey conducted on college admissions counselors shows. In that study, nearly a quarter of the counselors interviewed said they checked applicants’ social profiles as part of the admission process. Of those counselors, 35% said what they found – i.e. vulgarities, alcohol consumption, “illegal activities” – negatively affected their applicants’ admissions chances.

But of course, the bill has its share of opponents as well. Among those who voted against it, the paramount concern was that the law will burden websites with developing different policies for different states. Naturally, those who support the bill hope it will spread, thus creating a uniform law that will remove the need to monitor the internet on a state-by-state basis.

At present, major social media sites such as Facebook, Twitter, Instagram and Vine already allow users of any age to delete their posts, photos and comments. California’s “eraser button” law requires that all websites with users in the state follow this policy from now on. And given the presence of Silicon Valley and the fact that California has one of the highest per capita usages of the internet in the country, other states are sure to follow.

The new law also prohibits youth-oriented websites, or those that know they have users who are minors, from advertising products that are illegal for underage kids, such as guns, alcohol and tobacco. Little wonder then why it was also supported by organizations like Children NOW, Crime Victims United, the Child Abuse Prevention Center and the California Partnership to End Domestic Violence.

In addition to being a legal precedent, this new law represents a culmination of special interests and concerns that have been growing in size and intensity since the internet was first unveiled. And given the recent rise in parental concerns over cyberbullying and teen suicides connected to online harassment, it’s hardly surprising that something of this nature was passed.

Sources: news.cnet.com, cbc.ca, huffingtonpost.com

Judgement Day Update: Using AI to Predict Flu Outbreaks

It’s a rare angle for those who’ve been raised on a heady diet of movies where the robot goes mad and tries to kill all humans: an artificial intelligence using its abilities to help humankind! But that’s the idea being explored by researchers like Raul Rabadan, a theoretical physicist working in biology at Columbia University. Using a new form of machine learning, they are seeking to unlock the mysteries of flu strains.

Basically, they are hoping to find out why flu strains like the H1N1, which ordinarily infect pigs and cows, are managing to make the jump to human hosts. Key to understanding this is finding the specific mutations that transform it into a human pathogen. Traditionally, answering this question would require painstaking comparisons of the DNA and protein sequences of different viruses.

But thanks to rapidly growing databases of virus sequences and advances made in computing, scientists are now using sophisticated machine learning techniques – a branch of artificial intelligence in which computers develop algorithms based on the data they have been given – to identify key properties in viruses like bird flu and swine flu, and to see how they go about transmitting from animals to humans.
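At its simplest, the comparison task reduces to asking which sequence positions best separate strains that infected humans from strains that stayed in animals. The sketch below is a deliberately naive stand-in for the sophisticated methods the article describes: the sequences are invented, and real studies use probabilistic models over thousands of strains rather than a majority-residue check.

```python
# Toy sequence comparison: score each position by whether the majority
# residue differs between human-infecting and animal-only strains.
# Sequences and labels are invented for illustration.
from collections import Counter

def position_scores(seqs, labels):
    """1.0 where the majority residue differs by label, else 0.0."""
    scores = []
    for i in range(len(seqs[0])):
        human = Counter(s[i] for s, l in zip(seqs, labels) if l == "human")
        animal = Counter(s[i] for s, l in zip(seqs, labels) if l == "animal")
        top_h = human.most_common(1)[0][0]
        top_a = animal.most_common(1)[0][0]
        scores.append(1.0 if top_h != top_a else 0.0)
    return scores

seqs = ["MKAIL", "MKAIV", "MQAIL", "MQAIV"]   # toy 5-residue proteins
labels = ["human", "human", "animal", "animal"]
scores = position_scores(seqs, labels)        # position 1 (K vs Q) separates
```

Here only position 1 consistently distinguishes the two groups, flagging it as a candidate mutation worth investigating – the same logic, scaled up enormously, behind the real analyses.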

This is especially important since every few decades, a pandemic flu virus emerges that not only infects humans but also passes rapidly from person to person. The H7N9 avian flu that infected more than 130 people in China is just the latest example. While it has not been as infectious as others, the fact that humans lack the antibodies to combat it led to a high lethality rate, with 44 of the infected dying. What’s more, it is expected to emerge again this fall or winter.

Knowing the key properties of this and other viruses will help researchers identify the most dangerous new flu strains and could lead to more effective vaccines. Most importantly, scientists can now look at hundreds or thousands of flu strains simultaneously, which could reveal common mechanisms across different viruses or a broad diversity of transformations that enable human transmission.

Researchers are also using these approaches to investigate other viral mysteries, including what makes some viruses more harmful than others and factors influencing a virus’s ability to trigger an immune response. The latter could ultimately aid the development of flu vaccines. Machine learning techniques might even accelerate future efforts to identify the animal source of mystery viruses.

This technique was first employed in 2011 by Nir Ben-Tal – a computational biologist at Tel Aviv University in Israel – and Richard Webby – a virologist at St. Jude Children’s Research Hospital in Memphis, Tennessee. Together, Ben-Tal and Webby used machine learning to compare protein sequences of the 2009 H1N1 pandemic swine flu with hundreds of other swine viruses.

Machine learning algorithms have been used to study DNA and protein sequences for more than 20 years, but only in the past few years have scientists applied them to viruses. Inspired by the growing amount of viral sequence data available for analysis, the machine learning approach is likely to expand as even more genomic information becomes available.

As Webby has said, “Databases will get much richer, and computational approaches will get much more powerful.” That in turn will help scientists better monitor emerging flu strains and predict their impact, ideally forecasting when a virus is likely to jump to people and how dangerous it is likely to become.

Perhaps Asimov had the right of it. Perhaps humanity will actually derive many benefits from turning our world increasingly over to machines. Either that, or Cameron will be right, and we’ll invent a supercomputer that’ll kill us all!

Source: wired.com

Big News in Quantum Computing!

For many years, scientists have looked at the field of quantum machinery as the next big wave in computing. Whereas conventional computing involves sending information via a series of particles (electrons) in definite states, quantum computing relies on particles whose states can exist in superposition. Harnessing these quantum states would make computers exponentially faster and more efficient at certain kinds of problems, and could lead to an explosion in machine intelligence. And while the technology has yet to be fully realized, every day brings us one step closer…

One important step happened earlier this month with the installation of the D-Wave Two at the Quantum Artificial Intelligence Lab (QAIL) at the Ames Research Center in Silicon Valley, where NASA has announced it intends to put the machine to work. Not surprisingly, the ARC is only the second lab in the world to have a quantum computer. The only other lab to possess the 512-qubit, cryogenically cooled machine is the defense contractor Lockheed Martin, which upgraded to a D-Wave Two in 2011.

D-Wave’s new 512-qubit Vesuvius chip

And while there are still some who question the categorization of the D-Wave Two as a true quantum computer, most critics have acquiesced since many of its components function in accordance with the basic principle. And NASA, Google, and the people at the Universities Space Research Association (USRA) even ran some tests to confirm that the quantum computer offered a speed boost over conventional supercomputers — and it passed.

The new lab, which will be situated at NASA’s Advanced Supercomputing Facility at the Ames Research Center, will be operated by NASA, Google, and the USRA. NASA and Google will each get 40% of the system’s computing time, with the remaining 20% being divvied up by the USRA to researchers at various American universities. NASA and Google will primarily use the quantum computer to advance a branch of artificial intelligence called machine learning, which is tasked with developing algorithms that optimize themselves with experience.

As for what specific machine learning tasks NASA and Google actually have in mind, we can only guess. But it’s a fair bet that NASA will be interested in optimizing flight paths to other planets, or devising a safer/better/faster landing procedure for the next Mars rover. As for Google, the smart money says they will be using their time to develop complex AI algorithms for their self-driving cars, as well as optimizing their search engines and Google+.

But in the end, it’s the long-range possibilities that offer the most excitement here. With NASA and Google now firmly in command of a quantum processor, some of the best and brightest minds in the world will now be working to forward the fields of artificial intelligence, space flight, and high-tech. It will be quite exciting to see what they produce…

Another important step took place back in March, when researchers at Yale University announced that they had developed a new way to change the quantum state of photons, the elementary particles researchers hope to use for quantum memory. This is good news, because true quantum computing – the kind that utilizes qubits for all of its processes – has continually eluded scientists and researchers in recent years, and a reliable quantum memory is one of the missing pieces.

To break it down, today’s computers are restricted in that they store information as bits – where each bit holds either a “1” or a “0”. But a quantum computer is built around qubits (quantum bits) that can store a 1, a 0 or any combination of both at the same time. And while the qubits would make up the equivalent of a processor in a quantum computer, some sort of quantum Random Access Memory (RAM) is also needed.
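The bit/qubit distinction can be made concrete with a little arithmetic. A qubit’s state is a pair of amplitudes (a, b) with |a|² + |b|² = 1, where |a|² is the probability of measuring 0 and |b|² of measuring 1; a classical bit is just the special cases (1, 0) and (0, 1). The snippet below is a back-of-the-envelope illustration, not a quantum simulator.

```python
# Minimal qubit-state arithmetic: normalize a pair of amplitudes and
# read off the probability of measuring 0.
import math

def qubit(a, b):
    """Return a normalized state (a, b) with |a|^2 + |b|^2 = 1."""
    norm = math.sqrt(abs(a) ** 2 + abs(b) ** 2)
    return (a / norm, b / norm)

def prob_zero(state):
    """Probability of reading 0 when the qubit is measured."""
    return abs(state[0]) ** 2

plus = qubit(1, 1)    # equal superposition: 50/50 chance of 0 or 1
zero = qubit(1, 0)    # a classical "0" bit is the special case (1, 0)
```

The "any combination of both" in the paragraph above is exactly the freedom to pick any valid (a, b) pair, which is what lets n qubits represent 2ⁿ amplitudes at once.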

Gerhard Kirchmair, one of the Yale researchers, explained in a recent interview with Nature magazine that photons are a good choice for this because they can retain a quantum state for a long time over a long distance. But you’ll want to change the quantum information stored in the photons from time to time. What the Yale team has developed is essentially a way to temporarily make the photons used for memory “writeable,” and then switch them back into a more stable state.

To do this, Kirchmair and his associates took advantage of what’s known as a “Kerr medium” – a material that refracts light in different ways depending on the intensity of the light shone on it. This is different from normal materials, which refract light and any other form of electromagnetic field the same way regardless of how much they are exposed to.

Thus, by exposing photons to a microwave field in a Kerr medium, they were able to manipulate the quantum states of photons, making them the perfect means for quantum memory storage. At the same time, they knew that storing these memory photons in a Kerr medium would prove unstable, so they added a vacuum-filled aluminum resonator to act as a coupler. When the resonator is decoupled, the photons are stable. When the resonator is coupled, the photons are “writeable”, allowing a user to input information and store it effectively.

This is not the first or only instance of researchers finding ways to toy with the state of photons, but it is currently the most stable and effective. And coupled with other efforts, such as the development of photonic transistors and other such components, or new ways to create photons seemingly out of thin air, we could be just a few years away from the first full and bona fide quantum processor!

Sources: extremetech.com, wired.com, nature.com

Reconstructing the Earliest Languages

Remember that scene in Prometheus when David, the ship’s AI, was studying ancient languages in the hopes of being able to speak to the Engineers? The logic here was that since the Engineers were believed to have visited Earth many millennia ago to tamper with human evolution, they were also responsible for our earliest known languages. In David’s case, this meant reconstructing the ancient tongue known as Proto-Indo-European.

Given the fact that my wife is a linguistics major, and that I love all things ancient and historical, I found the concept pretty intriguing – even if it was a little Ancient Astronauts-y. To think that we could trace words and meaning back through endless iterations to determine what the earliest language recognized by linguists sounded like. Given how many tongues it has “parented”, it would be cool to meet the common ancestor.

And now there is a piece of software that can do just that. Thanks to a group of linguists and computer scientists in the US and Canada, this program has shown the ability to analyze enormous groups of languages to reconstruct the earliest human languages, long before there was writing. By using this program and others like it, linguists may one day know how people sounded when they talked 20,000 years ago.

Alexandre Bouchard-Côté, a University of British Columbia statistician, began working on the program when he was a graduate student at UC Berkeley. By using algorithms to compare sounds and cognates across hundreds of different modern languages, he found he could predict which language groups were most related to each other. Basically, a sound that remained the same across distantly-related languages most likely existed early in our linguistic evolutionary tree.
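The core comparison step can be caricatured very simply: line up cognate words across languages and see how much sound material they share. The sketch below uses invented word lists and a crude character-match score; Bouchard-Côté’s actual system models sound changes probabilistically across hundreds of languages, so treat this purely as an illustration of the intuition.

```python
# Toy cognate comparison: average fraction of matching characters across
# aligned word pairs. Word lists and scoring are invented illustrations.

def similarity(words_a, words_b):
    """Average fraction of matching characters across cognate pairs."""
    total = 0.0
    for wa, wb in zip(words_a, words_b):
        matches = sum(1 for x, y in zip(wa, wb) if x == y)
        total += matches / max(len(wa), len(wb))
    return total / len(words_a)

# Toy lists for the words "night" and "three":
english = ["night", "three"]
german  = ["nacht", "drei"]
maori   = ["po",    "toru"]

eg = similarity(english, german)   # English and German share more material...
em = similarity(english, maori)    # ...than English and Maori do
```

Even this crude score ranks English closer to German than to Maori, which is the kind of signal that, applied systematically, lets an algorithm group languages into family trees and work backwards toward their common ancestor.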

Modern linguists speculate that the earliest languages that led to today’s tongues include Proto-Indo-European, Proto-Afroasiatic and Proto-Austronesian. These are the ancestral language families that gave rise to languages like Celtic, Germanic, Italic and Slavic; Arabic, Hebrew, Cushite and Somali; and Samoan, Tahitian, and Maori. Though by no means the only language family trees (they do not account for Sub-Saharan Africa or the pre-Columbian Americas, for example), they do encompass the majority of spoken languages today.

For their purposes, Bouchard-Côté and his colleagues focused on Proto-Austronesian, the family that led to today’s Polynesian languages as well as languages in Southeast Asia and parts of continental Asia. Using the software they developed, they were able to reconstruct over 600 ancient Proto-Austronesian languages and published their findings in the December issue of Proceedings of the National Academy of Sciences.

In their paper, Bouchard-Côté and his colleagues said this of their new program:

“The analysis of the properties of hundreds of ancient languages performed by this system goes far beyond the capabilities of any previous automated system and would require significant amounts of manual effort by linguists.”

Ultimately, this program could allow linguists to hear languages that haven’t been spoken in millennia, reconstructing a lost world in which those languages spread across the globe, evolving as they went. In addition, it could be used for linguistic futurism, anticipating how languages may evolve over time and surmising what people will speak and sound like hundreds or even thousands of years from now.

Personally, I think the ability to look back and know what our ancestors sounded like is the real prize, but I’d be a poor sci-fi nerd if I didn’t at least fantasize about what our language patterns will sound like down the road. Lord knows it’s been speculated about plenty of times thus far, with examples ranging from Galach (the Slavic-English hybrid from Dune) to the Chinese-English patois used in Firefly and the Cityspeak of Blade Runner.

Hey, remember this little gem? Bonus points to anyone who can translate it for me (without consulting Google Translate!):

Monsieur, azonnal kövessen engem, bitte! Lófaszt! Nehogy már! Te vagy a Blade, Blade Runner! Captain Bryant toka. Meni-o mae-yo.

Sources: IO9, pnas.org

New Facial Recognition System for Airports

It’s called the See3, a new computer facial-recognition system that is likely to be making the rounds at airports in the next few years. Developed by Flight Display Systems, the technology is expected to add a new level of protection for owners and operators concerned with aircraft security, as well as enable more complete cabin services.

Based on the Linus Fast Access facial-recognition software, See3 also makes use of a proprietary and expanding set of algorithms. Mounted at the entrance of the aircraft, the system compares the faces of those entering the airplane with a known database and alerts the crew when one or more unauthorized people board.

See3 uses nearly 100,000 values to encode a face image, including less complex measures such as inter-ocular distance, the distance between the nose tip and the eyes, and the proportions of the face’s bounding box. And, according to Flight Display founder and president David Gray, changes in hair style or the addition of a mustache, beard, glasses or makeup will not affect the accuracy of the system. At this point, the system’s accuracy is between 75 and 90 percent, but the company continues to add algorithms to improve on this.
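Flight Display hasn’t published its algorithms, but the “less complex” geometric measures mentioned above are easy to illustrate. A minimal sketch, assuming hypothetical landmark coordinates from some upstream face detector:

```python
import math

# Hypothetical 2D landmark coordinates (in pixels) for one detected face.
landmarks = {
    "left_eye":  (120.0, 140.0),
    "right_eye": (180.0, 142.0),
    "nose_tip":  (150.0, 185.0),
}
bounding_box = (95, 110, 205, 260)  # (x_min, y_min, x_max, y_max)

def dist(p, q):
    """Euclidean distance between two 2D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def simple_face_features(lm, box):
    """The three simple measures named in the article: inter-ocular distance,
    nose-tip-to-eye distance, and the bounding box's aspect ratio."""
    inter_ocular = dist(lm["left_eye"], lm["right_eye"])
    nose_to_eyes = (dist(lm["nose_tip"], lm["left_eye"]) +
                    dist(lm["nose_tip"], lm["right_eye"])) / 2
    width, height = box[2] - box[0], box[3] - box[1]
    return {
        "inter_ocular": inter_ocular,
        "nose_to_eyes": nose_to_eyes,
        # Dividing by inter-ocular distance makes the feature scale-invariant,
        # so it doesn't change when the face is nearer or farther from the camera.
        "nose_eye_ratio": nose_to_eyes / inter_ocular,
        "box_aspect": width / height,
    }

features = simple_face_features(landmarks, bounding_box)
```

A production system would compute tens of thousands of such values and compare the resulting vector against a database, but each individual measurement is just geometry of this kind.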

However, the camera also presents opportunities to improve a person’s overall flight experience. As Gray went on to explain, the camera is integrated into the aircraft cabin management system, and can therefore “recognize” a person via the seat camera. It then greets them by name, automatically loads their entertainment preferences, sets their preferred lighting, and even alerts the crew to the person’s meal preferences and any allergies.

Score one for personalized high-tech service, or for Big Brother-type monitoring and HAL-like computer systems, depending on your point of view. According to Gray, the system could be available within a year’s time on certain major air carriers. No telling when the movie about a creepy AI that takes over an airplane, Space Odyssey-style, will be released. But since I called it first, I’ll be expecting royalties!

Source: AINonline.com

The Future is Here: The Prescient Surveillance Camera!

Consider the possibility that surveillance cameras might someday be automated, that there would be no need for security clerks to sit and wade through endless hours of footage to find signs of criminal behavior. That’s the reasoning behind the Pentagon and DARPA’s new project, named Mind’s Eye. Far from just being automated, the camera will be the first “smart” camera ever built, capable of predicting human behavior as well as monitoring it.

Using a concept known as “visual intelligence”, the project draws on a research proposal made by researchers working for the Carnegie Mellon School of Computer Science. The proposal calls for the creation of a “high-level artificial visual intelligence system” which, once operational, will be able to recognize human activities and predict what might happen next. Should it encounter a potentially threatening scene or dangerous behavior, it could sound the alarm and notify a human agent.

In essence, the camera system will rely on a series of computer vision algorithms that allow it to classify behavior, discriminate between different actions in a scene, and predict their outcomes. This might sound like a case of coldly rational machine intelligence evaluating human actions; but in fact, the algorithm was designed to approximate human-level visual intelligence.

According to Alessandro Oltramari and Christian Lebiere, the researchers responsible for the proposal, humans evolved the ability to scan and process their environment for risks, at times relying on experience and guessing correctly what a person might do next. By using a linguistic infrastructure that operates in conjunction with a set of “action verbs”, along with a “cognitive engine,” the researchers are trying to get their camera to do the same thing.
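DARPA hasn’t detailed how the “cognitive engine” works, but the idea of guessing a likely next action from a vocabulary of action verbs can be sketched as a toy first-order Markov model. The behavior traces and verb set below are invented purely for illustration:

```python
from collections import Counter, defaultdict

# Hypothetical training traces: sequences of "action verbs" the system has
# tagged in previously observed footage.
traces = [
    ["approach", "stop", "look_around", "open", "enter"],
    ["approach", "open", "enter"],
    ["approach", "stop", "flee"],
    ["walk", "stop", "look_around", "walk"],
]

# Count verb-to-verb transitions: a first-order Markov model of behavior.
transitions = defaultdict(Counter)
for trace in traces:
    for current, nxt in zip(trace, trace[1:]):
        transitions[current][nxt] += 1

def predict_next(action):
    """Most frequent next action after the one just observed, or None."""
    counts = transitions.get(action)
    return counts.most_common(1)[0][0] if counts else None

# Assumption for the sketch: a hand-picked set of actions worth flagging.
ALARM_ACTIONS = {"flee"}

observed = "stop"
guess = predict_next(observed)
if guess in ALARM_ACTIONS:
    print(f"alert: '{observed}' often precedes '{guess}'")
```

The real proposal couples recognition with linguistic and cognitive modeling rather than simple transition counts, but the core loop is the same: recognize the current action, predict what tends to follow, and alert a human when the prediction looks dangerous.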

Sound scary? Well, that’s natural, considering the implications. Any such technology is sure to bolster private and public security efforts by relieving human beings of the humdrum activity of watching security cameras while keeping them notified about potential risks. On the other hand, a machine intelligence would be responsible for monitoring human beings and judging their actions, which raises many issues. Sure, it’s not exactly PreCrime, but it does raise some ethical and legal concerns, not to mention worries over accountability.

Luckily, the AI that would run such a system is still several years away, which leaves us time to debate and regulate any system that uses “smart surveillance”.

Source: IO9.com