Consider the possibility that surveillance cameras might someday be automated, eliminating the need for security clerks to sit and wade through endless hours of footage looking for signs of criminal behavior. That's the reasoning behind the Pentagon and DARPA's new project, named Mind's Eye. Far more than merely automated, the camera is billed as the first "smart" camera ever built, capable of predicting human behavior as well as monitoring it.
Using a concept known as “visual intelligence”, the project draws on a research proposal made by researchers working for the Carnegie Mellon School of Computer Science. The proposal calls for the creation of a “high-level artificial visual intelligence system” which, once operational, will be able to recognize human activities and predict what might happen next. Should it encounter a potentially threatening scene or dangerous behavior, it could sound the alarm and notify a human agent.
In essence, the camera system will rely on a series of computer-vision algorithms that allow it to classify behavior, discriminate between different actions in a scene, and predict their outcomes. That might sound like a case of coldly rational machine intelligence evaluating human actions, but in fact the algorithms were designed to approximate human-level visual intelligence.
According to Alessandro Oltramari and Christian Lebiere, the researchers behind the proposal, humans evolved the ability to scan and process their environment for risks, at times relying on experience to guess correctly what a person might do next. By pairing a linguistic infrastructure built around a set of "action verbs" with a "cognitive engine," the researchers are trying to get their camera to do the same thing.
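To make the idea concrete, here is a deliberately toy sketch of the "action verbs plus prediction" concept. This is an illustration only, not the actual Mind's Eye architecture: the verb labels, the "threatening" set, and the simple transition table are all assumptions invented for the example. It reduces each observed scene to an action verb, learns which verbs tend to follow which from example sequences, and raises an alert when the predicted next action looks dangerous.

```python
# Toy illustration of verb-based action prediction. NOT the real
# Mind's Eye system: labels and logic here are invented assumptions.
from collections import Counter, defaultdict

# Assumed set of "dangerous" action verbs for this sketch.
THREATENING = {"chase", "attack"}

def learn_transitions(sequences):
    """Count verb-to-verb transitions from example action sequences."""
    table = defaultdict(Counter)
    for seq in sequences:
        for current, nxt in zip(seq, seq[1:]):
            table[current][nxt] += 1
    return table

def predict_next(table, verb):
    """Return the most frequently observed follow-up action, if any."""
    followers = table.get(verb)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

def assess(table, verb):
    """Predict the next action and flag it if it looks threatening."""
    nxt = predict_next(table, verb)
    return nxt, nxt in THREATENING

# Hypothetical training sequences of observed action verbs.
examples = [
    ["walk", "stop", "look", "walk"],
    ["walk", "run", "chase"],
    ["run", "chase", "attack"],
]
table = learn_transitions(examples)
print(assess(table, "run"))   # "run" is mostly followed by "chase" here
print(assess(table, "stop"))
```

A real system would of course sit behind vision models that extract those verbs from video; the point of the sketch is only that once scenes are mapped to a verb vocabulary, "predicting what happens next" can be framed as a sequence problem.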
Sound scary? Well, that's natural considering the implications. Any such technology is sure to bolster private and public security efforts, relieving human beings of the humdrum task of watching security cameras while keeping them notified of potential risks. On the other hand, a machine intelligence would be responsible for monitoring human beings and judging their actions. Sure, it's not exactly PreCrime, but it does raise ethical and legal concerns, not to mention worries over accountability.
Luckily, the AI needed to run such a system is still several years away, which leaves us time to debate and regulate any system that relies on "smart surveillance."