Cyberwars: FBI's Facial Recognition Database

This past summer, the FBI was compelled to release information about the operational facial recognition database it has been working on. Part of its Next Generation Identification (NGI) program, this database is central to the FBI's efforts to build a "bigger, faster and better" means of biometric identification. Earlier this month, the FBI announced that the system is now working at "full operational capability", and many people are worried…

To break it down, the NGI database is made up of millions of stored mugshots and other photos, which are then used when analyzing footage taken by CCTV feeds or other cameras around the country. The full deployment of the program comes three months after James Comey, the bureau’s director, announced that the agency was “piloting the use of mug shots” alongside the bureau’s other databases, in order to catch wanted criminals.

Designed to replace the bureau's aging fingerprint database, the NGI is different in that it is designed to be multimodal. This means that it will link multiple forms of biometric data to biographical information such as name, address, ID number, age and ethnicity. It's currently focused on fingerprint and facial records, but it will also be capable of holding iris scans and palm prints, with the possibility of adding voice recognition and gait analysis (i.e. how people walk).

As the FBI said in a statement on Monday, Sept. 15th, the NGI, combined with the fingerprint database:

[W]ill provide the nation’s law enforcement community with an investigative tool that provides an image-searching capability of photographs associated with criminal identities.

Naturally, there are worries that this database will be another step towards "Big Brother" monitoring. However, what is equally (if not more) worrisome is the fact that the details of the program are only a matter of public record thanks to a lawsuit filed by the Electronic Frontier Foundation. The lawsuit was filed in June of 2013, and through it the EFF compelled the FBI to produce records under the Freedom of Information Act detailing the program and its face-recognition components.

Citing the FBI documents, the EFF claims that the facial recognition technology is not very reliable and that the way the database returns results is fundamentally flawed, and it points out that the system will indiscriminately combine the details of both criminals and non-criminals. Based on their own interpretation, they claim it could fail 20 percent of the time, which could lead to innocent persons becoming the subjects of police investigations.

Nevertheless, the bureau remains confident that the system will simplify and enhance law enforcement both locally and federally. As they said of the program when it was first announced back in 2011:

The NGI system has introduced enhanced automated fingerprint and latent search capabilities, mobile fingerprint identification, and electronic image storage, all while adding enhanced processing speed and automation for electronic exchange of fingerprints to more than 18,000 law enforcement agencies and other authorized criminal justice partners 24 hours a day, 365 days a year.

By 2012, the NGI database already contained 13.6 million images (of seven to eight million individuals), and by mid-2013 it had 16 million images. We now know it aims to have 52 million facial records in its system by next year, and those will include some regular citizens. Another source of concern for the EFF and civil liberties advocates is the estimated 4.3 million images taken for non-criminal purposes.

Whenever someone applies for a job that requires a background check, they are required to submit fingerprint records. These records are then entered into federal databases. Right now, the FBI’s fingerprint database contains around 70 million criminal profiles, and 34 million non-criminal records. With the NGI database now up and running, photographs can be submitted by employers and other sources along with fingerprints, which puts non-criminals on file.

The database, while maintained by the FBI, can be searched by law enforcement at all levels. According to Jennifer Lynch, the EFF attorney behind the lawsuit:

Your image would be searched every time there is a criminal investigation. The problem with that is the face recognition is still not 100 percent accurate.

This means that the system is liable to make mismatches. If a camera catches a criminal's face and that image is compared to those in the database, there's no guarantee that it will return an accurate result.

What's more, when the database is searched it does not return a single definitive match, but instead provides the top hits, ranked by probability of match. So if your face just happens to be similar to a snapshot of a criminal caught in CCTV footage, you may become a suspect in that case. Combined with other forms of biometric readers and scanners, it is part of a general trend in which privacy is shrinking and public spaces are increasingly permeated by digital surveillance.
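To make that ranked-search idea concrete, here is a minimal sketch of how a "top hits" face lookup can work in principle. This is purely illustrative and not the FBI's actual system; the embeddings, subject names, gallery size and similarity measure are all invented for the example.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two face-embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def top_candidates(probe, gallery, k=3):
    """Return the k gallery entries most similar to the probe image's embedding.

    Note: this returns a ranked list of *candidates*, not a yes/no answer,
    which is exactly how a lookalike can surface in the results.
    """
    scored = [(name, cosine_similarity(probe, vec)) for name, vec in gallery.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]

# Toy gallery of stored embeddings (in reality these come from a face-recognition model).
rng = np.random.default_rng(0)
gallery = {f"subject_{i}": rng.normal(size=128) for i in range(1000)}

# A probe image that is merely *similar* to subject_42 still ranks them near the top.
probe = gallery["subject_42"] + rng.normal(scale=0.5, size=128)
for name, score in top_candidates(probe, gallery):
    print(f"{name}: similarity {score:.2f}")
```

The point is that the search hands investigators the closest candidates rather than a verdict, which is why an innocent near-match can end up on the list.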

This sort of data exchange and on-the-ground scanning will be made possible by, and is one of the explicit aims of, FirstNet, the nationwide broadband network for law enforcement and first responders, colloquially referred to by some as the "internet of cops". Much like all things pertaining to the expansion of the internet into the "internet of things", this sort of growth has the capacity to affect privacy and become invasive as well as connective.

As always, fears of an “Orwellian” situation can be allayed by reminding people that the best defense is public access to the information – to know what is taking place and how it works. While there are doubts as to the efficacy of the NGI database and the potential for harm, the fact that we know about its inner workings and limitations could serve as a legal defense wherever a potentially innocent person is targeted by it.

And of course, as the issue of domestic surveillance grows, there are also countless efforts being put forth by “Little Brother” to protect privacy and resist identification. The internet revolution cuts both ways, and ensures that everyone registered in the torrential data stream has a degree of input. Fight the power! Peace out!

Sources: motherboard.com, arstechnica.com, singularityhub.com

The Future of Computing: Brain-Like Computers

It's no secret that computer scientists and engineers are looking to the human brain as a means of achieving the next great leap in computer evolution. Already, machines are being developed that rely on machine blood, can continue working despite being damaged, and can recognize images and speech. And soon, a computer chip that is capable of learning from its mistakes will also be available.

The new computing approach, already in use by some large technology companies, is based on the biological nervous system – specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.

The first commercial version of the new kind of computer chip is scheduled to be released in 2014, and is the result of a collaborative effort between I.B.M. and Qualcomm, as well as a Stanford research team. This "neuromorphic processor" can not only automate tasks that once required painstaking programming, but can also sidestep and even tolerate errors, potentially making the term "computer crash" obsolete.

In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That can hold enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.

For example, computer vision systems only "recognize" objects that can be identified by the statistics-oriented algorithms programmed into them. An algorithm is like a recipe, a set of step-by-step instructions to perform a calculation. But last year, Google researchers were able to get a machine-learning algorithm, known as a "Google Neural Network", to perform an identification task (involving cats) without supervision.
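For readers unfamiliar with the term, "unsupervised" simply means the algorithm finds structure in data without being handed labelled examples. The toy sketch below groups unlabelled points with a bare-bones clustering routine; it is a far simpler technique than the deep neural network Google actually used, and the data and cluster count are made up, but it shows the same idea of learning without supervision.

```python
import numpy as np

def kmeans(points, k=2, iterations=10, seed=0):
    """Bare-bones k-means: groups points into k clusters without any labels."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iterations):
        # Assign each point to its nearest center...
        distances = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # ...then move each center to the mean of the points assigned to it.
        centers = np.array([points[labels == i].mean(axis=0) for i in range(k)])
    return labels, centers

# Two unlabelled blobs of 2-D points; the routine discovers the grouping on its own.
rng = np.random.default_rng(1)
points = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
labels, centers = kmeans(points)
print("Discovered cluster centers:\n", centers.round(2))
```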

And this past June, the company said it had used those neural network techniques to develop a new search service to help customers find specific photos more accurately. Then this past November, researchers at Stanford University came up with a new algorithm that could give computers the power to more reliably interpret language. It's known as the Neural Analysis of Sentiment (NaSent).

A similar concept known as Deep Learning is also looking to endow software with a measure of common sense. Google is using this technique with its voice recognition technology to aid in performing searches. In addition, the social media giant Facebook is looking to use deep learning to help it improve Graph Search, an engine that allows users to search activity on their network.

Until now, the design of computers was dictated by ideas originated by the mathematician John von Neumann about 65 years ago. Microprocessors perform operations at lightning speed, following instructions programmed using long strings of binary code (0s and 1s). The information is stored separately in what is known as memory, either in the processor itself, in adjacent storage chips or in higher capacity magnetic disk drives.

By contrast, the new processors consist of electronic components that can be connected by wires that mimic biological synapses. Because they are based on large groups of neuron-like elements, they are known as neuromorphic processors, a term credited to the California Institute of Technology physicist Carver Mead, who pioneered the concept in the late 1980s.

These processors are not "programmed" in the conventional sense. Instead, the connections between the circuits are "weighted" according to correlations in data that the processor has already "learned." Those weights are then altered as data flows into the chip, causing them to change their values and to "spike." This, in turn, strengthens some connections and weakens others, reacting much the same way the human brain does.
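A crude software analogy of that spike-and-adjust cycle might look like the following. This is a toy sketch of a single leaky integrate-and-fire neuron with a Hebbian-style weight update; the constants, threshold and update rule are arbitrary choices for illustration, and real neuromorphic chips implement this behaviour in hardware rather than in Python.

```python
import numpy as np

class ToyNeuron:
    """A single leaky integrate-and-fire neuron with Hebbian-style plasticity."""

    def __init__(self, n_inputs, threshold=1.0, leak=0.9, learning_rate=0.05, seed=0):
        self.weights = np.random.default_rng(seed).uniform(0.1, 0.5, n_inputs)
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak
        self.learning_rate = learning_rate

    def step(self, inputs):
        """Feed one time-step of input spikes (0s and 1s); return True if the neuron spikes."""
        # The membrane potential leaks a little, then accumulates weighted input.
        self.potential = self.leak * self.potential + float(self.weights @ inputs)
        if self.potential < self.threshold:
            return False
        # The neuron spikes: reset, then strengthen the weights of inputs that were
        # active and slightly weaken the others (a rough Hebbian rule).
        self.potential = 0.0
        self.weights += self.learning_rate * (inputs - 0.5)
        return True

neuron = ToyNeuron(n_inputs=4)
pattern = np.array([1, 1, 0, 0])  # the same two inputs keep firing together
for t in range(20):
    if neuron.step(pattern):
        print(f"t={t}: spike, weights now {neuron.weights.round(2)}")
```

Running it shows the connections carrying the repeated pattern growing stronger with each spike while the idle ones fade, which is the "learning" the chip designers are after.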

In the words of Dharmendra Modha, an I.B.M. computer scientist who leads the company's cognitive computing research effort:

Instead of bringing data to computation as we do today, we can now bring computation to data. Sensors become the computer, and it opens up a new way to use computer chips that can be everywhere.

One great advantage of the new approach is its ability to tolerate glitches, whereas traditional computers cannot work around the failure of even a single transistor. With the biological designs, the algorithms are ever changing, allowing the system to continuously adapt and work around failures to complete tasks. Another benefit is energy efficiency, an inspiration likewise drawn from the human brain.

The new computers, which are still based on silicon chips, will not replace today's computers, but augment them, at least for the foreseeable future. Many computer designers see them as coprocessors, meaning they can work in tandem with other circuits embedded in smartphones and in the centralized computers that run computing clouds.

However, the new approach is still limited, thanks to the fact that scientists still do not fully understand how the human brain functions. As Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, put it:

We have no clue. I’m an engineer, and I build things. There are these highfalutin theories, but give me one that will let me build something.

Luckily, there are efforts underway designed to remedy this, with the specific intention of directing that knowledge towards the creation of better computers and AIs. One such effort is the Center for Brains, Minds and Machines, a new research center financed by the National Science Foundation and based at the Massachusetts Institute of Technology, together with Harvard and Cornell.

Another is the California Institute for Telecommunications and Information Technology (aka. Calit2) – a center dedicated to innovation in nanotechnology, life sciences, information technology, and telecommunications. As Larry Smarr, an astrophysicist and the director of the Institute, put it:

We’re moving from engineering computing systems to something that has many of the characteristics of biological computing.

And last, but certainly not least, is the Human Brain Project, an international group of 200 scientists from 80 different research institutions, based in Lausanne, Switzerland. Having secured the $1.6 billion they need to fund their efforts, these researchers will spend the next ten years conducting research that cuts across multiple disciplines.

This initiative, which has been compared to the Large Hadron Collider, will attempt to reconstruct the human brain piece-by-piece and gradually bring these cognitive components into an overarching supercomputer. The expected result of this research will be new platforms for “neuromorphic computing” and “neurorobotics,” allowing for the creation of computing and robotic architectures that mimic the functions of the human brain.

When future generations look back on this decade, no doubt they will refer to it as the birth of the neuromorphic computing revolution. Or maybe just the Neuromorphic Revolution for short, but that sort of depends on the outcome. With so many technological revolutions well underway, it is difficult to imagine how the future will look back and characterize this time.

Perhaps, as Charles Stross suggests, it will simply be known as "the teens", that time in pre-Singularity history where it was all starting to come together, but had yet to explode and violently change everything we know. I for one am looking forward to being around to witness it all!

Sources: nytimes.com, technologyreview.com, calit2.net, humanbrainproject.eu