Ahead of FutureScot’s Digital Justice and Policing 2019 event in Edinburgh on 29 October, Davie Gow, Deputy CTO for Leidos Civil UK, caught up with Leidos Vice President and Tech Fellow John Mears, who specialises in biometrics, identity management, and forensics.

Mears uses technology to help improve citizen services, homeland security, law enforcement, defence, and intelligence operations. What follows is an interview with Mears covering some of the themes Davie discussed with him.

What’s your view on facial recognition?

Facial recognition is simply another biometric-based form of identification and one that has been used by humans throughout our history as a species. Since the earliest aggregation of humans in groups and villages, we’ve learned to recognise each other by our faces, our bodies, our voices, our hands and fingerprints, our signatures, even our body odour. However, faces are special to us. Our brains seem to be “wired” to recognise familiar faces, and there’s even a specialised portion of the brain dedicated to facial recognition – the fusiform gyrus.

On average, we can recognise between 1,000 and 10,000 faces over our lifetimes, though we often forget the names. Certain people, those with prosopagnosia, can’t recognise faces at all, while others, called “super-recognisers”, excel at facial recognition. However, these people comprise less than 2 percent of the population in each case. Most of the rest of us are somewhere in the middle, and we sometimes struggle with recognising faces and remembering names. The situation is exacerbated because we’ve grown into a global society with an estimated population of 7.6 billion.

Given our societal mobility and access to global e-commerce, identification by many means – including facial recognition – has become an imperative for convenience, facilitation of commerce, and security. We’ve seen the advances in convenience with such things as the introduction of the iPhone X, showing that many people are entirely satisfied with facial recognition technology for personal use. For facilitation of commerce, the travelling public, particularly where it’s offered, have enthusiastically welcomed dispensing with boarding passes and ID cards in favour of facial recognition for international flights. In this scenario, match accuracy approaches 99 percent, versus measured human visual passport inspection accuracy of 80 percent or less.

The topic of most debate is the security application of facial recognition, particularly for police investigative purposes. Given the increased accuracy of current facial recognition algorithms, and the pressure to solve more crimes faster with diminishing budgets, it is understandable that police agencies want to use more automated tools, including facial recognition. Such powerful algorithms can increasingly operate on off-angle (non-frontal) probe images and lower-resolution surveillance video captures, expanding the sources of information that these systems can meaningfully ingest and process.

My view of automated facial recognition technology overall is that it has finally reached the stage of maturity and accuracy that it can be a supplement – or even an alternative biometric – to fingerprint and iris recognition for many applications. It is a powerful tool for convenience, facilitation of commerce, and security under appropriate use cases, policy, and law. 

What about surveillance technologies?

Surveillance is the use of humans or automation to persistently observe an environment in order to derive intelligence, detect adverse behaviour or – when recorded – to forensically analyse the circumstances leading up to an event of interest (perhaps for purposes of attribution). There are many forms of surveillance, including aerial imagery, data mining, social network analysis, computer, communications, RF (including RFID and geolocation), geophysical, audio (e.g. gunshot detection and location) and video surveillance. Video surveillance is widely accepted and deployed in many cities, particularly where there are known threats to security or a desire to implement smart-city applications for efficiency. Such video surveillance is often networked into city operations centres, with human operators on watch, or combined with automated video analytics.

Video analytics are computer algorithms specially designed to analyse video streams and automate alerts to threats or violations of policy or law. Such analytics can operate on people in the video scene, or on objects, such as cars or their number plates. When operating on people, functions can include people counting, flow monitoring, presence (or absence), tracking through multiple camera views and, when associated with facial recognition algorithms, identification or classification (e.g., gender, age estimate, ethnicity estimate). The security applications are numerous, and smart cities are increasingly using such techniques to increase safety and reduce the cost of services, whether for the convenience and flow of transport or the implementation of policies (like parking restrictions or access to the inner city).
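To make the simplest of those functions concrete, here is a minimal people-counting sketch using OpenCV’s stock HOG pedestrian detector. It is an illustration under stated assumptions, not a production system: the video file name is hypothetical, and real deployments use far more robust detection and tracking models.

```python
import cv2

# Stock pedestrian detector shipped with OpenCV (HOG features + linear SVM).
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# "city_feed.mp4" is a hypothetical recorded CCTV clip; a live camera
# could be opened with cv2.VideoCapture(0) instead.
capture = cv2.VideoCapture("city_feed.mp4")

frame_index = 0
while True:
    ok, frame = capture.read()
    if not ok:
        break  # end of stream
    # Detect person-shaped regions; each returned rectangle is one candidate.
    rects, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    print(f"frame {frame_index}: approx. {len(rects)} people in view")
    frame_index += 1

capture.release()
```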

The most common form of surveillance in civilian use today – video monitoring of roads and cities – is very useful for traffic-flow monitoring, security monitoring, and emergency dispatch awareness, and this infrastructure will also be foundational to some functions of smart-city evolution. However, city surveillance with real-time facial recognition should be governed by local policy and law and used only in limited, prescribed circumstances.

We tend to think of these types of applications for identifying criminals or terrorists, but there are other applications of the technology, like finding missing children, identifying exploited children, and identifying missing or disoriented adults (e.g. with amnesia or Alzheimer’s disease). I believe video facial recognition is always warranted for forensic analysis after an emergency event, especially when no other useful evidence is immediately found and the need is urgent. The Boston Marathon bombing of April 15, 2013 and the London bombings of July 7, 2005 are examples of appropriate use.

I believe that policy and law tend to lag leading-edge technology and at times may slow the achievement of a necessary balance between salient factors such as security and privacy. To accelerate the convergence of technology and law, we need to educate our policy and lawmakers about the capabilities – and limitations – of the technology so that science and facts can inform and accelerate the production of good policy and law.

Why do you think people are concerned with facial recognition and surveillance technologies? 

I believe this question requires two separate answers, and I will address surveillance first.

With respect to surveillance, there are different factors, both historical and current, driving some people’s concerns. Recent reports of Facebook data misuse have led to concerns that large accumulations of data about citizens (Facebook users) can be abused. In fact, the Facebook-Cambridge Analytica scandal could be characterised as a type of surveillance in the form of social network analysis. By transference, video surveillance is met with scepticism about the purity of its intentions. Such concerns are amplified by the exaggerated and unrealistic portrayals of surveillance capabilities in TV shows and films.

A recent BBC report of citywide surveillance with tightly coupled facial recognition in present-day China has driven fears that such technology could be used for purposes of oppression by the Chinese government. In spite of the very different governmental structures and values of western democratic countries, some people are concerned that if it could happen in China, it could happen elsewhere. Some people (I’m not among them) simply mistrust their governments, even democratic ones, and this extends to their police agencies, especially when they use surveillance and facial recognition.

With respect to facial recognition, there are four prominent areas of concern.

People are concerned that automated facial recognition makes mistakes and might implicate innocent people in a law enforcement application, an assertion that misrepresents the significance of facial recognition in investigative processes. When a photo of an unknown suspect (perhaps from a surveillance video) is applied as a probe against, for example, a mug-shot database, multiple possible candidates – the best matches – are returned for investigative consideration. Candidates are then ruled in or ruled out based on other evidence (e.g. motive, proximity, and opportunity).

The ability to do this can provide plausible new leads in an otherwise stalled investigation. The real question about automated facial recognition (e.g. against mug-shot databases) is “is it better than previous processes?” In almost all cases, the answer is “yes.” Certainly, in the passport-picture matching application, currently available automated facial recognition exhibits 99 percent accuracy versus 80 percent or less for human passport inspectors. Can mistakes be made? Yes. Are the mistakes fewer than with prior processes? Again, yes. Both processes, law enforcement investigations and border security, are made much better – though not flawless – through the use of automated facial recognition algorithms.
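To make the candidate-list step concrete, here is a minimal sketch of that kind of 1:N search, assuming face images have already been converted to fixed-length feature vectors (“templates”) by a recognition model. The function and names are illustrative, not any vendor’s actual API.

```python
import numpy as np

def top_candidates(probe: np.ndarray, gallery: np.ndarray, k: int = 10):
    """Rank enrolled templates by cosine similarity to the probe.

    probe:   feature vector for the unknown face, shape (d,)
    gallery: one feature vector per enrolled mug shot, shape (n, d)
    Returns indices and scores of the k best matches, best first.
    """
    # Normalise so a plain dot product equals cosine similarity.
    probe = probe / np.linalg.norm(probe)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = gallery @ probe
    best = np.argsort(scores)[::-1][:k]
    return best, scores[best]

# Hypothetical usage: 512-dimensional templates for 100,000 enrolled mug shots.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(100_000, 512)).astype(np.float32)
probe = rng.normal(size=512).astype(np.float32)
indices, scores = top_candidates(probe, gallery)
```

Note that such a system returns a ranked short list for human review, not a verdict; as described above, investigators then rule each candidate in or out using other evidence.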

People are concerned that automated facial recognition systems might exhibit racial bias. Facial recognition algorithms do exhibit different error rates based on the gender and race of the subjects in the gallery. However, it is inappropriate to call those error rates “bias”, since machines aren’t capable of exhibiting what we humans call bias. Interestingly, the latest U.S. National Institute of Standards and Technology (NIST) tests show that the top-performing algorithms actually work better with black faces than with white faces.
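To show what “different error rates” means in measurable terms, here is a hedged sketch of how a per-demographic false match rate might be computed, in the spirit of NIST’s demographic evaluations. The data below are made-up placeholders, not NIST’s test harness or results.

```python
import numpy as np

def false_match_rate(scores: np.ndarray, same_person: np.ndarray, threshold: float) -> float:
    """Fraction of impostor comparisons (different people) scoring at or above threshold."""
    impostor_scores = scores[~same_person]
    return float((impostor_scores >= threshold).mean())

# Hypothetical comparison results, grouped by a demographic label.
# scores: similarity per comparison; same_person: ground-truth mate/non-mate flag.
groups = {
    "group_a": (np.array([0.91, 0.42, 0.38, 0.88]), np.array([True, False, False, True])),
    "group_b": (np.array([0.85, 0.55, 0.47, 0.93]), np.array([True, False, False, True])),
}
for name, (scores, same_person) in groups.items():
    print(name, false_match_rate(scores, same_person, threshold=0.5))
```

A gap between the per-group rates at a fixed threshold is exactly the kind of measured difference the NIST evaluations report.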

There are some people who equate privacy with anonymity. These are two different concepts. It is possible to be anonymous to other people, and still have our privacy invaded (like the robo-calls that seem to come at suppertime). It is possible to be known but retain privacy (the right to be left alone). For surveillance cameras in public city settings, there is no expectation of privacy, nor should there be an expectation that you will be anonymous there. This is not an issue for most people, unless they commit a crime, or the surveillance technology is abused in some way.

There are people who believe that automated facial recognition capability will be used to “stalk” people, thus invading their privacy. This allegation is most often levelled at police or people who operate city surveillance technology. At a conference a couple of years back, I heard a UK policeman ask if anyone really thought that the police have the time and resources to sit and watch individuals without the motivation of a major crime or terrorist attack. He went on to answer his rhetorical question by saying that they had neither the time nor the resources. 

At the turn of the 20th century, Eastman Kodak introduced the Brownie camera, an event that democratised access to photography for the masses. This was deemed at the time to be enough of a threat to privacy that its use was banned in some public spaces, including public beaches and around the Washington Monument in Washington, D.C. Today nearly everyone carries a phone with a high-quality camera and video capability, constantly documenting the world around us. Technology evolves, laws and policy follow, and society adapts.

Will governments and organisations continue to clamp down on their use?

Public servants and elected officials have the opportunity to understand applications that balance convenience, security, privacy, and flow of commerce, and then make thoughtful, transparent, and auditable decisions to use the technology responsibly. There is much value in using the technology to make our society safer and to save money through the correct analysis of data.

How can threats of misuse be mitigated? 

There are regulations, best practices and practical experiences from users around the world that can be used as examples of how to best use the technology in effective and respectful ways. In the UK, government users of personal data have to develop and publish a Data Protection Impact Assessment (DPIA). The DPIA must “describe the nature, scope, context and purposes of the processing; assess necessity, proportionality and compliance measures; identify and assess risks to individuals; and identify any additional measures to mitigate those risks.” 

In the U.S., there’s an analogous federal-agency publication called a Privacy Impact Assessment (PIA). In the EU, the GDPR requires a similar instrument, also called a Privacy Impact Assessment (PIA) or a Data Protection Impact Assessment (DPIA). These are all just different ways to assure the citizens of democratic governments that such systems are properly used. Citizens are both informed and reassured when governments enforce transparency and publish how such advanced technology will be used to the benefit of society.

In the case of police agencies, the agencies are understandably reluctant to reveal all the tools, techniques and sources used in investigations, lest they tip off criminals to ways of avoiding capture and prosecution, or compromise their assets. However, in the case of facial recognition and surveillance, I believe the best policies are built around transparency, where transparency and accountability serve as deterrents both to crime and to abuse of power. Police agencies should publish their policies and practices for the use of facial recognition and overt surveillance. Such policies and practices should be underpinned by appropriate oversight, periodic audits, and tools for insider-threat detection and deterrence.

To register for the last remaining places at Digital Justice & Policing, visit the link here.