Artificial intelligence – including live facial recognition technology – moved a step closer to frontline policing yesterday at the first-ever Scottish biometrics conference.

Senior officers indicated that they are seeking to ‘widen the debate’ when it comes to the use of the controversial technology – which has been used by forces in London and South Wales.

Police Scotland, the second-largest police force in the UK, currently maintains a policy of not using AI-powered live facial recognition systems in operational policing.

But the first-ever Scottish biometrics conference – hosted at the Radisson Blu hotel in Edinburgh – provided a platform for high-ranking officers to discuss its potential use in future.

Brian Plastow, Scotland’s biometrics commissioner, has already stated that he would support the use of the technology where there is a “significant threat” to public safety.

Andy Freeburn, Assistant Chief Constable at Police Scotland, said that whilst he couldn’t give a ‘firm commitment’ to its roll-out in Scotland, he was in favour of “exploring” the use of the tech.

He said: “I’m not going to put a timeframe on it…but I think we should start exploring that; it’s not a firm commitment but this is about a conversation with the Scottish public, it’s not with the UK, as there may be different views. And we saw those different views being expressed when we implemented cyber kiosks. So, I’m now going to move forward to put together a plan on how we would consult and how we would engage.”

He said he would be keen to put together a framework for how those views might be expressed, and how public scrutiny might work, but cautioned: “And it might be the case that this is outrightly rejected, that actually this isn’t what the Scottish public want. And that’s fine, but I’ve got to balance that against our desire to do our statutory duty…and I actually think it would help us keep people safer, but I need to listen to the public’s views on this as well.”

In his presentation to delegates, Assistant Chief Constable Freeburn revealed that he had been hired by Police Scotland partly on the strength of a talk he gave on what the ‘limits of technology’ in policing should be.

As executive lead for organised crime, counter-terrorism and intelligence, Freeburn has been in post since early 2022, and he pointed to a series of tech-led implementations within the force since his arrival.

Last year, with oversight from the Scottish Police Authority, the force adopted the use of the Home Office’s Child Abuse Image Database Facial Matching system (CAID FM), which helps to speed up the process of identifying victims of child sexual abuse and reduces the impact on officers required to view potentially traumatic material.

That system was introduced as part of a new ‘rights-based pathway’ developed by the force following the controversy around the introduction of so-called ‘cyber kiosks’, which sparked a backlash from privacy campaigners before a phased roll-out began in 2020. The kiosk software, designed to search mobile phones for incriminating digital evidence, prompted an internal review of how the force handles the introduction of new technologies.

As part of the new guidance, the force must demonstrate that it has answered in full 11 ethical questions, designed in conjunction with the UK Government’s Centre for Data Ethics and Innovation, which put the onus on balancing privacy and human rights against the potential risk to public safety of not introducing a potentially transformative technology.

He said that the process requires multiple steps, including equality and human rights impact assessments, and seeks to quantify the level of public concern around a particular technology. If project risks are rated low, a technology can generally proceed; the higher the perceived risk, the more public consultation, engagement and scrutiny is required. The force works in conjunction with the Scottish Police Authority and varies its approach according to the technology it is proposing to use.
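
By way of illustration, the triage Freeburn described might be sketched as follows. This is a hypothetical reconstruction in Python based solely on the description above – the tier names, thresholds and field names are invented, and it is not Police Scotland’s actual tooling.

```python
from dataclasses import dataclass

# Hypothetical engagement tiers: the force says only that low-rated risks can
# generally proceed, while higher perceived risk demands more consultation,
# engagement and scrutiny.
ENGAGEMENT_BY_TIER = {
    "low": ["internal approval"],
    "medium": ["internal approval", "public consultation"],
    "high": ["internal approval", "public consultation",
             "external scrutiny with the Scottish Police Authority"],
}

@dataclass
class EthicsAssessment:
    all_11_questions_answered: bool    # the 11 ethical questions, answered in full
    impact_assessments_complete: bool  # equality and human rights assessments
    perceived_risk: float              # 0.0 (low) to 1.0 (high); scoring invented here

def required_steps(assessment: EthicsAssessment) -> list[str]:
    """Map an assessment to the engagement steps a proposal would need."""
    if not (assessment.all_11_questions_answered
            and assessment.impact_assessments_complete):
        raise ValueError("Assessment incomplete: the proposal cannot proceed")
    if assessment.perceived_risk < 0.33:
        tier = "low"
    elif assessment.perceived_risk < 0.66:
        tier = "medium"
    else:
        tier = "high"
    return ENGAGEMENT_BY_TIER[tier]
```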

“I think we’ve laid a really strong foundation in Police Scotland,” Freeburn said. “I think we’ve learned from challenges we’ve faced around cyber kiosks and we continue to learn and review. But we continue to bring in new technologies in a reflective and responsible way. We need to delicately balance that need to uphold human rights, whilst ensuring we make the best use of new technologies to fulfil our statutory obligations to keep the people of Scotland safe.”

He stressed that the force cannot allow criminals to get ahead in their use of new technologies – and that it must be able to “fight fire with fire”.

He added: “My final point is I think we do need to get into the potentially difficult and divisive topic of live facial recognition. We need to look at the limits of artificial intelligence and really I hope that today is that first step in a wider debate, that we continue to be proportionate in our use of technology to keep the public safe in our increasingly digital world.”

Freeburn’s points echoed those made earlier in the day by Chief Constable Jo Farrell, who opened the conference by saying that while the force has never used live facial recognition, the Scottish biometrics commissioner had indicated it could be used in appropriate circumstances. During a later panel discussion, however, she went further, signalling a willingness to explore the use of AI – again by balancing the harms and safety considerations – and drawing on the example of how the technology has been used successfully in the NHS.

But she said that policing had “not yet landed” its message in the way that colleagues in the healthcare sector have, where there is increasing public acceptance of the technology for preventing and treating diseases. In that setting, she said, there is “little pushback” against the use of AI, adding: “Somehow, we need to raise the profile of that harm piece [in policing]. So that people are able to apply the logical mindset, that when somebody within the NHS says, ‘This is how we’re going to use AI, this is how we’re going to use technology’, people are absolutely content.”

She added: “But that mission of saving life, identifying illnesses at an early stage, is something that nobody argues with. And yet, when it comes down to harm and safety, policing, law enforcement, the agencies sitting here, haven’t landed that same message in the same way.”

The wider context

The conference heard how live facial recognition technology is being used in London.

Lindsey Chiswick, director of intelligence at the Metropolitan Police, presented results showing how the UK’s largest police force has been harnessing the tech to crack down on organised street crime in some of the capital’s busiest shopping and tourist destinations. The city has been blighted by street robbery gangs, who target victims wearing high-end Rolex watches and iPhone buyers coming out of stores.

Chiswick described how mobile police vans – equipped internally with live facial recognition screens – enable officers to scan a crowd and pick out known offenders from a ‘watch list’. The technology has already been deployed in robbery hotspots such as Oxford Circus, Regent Street and Croydon, where knife crime levels are particularly high. She stressed that, unlike some systems, the cameras relaying images to the van’s screens automatically pixelate the faces of members of the public who are not on any offender watch list, and that those images are automatically deleted, so there is no infringement of people’s civil liberties through mass data capture and storage.
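
To make that distinction concrete, here is a minimal sketch of the watch-list logic as she described it: each detected face is compared against watch-list templates, possible matches are flagged for an officer to verify, and non-matches are pixelated and discarded rather than stored. The Python below is purely illustrative – the function names, the cosine-similarity approach and the threshold are assumptions, not NEC’s actual implementation.

```python
import numpy as np

MATCH_THRESHOLD = 0.6  # invented operating point; real systems tune this carefully

def process_frame(face_crops, face_embeddings, watchlist_embeddings, watchlist_ids):
    """For each face detected in a frame, either raise an alert (possible
    watch-list match, to be verified by an officer) or pixelate the face
    on screen and retain nothing."""
    alerts = []
    for crop, emb in zip(face_crops, face_embeddings):
        # Cosine similarity against every watch-list template.
        sims = (watchlist_embeddings @ emb) / (
            np.linalg.norm(watchlist_embeddings, axis=1) * np.linalg.norm(emb)
        )
        best = int(np.argmax(sims))
        if sims[best] >= MATCH_THRESHOLD:
            alerts.append((watchlist_ids[best], float(sims[best])))
        else:
            pixelate(crop)  # blur the face on the van's screen...
            # ...and keep no biometric record of the non-match.
    return alerts

def pixelate(img, block=16):
    """Crude mosaic: sample one pixel per block and tile it back out."""
    h, w = img.shape[:2]
    small = img[::block, ::block]
    img[:] = np.repeat(np.repeat(small, block, axis=0), block, axis=1)[:h, :w]
```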

In all, the vans have been deployed 79 times since the force adopted the technology, developed by software firm NEC, in 2020. Officers have made 234 arrests across a range of crime types, including rape, grievous bodily harm and violent robbery, and the deployments have led to the apprehension of suspects who had been on wanted lists for up to 15 years. They have also led to a further 312 stops of individuals subject to conditions, such as high-risk offenders. The LFR system allowed quick checks to be carried out on those individuals, some of whom were found to be in breach of their conditions – with subsequent house searches leading to additional evidence being gathered and further arrests.

The algorithm that operates the LFR was road-tested by the National Physical Laboratory (NPL) before its adoption, Chiswick said, and any updates would trigger further tests to ensure the safety and efficacy of the system. However, the system has produced 15 ‘false positive’ identifications, suggesting that the algorithm is not perfect, despite improvements in recent years. That is a view shared by many privacy campaigners and legal experts, some of whom contributed to the conference.

However, Chiswick said analysis of the Croydon deployment had found LFR to be three times more effective than conventional tactics, such as stop and search, although she conceded it was not an approved study and there was more evaluation work to be done.

“More work needs to be done but we’ve tried our best to take the public with us on this journey,” she said. “Surveys that have been done have consistently and powerfully demonstrated that there is majority support for the use of LFR by law enforcement.” Independent research from the Alan Turing Institute’s Centre for Emerging Technology and Security (CETaS) showed that 67 per cent of the public were comfortable with the use of biometrics to identify criminal suspects in crowded areas. A YouGov poll in October 2023 found 57 per cent support for the technology, and a study from the Information Commissioner in 2021 found 82 per cent support. However, Chiswick referenced another study which found less support for LFR in areas where trust in policing is low, highlighting the need to engage more with certain communities.

Other voices

The conference heard from a range of academics and legal experts who pointed to the need to ensure the safety, efficacy and accuracy of AI systems – with regulatory checks and balances in place to avoid infringing on civil liberties and to allow for the effective presentation of reliable evidence in court.

The challenge for AI developers in future will centre on how they can demonstrate the scientific reliability of algorithms deployed in criminal justice settings – for example, by eliminating biases that may lead to certain racial or gender groups being unfairly targeted.
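
One concrete way to surface such biases is to disaggregate error rates by demographic group and compare them. The sketch below is illustrative only – the data, group labels and function are invented – and computes the false match rate (how often a system wrongly flags a non-matching face) per group.

```python
from collections import defaultdict

def false_match_rate_by_group(trials):
    """trials: iterable of (group, is_genuine_pair, system_said_match).
    For impostor (non-matching) pairs only, measure how often the system
    wrongly declares a match, per demographic group."""
    wrong = defaultdict(int)
    total = defaultdict(int)
    for group, is_genuine_pair, said_match in trials:
        if not is_genuine_pair:
            total[group] += 1
            wrong[group] += int(said_match)
    return {g: wrong[g] / total[g] for g in total}

# Invented data: a fair system should show similar rates across groups.
trials = [
    ("group_a", False, True),  ("group_a", False, False), ("group_a", False, False),
    ("group_b", False, False), ("group_b", False, False), ("group_b", False, False),
]
print(false_match_rate_by_group(trials))  # e.g. {'group_a': 0.33..., 'group_b': 0.0}
```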

There was also a discrepancy between the oversight of policing and that of private biometric or facial recognition systems being used as part of an “overall system of policing and social control”, said Professor Paul Wiles, former chief scientific adviser to the Home Office.

He characterised this as a regulatory “dilemma” given that the Scottish Biometrics Commissioner’s powers currently extend only to the oversight of policing in Scotland.

He said: “Extending the focus would mean that citizens were protected more broadly, but compliance might be more difficult to achieve and lead to pressure for you to move to a rules-based system.

“A pragmatic response could be to extend the commissioner’s remit slightly to include state or quasi state agencies that are exercising policing functions that can have a direct effect on individual citizens,” he added.

Turning to AI specifically, Professor Wiles raised concerns that the courts could start to treat facial image matching as a form of expert evidence, and that the judiciary lacks the skills and experience to assess the validity of AI in this context. As machine learning develops into fully autonomous AI systems, there is a risk that the underlying datasets will not be robust enough to meet the rigorous scientific standards needed to substantiate the systems’ claims.

He said: “AI might well be a new technology, but the old mantra, ‘junk in, junk out’, still applies.”

Problems with reliability are compounded by the fact that large IT vendors often develop generic AI systems that do not account for regional, jurisdictional or cultural differences in the application of the law.

He said: “We in the UK are in a difficult position in this regard, being a small market with devolved criminal justice systems based on two different types of legal system. I suspect that the AI vendors might claim that they could develop a generic system and then fine tune it for each specific context.

“All I can say is that we have seen how very large technology companies have sought to flatten the cultural and political differences between countries in order to globalise their products. This is one of the causes of the current conflicts between the vendors and governments around the world.”