
Is Scotland ready for artificial intelligence?

Photograph: Rafi Andhika P/Shutterstock.com

Scotland’s public sector has done much to define its approach to artificial intelligence (AI) over the last few years. At the heart of the national AI strategy – published by the Scottish Government in 2021 – is the commitment that the technology should be underpinned by “ethical, trustworthy and inclusive” principles.

Part of that trust relies on transparency and openness, and the Scottish AI Register was launched in March 2023 to provide a window into how the technology is being used by public sector bodies. 

To date, two projects on the register have explored how AI can improve the way public services are delivered.

Connecting You Now, a solutions provider, registered its AI platform for giving disabled people better access to public services. The firm’s chief executive credited the register with prompting his team to consider how AI might comply with equalities and human rights legislation.

And the Scottish Children’s Reporters Administration (SCRA) has described how it intends to investigate the potential benefits of AI to help keep children safe, by looking at patterns in data within the context of the Children’s Hearings System. 

The Indication System for Sexual Exploitation Risk (ISSER) would help Children’s Reporters review existing written information provided to them about the risk of sexual exploitation for individual children.

Tom Wilkinson, the Scottish Government’s chief data officer, says he has been encouraged by the cautious use of AI thus far, and that the register is a real selling point for Scotland: it gives “structure and guidance” to organisations seeking to deploy the technology, as well as confidence to the market on the standards expected of them.

“The AI Register is essentially a public resource where we make visible the uses of AI across the public sector in Scotland,” he says. 

“There’s not an equivalent to that in the UK, and the idea is that the way in which AI is deployed is described publicly. The EU AI Act is also currently talking about the transparency of corpuses of data used for algorithmic training purposes.”

Quite when AI will be ready to be deployed routinely on public sector data, however, remains an open question. For the technology to work, the underlying datasets must be in good shape.

Wilkinson explains: “I think there are great opportunities in the works for the Scottish Government in terms of the way we support digital across government and the public sector. And so, more than anything, as an enabler to things like AI, I think it’s really important to make sure that public data is all speaking the same language.

“We won’t be able to make full use of AI unless public data is in a good place in the first place. And that’s something we’re all working on.”

The Scottish Government’s innovation accelerator – CivTech – has been at the forefront of some of that work. It has increasingly seen AI specified in responses to the public sector challenges it sets on behalf of government agencies looking for technology to transform services. 

In a NatureScot challenge earlier this year, Informed Solutions – the successful solution provider – looked to harness AI and geospatial technology to better understand and manage protected environmental areas.

In the third sector, Citizens Advice Scotland also deployed an AI solution last year to improve national call-handling capabilities, and launched a data-driven innovation programme to enhance its internal systems. 

Outside of CivTech, Marine Scotland has experimented with AI to count and categorise fish species around potential sites for offshore wind turbine development.

The advent of more recent generative AI platforms such as ChatGPT represents a different challenge to the public sector, though. 

Developed by San Francisco-based OpenAI, the large language model technology has dramatically altered the global conversation on AI.

Systems such as Bard, developed by Google, are having a similar impact, with organisations – public and private – grappling with the risks and opportunities of using computer-aided decision-making.

AI policy is a matter reserved to Westminster, and even though Scotland has its own distinct approach, the current UK Civil Service guidance on AI is to proceed with caution. 

The official guidance, published online on 29 September, states: “You are encouraged to be curious about this new technology, expand your understanding of how they can be used and how they work, and use them within the parameters set out within this guidance.”

Wilkinson says: “The position of the Civil Service is not a blanket ban on the use of ChatGPT or similar; it encourages exploration of these tools to try and find what value can be added to public services. 

“However, it emphasises some of the considerations one would want to make when using these kinds of tools, for example in information security. So, civil servants would need to bear in mind that they couldn’t put anything into it that was sensitive or related to individuals’ data.”

He adds: “I think it would be totally permissible to use it for desk research, where the kind of questions you were asking weren’t… going to essentially reveal a controversial policy intent or something like that.”

On the AI policy front, the UK Government hosted the AI Safety Summit at Bletchley Park, the wartime code-breaking centre in Buckinghamshire, on 1 and 2 November.

Viscount Camrose, the minister for AI in the Department for Science, Innovation and Technology, recently revealed that he had used ChatGPT to scan lengthy legislative texts and provide a summary document. 

With his responsibility for AI, Wilkinson obtained a licence for GPT-4 to test it out and understand how its capabilities differ from the previous 3.5 version. He also tested Microsoft’s Copilot tool, which integrates GPT-4.

He says that while the platform does a “very good job of writing very quickly, very generic pieces of text”, it can be prone to “misleading responses”, and in some instances, the more you probe the large language model, the more it will “back away” from its previous answers.

He says: “I guess the issue with large language models is that they can develop rules of thumb that no human ever gets to develop because of the volume of information that they’ve used. 

“There’s no kind of systematic picture of how different things in the world relate to each other. It’s just a rule of thumb based on the average of what people have written over the internet.”

As to whether large language models will make it into routine use within the public sector, Wilkinson says it would be speculation at this stage and that a great deal of testing is still required, but it “seems pretty likely” the technology could “speed up” much of the work undertaken by government officials, such as desk research, drafting reports and summarising information.

He says: “It seems likely that it could shave quite a bit of time – and maybe boring time – off someone’s job where they would otherwise be churning out boilerplate text around something they’re following, I guess like a recipe, rather than actually doing the sort of creative, systematic thinking that people are probably still much better at.” 

He cautions, again, that any such use would ideally be overseen by a human-in-the-loop validation process.

The UK Government’s recent “pro-innovation” white paper on AI and the Bletchley Park event will likely feed into a wider policy programme to regulate AI, and perhaps eventually a formal Act of Parliament.

The EU has already set out its stall on that front. Its AI Act has not quite reached the statute books, as lawmakers are still agreeing the finer points of regulation. But in its vision, it has tried to conceive of legal frameworks around the use of generative AI. 

Plans include ensuring that platforms like ChatGPT are transparent – disclosing when content has been generated by AI – and that large language models are designed “not to produce illegal content”.

Its framing of the potential regulatory landscape specifies “different rules for different risk levels”, which is understandable given AI’s growing role in medical diagnostics and other specialised, high-risk fields.

Scotland’s innovation minister, Richard Lochhead, recently said that Scotland is “open for business” on AI, echoing the pro-innovation stance taken south of the border. But he has also signalled a desire for a more joined-up approach between the UK’s devolved nations.

As governments contend with AI in the coming months, we should expect more of these conversations to take place. Whether they keep up with the rapid evolution of the technology is another matter entirely. 


Tom Wilkinson is among speakers at Digital Scotland 2023
