Scotland is “open for business” when it comes to artificial intelligence (AI), innovation minister Richard Lochhead told an audience of investors in London in September. The message was clear: if you are a company developing “ethical” AI systems, we want your innovations here.

It was a sentiment that resonated with Mark Watson, chief architect for civil at Leidos, the US-headquartered IT services giant. 

Watson, who was impressed by the minister’s speech at the CogX tech event, said: “It was interesting and pleasing to hear that, especially given the naysayers around AI at the moment. On the one hand you have a Scottish Government minister being open to the possibilities of AI, and on the other hand you have people wanting to put the brakes on it.

“For me, the innovation potential – if done correctly and safely, with ethics and trustworthiness at the heart – can be revolutionary for people and systems. We just need to go about it in the right way.”

Watson will be among the speakers at Digital Scotland in Edinburgh on 21 November. He will participate in an AI-themed panel discussion with the Scottish Government’s chief data officer, Tom Wilkinson, and Professor Shannon Vallor of the University of Edinburgh, who served as a consulting AI ethicist for Google’s cloud AI program. 

The debate will centre on the emerging role of AI in government services, with a particular focus on ethics, trustworthiness and inclusivity – the three pillars of Scotland’s national AI strategy.

Although governments around the world have begun to explore generative AI, few have taken the plunge in meaningful ways. Notable exceptions include Japan’s agricultural policy unit, one of the first to release ChatGPT into the govtech arena, and Dubai’s Electricity and Water Authority, which has also embraced OpenAI’s technology.

The public sector tends to be more cautious in its adoption of cutting-edge, transformative technologies – particularly one that has provoked such global debate. Cabinet Office guidance currently places restrictions on the use of generative AI platforms, and the AI Safety Summit at the beginning of November is expected to influence future policy developments. But for Watson, the opportunity to innovate with AI, with the right checks and balances in place, is paramount.

“Leidos is a science, engineering and technology company with years of experience of working in AI,” says Watson.

“We take an academic approach, similar to some of Scotland’s leading universities and data institutes. We have many PhD holders whose roles are to develop AI in a way that is reliable, resilient, and secure – with tools that increase human trust.

“That means providing transparency and eliminating bias in the algorithms that we develop, and it’s a mission that we are committed to.”

But he does acknowledge the problems. “While it’s clear that there are challenges and concerns when it comes to AI, such as deep fakes and misinformation, it’s important to recognise that these challenges can be overcome. 

“Trust in AI is indeed one of the most significant hurdles facing both Scotland and the global community, but it’s also an opportunity for growth and progress. 

“By collectively adopting a co-ordinated approach and establishing the necessary regulatory frameworks, we can pave the way for a future where AI, in partnership with humans, enables us to unlock its incredible benefits, fostering innovation, collaboration, and a brighter future for all.” 

Watson also speaks of some of the inherent issues with bias, both in data and in the humans who build AI models. He mentions a case in the United States, where algorithms predicting reoffending in a criminal justice context were found to be biased against certain groups.

AI systems should be rigorously tested, he says, to ensure that the underlying data is not skewed in particular directions and remains fair and impartial. “We also need to ensure that there is transparency. It is no good having an AI system and saying to people, ‘just trust us’ – we need to show and evidence that trust.

“Openness in AI is going to be a really important facet for designing new services, especially in the public sector where scrutiny and standards are so high.”

He adds: “Having different perspectives is really important to ensuring bias is not prevalent in AI systems. Making sure the people who design, build and operate them bring a wide variety of experience and perspectives is essential to minimising unconscious bias.”

One thing is clear, though. This year has seen a global step change in public attitudes to AI. Before the advent of ChatGPT, which garnered headlines around the world at the turn of 2023, there wasn’t as much recognition or awareness of AI technology. 

According to the Office for National Statistics, which measured UK public sentiment on AI in June, awareness appears to have increased over the past 12 months. In its Opinions and Lifestyle Survey, collected in May 2023, 72 per cent of adults could give at least a partial explanation of AI, compared with 56 per cent in the Centre for Data Ethics and Innovation’s Public Attitudes to Data and AI Tracker Survey, collected in June and July 2022.

In the World Economic Forum’s Future of Jobs Report 2023, a global study, 50 per cent of surveyed companies expect AI to create jobs growth and 25 per cent expect it to create job losses between 2023 and 2027.

Watson says: “It’s clear that the mood is shifting. People are sceptical about AI, and a level of caution is right. But at the same time, more and more people are using platforms like ChatGPT. It’s just becoming part and parcel of how we live our lives. 

“The education sector is going to be one of the trickiest to manage in that regard. If we can use large language model (LLM) systems to teach ourselves, it does have an impact on the classroom and learning, and the future of what good teaching looks like.” 

He adds: “For me, we can potentially do incredibly innovative things around teaching, with self-directed learning, harnessing an algorithm to respond to the way our children learn, which could in turn help teachers to manage that learner’s journey – and perhaps even be more accommodating of the rate at which pupils progress. Because we are all different.”

Watson warms to his theme. It’s clear for him that AI is a journey of discovery, and should be regarded as such, like any scientific breakthrough. 

He cites healthcare, where AI adoption has already been trialled in fields as diverse as oncology and dermatology, but again with the caveat that openness and transparency – and informed consent from patients – are key to integrating any use of AI systems in the sector. DeepMind, the AI company, fell foul of data regulators in 2017 when it used data belonging to NHS patients without their consent.

“We have to do it in the right way, with people at the heart of it,” he adds. “Because at the end of the day, in whatever field we apply it, AI can look at the patterns in data and understand things that human analysts just won’t be able to find. 

“In healthcare, most clinical experts suggest that they would always want to be able to keep a human in the loop, to validate computer-driven results – something I would advocate for.” 

If we are to harness the power of AI, governments also need to incentivise data-sharing, adds Watson. At the moment, he says, there are too many examples where data is siloed and not shared, because of fears over data protection and regulatory penalties. 

He says: “A very human response is that if I share this data, it might do some good, but I also might get in trouble, so I don’t want to take that risk. 

“These are all thorny issues that we have to contend with if we are going to fully embrace AI. And it does all come back to people, process and technology. If we are only thinking about the technology, then we will have missed a trick.”


Mark Watson is among speakers at Digital Scotland 2023

Partner Content in association with Leidos