From evaluation to adoption: the coming world of AI in Scottish healthcare
When it comes to AI in healthcare, there is a broad sweep of areas where the technology can be applied. Application is currently focused on – though not limited to – diagnostics, prevention, public health (big data and analytics), devices, pharmaceuticals (drug and vaccine development), Covid-19, and the data governance and security arrangements that surround the sector.
Across the world, countries are moving at different paces when it comes to research, development and commercialisation of the technology; unsurprisingly, the US and China – in terms of commercial research and institute-led research, respectively – are out in front. By geographical area, the EU is second only to China in the number of AI players active across all areas of the economy, and in pure research is actually ahead, according to the European Commission’s 2020 report, AI Uptake in Health and Healthcare.
Weighting the number of players by GDP, the UK stands out as having the greatest number, followed by South Korea. The US, though, takes a distinctly different approach, in that its AI players are predominantly ‘firms’, and its nascent industry is therefore dominated by the private sector. How this will play out when it comes to widespread adoption of artificial intelligence within public healthcare remains to be seen.
When I speak to James Blackwood and Neil Warbrick, who work among a group of Scottish research-led organisations involved in field trials of AI in healthcare, it is clear that the technology is very close to being put into clinical service across a number of different specialisms. Blackwood, chief technology officer at the Glasgow-based Industrial Centre for Artificial Intelligence Research in Digital Diagnostics (iCAIRD) and AI Lead at the Scottish Health and Industry Partnership, says there are currently around 45-50 AI projects being supported within the Scottish AI ‘ecosystem’, some of which are in post-trial evaluation, i.e. very close to being used in service.
Blackwood, who is also NHS National Programme Manager for AI, characterises the steps towards rollout for AI as research, evaluation (including regulatory approval) and finally adoption. He says: “So from an iCAIRD and West of Scotland perspective, we have a portfolio of about 35 AI research evaluation programmes, which we think is probably one of the most comprehensive research programmes in AI and healthcare in the UK and perhaps slightly wider than that.
“That covers radiology and pathology, so it’s a ‘multi-ology’ centre and it covers multiple health boards – across Ayrshire and Arran, Greater Glasgow and Clyde, Lothian and Grampian, which means it’s a national programme and it’s doing a number of things; it delivers, first of all, platforms. You can’t do artificial intelligence at scale without building platforms to do that. So, we’ve got a radiology platform called SHAIP, from a company called Canon. And that allows us to introduce artificial intelligence into a digital pathology lab and we digitise that pathology lab as part of iCAIRD.”
iCAIRD itself is funded by Innovate UK, under the UK Research and Innovation (UKRI) Industrial Strategy Challenge Fund (ISCF) “From Data to Early Diagnosis in Precision Medicine” challenge. It is therefore connected into a wider network of AI research across the UK, and takes part in multi-site trials of the technology on behalf of NHSX in England. The research infrastructure in Scotland has led to the creation – in healthcare – of a National AI Hub, which according to Blackwood ‘gives us the ability to scale and adopt more rapidly’.
The business model for AI in Scotland follows a ‘triple helix’ approach, with the NHS, the private sector and academia working together. Unlike the US, where commercial firms dominate, the technology is developed carefully within the NHS itself, aided by researchers and academics. Canon is an example of the ‘industrial partner’ allowed into that process.
When a product reaches the commercialisation stage, there are contracts in place whereby the NHS is afforded a discount or realises some kind of benefit proportionate to its investment in time and resource. Blackwood says: “We start with the anticipation that all projects are going to have a future. All projects will have a product that we can put into the hands of patients or clinicians.” From a practical perspective, working within the NHS gets a product closer to its ‘target environment’, with a greater chance of success, he adds.
So how long will it be before we start seeing AI working in clinical practice? It feels like an appropriate question, given how much we have been hearing about AI in recent years (Scotland even has a national AI strategy that is over a year old).
Blackwood says: “I think we’re at a tipping point now. For the last three years we’ve pretty much been in a research zone in Scotland. What we’re now starting to see is a higher proportion of those projects being evaluations – checking the product works with our data and processes in Scotland.”
He adds: “So, I don’t think we’re further than 12 to 18 months away from seeing the introduction of artificial intelligence into clinical service in Scotland. If we do this right through something like the AI Hub then you’re talking about national priorities being addressed first, so areas like cancer, like diagnostics, like waiting lists.” He caveats that with the need to ensure the various regulatory approvals are achieved, and ensuring new procurement processes are set up. “It’s all very new but I think we’re nearly there and for the first time this year we’re talking about projects that will go into service.”
The European Commission report shows that Covid-19 has also accelerated many of the AI programmes around the world, as clinicians and the public alike have proved more disposed to novel ways of using technology in healthcare. Looking at the numbers, about 80 per cent of clinicians and/or the public are happy to have artificial intelligence involved in the decision support process associated with treatment, prevention or diagnosis, adds Blackwood.
There are some fields which are progressing faster than others: radiology has long been viewed as fertile ground for such developments, given how medical scans and images can be relatively easily analysed by a trained AI. The Scottish Radiology Transformation Programme’s AI working group has fed back that radiologists – an understaffed discipline where around 60 per cent of roles are unfilled after a year, according to Blackwood – are saying that they are ready for AI to help with ‘decision support’. There does appear to be a little way to go, however, as there is a difference between decision support and fully automated ‘decision-making’, which regulators such as the Care Quality Commission, and regulations such as the Ionising Radiation (Medical Exposure) Regulations (IR(ME)R), do not currently allow.
“They’re very receptive to decision support,” says Blackwood. “And the pilots that we’ve been running show that they gain confidence from having decision support tools. They show we gain efficiency and accuracy, so really no dis-benefit to introducing AI. I think a couple of years ago you would have had issues with trust, but I really don’t think they are particular issues for our clinicians any more. In fact they are kind of desperate to try and get help and assistance so we can get through the work list much more quickly. So it’s the right time and right place for the introduction of AI now.”
It is hoped AI in decision-making for radiology may soon clear regulatory hurdles, so things are moving in the right direction. Around 15 per cent of chest x-rays are normal, says Blackwood, so if an AI can strip out that proportion of workload from having to be checked by human eyes, it would lead to a big efficiency gain for radiologists. That in turn will have a positive knock-on effect for waiting times and GP referrals.
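The scale of that efficiency gain can be sketched with some back-of-the-envelope arithmetic. Only the 15 per cent normal rate comes from the article; the workload and reporting-time figures below are hypothetical, chosen purely to illustrate the calculation.

```python
# Illustrative arithmetic only: rough workload saving if an AI reliably
# triages the ~15% of chest x-rays that are normal (figure per Blackwood).
# The weekly volume and per-study reporting time are hypothetical.

weekly_xrays = 10_000          # hypothetical weekly reporting workload
normal_rate = 0.15             # share of chest x-rays that are normal
minutes_per_read = 3           # hypothetical reporting time per study

removed = int(weekly_xrays * normal_rate)      # studies the AI filters out
minutes_saved = removed * minutes_per_read     # radiologist time freed up

print(f"Studies removed from the worklist: {removed}")
print(f"Radiologist time freed per week: {minutes_saved / 60:.0f} hours")
```

Even with conservative assumptions, removing normal studies from the worklist frees a meaningful block of reporting time each week, which is where the knock-on benefit to waiting times comes from.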
How confident should we be, though, in delegating decisions to computers for our healthcare?
Blackwood says that, notwithstanding differences across – and even within – specialisms, the evidence shows that performance equivalence is more than achievable. “We wouldn’t even consider artificial intelligence until it has exceeded human levels of accuracy and in some cases that accuracy is in the very high 90s,” he says. Moreover, where some AIs have shown 98-99 per cent accuracy in diagnostics, human error rates can be between two and 10 per cent.
“It will vary between discipline and case; with x-rays a clinician is very unlikely to make any mistakes on identifying something completely normal. So what we’re saying is that everyone has high confidence in the clinician and the AI but the clinician takes time and could be doing something better.” For screening programmes, for diseases like breast cancer, AI is ideally placed to make a difference as it can remove people who don’t have cancer from going to the next stage. It is quite straightforward in screening programmes for an AI to detect ‘cancer or no cancer’, says Blackwood.
Bias is an area where there has been great concern within AI as a field generally. Biases within datasets can skew the results towards particular, unrepresentative outcomes. In the US, researchers found that an algorithm used on more than 200 million people in hospitals to predict which patients would likely need extra medical care heavily favoured white patients over black patients.
How can we avoid that sort of thing happening here?
Blackwood says: “It’s really important that you do thorough analysis of your population so that when you’re training it (the algorithm), that it’s representative of the population.”
“The second thing you try and do is make AI generalisable, and you do that by making sure you train it not just on your population but on populations of data from around the world. And you ensure when you’re training it, you have high quality ground truth, so you have multiple people looking at and telling the artificial intelligence what is cancer, what is not cancer. So that’s how you remove bias.” He insists that post-implementation surveillance – i.e. regularly monitoring the performance of the AI, because it can vary over time – will minimise negative impacts.
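One way to picture what that post-implementation surveillance might involve is tracking a model’s sensitivity separately for each demographic group and flagging any disparity. The sketch below is a minimal, hypothetical illustration of the idea, not any system used by iCAIRD; the group names, records and tolerance threshold are all invented.

```python
# A minimal sketch of per-group performance monitoring: compute the
# model's sensitivity (true-positive rate) for each demographic group
# and flag the run for review if the gap between groups is too wide.
from collections import defaultdict

def sensitivity_by_group(records):
    """records: iterable of (group, truth, prediction) booleans per case."""
    tp = defaultdict(int)  # true positives per group
    fn = defaultdict(int)  # false negatives per group
    for group, truth, pred in records:
        if truth:  # only positive cases count towards sensitivity
            if pred:
                tp[group] += 1
            else:
                fn[group] += 1
    return {g: tp[g] / (tp[g] + fn[g]) for g in set(tp) | set(fn)}

# Invented audit data: (group, ground truth, model prediction)
records = [
    ("group_a", True, True), ("group_a", True, True), ("group_a", True, False),
    ("group_b", True, True), ("group_b", True, False), ("group_b", True, False),
]
rates = sensitivity_by_group(records)
gap = max(rates.values()) - min(rates.values())
if gap > 0.1:  # hypothetical tolerance for between-group disparity
    print(f"Review needed: sensitivity gap of {gap:.2f} across groups")
```

Run regularly against fresh cases, a check like this would also surface the drift over time that Blackwood mentions, since performance is recomputed on each new batch rather than assumed from the original evaluation.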
Warbrick, eHealth Innovation Programme Director, NHS Greater Glasgow & Clyde, points to wider fields of potential adoption of AI in Scotland. Aside from diagnostics, there are active AI projects in heart failure, where handheld echo devices are being optimised with AI.
He says: “So, we’re shifting from using a full-stack echo-cardiogram machine to a handheld device that can be used in different locations, and the AI is assisting the quality of the echo that comes from that. We’re very close to a project finishing where heart failure work can be adopted. In COPD [chronic obstructive pulmonary disease], it’s a bit of a different approach we’ve taken where COPD AI is part of the solution.”
He adds: “We’ve focused on combining multiple things for a different design of service for high-risk COPD patients keeping them at home and preventing emergency admissions and regular outpatient appointments. So it’s monitoring them in their home through a number of devices and data and completing daily updating, and then applying AI on top of that to stratify patients according to those most at risk and needing intervention, where those interventions can then prevent them being admitted. That’s in use and is at scale-up stage.”
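To make the stratification step concrete: the service Warbrick describes ranks patients by risk from their daily home readings so that clinicians see the most urgent cases first. The sketch below is a hedged illustration of that general pattern only, not the actual COPD service logic; the thresholds, field names and patient data are all invented.

```python
# A hypothetical risk-stratification sketch: score each patient's daily
# home-monitoring reading, then rank patients so the highest risk comes
# first. Real services would use validated clinical criteria, not these.

def risk_score(reading):
    """Score one day's home-monitoring data; higher means higher risk."""
    score = 0
    if reading["spo2"] < 92:          # low blood oxygen saturation
        score += 2
    if reading["resting_hr"] > 100:   # elevated resting heart rate
        score += 1
    if reading["breathless"]:         # patient-reported breathlessness
        score += 1
    return score

# Invented readings for two patients
patients = {
    "patient_1": {"spo2": 96, "resting_hr": 78, "breathless": False},
    "patient_2": {"spo2": 90, "resting_hr": 105, "breathless": True},
}
ranked = sorted(patients, key=lambda p: risk_score(patients[p]), reverse=True)
print(ranked)  # highest-risk patient first
```

The point of the design is the ordering, not the score itself: clinical attention goes to whoever tops the list each day, which is how the service can intervene before an emergency admission becomes necessary.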
There will be no overnight mass adoption of AI in healthcare in Scotland, nor anywhere else in the world for that matter. But it is clear that the pace of delivery is quickening, and in two to three years’ time in Scotland we are likely to see AI much more commonplace in clinical settings. A national health data exchange is proving a fruitful domain in which to experiment and invent new solutions to age-old problems in healthcare. Such ‘safe spaces’ to try out the technology and collaborate with multiple stakeholders across Scotland are pushing AI gradually to the fore.