AI is a ‘public good’ and shouldn’t be left to Big Tech companies whose technology innovations often disappear behind proprietary walls, a leading medical expert has warned.

Dr Jonathan Millar of Edinburgh University said that the fast-moving application of AI in medicine and drug discovery should not be left to tech and pharma companies alone.

He said that universities such as Edinburgh are ideally placed to play a pivotal role, and cited the example of the Human Genome Project – which involved hundreds of scientists from different universities around the world.

Dr Millar, who is also a hospital intensive care consultant, said that while the impact of genomics in medicine has yet to be fully felt – owing in part to the mathematical complexity of analysing a single human genome, which has 3.2 billion base pairs – AI will increasingly provide a way of carrying out that analysis.

“I think industry have realised that,” he told a media briefing event at the university’s School of Informatics on Wednesday evening. “There’s been a huge amount of investment in AI in the pharma sector and there’s been a huge amount of investment in AI by big tech aimed at biology and genomics in particular.

“And I think that brings me to a problem and also a key role for the university, and a key opportunity for the University of Edinburgh.

“That analysis, I think, is the same as the sequencing efforts that got us the Human Genome Project. It is a public good, and at the moment a lot of this technology is disappearing behind proprietary walls. And I think we are going to have to think about the public nature of these technologies and the university sector is ideally placed to pursue the public good.”

He said that, in addition to the technology, biological insight, computing power and IT infrastructure are needed to deliver ground-breaking science in genomics. The audience heard how the new £900 million exascale supercomputer coming to the university will provide such support.

“Edinburgh is particularly well placed to bring those things together,” he added. “AI is going to revolutionise what I do; it’s already started but I think there are some key conversations about what that looks like in the future.”

Journalists from across the mainstream and specialist media were invited to hear from four academics at the briefing event, which featured robots serving canapés. Professor Michael Rovatsos, professor of artificial intelligence at the university, hosted the event.

He was joined by Professor Tim Drysdale, from the School of Engineering, who covered the development of remote laboratories, Professor Shannon Vallor from the Edinburgh Futures Institute, who is a philosopher and expert in AI ethics, and Professor Helen Hastie, who heads the School of Informatics and is an expert in autonomous systems and human-robot interaction.

All agreed that AI is disrupting the status quo in academia, and providing opportunities for academics to collaborate and understand how the technology can be applied in fields ranging from life sciences to engineering, creative arts, education, finance, business and law.

Professor Hastie touched on the importance of the university carving out a role for itself as a data ‘ecosystem’ for multiple sectors, where AI can bring about new innovation.

She said: “The university has recognised this so we created this lab, called the Generative AI Lab [GAIL]. A lot of you might know about ChatGPT, which can generate text, but generative AI is much more powerful than that. Drug discovery and genome sequencing really do exemplify what we can do with generative AI. You can also generate video, images and code – there are so many things you can do.”

The university is already experimenting with the technology in the creative arts, where algorithms have been trained on previous descriptions of Edinburgh Festival shows to generate new productions. Another experiment involved an art installation at a crime-writing festival, which used near real-time police reports to generate new crime fiction.

“The intersection between AI and art is just one example of what we could do,” said Professor Hastie, who also mentioned that the university has researchers looking into how to spot AI-developed video and photographic deepfakes as well as fake news on social media.

AI bias and responsible innovation

AI developers and users must be conscious of bias in datasets, added Professor Hastie. Furthermore, there is a risk that some algorithms may start to “hallucinate” and generate inaccuracies, she said. AI algorithms that are ‘explainable’ and transparent can help with this and also increase adoption.

That point was echoed by Professor Vallor, a former ethics adviser to Google, whose work on the moral and ethical frameworks underpinning the development of AI has given the university a new string to its bow at the Centre for Technomoral Futures.

Her contribution reflected the fact that there have always been difficulties in the interplay between humans and technology, and that with the advent of AI there is a real opportunity to “rededicate human expertise to the wise and responsible management of this technology”.

She said: “As the technology moves further into the kind of deepest aspects of human experience and institutions, the moral and political and legal challenges will only grow.

“What we are seeing is a kind of critical turning point where we have to decide if the AI ecosystem is going to grow up and mature. And this happens with other technologies.

“We talk about AI and innovation sometimes as if this is the first time this has happened. It’s the first time AI has happened but we’ve had lots of new, disruptive technologies in history. And there’s a phase where they throw everything into chaos, and everyone is confused and there’s moral panic and some of it’s legitimate, and some of it’s not and some of our predictions are good, and most of them are terrible.

“But what happens – and it happened with the automobile and happened with the aeroplane, and all manner of innovations – is there’s a point at which you are forced to grow up as a society with the technology and learn to manage it.”

The panel also addressed human obsolescence in the face of AI. Professor Vallor said the technology presented less of a risk to journalists than many – including the journalist who asked the question – perhaps feared, because AI, unlike humans, cannot decide “what matters”. Her point was echoed by Professor Rovatsos, who said it is the right of journalists to pursue their “passion” and investigate stories that require the truth to be brought to light. “AI will never do that,” he said.

In terms of the overweening power of Big Tech, the panel agreed with a point made by Professor Vallor that universities have been central to the kind of international partnerships – including the International Space Station and the CERN research centre in Switzerland – that have brought about many scientific and societal advances.

AI regulation and oversight

One area of ongoing concern is the regulation of AI. The UK has set up the AI Safety Institute, but it is primarily a government research body with no real powers of oversight over AI development in the private sector. The King’s Speech this week omitted any specific mention of an AI Bill. The EU has taken a different approach with the EU AI Act, which carries enforcement powers and even prohibits some AI systems.

In the UK, the Met Police consulted the National Physical Laboratory on the development of an algorithm for deploying AI-powered live facial recognition technology. That government body is a research hub for metrology, the science of measurement, and according to Professor Rovatsos has worked with the Alan Turing Institute on AI standards.

“I guess in the UK, what we’ve seen so far is that there is a forum between the different regulators where they talk about coordinating on digital and AI, but there was resistance, political resistance, to having an AI regulator,” said Professor Rovatsos. “That is understandable to some degree because if there was a regulator doing medical AI devices and AI for nuclear submarines, is that the same thing? Could one regulator do that?”

He added: “On the more technical side, I think it is still the case that very few people in the world are in a position to do rigorous testing for things like bias and so on, that they would sign off on, ‘This is fine’, for many technical reasons, but also because it’s very hard to evaluate these black-box algorithms.”

In medicine, Dr Millar said regulation of AI should be left to existing regulators such as the Medicines and Healthcare products Regulatory Agency. But he said such bodies were always playing “catch-up” when it came to new technologies – and AI was no exception.

“What is in front of them is moving so fast that they are struggling to adapt,” he said. “And talking about jobs, the people needed to staff those jobs are rarer but they’re going to have to be trained.

“And part of our responsibility as well is ensuring that we have a workforce and the breadth of skills required to function in the future. But those agencies, and I think this is probably true of lots of other settings, are best placed to do that regulation but will need new skills.”

Professor Vallor commended the UK Government for its work on AI assurance standards and for its willingness to collaborate with both the US and Europe.

“There’s actually a global movement emerging to come together around certain standards for AI assessment, evaluation, auditing and assurance, and I’m actually quite optimistic about where that might go,” she said.