Many debates around AI suggest it confronts us with challenges unlike anything we have had to grapple with before – this is only true to some extent.

AI systems can be unpredictable, their behaviour can change over time as they learn, and they can cause greater harm because they do things that would otherwise require human intelligence.

However, from the standpoint of a consumer or a business, AI-based products and services are just another form of IT, and we have every right to expect them to be safe to use when they are made available to us.

The tech industry has never fully accepted this kind of responsibility, and that was the case long before the rise of AI. Whenever it acted to satisfy the demands society placed on its products, it did so only because it was forced to by new laws or by high-profile lawsuits.

Somehow, this industry has always managed to maintain that the risks of software are nothing like those that, say, the automotive, aviation, or even the toy industry have to keep in check. Unlike those and many other branches of engineering and manufacturing, it has never established the standards, testing regimes and certification procedures that create a culture in which systems provide an acceptable level of safety.

You may ask, is it even possible to provide similar safety guarantees for a technology such as generative AI, which provides users with the ability to have just about any question answered, and any kind of image, sound, or video generated? Is it realistic to expect that we can anticipate all possible risks that come with more advanced software bots that can carry out more complex tasks on our behalf?

At their core, these systems all generate what appears to be the most appropriate output based on patterns extracted from vast swathes of data, so it seems almost inevitable that there will always be some instructions you can give them that will make them do something harmful. Given that the uses of these systems are almost limitless, is there any way to test them thoroughly enough to make the residual risk of unsafe behaviour negligible?

These are challenging questions nobody has found a good answer to yet. Many scientists around the world are trying to address them, including in a major initiative we have started at my university.

But keeping people safe is not just a technical or scientific problem – it is a policy and regulation problem, and here the UK Government has so far taken a rather bizarre approach. Rather than addressing how AI can be adopted safely in society (for example, by simply not allowing products to be taken to market without sufficient safety guarantees), initiatives such as the AI Safety Institute sidestep the question of how societal objectives and political will can shape industry practice – a question that others, such as the EU and the US, are tackling head-on.

Without a doubt, these and other initiatives will continue to conduct important research on AI safety. If we create a more transparent and democratised research environment where the entire scientific community can access and test the latest AI models, this may even allow us to tackle some of the challenging questions by inventing entirely new methodologies for testing AI systems.

Regulation need not stifle innovation or slow down progress – creating a collaborative, fair, and transparent ecosystem in this way has the potential to improve competition, reduce barriers to entry for new start-ups, and accelerate the commercialisation of research.

If, instead, the UK’s innovation-friendly approach to AI regulation mainly involves using taxpayers’ money to fund research that multi-trillion-dollar tech giants are unwilling to do, rather than forcing them to abandon the practice of bringing unsafe technologies to market, we will not become the world leader in trusted AI that our government aspires for the UK to be.

Making AI safe is no easy task, but asking those who sell things to us to keep us safe is a different matter altogether. Much of the tech industry is not opposed to this, but the fact that safety experts, ethicists and others keep jumping ship from the most powerful AI corporations is quite telling. Without a political imperative to control them, the chances of controlling the impact of AI are slim.