Legacy technology systems are a persistent challenge across government departments in Scotland. They slow down development, increase operational risk, and make it harder to deliver modern, responsive services. Yet tackling legacy systems isn’t easy, especially when they are deeply embedded and knowledge about them has faded over time.
The longer legacy systems remain untouched, the more they act as a tax on progress. They reduce the velocity at which teams can deliver change and introduce growing risks: security vulnerabilities, operational fragility, and the loss of institutional knowledge as experienced staff move on. But artificial intelligence (AI) is beginning to offer a new way forward.
AI has a valuable role to play in legacy modernisation. Many legacy systems are poorly documented or highly complex, and the people who originally built them may no longer be around. AI can help bridge that gap. It can analyse codebases, identify patterns, and even assist in transforming legacy code into formats suitable for modern platforms. We’ve seen promising examples where AI has helped replatform systems more efficiently than traditional methods.
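As a concrete illustration of what "analysing a codebase" can mean in practice, here is a minimal sketch using Python's standard-library `ast` module to inventory the functions in a legacy module and flag which ones lack documentation. The sample source and all names are illustrative; real legacy analysis tooling (AI-assisted or not) would go much further, but an inventory like this is often the first step.

```python
import ast

def inventory_functions(source: str) -> list[dict]:
    """Walk a Python module's syntax tree and summarise each function:
    name, line number, argument count, and whether it has a docstring."""
    tree = ast.parse(source)
    summary = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            summary.append({
                "name": node.name,
                "line": node.lineno,
                "args": len(node.args.args),
                "documented": ast.get_docstring(node) is not None,
            })
    return summary

# Illustrative stand-in for a legacy module under analysis.
legacy_source = '''
def calc_vat(amount, rate):
    return amount * rate

def fmt(x):
    """Format a monetary value."""
    return f"{x:.2f}"
'''

for fn in inventory_functions(legacy_source):
    print(fn)
```

An inventory like this can feed a prioritised documentation backlog, or give an AI-assisted tool structured context about the system it is being asked to explain or transform.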
However, AI isn’t infallible. It can make mistakes, especially with long-running or complex tasks. That’s why it’s essential to build the right guardrails around AI solutions. Human oversight, robust testing, and clear success metrics are all critical. AI should be part of a broader strategy, not a shortcut.
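One common guardrail for AI-assisted code transformation is characterisation testing: pin the behaviour of the legacy implementation, then compare the candidate replacement against it on representative inputs before anything ships. The sketch below is illustrative; both functions and the business rule are invented, and the "modern" version deliberately contains the kind of silent behaviour change this technique exists to catch.

```python
def legacy_discount(price, loyalty_years):
    # Imagined legacy rule: 1% discount per loyalty year, capped at 10%.
    rate = min(loyalty_years, 10) / 100
    return round(price * (1 - rate), 2)

def modern_discount(price, loyalty_years):
    # Stand-in for an AI-generated rewrite that silently raised the cap to 15%.
    rate = min(loyalty_years, 15) / 100
    return round(price * (1 - rate), 2)

def characterise(old, new, cases):
    """Return the inputs on which the two implementations disagree."""
    return [c for c in cases if old(*c) != new(*c)]

cases = [(100.0, y) for y in range(0, 15)] + [(0.0, 5), (19.99, 3)]
mismatches = characterise(legacy_discount, modern_discount, cases)
print("disagreements:", mismatches)
```

Here the comparison surfaces every input above the legacy cap, turning a subtle behaviour change into a concrete, reviewable finding for a human to judge.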
One of the biggest hurdles we see is moving from prototype to production. Many organisations are experimenting with AI and seeing exciting early results, but struggle to operationalise those solutions. This is particularly true in government, where tolerance for error is understandably low. Services must be reliable, and even small mistakes can have serious consequences.
To de-risk AI adoption, it’s vital to bring all stakeholders on the journey, from technologists to product and service owners. AI systems are less deterministic than traditional software, which makes them harder to evaluate. Understanding what “good enough” looks like is a nuanced question, and one that must be answered collaboratively. It’s not just about technical performance – it’s about trust, transparency, and shared understanding.
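Because AI components rarely behave identically on every input, "good enough" is usually expressed as an agreed pass rate over a labelled evaluation set rather than a guarantee of correctness. A minimal sketch, with an invented classifier standing in for the AI component and a threshold that stakeholders would set together:

```python
def evaluate(system, cases, threshold=0.95):
    """Score a system over labelled cases and report whether it
    meets the acceptance threshold agreed with stakeholders."""
    passes = sum(1 for inp, expected in cases if system(inp) == expected)
    rate = passes / len(cases)
    return {"pass_rate": rate, "accepted": rate >= threshold}

def stub_classifier(text):
    # Illustrative stand-in for an AI triage component.
    return "urgent" if "urgent" in text or "immediately" in text else "routine"

cases = [
    ("please respond immediately", "urgent"),
    ("urgent: system outage", "urgent"),
    ("monthly report attached", "routine"),
    ("need this asap", "urgent"),  # phrasing the stub misses
]
result = evaluate(stub_classifier, cases, threshold=0.9)
print(result)
```

The numbers matter less than the conversation they force: a 75% pass rate against a 90% threshold is an objective, shared basis for deciding whether a system is ready, rather than a debate about anecdotes.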
Another key consideration is organisational culture. Developing AI solutions is different from traditional engineering. It often resembles research more than delivery, with non-linear progress and higher uncertainty. That means teams need to adopt a mindset that embraces iteration, experimentation, and learning. Waiting for perfection isn’t realistic; what matters is having the right foundations to improve safely over time.
Building AI literacy across the organisation is also crucial. Leaders and stakeholders need to understand not just what AI can do, but what it should do. With so much hype and so many vendors pushing AI solutions, it’s easy to get swept up in promises that don’t match the needs of public services. Helping non-technical leaders make informed decisions is part of the job, and part of building a sustainable, responsible AI strategy.
We’ve supported clients in understanding the evolving landscape of machine learning and MLOps, helping them take models from prototype to production safely. That includes deploying, monitoring, and evaluating AI systems in ways that meet the high standards required in government.
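Monitoring in production often comes down to simple statistical checks that live data still resembles the data a model was trained on. One widely used measure is the Population Stability Index (PSI); the sketch below is a minimal illustration with invented distributions, using the common rule of thumb that a PSI above 0.2 suggests meaningful drift worth investigating.

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions
    (each a list of proportions summing to 1). Rule of thumb:
    > 0.2 suggests the live data has drifted from the baseline."""
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

baseline = [0.5, 0.3, 0.2]  # binned feature distribution at training time
live     = [0.2, 0.3, 0.5]  # same bins, observed in production
score = psi(baseline, live)
print(f"PSI = {score:.3f}, drift suspected = {score > 0.2}")
```

A check like this, run on a schedule against each monitored feature, is the kind of lightweight, explainable control that helps AI systems meet the evidential standards government services require.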
Ultimately, AI won’t solve legacy technology challenges overnight. But it can help Scottish public sector organisations understand their systems better, modernise more safely, and deliver services more effectively. With the right mindset, governance, and collaboration, AI can be a powerful ally in the journey to modern, resilient public infrastructure.