Glasgow researchers have developed an AI safety and trustworthiness tool to help organisations, policymakers, and the public maximise the benefits of AI applications while identifying their potential harms.
The free tool, developed as part of the Participatory Harm Auditing Workbenches and Methodologies (PHAWM) project, aims to address the urgent need for rigorous assessment of AI risks, a need driven by the technology's rapid expansion and adoption across a wide range of sectors.
It is designed to help support the aims of regulations like the European Union’s AI Act, introduced in 2024, which seek to balance AI innovation with protections against unintended negative consequences.
PHAWM’s new open-source workbench tool will empower users without extensive backgrounds in AI to conduct in-depth audits of the strengths and weaknesses of AI-driven applications.
It also actively involves audiences usually excluded from the audit process, including those affected by an AI application’s decisions, in order to produce better outcomes for the application’s end-users.
The tool is the first public outcome from PHAWM, which was launched in May 2024 and supported by £3.5m in funding from Responsible AI UK (RAi UK).
It brings together more than 30 researchers from seven leading UK universities with 28 partner organisations to tackle the challenge of developing trustworthy and safe AI systems.
The tool and its accompanying framework, which guides organisations and communities to use the tool effectively, are both publicly available and free to download from the project’s website.
Professor Simone Stumpf, of the University of Glasgow’s School of Computing Science, leads the PHAWM project.
She said: “Generative and predictive AI applications have the potential to give organisations valuable new ways to deliver improved services for end users. They are already influencing decisions in areas including housing, employment, finance, policing, education, and healthcare.
“However, they can be afflicted by flaws like bias and inaccuracies. To avoid building AI applications which entrench unfair outcomes in critical services, these systems must be carefully monitored and regularly audited by humans.
“Until now, such audits have usually been conducted by people with a deep understanding of the processes which drive AI, but who may lack insight into the social or cultural impacts those systems can create. There is rarely an opportunity for the people who will regularly use, or be affected by, AI decision-making to help guide those systems’ development.
“Our new workbench tool is designed to help organisations create better, fairer, more transparent AI systems by providing diverse perspectives on AI applications which might otherwise go unexamined.”
