
February 18, 2026 by University of Glasgow
Collected at: https://techxplore.com/news/2026-02-free-tool-ai-safer-trustworthy.html
A University of Glasgow-led research project is releasing a free tool to help organizations, policymakers, and the public maximize the benefits of AI applications while identifying their potential harms. The tool, developed as part of the Participatory Harm Auditing Workbenches and Methodologies (PHAWM) project, aims to address the urgent need for rigorous assessment of AI risks as the technology is rapidly adopted across a wide range of sectors.
It is designed to help support the aims of regulations like the European Union’s AI Act, introduced in 2024, which seek to balance AI innovation with protections against unintended negative consequences.
PHAWM’s new open-source workbench will empower users without extensive backgrounds in AI to conduct in-depth audits of the strengths and weaknesses of any AI-driven application.
It also actively involves audiences who are usually excluded from the audit process, including those who will be affected by the AI application’s decisions, in order to produce better outcomes for end users.
The tool is the first public outcome from PHAWM, which was launched in May 2024. It brings together more than 30 researchers from seven leading U.K. universities with 28 partner organizations to tackle the challenge of developing trustworthy and safe AI systems.
The tool and its accompanying framework, which guides organizations and communities to use the tool effectively, are both publicly available and free to download from the project’s website.
Professor Simone Stumpf, of the University of Glasgow’s School of Computing Science, leads the PHAWM project. She said, “Generative and predictive AI applications have the potential to give organizations valuable new ways to deliver improved services for end users. They are already influencing decisions in areas including housing, employment, finance, policing, education, and health care.
“However, they can be afflicted by flaws like bias and inaccuracy. To avoid building AI applications which enforce unfair outcomes in critical services, those applications must be carefully monitored and regularly audited by humans.
“Until now, those audits were usually conducted by people with a deep understanding of the processes which drive AI, but who may lack insight into the social or cultural impacts those systems may create. There is rarely an opportunity for people who will regularly use or will be affected by AI decision-making to help guide their development.
“Our new workbench tool is designed to help organizations create better, fairer, more transparent AI systems by providing diverse perspectives on AI applications which might otherwise go unexamined.”
The tool and accompanying guiding framework have been developed through extensive co-design workshops with the project’s partners and other stakeholders in the health and cultural heritage sectors. These sectors are two of the four areas that the PHAWM project was established to investigate, alongside media content and collaborative content generation.
The PHAWM tool works by systematically gathering diverse perspectives on an organization’s current or prospective AI application through a four-stage auditing process.
First, the audit instigator is guided to provide information about the AI system in accessible, non-technical language.
Second, they invite relevant stakeholders to participate in the auditing process, including users of the system and the people the system’s decisions will affect, such as the public or patients for a health AI application.
Next, the audit participants are guided to align the audit with their concerns and lived experience of the AI application’s impact on their daily lives or profession. The tool and framework then help participants identify potential positive and negative impacts, create metrics to measure them, and assess whether the AI application under audit is capable of meeting their criteria. The AI application will receive a pass or fail grade based on the audit criteria set by each participant.
Finally, the audit instigator collects the data and insights from the audit participants, identifying areas of concern raised during the process. They can use the diverse perspectives gathered to develop action plans which will inform their decisions about how the AI application is developed or integrated into practice.
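The article does not publish the workbench's actual interface, but as a rough illustration, the four-stage process described above could be modeled as follows. All class, field, and function names here are hypothetical assumptions for the sketch, not PHAWM's real API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the four-stage participatory audit flow
# described in the article. Names are illustrative, not PHAWM's API.

@dataclass
class Metric:
    """A participant-defined measure with a pass threshold (stage 3)."""
    name: str
    threshold: float

@dataclass
class Participant:
    """A stakeholder invited into the audit (stage 2), who sets
    metrics from their own concerns and scores the application."""
    name: str
    metrics: list[Metric] = field(default_factory=list)
    scores: dict[str, float] = field(default_factory=dict)

    def verdict(self) -> str:
        """Pass only if every one of this participant's metrics
        meets its threshold (stage 3)."""
        ok = all(self.scores.get(m.name, 0.0) >= m.threshold
                 for m in self.metrics)
        return "pass" if ok else "fail"

@dataclass
class Audit:
    """Stage 1: the instigator describes the AI system in
    accessible, non-technical language."""
    system_description: str
    participants: list[Participant] = field(default_factory=list)

    def collect(self) -> dict[str, str]:
        """Stage 4: the instigator gathers each participant's
        verdict to inform an action plan."""
        return {p.name: p.verdict() for p in self.participants}

# Example run with two invented stakeholders
audit = Audit("Triage assistant that prioritises patient referrals")
clinician = Participant("clinician",
                        [Metric("accuracy", 0.9)],
                        {"accuracy": 0.93})
patient = Participant("patient rep",
                      [Metric("explanation_clarity", 0.8)],
                      {"explanation_clarity": 0.6})
audit.participants += [clinician, patient]
results = audit.collect()
# results -> {'clinician': 'pass', 'patient rep': 'fail'}
```

The point of the sketch is the per-participant pass/fail grading: each stakeholder judges the same application against their own criteria, and the instigator aggregates the divergent verdicts rather than computing a single overall score.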
Professor Stumpf added, “The tools and processes we’ve developed offer a practical, community‑centered approach to evaluating the real‑world impacts of artificial intelligence. The workbench is a flexible tool which can be used to run in-depth audits of AI applications an organization has developed in-house, as well as being used to investigate whether off-the-shelf AI applications will meet organizations’ needs before they are purchased.
“Being able to look in such depth and from so many different angles will help organizations make properly informed decisions which assess the balance of risk and reward which comes from adopting new technologies. Our hope is that organizations will be encouraged to use the tool and framework we’ve developed with our partners and stakeholders, which will enable them to reap the benefits of AI while avoiding potential harms.”
The PHAWM team are continuing to refine the tool and framework in collaboration with representatives from their four key areas of investigation.
Public Health Scotland and NHS National Services Scotland (NSS) contribute to PHAWM’s health use case, while Istella contributes to the media content use case. The National Library of Scotland, the Museum Data Service and the David Livingstone Birthplace Trust participate in the cultural heritage use case, and Wikimedia is involved in the collaborative content generation use case.
The PHAWM team are also currently developing comprehensive training and support for certification to help organizations adopt PHAWM’s auditing tools as effectively as possible.
