Academics at the University of Strathclyde are embarking on a project to discover how researchers could harness generative AI through platforms such as ChatGPT.
The experts will seek to understand how large language model (LLM) platforms – which also include Google’s Bard and Anthropic’s Claude – could assist with data gathering and analysis in research.
Generative AI can assist researchers in many ways, from designing data-collection tools and generating survey responses to data cleaning, analysis and reporting.
As with any new tool, however, it needs to be used responsibly, the academics warn.
A new 10-month project led by the University of Strathclyde aims to help researchers and their institutions make informed decisions about how they use generative AI with participant data, protecting the privacy of the people whose participation makes research possible.
As part of the work, they will gauge the views and concerns of University Research Ethics Committees around the UK.
The project has been awarded £100,000 funding from REPHRAIN, the National Research Centre on Privacy, Harm Reduction and Adversarial Influence Online, and is in collaboration with the University of Edinburgh.
Professor Wendy Moncur of Strathclyde’s Department of Computer and Information Sciences, who is leading the project, said: “Generative AI capabilities are impressive and can save researchers time and give new insights. We will help researchers and their universities to foresee and avoid potential pitfalls in its use.
“These pitfalls include participant re-identification, where we have promised study participants that they will be anonymous yet Generative AI undoes our anonymisation and re-identifies them. Another potential pitfall is when we ask Generative AI to make up extra data based on participant data that we already have, and it ‘hallucinates’ – makes up – misleading or even defamatory information about people.
“Our aim is to enable UK universities to exploit the incredible potential of Generative AI, while protecting participants’ privacy and the excellent quality of UK academic research, by understanding and guarding against potential pitfalls.”
The research aims to guide research institutions, University Research Ethics Committees, regulatory authorities, funders (including REPHRAIN itself), data custodians, professional organisations, publishers, and advocacy groups in their early encounters with research involving generative AI.
The project is informed by the UK Government’s Futures Toolkit, a resource that policy professionals can use to embed long-term strategic thinking in the policy and strategy process.