
UK Government launches investigation into algorithmic decision-making bias

The potential for bias in the use of algorithms in crime and justice, financial services, recruitment, and local government is to be investigated by the Centre for Data Ethics and Innovation (CDEI).

Algorithms have huge potential for preventing crime, protecting the public and improving the way services are delivered, said the UK Government in a statement. But, it added, decisions made in these areas are likely to have a significant impact on people’s lives and public trust is essential.

Professionals in these fields are increasingly using algorithms built from data to help them make decisions. But there is a risk that any human bias in that data will be reflected in the recommendations the algorithm makes. The CDEI wants to ensure that those using such technology understand the potential for bias and have measures in place to address it. It also aims to help guarantee fairer decisions and, where possible, improve the processes involved.
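As a simple illustration of how this can happen (a hypothetical Python sketch with made-up data, not any system under review by the CDEI), the toy example below learns a trivial approval rule from historical decisions that are already skewed against one group, and then reproduces that skew when making new recommendations:

    # Hypothetical illustration: a trivial "model" learned from biased
    # historical decisions will reproduce that bias in its recommendations.

    # Historical records: (postcode_area, outcome) where outcome 1 = approved.
    # The historical approvals are skewed against area "B" purely by bias.
    history = [
        ("A", 1), ("A", 1), ("A", 1), ("A", 0),
        ("B", 0), ("B", 0), ("B", 1), ("B", 0),
    ]

    # "Training": estimate the approval rate per area from the history.
    rates = {}
    for area, outcome in history:
        total, approved = rates.get(area, (0, 0))
        rates[area] = (total + 1, approved + outcome)

    def recommend(area):
        """Recommend approval if the historical approval rate for the area is at least 50%."""
        total, approved = rates[area]
        return approved / total >= 0.5

    # Two otherwise identical applicants receive different recommendations,
    # because the rule has absorbed the bias present in the historical data.
    print(recommend("A"))  # True
    print(recommend("B"))  # False

The point is not the specific rule but the mechanism: the bias is carried by the data the system learns from, not by any explicit instruction in the code.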

In crime and justice, algorithms could be used to assess the likelihood of re-offending and inform decisions about policing, probation and parole. For example, some police forces have already started to use algorithms to feed into their decision-making, such as the Harm Assessment Risk Tool in Durham, which is used to assist officers in deciding whether an individual is eligible for deferred prosecution based on their future risk of offending.

The establishment of the CDEI supports the UK Government’s wider Industrial Strategy, and it was set up to make sure data-driven technologies and artificial intelligence are used for the benefit of society. It will partner with the Cabinet Office’s Race Disparity Unit to explore the potential for bias based on ethnicity in decisions made in the crime and justice system.

Speaking ahead of a Downing Street event to mark the publication of the centre’s first work programme and strategy setting out the CDEI’s priorities, Jeremy Wright, the Digital Secretary, said: “Technology is a force for good which has improved people’s lives but we must make sure it is developed in a safe and secure way.

“Our Centre for Data Ethics and Innovation has been set up to help us achieve this aim and keep Britain at the forefront of technological development.

“I’m pleased its team of experts is undertaking an investigation into the potential for bias in algorithmic decision-making in areas including crime, justice and financial services. I look forward to seeing the Centre’s recommendations to Government on any action we need to take to help make sure we maximise the benefits of these powerful technologies for society.”

Roger Taylor, Chair of the Centre for Data Ethics and Innovation, added: “The centre is focused on addressing the greatest challenges and opportunities posed by data driven technology. These are complex issues and we will need to take advantage of the expertise that exists across the UK and beyond. If we get this right, the UK can be the global leader in responsible innovation.

“We want to work with organisations so they can maximise the benefits of data driven technology and use it to ensure the decisions they make are fair. As a first step we will be exploring the potential for bias in key sectors where the decisions made by algorithms can have a big impact on people’s lives.”

The CDEI will also explore the opportunities for data-driven technology to address the potential for bias in existing systems and to support fairer decision-making. This may include widening opportunities for people in the job and credit markets by improving existing recruitment and financial services systems. It will also explore opportunities to boost innovation in the digital economy.

In recruitment, computer algorithms can be used to screen CVs and shortlist candidates. This could help limit the impact of unconscious bias, where people discriminate against candidates because of their background. But there have also been reports of such technology inadvertently exacerbating gender bias.
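To show how this can happen even when gender is excluded from the inputs, here is a hedged sketch (invented candidates and feature names) in which a screening rule never looks at gender but weights a feature that correlates with it, and so filters out more women than men:

    # Hypothetical CV-screening sketch: gender is never used as an input,
    # but "years of uninterrupted employment" acts as a proxy for it here,
    # because career breaks in this made-up data are more common for women.
    candidates = [
        {"name": "C1", "gender": "F", "skills": 8, "uninterrupted_years": 3},
        {"name": "C2", "gender": "F", "skills": 9, "uninterrupted_years": 4},
        {"name": "C3", "gender": "M", "skills": 8, "uninterrupted_years": 9},
        {"name": "C4", "gender": "M", "skills": 7, "uninterrupted_years": 8},
    ]

    def shortlist(c):
        # The rule only sees skills and employment history, not gender...
        return c["skills"] + c["uninterrupted_years"] >= 14

    for c in candidates:
        print(c["name"], c["gender"], shortlist(c))
    # ...yet it shortlists both men and neither woman, despite the women
    # having equal or better skills scores: the proxy carries the bias.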

And in financial services, data analysis has long been used to inform decisions about whether people can be granted loans. But the rise of data-driven machine learning and AI raises new questions about the transparency and fairness of such decisions.
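One simple, after-the-fact check that is sometimes used to surface such concerns is to compare outcome rates between groups. The illustrative sketch below (made-up decisions, not a method attributed to the CDEI or any lender) computes the ratio of loan approval rates between two groups, a figure often compared informally against a "four-fifths" threshold:

    # Illustrative fairness check: compare approval rates between two groups
    # of applicants and report their ratio.
    decisions = [
        {"group": "X", "approved": True},
        {"group": "X", "approved": True},
        {"group": "X", "approved": False},
        {"group": "Y", "approved": True},
        {"group": "Y", "approved": False},
        {"group": "Y", "approved": False},
    ]

    def approval_rate(group):
        outcomes = [d["approved"] for d in decisions if d["group"] == group]
        return sum(outcomes) / len(outcomes)

    ratio = approval_rate("Y") / approval_rate("X")
    print(f"Approval-rate ratio (Y vs X): {ratio:.2f}")
    # A ratio well below 0.8 is often treated as a warning sign worth
    # investigating, though it says nothing about why the gap exists.

A check like this can flag a disparity, but not explain it; understanding whether a gap reflects bias in the data, the model or the wider process is exactly the kind of question the CDEI's review is intended to address.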

The CDEI today sets out its priorities in its first work programme and strategy. This also includes plans to investigate how data is used to shape online experiences through personalisation and micro-targeting, for example where you search for a product and adverts for similar products later appear in your browser.

This review will explore where, how and why online targeting approaches are used, and their impact on members of the public. The CDEI is launching a series of nationwide workshops to investigate public views on the acceptability of micro-targeting. Both policy reviews will publish interim reports in the summer with final reports set to be published early next year.
