Public need greater reassurance about the use of artificial intelligence in the public sector
Kevin O'Sullivan, February 11, 2020 3 min read
The general public need greater reassurance about the use of artificial intelligence (AI) by public sector bodies, a new report has found. There is a lack of transparency about where AI is being used by public sector organisations, and concerns over data biases in machine learning algorithms, which could infringe human rights.

The Committee on Standards in Public Life – which advises the Prime Minister on ethical standards across the whole of public life in England – has found that the UK's regulatory and governance framework for AI in the public sector remains a 'work in progress' and that its deficiencies are 'notable'.

The work of the Office for AI, the Alan Turing Institute, the Centre for Data Ethics and Innovation (CDEI) and the Information Commissioner's Office (ICO) is described as "commendable" in the 78-page review into Artificial Intelligence and Public Standards. But on the issues of transparency and data bias in particular, there is an "urgent need for guidance and regulation."

The committee published its report and recommendations to the Prime Minister to ensure that high standards of conduct are upheld as technologically assisted decision making is adopted more widely across the public sector. The Committee also published new polling on public attitudes to AI.

Jonathan Evans, Chair of the Committee on Standards in Public Life, said: "Honesty, integrity, objectivity, openness, leadership, selflessness and accountability were first outlined by Lord Nolan as the standards expected of all those who act on the public's behalf.

"Artificial intelligence – and in particular, machine learning – will transform the way public sector organisations make decisions and deliver public services. Demonstrating high standards will help realise the huge potential benefits of AI in public service delivery. However, it is clear that the public need greater reassurance about the use of AI in the public sector.
"Public sector organisations are not sufficiently transparent about their use of AI and it is too difficult to find out where machine learning is currently being used in government.

"Explanations for decisions made by machine learning are important for public accountability. Explainable AI is a realistic and attainable goal for the public sector – so long as public sector organisations and private companies prioritise public standards when they are designing and building AI systems."

Data bias remains a serious concern. Further work is needed on measuring and mitigating the impact of bias to prevent discrimination via algorithm in public services.

Evans, a former MI5 Director General, added: "We conclude that the UK does not need a new AI regulator, but that all regulators must adapt to the challenges that AI poses to their specific sectors. We endorse the government's intentions to establish CDEI as an independent, statutory body that will advise government and regulators in this area.

"All public bodies using AI to deliver frontline services must comply with the law surrounding data-driven technology and implement clear, risk-based governance for their use of AI. Government should use its purchasing power in the market to set procurement requirements that ensure that private companies developing AI solutions for the public sector appropriately address public standards.

"This is a fast-moving field, so government and regulators will need to act swiftly to keep up with the pace of innovation. By ensuring that AI is subject to appropriate safeguards and regulations, the public can have confidence that new technologies will be used in a way that upholds the Seven Principles of Public Life as the public sector transitions into a new AI-enabled age."