AI workers should be ‘licensed’ and meet independent ethical standards, according to the professional body for information technology.

The British Computer Society (BCS) is urging policymakers to adopt regulations that would see ‘high-stakes’ AI roles governed by a code of practice.

Such regulations would make an ‘AI version of the Post Office Horizon scandal’ less likely, the Chartered Institute for IT said.

In its new research report, BCS also recommends strong and safe whistleblowing channels to allow tech experts to call out unethical management.

Around 19% of IT professionals faced an ethical challenge in their work in 2023, according to a BCS survey.

CEOs and directors making decisions on the resourcing and use of AI should share in the accountability. That could be achieved by requiring large organisations to publish their policies on the ethical use of tech, BCS suggested.

BCS said the measures would rebuild public trust and help the UK set a world-class standard in ethical AI, following the AI Safety Summit at Bletchley Park in autumn last year.

Rashik Parmar MBE, chief executive of BCS, said: “We have a register of doctors who can be struck off. AI professionals already have a big role in our life chances, so why shouldn’t they be licensed and registered too?

“CEOs and leadership teams, who are often non-technical but still make big decisions about tech, also need to be held accountable for using AI ethically. If this isn’t happening, technologists need to have confidence in the whistleblowing channels available within organisations to call them out; for example, if they are asked to use AI in ways that discriminate against a minority group.

“This is even more important in the wake of the Post Office Horizon IT scandal, where computer-generated evidence was used by non-IT specialists to prosecute sub-postmasters, with tragic results.

“By setting high standards, the UK can lead the way in responsible computing, and be an example for the world. Many people are wrongly convinced that AI will turn out like The Terminator rather than being a trusted guide and friend – so we need to build public confidence in its incredible potential.”

The paper ‘Living with AI and emerging technologies: Meeting ethical challenges with professional standards’, led by BCS’ Ethics Specialist Group, recommends that:

  • Every technologist working in a high-stakes AI role should be a registered professional meeting independent standards of ethical practice, accountability, and competence.
  • Government, industry and professional bodies should support and develop these standards together to build public confidence and create the expectation of good practice.
  • UK organisations should be required to publish their policies on the ethical use of AI in any relevant systems, and those expectations should extend to leaders who are not technical specialists, including CEOs and governing boards.
  • AI professionals should have clear and visible routes for ‘whistleblowing’ if they feel they are being asked to act unethically or deploy AI in a way that harms colleagues, customers or society.
  • The UK government should take the lead in supporting UK organisations to set world-leading ethical standards.