At StormID, we have been at the forefront of digital transformation in the public sector for many years. With the rapid advance of artificial intelligence, however, we feel that the technology landscape is on the cusp of a quantum leap – and public services in Scotland stand to benefit considerably.
We have been excited to work with Futurescot over this past fortnight to launch our first-ever AI Challenge competition, aimed at stimulating interest in the technology. Whilst we recognise its potential, we also realise there is much to do to explain the technology in greater detail.
I hope readers will bear with me as I reflect on some of the practical considerations for organisations that might be interested in submitting an application. There’s a lot to pack in, so I thought it would be helpful first to cover where AI can and cannot be used. In the coming weeks, I’ll also write about how to define AI use cases and, finally, how to assess their feasibility.
Let’s start with a definition. Artificial intelligence is a broad discipline, and the following definition, used by the UK Parliament, is sufficient for our purposes: “Machines that perform tasks normally performed by human intelligence, especially when the machines learn from data how to do those tasks.”
Traditional automation methods, such as Robotic Process Automation (RPA), for example, are often mistaken for AI. However, where RPA follows a process defined from the outset by an end user, never straying from those specific constraints, AI can recognise patterns in data and learn over time, mimicking aspects of human intelligence.
During the 2000s and 2010s, the rise of cloud service providers like Azure and AWS enabled widespread enterprise access to machine learning (ML) services by offering scalable, cost-effective and user-friendly solutions. The NHS, for example, has been at the forefront of integrating ML technologies to improve decision-making and diagnostics.
ML services such as those available from the large cloud service providers can perform very narrow tasks such as Natural Language Processing (NLP), text analytics or computer vision, but they lack the versatility of Large Language Models (LLMs), which have come to the fore in the last two years, most notably OpenAI’s ChatGPT.
From Machine Learning to LLMs
Launched in late 2022 to much fanfare, ChatGPT was a paradigm shift in AI. This breakthrough has led to an AI capability bonanza, but let’s look at just a few areas where the technology really has developed since the early days of machine learning.
- Versatility – LLMs are trained on vast amounts of data so they can perform tasks they were not specifically trained for.
- Natural language comprehension – LLMs are built to handle the complexities of human language and unstructured data.
- Creativity and content generation – LLMs can not only summarise existing content but also produce novel text, audio, images, video and code, opening up new opportunities for innovation within organisations.
- Data analysis – LLMs can support better decision-making by analysing vast datasets, uncovering hidden patterns and providing actionable recommendations.
We recommend pre-trained LLMs, such as the GPT models behind OpenAI’s ChatGPT (available via Azure cloud services), as a good starting point for public sector organisations considering AI use cases, given their immediate availability and versatility of application.
How can a public sector organisation use LLMs now?
Even at this early stage in the story, LLMs are already performing a wide range of useful office-based functions. The use of AI is evolving quickly, but some promising examples of use cases for the public sector are outlined below. In practice, a real-world problem is likely to involve several tasks and business processes, and so will cut across many of these use cases.
Better Citizen Support
- Offering natural language interfaces that can combine one or more modalities, such as text, audio or image, for fast, convenient assistance.
- Speeding up delivery of services by helping users search for and retrieve online content and data.
- Routing citizens’ digital queries, received by email or form, to the right parts of the business automatically.
- Providing personalised digital interfaces tailored to individual information needs, potentially aggregating information from external sources.
Improving Accessibility
- Making information more accessible, for example by rephrasing complex content into simpler language.
- Automating content translation into multiple languages.
- Converting text to speech and vice versa, aiding individuals with visual or hearing impairments.
Speeding up Paperwork
- Documenting and recording information.
- Reviewing and extracting key information from various documents and across different internal systems.
- Automating the assessment of inbound applications or requests for information from citizens.
- Organising unstructured data from different sources into structured formats.
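As an illustration of the last point, here is a minimal sketch of how an LLM might be prompted to turn unstructured correspondence into a structured record. The prompt wording, the field names (`name`, `date`, `request_type`) and the example reply are all hypothetical; in production, the messages would be sent to a chat completion API such as the Azure OpenAI Service.

```python
import json


def extraction_messages(document_text: str) -> list[dict]:
    """Build a chat prompt asking the model to return structured JSON.

    The instruction and field names are illustrative; a real deployment
    would tailor them to the service in question.
    """
    system = (
        "Extract the applicant's name, the date and the type of request "
        "from the text below. Reply with JSON only, using exactly the "
        "keys: name, date, request_type."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": document_text},
    ]


# In production these messages would be sent to a chat completion API.
# Below is the kind of reply such a prompt elicits, parsed into a record
# that downstream systems (case management, reporting) could consume:
example_reply = '{"name": "J. Smith", "date": "2024-03-01", "request_type": "housing repair"}'
record = json.loads(example_reply)
print(record["request_type"])
```

The value of this pattern is that the LLM absorbs the messy, variable input (emails, scanned letters, web forms), while the rest of the system only ever sees a predictable, validated structure.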
Augmenting Decision-Making
- Triaging and summarising pertinent information to support decision making.
Automating Internal Workflows
- Automating internal processes such as data entry, inter-departmental communication, and autonomous interaction with internal databases and applications.
Data Analysis
- Performing data analysis across large datasets to surface new insights and patterns, gather evidence and make recommendations.
Content Generation
- Generating content to support first drafts of documents, automated summaries or reports.
- Generating audio, images or video from text, and vice versa.
Public Sector Constraints on Using LLMs
LLMs clearly offer significant benefits for the public sector and its users. However, there are some use cases for which they may not yet be appropriate, and which should be avoided, for example:
- High-stakes decision-making in areas of public policy delivery that require sensitivity or carry significant risk.
- High-accuracy results: generative AI is optimised for plausibility rather than accuracy and should not be relied on as a sole source of truth, without human oversight and/or additional measures to ensure accuracy.
- High-explainability contexts: the inner workings of an LLM solution may be difficult to explain, meaning that it should not be used where it is essential to explain every step in a decision.
- Limited data contexts: Unless specifically trained on specialist data, LLMs are not true domain experts. On their own, they are not a substitute for professional or expert advice, especially in legal, medical, or other highly regulated fields where precise and contextually relevant information is essential.
This list is not exhaustive but gives a general guide, and should be considered alongside any internal policies on AI, as well as the responsible AI principles outlined in Scotland’s National AI Strategy and AI Playbook.
Once again, we would welcome further discussion of the AI Challenge, so please do visit the website and get in touch if you think you’ve found a use case in your organisation.