Accountability for accuracy in AI models

The use of artificial intelligence has progressed rapidly in the financial services sector, with firms deploying AI in chatbots, fraud detection systems and generative AI tools. The regulation of AI is now a key agenda item for financial regulators.

An important consideration when training an AI system is the validity of the datasets on which the system trains. An AI system can be thought of as accurate where the prediction it makes or the output it generates is accepted as true when assessed against a real-world occurrence or ground truth.

Accuracy may be measured by assessing the extent to which an AI system consistently produces correct predictions or outputs based on the inputs or data given. These measurements are dependent on the selection and quality of the data provided for training and testing purposes.
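By way of illustration, the short Python sketch below measures accuracy in this sense: the proportion of a model’s outputs that match a labelled ground truth. The fraud-detection framing and all figures are hypothetical, not drawn from any firm or regulatory guidance.

```python
# A minimal sketch of accuracy measurement: the share of predictions
# that agree with a labelled ground truth. All data is hypothetical.

def accuracy(predictions, ground_truth):
    """Fraction of predictions matching the ground-truth labels."""
    if len(predictions) != len(ground_truth):
        raise ValueError("predictions and ground truth must align")
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    return correct / len(ground_truth)

# Hypothetical fraud-detection outputs (1 = transaction flagged as fraud)
model_outputs = [1, 0, 0, 1, 0, 1, 0, 0]
true_labels   = [1, 0, 1, 1, 0, 0, 0, 0]

print(f"Accuracy: {accuracy(model_outputs, true_labels):.2%}")  # Accuracy: 75.00%
```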

Data collection and quality

The quality of the data selected to train an AI model will impact the AI system’s accuracy, performance and reliability. Taking steps to determine whether the dataset is of high quality will enable the AI model to produce more reliable outcomes and better predictions for users.
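As a sketch of what such steps might look like in practice, the checks below test a hypothetical set of training records for completeness, uniqueness and timeliness. The fields, thresholds and records are illustrative only, not a prescribed checklist.

```python
from datetime import date

# Hypothetical training records for a credit model
records = [
    {"id": 1, "income": 42000, "updated": date(2024, 5, 1)},
    {"id": 2, "income": None,  "updated": date(2024, 5, 2)},
    {"id": 2, "income": None,  "updated": date(2024, 5, 2)},
    {"id": 3, "income": 55000, "updated": date(2019, 1, 15)},
]

# Completeness: records with missing values
incomplete = [r for r in records if any(v is None for v in r.values())]

# Uniqueness: duplicated identifiers
ids = [r["id"] for r in records]
duplicate_ids = {i for i in ids if ids.count(i) > 1}

# Timeliness: records not refreshed within two years of an assumed review date
stale = [r for r in records if (date(2024, 6, 1) - r["updated"]).days > 730]

print(f"{len(incomplete)} incomplete, {len(duplicate_ids)} duplicated id(s), {len(stale)} stale")
```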

When collecting data for AI models, firms need to consider the potential use cases for the AI system and whether there is a need to collect additional data, such as from a subpopulation, to generate accurate results. Data gathered from a subpopulation could rebalance a dataset that under-represents certain characteristics.
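The sketch below illustrates the rebalancing idea on a toy dataset. For simplicity it oversamples records the firm already holds; in practice, gathering genuine additional data from the under-represented subpopulation is preferable to duplicating existing records. All names and figures are hypothetical.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical dataset in which group "B" is under-represented
dataset = [{"group": "A", "defaulted": random.random() < 0.1} for _ in range(90)]
dataset += [{"group": "B", "defaulted": random.random() < 0.1} for _ in range(10)]

counts = Counter(r["group"] for r in dataset)
target = max(counts.values())

# Oversample each under-represented group until the dataset is balanced.
balanced = list(dataset)
for group, n in counts.items():
    pool = [r for r in dataset if r["group"] == group]
    balanced += random.choices(pool, k=target - n)

print(Counter(r["group"] for r in balanced))  # Counter({'A': 90, 'B': 90})
```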

The use of additional data could have an important impact on the outcome of an AI system where it is used in credit risk management, an area of high innovation in both digital banking and fintech markets. If an AI model is trained to determine the creditworthiness of borrowers, subpopulation data could be collected to promote the financial inclusion of minorities.

Enriching the quality of datasets may require some trade-offs. Financial services firms should be prepared to justify any choices they make to collect additional data. Those choices need to be balanced against any associated data privacy risks.

Financial services firms should also consider the benefits and risks of feature engineering techniques. Feature engineering is a pre-processing step in machine learning that involves selecting, extracting and decomposing certain elements from raw data. While feature engineering can improve model accuracy, according to the Information Commissioner’s Office (ICO), it can also result in aggregation bias if controls are not implemented to prevent a ‘one-size-fits-all’ model from being applied inappropriately to relevant data. A mature strategy towards feature engineering is necessary to meet regulatory expectations.
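As a simple illustration of the pre-processing step described above, the sketch below decomposes a hypothetical raw transaction record into model-ready features. The field names and thresholds are invented for the example and are not taken from the ICO’s guidance.

```python
from datetime import datetime

# Hypothetical raw transaction record
raw = {
    "timestamp": "2024-03-08T23:47:12",
    "amount": 1499.00,
    "country": "GB",
}

def engineer_features(record):
    """Select, extract and decompose elements of the raw data (illustrative)."""
    ts = datetime.fromisoformat(record["timestamp"])
    return {
        # Decomposition: break the raw timestamp into simpler signals
        "hour_of_day": ts.hour,
        "is_weekend": ts.weekday() >= 5,
        # Extraction: derive a coarse indicator from a continuous value
        "large_amount": record["amount"] > 1000,
        # Selection: keep only the fields relevant to the model
        "is_domestic": record["country"] == "GB",
    }

print(engineer_features(raw))
# {'hour_of_day': 23, 'is_weekend': False, 'large_amount': True, 'is_domestic': True}
```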

Identifying errors and testing models

Retraining and testing are needed to evaluate an AI model’s ability to generate accurate predictions or intended outputs over time. Where models consistently produce incorrect information, or where there is evidence of model degradation, financial services firms can be exposed to a range of risks and significant costs.
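A degradation check can be as simple as comparing recent accuracy against a baseline recorded at deployment, as in the sketch below. The baseline, tolerance and monthly figures are all hypothetical.

```python
# Illustrative degradation check: flag the model for retraining when
# accuracy drops too far below a deployment baseline. All figures are
# hypothetical.

BASELINE_ACCURACY = 0.94  # accuracy recorded when the model went live
TOLERANCE = 0.05          # maximum acceptable drop before retraining

monthly_accuracy = {"2024-01": 0.93, "2024-02": 0.91, "2024-03": 0.87}

for month, acc in monthly_accuracy.items():
    if BASELINE_ACCURACY - acc > TOLERANCE:
        print(f"{month}: accuracy {acc:.0%} breaches tolerance - retrain and retest")
    else:
        print(f"{month}: accuracy {acc:.0%} within tolerance")
```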

Measuring for certain types of error that financial services firms also test for in other contexts, such as false positives and false negatives, can be helpful when retraining AI models. Financial services firms should have a robust system in place for recording model testing results for future reference.
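The sketch below tallies those two error types for a hypothetical fraud model and records them alongside a model version, in the spirit of keeping testing results for future reference. The model version, labels and figures are invented for the example.

```python
from collections import Counter

def error_counts(predictions, ground_truth):
    """Tally false positives and false negatives against ground truth."""
    counts = Counter()
    for p, t in zip(predictions, ground_truth):
        if p == 1 and t == 0:
            counts["false_positive"] += 1
        elif p == 0 and t == 1:
            counts["false_negative"] += 1
    return counts

# Hypothetical fraud-model test results (1 = flagged as fraud)
predictions  = [1, 1, 0, 0, 1, 0, 0, 1]
ground_truth = [1, 0, 0, 1, 1, 0, 0, 0]

# Record the outcome alongside the model version for future reference
test_record = {"model_version": "v1.3",
               "errors": dict(error_counts(predictions, ground_truth))}
print(test_record)
# {'model_version': 'v1.3', 'errors': {'false_positive': 2, 'false_negative': 1}}
```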

Accuracy throughout the AI lifecycle

AI systems hold much promise for the future of financial services and fintech. Whether developing AI systems internally or using ones procured from third parties, there is a growing regulatory expectation that financial services firms will take steps to ensure that high levels of accuracy are maintained and embedded across AI governance and accountability frameworks.
