Pathways to progress: Intelligent Design for an AI World

06/11/2023

Introduction

The UK financial regulators are finally beginning to turn their minds seriously to how they will supervise firms seeking to deploy artificial intelligence (AI) in an increasingly automated industry.  In Discussion Paper 5/22 (DP5/22), the UK regulators split the AI lifecycle into three distinct stages: pre-deployment, deployment and recovery/redress.

This briefing focuses on the matters firms should have in mind when in the pre-deployment phase.  It builds on previous statements made by the regulators in relation to the use of AI and machine learning (for example in DP5/22 and PRA Supervisory Statement 1/23) and algorithmic trading (in PRA Supervisory Statement 5/18 and the 2018 FCA Report into Algorithmic Trading in Wholesale Markets).

AI, but what’s the point?

Before embarking on detailed design and development work, regulators are likely to expect firms to be able to articulate clearly their appetite for using AI, including:

  • What is the firm’s overall approach to AI, in a macro context?

The regulators have repeatedly made clear that firms seeking to deploy complex models, such as AI and machine learning algorithms, need to consider their overall approach to using such systems in their businesses before they do so.  This is in no small part due to regulators’ experience that firms deploying new technology without understanding the risks that it may pose to their businesses tend to be at significantly increased risk of poor outcomes.  In practice, supervisors are likely to want firms to be able to articulate their overall approach to using AI in their business before embarking on the design and deployment of new models.

  • What is the intended use case for the model and how does it fit with that approach?

Firms will need to be able to articulate both the intended use case for the model, and how it fits with the firm’s overall approach to deploying AI in its business.  The extent to which the deployment of the AI will lead to supervisory concern is likely to be a function of a number of elements, including:

  • the nature, scale and complexity of the firm’s business;
  • the importance of the use case to that business;
  • the extent to which the firm and the model will interact with retail consumers, or drive outcomes for retail consumers;
  • the complexity of the model itself;
  • the firm’s prior experience of deploying complex models; and
  • the firm’s ability to access the resources and expertise it needs to deploy the model.

  • Will the model be taking decisions?

The intensity of regulatory scrutiny is likely to be significantly greater where an AI is taking decisions (particularly where these can have an impact on customer outcomes, or lead to allegations of unfairness or bias), than where it is being brought in to augment an existing process (e.g. analysing large volumes of data or making connections between customer behaviours to help develop new products).  At least in the early days, as supervisors familiarise themselves with the various use cases for AI, firms should expect to receive fairly significant challenge in seeking to deploy AIs that have decision-making capabilities.

  • What is the likely role of the AI in delivering customer outcomes?

The FCA’s Consumer Duty is likely to change fundamentally the way the FCA interacts with firms seeking to deploy AIs into customer-facing parts of their businesses.  Firms will need to incorporate the full range of Consumer Duty principles at the design stage, ensure that those principles remain at the forefront of the firm’s thinking as the model is developed, and retain the evidence to demonstrate that they have done so.

Risks and Rewards

As part of deciding to use or develop an AI, firms will need to be able to articulate not only the benefits of doing so, but also the immediate and ongoing risks and the associated mitigating actions, including:

  • Does the firm understand the risk to it, its counterparties and its customer base arising from AI?

For firms that are looking to deploy AI in a limited fashion, this analysis may be relatively standalone, although the firm will still need to think about how to position AI in its overall risk management framework.  More complex firms that are looking to deploy a variety of different models would be well-advised to consider how AI fits into their risk management framework for models specifically.  Those firms will need to think carefully about:

  • how that framework might need to be adjusted for AIs, which are likely to be among the most complex models that the firm deploys; and
  • where the firm already has non-AI models in a production environment, whether the deployment of AIs might require the firm to move to a tiered structure for assessing model risk, if it does not already have one.

  • How will those risks be measured, reported on and mitigated?

Once the high-level risks have been identified, the firm will need to consider how it will measure, report on and mitigate those risks (and any new risks which come to light as the process develops) through enhanced controls.  This will include considering how to generate suitable management information, and whether the firm has suitably skilled and experienced resources in all three lines of defence to be able to do so effectively.

  • Does deploying the AI expose the firm to additional legal or reputational risk?

Practically, firms may also need to consider the extent to which the deployment of AI may expose them to additional legal or reputational risk (for example, should the system not perform as expected).  While the full extent of these risks will not be clear at the risk assessment phase, considering them is likely to become more important as the project lifecycle progresses.

Development

While the regulators have not yet set out definitive guidance as to their expectations around the development of AI systems, they are likely to expect any firm developing such a system to be able to evidence that it has addressed a number of different aspects as part of the development phase:

  • Appropriate design principles have been developed

Consistent with the points made above about identifying both the appetite for using AI models and the risks of doing so, regulators will expect the firm to have established – through appropriate governance channels – a clear set of design principles and objectives against which the model development process can be measured.  These should be regularly reviewed throughout the development process to ensure that they remain appropriate.

Regulators are also likely to expect the firm to have considered the wider implications of those design principles, including whether there might be any fairness or bias implications arising from the principles themselves.

  • Project governance and delivery

Regulators have clearly noted in various publications relating to the deployment of models (including SS1/23 and in relation to algorithmic trading) that establishing appropriate project governance, and a clear path to delivery, will be critical.

In practice, this is in many cases likely to manifest as a high-level, executive-led steering committee (involving the business, compliance and risk functions), led by the responsible Senior Manager (see below).  The committee will need access to relevant expertise to be able to steer the project effectively.  This might be achieved by having committee members with those skills and experience, or by bringing that expertise in from outside the firm as needed.  Establishing appropriate reporting lines and visibility will be critical.

  • The need for Executive accountability

Regulators will expect executive accountability to be at the heart of the pre-deployment stage.  In practice this is likely to require a specific allocation of responsibility for delivery of the system to a Senior Manager, with an updated statement of responsibilities to match and a clear framework to oversee their delegations.  The Senior Manager responsible should ideally chair the relevant governance forum.

More complex firms will likely need to consider whether to give the responsibility to the SMF24 (Chief Operations function) or to the head of the relevant business unit.  If the pre-deployment phase is particularly technology-heavy, for example, this might suggest that the SMF24 ought to be responsible.  For smaller solo-regulated firms, responsibility may in practice fall on the CEO or the Executive Director for the area of the business in which the AI is deployed.

  • Interaction with regulatory requirements

It will be critical to be able to demonstrate that all applicable regulatory requirements have been taken into account in developing the AI.  This is likely to involve a detailed process of mapping rules, guidance and other regulatory material to understand the potential areas of direct legal and regulatory risk, and to ensure that the model does not inadvertently cause other issues for the firm (for example creating a “black box” effect which makes it more difficult to provide evidence to the FOS when dealing with complaints).

  • Validation of data sources

The regulators will expect that firms take steps to validate any data which will be used either to seed the AI, or to inform it while it is running.  This means that firms will likely have to establish a variety of data-related workstreams, for example looking at:

  • data sources (this can be complex when dealing with large firms), including whether any GDPR issues (for example relating to consent provided by data subjects) arise around the use of existing data;
  • data architecture (e.g. are there any system or data compatibility issues, and if multiple data sources are used, how do they fit together?);
  • the risk of bias in the underlying data used to develop the model;
  • data quality (including whether there is a need to cleanse data before use);
  • whether the data is suitable for the model’s intended use, and representative of the business, products, and customer base it is intended to be used for (a simple illustration of such checks appears after this list).
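
By way of illustration, the sketch below shows the kind of basic data quality and representativeness checks a firm might run before using a dataset to seed or inform a model.  It is a minimal example only: it assumes the training extract and a snapshot of the current customer base are available as pandas DataFrames, and the column names and the 10 percentage point divergence threshold are hypothetical rather than regulatory requirements.

```python
# Minimal sketch of pre-deployment data checks.  The DataFrames, column names
# and thresholds are illustrative assumptions, not regulatory requirements.
import pandas as pd

def basic_data_checks(training: pd.DataFrame,
                      live_book: pd.DataFrame,
                      key_columns: list[str]) -> dict:
    """Return simple data quality and representativeness metrics for review."""
    report = {}
    # Data quality: proportion of missing values in each key column.
    report["missing_rates"] = training[key_columns].isna().mean().to_dict()
    # Duplicate records can distort both training and any bias analysis.
    report["duplicate_rows"] = int(training.duplicated().sum())
    # Representativeness: compare the category mix in the training data with
    # the current customer base for each key column.
    for col in key_columns:
        train_mix = training[col].value_counts(normalize=True)
        live_mix = live_book[col].value_counts(normalize=True)
        # Largest absolute gap in share for any single category.
        gap = train_mix.sub(live_mix, fill_value=0).abs().max()
        report[f"{col}_max_share_gap"] = float(gap)
    return report

# Example use (hypothetical columns): flag any column whose training mix
# diverges from the live book by more than 10 percentage points.
# report = basic_data_checks(training_df, live_df, ["product", "region"])
# flags = {k: v for k, v in report.items()
#          if k.endswith("_max_share_gap") and v > 0.10}
```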

  • The need for testing (possibly including parallel running)

The regulators have been clear that appropriate testing is essential to ensure that a complex model can be deployed safely.  Testing is likely to need to cover matters such as:

  • backward-looking testing – regulators are likely to expect firms to carry out testing using pre-existing data and other inputs (such as known customer outcomes, or economic circumstances) to see how the model would have reacted.  Where the AI is making decisions, this may include comparing results to decisions made by humans using existing processes;
  • forward-looking testing – firms should also consider testing the AI by reference to some form of future stress event, or plausible change in circumstances, that would enable the firm to predict how the model might react (with a view to ensuring that model performance does not deteriorate unacceptably as a result); and
  • parallel running – where an AI is taking the place of, or augmenting, an existing process, a period of parallel running may well be needed to ensure that the firm understands how the model outputs compare with the outputs of existing processes (an illustrative comparison appears after this list).
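
As a simple illustration of the parallel-running point above, a firm might put the same cases through both the existing process and the candidate AI and review the divergences.  The sketch below assumes a pandas DataFrame of parallel-run results; the column names and the segmentation of divergences are assumptions for illustration only.

```python
# Illustrative parallel-run comparison: the existing process and the candidate
# AI have each decided the same cases, and divergences are surfaced for review.
# Column names are hypothetical.
import pandas as pd

def compare_parallel_run(cases: pd.DataFrame,
                         existing_col: str,
                         ai_col: str,
                         segment_col: str) -> pd.DataFrame:
    """Summarise where, and for whom, the AI diverges from the existing process."""
    cases = cases.copy()
    cases["diverges"] = cases[ai_col] != cases[existing_col]
    # Headline divergence rate for the steering committee.
    print(f"Overall divergence rate: {cases['diverges'].mean():.1%}")
    # Divergence rate by customer segment, which can help flag patterns that
    # might indicate unfairness or bias in the model's decisions.
    by_segment = cases.groupby(segment_col)["diverges"].mean()
    print(by_segment.sort_values(ascending=False))
    # Return the divergent cases themselves for individual review.
    return cases[cases["diverges"]]

# Example use (hypothetical columns):
# divergent = compare_parallel_run(parallel_df, "human_decision",
#                                  "ai_decision", "customer_segment")
```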

Testing should include looking for bias in model outputs, and considering whether the way the AI operates could lead to model drift over time.  Outputs from the testing might also be used to create a range of early-warning indicators for future model drift.   In some cases, particularly where a model is complex, outsourced or might be expected to have direct impacts on customer outcomes (particularly important given the ambit of the FCA’s Consumer Duty), it may well be wise to consider external validation by a suitably-experienced professional services firm.
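
One common early-warning indicator of the kind mentioned above is the population stability index (PSI), which compares the distribution of a model input or output at deployment with a later snapshot.  The sketch below is a simplified version only; the choice of metric, binning approach and alert thresholds would be for the firm to determine and evidence.

```python
# Simplified sketch of a drift early-warning indicator using the population
# stability index (PSI).  Bin choices and alert thresholds are illustrative.
import numpy as np

def population_stability_index(baseline: np.ndarray,
                               current: np.ndarray,
                               bins: int = 10) -> float:
    """Compare a recent sample of a model input/output against its baseline."""
    # Equal-width bins across the combined range (deciles of the baseline
    # distribution are another common choice).
    lo = min(baseline.min(), current.min())
    hi = max(baseline.max(), current.max())
    edges = np.linspace(lo, hi, bins + 1)
    base_share = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_share = np.histogram(current, bins=edges)[0] / len(current)
    # Small floor avoids division by zero or log of zero for empty bins.
    base_share = np.clip(base_share, 1e-6, None)
    curr_share = np.clip(curr_share, 1e-6, None)
    return float(np.sum((curr_share - base_share) * np.log(curr_share / base_share)))

# Illustrative use: PSI values above roughly 0.25 are often treated as a signal
# that the population has shifted materially and the model should be reassessed.
# psi = population_stability_index(scores_at_deployment, scores_this_quarter)
```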

  • Outsourcing risks have been appropriately considered

Where aspects of the AI model are outsourced, or the firm is making use of third-party AI-as-a-service, the regulators will expect that all outsourcing risks have been appropriately considered, and mitigating actions put in place.  This will include ensuring that the regulators’ outsourcing rules are complied with.

  • Appropriate model documentation has been produced

One of the key outputs of the pre-deployment phase should be a full suite of model documentation, including recording: the process by which the AI was designed, its data sources (and any operational challenges arising from those sources), the assumptions underpinning its operations, the outputs of any testing (including any adjustments made as a result), the criteria for future testing, and any limitations of the model itself.