Pathways to Progress: Board Oversight and Accountability in an Artificially Intelligent World

United Kingdom

Introduction

The speed with which Artificial Intelligence (AI) has captured the public imagination – in both business and private life – has been staggering. Both the regulators and the regulatory system itself have struggled to keep up with the pace of change.

In October 2022, the FCA and the PRA issued DP5/22, which sets out a high-level framework for discussion of the evolving regulation of artificially intelligent systems.  While the questions the regulators asked back in 2022 were potentially wide-ranging, the tenor of the discussion paper was clearly geared towards evolution in the regulatory regime, not revolution.

That made sense.  Given the Government’s re-positioning of the UK post-Brexit, the impact of AI on the wider economy, the advent of the regulators’ competitiveness objectives and the Government’s broader consultation on AI regulation (which closed in June 2023), it is difficult to see how the UK regulators can go too far out on a limb.

What does this mean in practice?  While feedback on DP5/22 is expected in Q4 2023, and may in time lead to further regulatory rule-making, the governance steps that Boards will need to take when a firm is designing, deploying and running AI systems are likely to follow fairly well-trodden paths.

Oversight – how will the Board oversee the approach to AI?

The key challenge for any Board grappling with new technology or new markets is to ensure that it has a sufficient understanding of the subject matter, and of the risks it poses to the firm, to oversee the firm’s approach to the issues.  Boards needed to upskill when cybersecurity emerged as a significantly increased risk to the industry.  It will be the same with AI.

Boards will need to consider a number of questions in preparing themselves for the adoption of AI:

  • Is there a need for specific AI-related experience on the Board?

Given that the move towards AI is a relatively recent development, the regulators are unlikely to consider significant prior experience of AI at Board level to be a pre-requisite for the safe running of the firm.  However, firms clearly need to consider the extent to which some additional technology expertise may be required to bolster the Board, particularly if the firm becomes heavily reliant on AI as part of its overall business model.

Boards may also invite non-Board members with specific expertise to attend.  For firms where Board-level expertise is not immediately available, this may offer a good way of accessing that expertise (though it should not be a substitute for Board members taking steps to raise their awareness of AI, for example through training – for which, see below).

  • How do Board members come up to speed with the basic concepts underpinning AI?

This is likely to be most effective if delivered through a dedicated set of Board training and briefing sessions which cover not only the underlying concepts, but also the practical realities and risks inherent in AI systems, in a way that reflects Board members’ roles in holding the executive to account.

  • To what extent should the Board be involved in decision-making around AI (both go/no-go decisions and other decisions during the system lifecycle)?

This is likely to depend on a number of factors, including the level of maturity of the firm’s control environment as it relates to AI.  In the early days, regulators are likely to expect Boards to be involved in nearly all go/no-go decisions, at both design and deployment stages.

Once a firm has demonstrated that it is able safely to design, deploy and operate AI systems, it may be possible (as the PRA’s Supervisory Statement on model risk management, SS1/23, suggests) to use a system of model tiering, which would drive the level of involvement the Board should have in decision-making related to the systems, for example by reference to:

  • the underlying complexity of the system;
  • its size and scale (and by extension its importance to the firm’s business); and
  • the risks posed by the system to the firm, its customers and the wider financial system should it produce unexpected outcomes, veer off-course, or need to be stopped.  In this context, regulators are likely to regard AIs that are used to take decisions as riskier than those which gather and analyse large volumes of data.

As AI enters the mainstream and is adopted by larger numbers of firms, Board members may need to consider whether any other directorships they hold (particularly in the technology sector) could give rise to conflicts.

  • How will the Board deal with AI-related issues that come to it in a BAU capacity?

The Board will need to consider how it will inform itself on an ongoing basis about how AI is operating within the business.  This includes considering the management information and regular reporting the Board will receive on implementation and performance, as well as – linked to model risk management – what might trigger an out-of-cycle report to the Board.

Boards should also consider the nature of the reporting they wish to receive – for example, verbal updates, written papers, MI and (potentially) live demonstrations.

  • Should the Board hold regular deep dives into AI across the firm on a holistic basis?

Given the increasing importance of AI systems, and the risks they potentially pose to the firm and its customers, it would be sensible to carry out a deep dive on a regular basis (and probably no less than once a year), for example at Board strategy or away days.

Questions such a deep dive might consider include:

  • Is the firm’s overall approach to AI appropriate, taking account of the firm’s business and customer base?
  • Has it correctly identified the key risks arising from the use of AI?
  • What level of risk arising out of AI is the Board prepared to accept?
  • Are the mitigating actions proposed by the executive sufficient?
  • What level of risk is the firm actually running, measured against the firm’s risk appetite?
  • If AI is outsourced, is the Board comfortable that relevant outsourcing risks and associated regulatory requirements are dealt with?
  • How does the Board satisfy itself that AI acts to deliver good customer outcomes?
  • Have there been any “near misses”, and what has the firm learned from those?
  • Is there scope for the use of table-top exercises?

Table-top exercises – where the Board works through a hypothetical scenario involving some element of crystallised risk – are becoming near-standard practice in the cyber-resilience space.  The idea is to road-test the Board’s response to a hypothetical risk event as a means of acclimatising Board members to the sorts of decisions they will need to take, the information they may have in taking them, and the things they will have to bear in mind when doing so.  Table-top exercises also provide a safe space in which to test whether the firm’s incident response framework operates effectively.

Running a similar table-top exercise in relation to significant AI systems (including considering when a kill switch might need to be deployed, and who makes that decision) – for example at a Board training day – is likely to be invaluable in helping Board members understand the sorts of issues that might arise in an instance of crystallised risk arising out of an AI, and how they may be expected to react.

Accountability – how will the Board ensure executive accountability for AI?

Given the risks posed to their objectives, regulators are likely to want to drill down into how the executive manages model risk arising out of the deployment of AI.  The key tool through which they will do so is likely to be the Senior Managers and Certification Regime (SMCR).  Boards will therefore need to adopt a similar approach.

This means that the Board will need to consider:

  • Is there sufficient expertise within the Executive to deal with AI-related issues?

As a corollary to the question of whether the Board has sufficient expertise to deal with AI-related issues, a key question is whether the executive of the firm has the level of expertise it needs to establish, deploy and run the systems safely.  Increasing the level of expertise in the firm (both at executive and at working level) is one action that can help mitigate AI model risk.  This includes looking at levels of expertise across all three lines of defence.

Firms that become highly dependent on AI for their day-to-day business, whose AI is undertaking particularly complex activities, or where there is any risk of a “black box” scenario emerging, would therefore be well advised to look carefully at the level of expertise that may be needed in the business to mitigate the risks. 

This is particularly the case if a firm is heavily dependent on outsourced AI, where the ability to understand the detail of what the AI is doing in real time, determine how effectively it is carrying out its tasks, and deal with any emerging issues could be constrained by the fact that the firm is using a third party provider.

  • Is there clarity over executive-level governance of AI?

It will be important to define the executive governance around AI clearly: in which executive forum significant decisions about the AI are taken, what the escalation path to the Board is, and what event or events might be expected to trigger such an escalation.

  • Is there clarity over which SMFs are responsible for AI, including design, implementation and business-as-usual running?

There is currently no prescribed responsibility under the SMCR relating to AI, although one of the questions the regulators asked in DP5/22 was whether it might be helpful to create a prescribed responsibility, or indeed a specific Senior Management Function in the future.

Currently, in dual regulated and enhanced solo regulated firms, the executive holding SMF24 is responsible for technology systems; the SMF4 is responsible for risk; and individual SMFs are responsible for the operation of their business units.  It is therefore entirely possible that different SMFs may be responsible for different aspects of the firm’s AI deployment, and at different points in the lifecycle.  In smaller solo regulated firms, responsibility may in practice fall on the CEO or executive director with responsibility for the area of the business for which the AI is being deployed.

While certified staff might be responsible through delegation for the design, implementation and business-as-usual running, an executive director will need to have oversight and ultimate responsibility.    

In DP5/22 the regulators draw a distinction between owners, developers and users – they note that all of those individuals will have responsibility for ensuring that models are developed, implemented and used in accordance with the overall risk management framework, risk appetite and limitations of use.  As DP5/22 puts it:

“…the most appropriate SMF(s) may depend on the organisational structure of the firm, its risk profile, and the areas or use cases where AI is deployed within the firm…”

An example might therefore see the SMF24 being responsible for design and build up to deployment, with some combination of the SMF24 and relevant Heads of Business Unit (as “owners”) being responsible in the post-deployment phase.  An effective handover would clearly be vital in those circumstances.  Decisions on ownership and accountability will need to be taken very early in the project lifecycle, and then kept under regular review.  Firms will need to consider whether reference to AI needs to be included in the relevant SMF’s statement of responsibility and any reasonable steps framework.

  • Does there need to be one SMF with overall responsibility for AI risk on a macro level?

In dual regulated or enhanced firms, the regulators are likely to expect responsibility for overall risk arising from AI systems to be housed with the SMF4 (Chief Risk) function holder.  Taking account of the PRA’s expectations regarding model risk management in SS1/23, this might include responsibility for:

  • establishing policies and procedures for identifying and dealing with risk arising from AI;
  • ensuring effective challenge where risks are identified;
  • ensuring AI is subject to regular independent validation or review;
  • evaluating and reviewing results generated by the AI, and validation and internal audit reports;
  • taking prompt remedial action when necessary to ensure the aggregate risk remains within the Board-approved risk appetite; and
  • ensuring sufficient resourcing, and adequate systems and infrastructure to ensure data and system integrity, and effective controls and testing of outputs.

In smaller firms, responsibility for AI model risk may sit with the SMF16 or executive risk director.

  • Will responsibility for the AI also sit in the business unit that the AI is supporting?

Where an AI has the potential to pose significant risks to customers, or to the operations of a particular business unit, there may well be a stronger case for giving the SMF head of the relevant business unit responsibility for overseeing the safe business-as-usual running of the system.  This could be achieved by adding an additional responsibility to the SMF’s statement of responsibilities.

If the firm has also added responsibilities for the system to the Chief Risk function holder, then care will need to be taken to ensure that the statements of responsibilities of the respective SMFs dovetail, avoiding overlap or (more seriously) underlap.

  • Has the firm’s SMCR implementation taken full account of the use of AI in the business?

While the adoption of AI will not necessarily require a wholesale review of a firm’s SMCR implementation, firms will need to take care to ensure that statements of responsibility of the SMFs with responsibility for AI design, implementation, deployment and risk fully cover the various elements of the AIs deployed in the firm, and the risks they pose.

As always, SMFs will need to ensure they can evidence how they have exercised their responsibilities and reasonable steps obligations.

In practice, technical responsibilities are likely to be delegated to specialists (who may be certified or conduct staff) or outsourced to third parties.  The SMF will still retain ultimate responsibility and therefore will need to ensure their delegation and oversight is appropriate to meet their conduct rules and reasonable steps obligations.    
