On 26 October 2023, the Bank of England (“BoE”), Prudential Regulation Authority (“PRA”) and Financial Conduct Authority (“FCA”) (together, the “Supervisory Authorities”) published a joint Feedback Statement on ‘Artificial Intelligence and Machine Learning’ (FS2/23) (the “Feedback Statement”). The Feedback Statement is a follow-up to the Discussion Paper (DP5/22) on the same topic published in October 2022 (the “BoE/FCA Discussion Paper”).
On 26 October 2023, the UK Government also published a Discussion Paper on ‘Capabilities and risks from frontier AI’ (the “Frontier AI Discussion Paper”). This publication will inform discussions at the AI Safety Summit taking place at Bletchley Park on 1-2 November 2023. The AI Safety Summit aims to build a shared global understanding of the risks posed by frontier AI and how these risks could be managed.
On 27 October 2023, in another development, the UK Government published a paper on ‘Emerging Processes for Frontier AI Safety’, which accompanies the publication of AI safety policies by leading AI companies (together, the “AI Safety Publications”).
This note summarises areas which may be of interest to financial services firms in relation to the Feedback Statement, the Frontier AI Discussion Paper and the AI Safety Publications.
Feedback Statement (FS2/23)
The Feedback Statement summarises responses to the BoE/FCA Discussion Paper and provides an overall summary of the themes arising from the responses. It does not include any policy proposals or give an indication of how the Supervisory Authorities intend to move forward or what their future proposals may entail.
Key takeaways are as follows:
- Most respondents would not find a financial services sector-specific regulatory definition of AI useful.
- Respondents stated that a technology-neutral, outcomes and principles-based approach to regulation would be preferable. This would allow regulators to leverage existing approaches to regulation.
- Most respondents consider that the Supervisory Authorities should prioritise consumer protection. Respondents stated that the risks of data bias, discrimination and financial exclusion could be particularly relevant for vulnerable consumers and consumers with protected characteristics.
- Around half of respondents noted that consumer outcomes ought to be a key metric when assessing the benefits and risks of AI.
- As anticipated by CMS in a previous article on the topic, most respondents stated that existing firm governance structures and regulatory frameworks such as the Senior Managers and Certification Regime (“SMCR”) are sufficient to address AI risks. However, most respondents stressed that further guidance on how to interpret the ‘reasonable steps’ element of the SMCR in an AI context would be helpful.
- While the principles proposed in the PRA’s Supervisory Statement on ‘Model risk management principles for banks (SS1/23)’ were considered by respondents to be sufficient to cover AI model risk, there are areas which could be further clarified. In particular, respondents were concerned about the increasing use of third-party models and data and noted that this is an area where more regulatory guidance would be helpful. Respondents also noted the relevance of the Supervisory Authorities’ Discussion Paper on ‘Operational resilience: Critical third parties to the UK financial sector’ (DP3/22).
- There was a general call for greater coordination and harmonisation among sectoral regulators. In particular:
- Most respondents stated that areas of data regulation are currently not sufficient to deal with AI risks, and that it would be beneficial for Supervisory Authorities to align data definitions and taxonomies.
- Regulators should design and maintain ‘live’ regulatory guidance that is periodically updated in response to the rapidly changing capabilities of AI.
- Ongoing industry engagement is important, and initiatives such as the AI Public Private Forum could serve as a template for ongoing public-private engagement.
- AI systems can be complex and involve many areas across financial services firms. Therefore, a joined-up approach across business units and closer collaboration between data management and model risk management teams would be beneficial. There is a striking reference in the feedback to the fact that, within firms, “a lack of technical expertise is especially worrying”. Ensuring that relevant expertise is embedded in all three lines of defence will be an ongoing challenge for firms, as will making key decisions on governance structures (e.g. adopting cross-cutting AI functions and/or centres of excellence within business units).
Analysis and next steps
All financial services firms will want to review and consider the implications of the Supervisory Authorities’ Feedback Statement and the Frontier AI Discussion Paper. These publications will add to the ongoing debate as to how governance questions are addressed, how consumer protection is adequately prioritised and how emerging risks associated with data and third-party models and services are managed.
Although the AI Safety Publications are not intended for financial services firms directly, we expect that they will be useful for firms to review. The proposed safety measures aim to increase public trust in AI, which could in turn accelerate AI adoption in highly regulated sectors, such as financial services.
The UK Government’s response to the consultation that accompanied its March 2023 White Paper on ‘A pro-innovation approach to AI regulation’ was expected by the end of September 2023 (or by the end of October 2023 at the latest) but remains outstanding. It is possible that the UK Government wishes to better understand the potential harms arising from the use of AI before publishing the response, which will no doubt be informed by the discussions at the AI Safety Summit.
If you have any questions, thoughts or training requirements on the future of AI regulation in financial services, please do not hesitate to contact a member of our team.
Co-authored by Tom Callaby, Joy Davey, Lal Ayral and Emma Gazzola