UK government publishes delayed update on AI policy

08/02/2024

The UK government has published its delayed response to the consultation accompanying its March 2023 white paper on artificial intelligence (AI). It had been expected by the end of September 2023, but was delayed until after the AI Safety Summit at Bletchley Park in November 2023.

The government has not put forward any legislative proposals at this stage and remains committed to a sector-by-sector approach to regulating AI. Instead of legislating now, it will keep under review whether its approach of issuing non-binding principles to UK regulators remains appropriate, while recognising that legislative action will eventually be required.

The deadline for key regulators, including the Financial Conduct Authority (FCA) and Bank of England (BoE), to publish updated plans for overseeing AI in their respective remits has been confirmed as April 30, 2024.

Implementing the principles on a sector-by-sector basis

The government originally proposed five cross-sector principles relating to: the safety, security and robustness of AI systems; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress (the principles).

It had proposed that regulators be encouraged to implement the principles without compelling them to do so. It also proposed the establishment of a central function to ensure regulatory coordination and coherence.

The government does not intend to depart from its original approach and has now established a clearer roadmap for implementing the principles in relevant sectors, including financial services. It confirmed that it has written to several regulators affected by AI — including the FCA, BoE, Office of Communications (Ofcom), Information Commissioner's Office (ICO), Health and Safety Executive (HSE), and the Office of Gas and Electricity Markets (Ofgem) — asking them to publish their plans for implementing the principles by April 30, 2024.

Firms will be interested to review these plans in detail when they are published later this year. In the meantime, firms may find it useful to review the guidance that the government published alongside the consultation response. This sets out how the government will work with regulators to refine the current regulatory regimes so they are fit for purpose in relation to AI.

Areas still under review

The government has not specified when the UK will legislate and move beyond voluntary, non-statutory principles and guidance. It acknowledged that "there is more work to do," but also stressed a desire not to "rush to regulate." No decision will be taken until the government and the regulators have undertaken further work.

Another key area under continuing review is whether "highly capable general-purpose AI" will require its own targeted approach. The topic of general-purpose AI models also caused significant debate in the EU in relation to the finalisation of the EU AI Act. Similarly, the UK government recognises that different approaches may be necessary for different types of AI systems based on their potential for more generalised negative impacts.

The government acknowledges that such systems challenge the idea of a context-based approach to regulation, as these systems do not operate within one sector and any flaw in a general-purpose model could result in multiple harms across the whole economy.

As such, the government considers that new direct responsibilities for developers of such models may be required and that other actors across the value chain, such as data or cloud hosting providers, may need to be brought within the scope of regulation.

Next steps

Further initiatives outlined in the consultation response will continue to shape the UK's approach in the coming months. The aim is to create the certainty needed to seize opportunities, particularly in highly regulated sectors such as financial services and sectors handling personal data.

Additional key initiatives to be led by the government include:

Spring 2024:

  • Establishment of a steering committee with government representatives and regulators to support knowledge exchange and coordination.
  • Publication of a plan for continually assessing the effectiveness of the UK regulatory framework.
  • Targeted consultation on a proposed "cross-economy AI risk register."
  • Launch of an AI and digital hub to provide support from regulators to innovative businesses.
  • Publication of an "Introduction to AI Assurance" document aimed at helping organisations build their understanding of safe and trustworthy AI systems.
  • £10 million of additional funding for regulators to adapt and respond to AI.
  • Call for input on the next steps in securing AI models, including a potential code of practice for cyber security.
  • Publication of a full AI-skills framework that supports employers, employees and training providers.
  • Publication of updated guidance on the use of AI within HR and recruitment.
  • Publication of the first iteration of the International Report on the Science of AI Safety.
  • Establishment of a new international dialogue to defend democratic norms and institutions and to address shared risks related to electoral interference.

Summer 2024:

  • Engagement with experts on interventions for highly capable AI systems, including questions on open release.
  • Updates to the government's initial cross-sectoral guidance for regulators on implementing the principles.

By the end of 2024:

  • Update on the government's work in relation to developers of highly capable general-purpose AI systems.
  • Update of the government's "emerging processes for frontier AI safety" document.
  • Call for evidence on AI-related risks to trust in information and related challenges, such as deepfakes.
  • Launch of the AI Management Essentials scheme to set a minimum good practice standard for companies selling AI products and services in the UK.
  • Phasing in a mandatory requirement for central government departments to use the Algorithmic Transparency Recording Standard.

Continuing:

  • Continued work with bodies such as the Digital Regulation Cooperation Forum and the AI Safety Institute.
  • UK Research and Innovation to improve links between regulators and AI research in the UK.
  • Government and regulators to analyse and review potential gaps in existing powers and remits.
  • Sharing of knowledge with international partners through the AI Safety Institute.
  • £9 million government partnership with the United States on responsible AI.
  • Continuing bilateral and multilateral partnerships on AI, including through the G7, G20, Council of Europe, Organisation for Economic Cooperation and Development (OECD), United Nations and Global Partnership on AI.

The consultation response provides an overdue update on the government's AI policy and will be welcomed by stakeholders. Firms will need to remain engaged over the coming months as new guidance and consultations are produced by sectoral regulators.

In the meantime, the European Union continues to forge ahead with its own digital agenda, with the final version of the EU AI Act expected to be approved by the European Parliament in April, then coming into force later in 2024. Firms with operations in the UK and the EU will be keen to understand the implications of any divergence of approach and the associated cost, governance and compliance implications.

This article was first published by Reuters Regulatory Intelligence.