UK announces £21m AI roll-out across NHS


Introduction

On 23 June 2023, the UK government announced a £21 million fund aimed at accelerating the deployment of Artificial Intelligence (“AI”) across the National Health Service (“NHS”). NHS Trusts will be able to apply for funding to implement promising AI imaging and decision support tools, enabling faster diagnosis and treatment of conditions such as cancer, stroke, and heart disease. The government has also committed to deploying AI decision support tools in all stroke networks by the end of 2023, widening access to stroke diagnosis and treatment.

The £21 million fund will be open to bids for any AI diagnostic tool, so long as it represents “value for money”. The government has previously invested £123 million in 86 AI technologies, supporting stroke diagnosis, screening, cardiovascular monitoring, and home-based care. The recently established AI & Digital Regulation Service further aims to facilitate the safe deployment of AI devices in the NHS, ensuring compliance with regulations and streamlining the development and adoption of AI technologies.

The UK’s “pro-innovation approach” to AI regulation

This announcement follows the publication of the AI White Paper in March 2023, in which the UK government set out its “pro-innovation approach to AI regulation”. Notably, the government has left decision-making authority solely in the hands of regulators, in the belief that immediate legislation could hinder innovation. Instead, it intends to leverage the expertise of existing regulators, which already possess industry-specific knowledge.

AI regulation in the UK vs EU: a divergent approach

In contrast, AI regulation has been a high priority within the European Union (“EU”) ever since the European Commission proposed the AI Act (“AIA”) in April 2021. The AIA is an important legislative measure that targets AI system providers, users, importers, and distributors. Its purpose is to establish a comprehensive framework that protects the safety and fundamental rights of EU citizens in the context of the development and use of AI. The AIA covers various technical aspects, including specific obligations for the design and development of AI systems categorised as “high-risk”, the implementation of a quality management system by AI providers, and the introduction of conformity assessments and audits for AI systems.

On 27 April 2023, Members of the European Parliament (“MEPs”) reached a provisional political agreement on a revised version of the Commission’s draft AIA, and on 11 May 2023 voted to move the Act forward. Most recently, on 14 June 2023, MEPs adopted their “negotiating position” on AI. By the end of 2023, the EU and its member states aim to agree the final form of what will be the first ever law regulating AI. As it stands, the current draft provides that all organisations rolling out AI in the EU will be subject to the AIA, regardless of whether the organisation is based in the EU.

If you would like to discuss these or any other developments, please get in touch with us, or your usual contact at CMS. We would be delighted to talk about your interests and concerns.

Article co-authored by Hanisha Kanani, Solicitor Graduate Apprentice at CMS.