AI Assurance: Building Trust in Responsible AI Systems in the UK

England and Wales

The UK Government recognises AI assurance as a critical component of broader AI governance. In a White Paper published in March 2023, the government outlined a regulatory framework comprising five cross-cutting principles for the responsible development and use of AI across all sectors. To facilitate the implementation of these principles and help organisations enhance their understanding of AI assurance and governance, the Department for Science, Innovation & Technology released the Introduction to AI Assurance (the “Assurance Guidelines”) in February 2024. The Assurance Guidelines are intended to provide insights into assurance mechanisms and global technical standards, enabling industry and regulators to build and deploy responsible AI systems effectively.

What is AI assurance?

AI assurance is the process of measuring, evaluating and communicating the trustworthiness of AI systems. The UK Government has emphasised that AI assurance is a critical component of wider organisational risk management frameworks for developing, procuring and deploying AI systems, as well as for demonstrating compliance with existing and any relevant future regulations.

The White Paper, Governance and Assurance

In March 2023, the UK Government outlined its approach to AI governance through its White Paper. The White Paper (which we have commented on here) sets out the key elements of the UK’s proportionate and adaptable regulatory framework. It includes five cross-sectoral principles to guide and inform the responsible development and use of AI in all sectors of the economy: (i) safety, security and robustness; (ii) appropriate transparency and explainability; (iii) fairness; (iv) accountability and governance; and (v) contestability and redress.

The UK’s approach to AI governance, as outlined in the White Paper, focuses on outcomes rather than the technology itself, and the UK Government acknowledges that the potential risks associated with AI vary depending on the context in which it is applied. To achieve outcomes-based governance, regulators are expected to interpret and implement the regulatory principles within their respective sectors, providing clear guidelines for compliance. AI assurance plays a vital role in this governance framework by establishing processes for making verifiable claims and holding organisations accountable.

The Assurance Guidelines emphasise that, beyond the UK context, supporting cross-border trade in AI will also require a well-developed ecosystem of AI assurance approaches, tools, systems and technical standards that ensures interoperability between differing regulatory regimes.

The Assurance Guidelines

The Assurance Guidelines introduce key AI assurance concepts and terms, placing them within the broader AI governance landscape. The following areas are explored:

  1. AI Assurance Toolkit: The toolkit provides a high-level overview of the tools and practices that organisations can adopt to ensure responsible AI deployment. It covers a range of AI assurance mechanisms, including qualitative and quantitative assessments, that address uncertainty, ambiguity and objectivity in different contexts and across the AI lifecycle.
  2. AI Assurance in Practice: This section offers a high-level overview of how assurance mechanisms are applied in practice, enabling organisations to embed AI assurance processes effectively.
  3. Key Actions for Organisations: This section suggests practical steps to enhance AI assurance within an organisation. These steps include considering existing regulations, upskilling employees on AI assurance, reviewing internal governance and risk management processes, and staying updated with regulatory guidance.

AI assurance mechanisms, standardisation and ISO/IEC 42001

The Assurance Guidelines state that assurance mechanisms need to be underpinned by available technical standards, which provide a consistent baseline and increase the effectiveness and impact of those mechanisms.

These technical standards are generally consensus-based standards developed by standards development organisations (“SDOs”) such as the International Organization for Standardization (“ISO”). For an introduction to ISO standards, see our Law-Now article here.

The Assurance Guidelines highlight that standards allow assurance users to trust the evidence and conclusions presented by assurance providers, noting that “without standards we have advice, not assurance”. In this context, the newly published ISO/IEC 42001 – the world’s first AI management system standard (the “AI MSS”) – plays a pivotal role. The AI MSS forms an integral part of the process, management and governance standards developed by the ISO.

Since our previous Law-Now article on the draft AI MSS, the final version of the AI MSS has been published (in December 2023) in substantively the same form. The AI MSS provides guidelines for managing AI systems within organisations of any size involved in developing, providing or using AI-based products or services. It establishes a framework for systematically addressing and controlling the risks related to the development and deployment of AI, encouraging responsible AI practices. The standard addresses ethical considerations, transparency and risk management, and organisations can use it to demonstrate responsible AI use, enhance traceability and achieve cost savings.

The Assurance Guidelines state that, as a foundation, all organisations should put in place robust governance frameworks for AI systems. As a baseline, core governance processes include:

  • Clear, standardised internal transparency and reporting processes and lines of responsibility, with a named person responsible for data management and clear governance and accountability milestones built into the project design. 
  • Clear avenues for escalation and staff (at all levels) empowered to flag concerns to appropriate levels. 
  • Clear processes to identify, manage and mitigate risks. 
  • Quality assurance processes built in throughout the AI lifecycle. 
  • External transparency and reporting processes. 

The AI MSS is an international standard that is law-agnostic and aims to help organisations achieve these objectives not only in the UK, but also in the EU and beyond.

More specifically, at the EU level, where standardisation will play a significant role in the implementation of the AI Act (see our Law-Now article on this topic here), the objective is to leverage the ISO standards as much as possible when preparing AI Act-specific EU standards. With this in mind, the EU standardisation bodies will conduct a gap analysis of the AI MSS to assess whether it meets the requirements of Article 17 of the AI Act.

Key actions for organisations

According to the Assurance Guidelines, AI assurance is not a silver bullet for responsible and ethical AI; however, early engagement with likely future governance needs, skills and/or technical requirements can help to build an organisation’s assurance capabilities.

The Assurance Guidelines propose that organisations consider the following next steps:

  • Consider existing regulations: While there are no AI-specific statutory regulations in the UK, existing regulations (such as the UK GDPR) are relevant to AI systems.
  • Upskill within the organisation: While the ecosystem is still evolving, organisations should be developing their understanding of AI assurance and anticipating likely future requirements.
  • Review internal governance and risk management: Organisations should consider how their internal governance processes ensure that risks and issues can be escalated quickly and that effective decisions can be taken at an appropriate level.
  • Look out for new regulatory guidance: Over the coming years, we expect that regulators will be developing sector-specific guidance setting out how to implement the proposed regulatory principles in each regulatory domain.

Overall, the Assurance Guidelines serve as a valuable plain-English resource for organisations and individuals starting to navigate the evolving AI landscape. As AI plays an increasingly significant role in every aspect of our lives, embracing the assurance measures outlined in the Assurance Guidelines will be essential for organisations seeking to build a trustworthy and sustainable AI ecosystem.

The authors would like to thank Darcey Cunningham, Solicitor Apprentice, for her help in writing this article.