Governing AI in the UK: Interim Report by the House of Commons


The Science, Innovation and Technology Committee, appointed by the House of Commons, has recently published a report titled “The Governance of Artificial Intelligence: Interim Report” (the “Report”). The Report sets out the current position and recent developments of AI in the UK, as well as in the EU and US, and proposes next steps for the Government to take. The Report is significant in acknowledging that the Government's current approach to AI regulation risks the UK falling behind the pace at which AI is developing.

Background

The Science, Innovation and Technology Committee is appointed by the House of Commons to examine the expenditure, administration and policy of the Department for Science, Innovation and Technology and its associated public bodies. The Committee also works to ensure that Government policies and decision-making are based on solid scientific evidence and advice.

The rapid pace of AI development has led to increasingly complex discussions over the governance and regulation of AI, and over how public policy should respond to reap the benefits whilst safeguarding the public interest and preventing harm. In response, the House of Commons launched an inquiry on 20 October 2022 to examine the following:

  • the impact of AI on society and the economy;
  • whether and how AI and its different uses should be regulated; and
  • the UK Government's AI governance proposals.

The House of Commons published the Report on 31 August 2023 to set out its initial findings. The Report addresses recent developments in AI, the benefits and challenges the technology presents, and how the UK Government has responded, and, interestingly, compares the UK's approach with that of other countries and jurisdictions. The inquiry is ongoing, and a further report is expected in due course.

The Report and the 12 challenges

The Report examines the factors behind recent AI developments and highlights the benefits the technology offers, for example in medicine, healthcare and education, but also identifies twelve challenges for policymakers. These are:

  1. Bias challenge - AI can contain inherent bias, as datasets are compiled by humans, which society may find unacceptable.
  2. Privacy challenge - AI can identify individuals and use their personal data in ways in which the public does not want.
  3. Misrepresentation challenge - AI can be used with the intention of misrepresenting someone's behaviour, opinions or character.
  4. Access to data challenge - very large datasets are required for the most powerful AI, but these are held by only a few organisations.
  5. Access to compute challenge - similarly, only a few organisations have access to the significant compute power needed to develop powerful AI.
  6. Black box challenge - some AI models cannot explain why they produce a particular result, which challenges transparency requirements.
  7. Open-source challenge - requiring code to be openly available may promote transparency and innovation; allowing it to remain proprietary may concentrate market power but allow more dependable regulation of harms.
  8. IP and copyright challenge - some types of AI use other people's content, so policy must establish the rights of the originators of that content, and those rights must be enforced.
  9. Liability challenge - if third parties use AI to cause harm, policy must establish whether the developers or providers of the technology bear any liability for that harm.
  10. Employment challenge - AI is expected to disrupt both the jobs people do and the jobs available to be done, so policymakers must anticipate and manage the disruption.
  11. International coordination challenge - as AI is a global technology, the development of governance frameworks to regulate its uses must be an international undertaking.
  12. Existential challenge - some believe AI poses a major threat to the existence of humanity; if that threat is credible, protections need to be in place for national security.

The Report suggests that the Government's approach to AI governance and regulation should address each of the twelve challenges, both through domestic policy and international engagement.

The White Paper

The Report was released in the context of a white paper on AI published by the Government in late March 2023 (the “White Paper”). The White Paper aimed to guide the use of AI in the UK, drive innovation responsibly and maintain the public's trust in AI technology. It followed the July 2022 policy paper ‘Establishing a pro-innovation approach to regulating AI’ and the ‘National AI Strategy’ published in September 2021.

The Secretary of State for Science, Innovation and Technology, Michelle Donelan, set out in the White Paper that the UK will not regulate AI through standalone legislation and that no new AI regulator will be formed. Instead, the use of AI in the UK will be guided by five principles (the “Principles”), to be followed and implemented by existing regulators.

The five Principles of AI regulation set out in the White Paper are:

1. Safety, Security and Robustness

AI should be managed in a way that is safe, secure and robust, so that risks are carefully controlled. The capacity for AI to function autonomously is a particular concern, as it may affect safety and security. The White Paper sets out that AI systems should be technically secure and function reliably, as intended. Further, system developers should be conscious of the specific security threats that may apply across the different stages of the AI lifecycle. For regulators, this Principle encompasses providing guidance on good cybersecurity and privacy practices, referring to a risk management framework to be applied by AI lifecycle actors, and considering the role of technical standards already available that address AI safety, security and robustness.

2. Appropriate Transparency and Explainability

Those developing and using AI should be able to communicate effectively when and how it is used. As mentioned, transparency plays a role in the UK's agenda to increase public trust and drive AI adoption. In addition, a system's decision-making process should be explained ‘at an appropriate level of detail’ - in more detail for higher-risk AI systems, and in less detail for lower-risk ones. Where AI systems are not adequately explainable, risks follow, such as unintentionally breaking laws, causing harm or compromising security. Regulators must ensure that AI lifecycle actors provide information relating to, among other things, the nature and purpose of the AI being used, the data being used, and the logic and processes applied. This information may be provided either proactively or retrospectively. Furthermore, explainability requirements must be set so as to balance regulatory enforcement against technical trade-offs.

3. Fairness

The use of AI should comply with existing laws and must not create unfair commercial outcomes or discriminate in any way. To ensure their approach remains proportionate and context-specific, the White Paper states that regulators should be able to describe and illustrate fairness within the context of their particular industry, consulting other regulators where necessary. Relevant laws expected to be taken into consideration include: the Equality Act 2010, the Human Rights Act 1998, the UK General Data Protection Regulation (UK GDPR), the Data Protection Act 2018, consumer and competition law, and sector-specific fairness requirements such as the Financial Conduct Authority (FCA) Handbook.

4. Accountability and Governance

There must be appropriate oversight of, and clear accountability for, AI. This again reflects the potential for AI autonomy, which makes establishing accountability and ownership all the more important. Given the complexity of AI supply chains, this can be difficult for actors in the AI lifecycle. It is for regulators to determine who is accountable and in which circumstances that accountability arises, and they may be required to provide guidance on how to demonstrate accountability in the short term. In the medium to long term, the White Paper suggests that the Government may issue additional guidance in this area. Regulators must also provide guidance on governance mechanisms, including risk management activities and governance processes.

5. Contestability and Redress

Decisions and outcomes resulting from AI must have effective routes to adjudication. Potential risks introduced by AI systems include the reproduction of biases and safety concerns. Importantly, the current framework will not create new rights or routes to redress. Regulators will be responsible for creating or updating guidance on adjudication and redress mechanisms, clarifying “existing ‘formal’ routes of redress”.

Whilst the White Paper is recognised as an initial step in the right direction, the Report acknowledges that this approach risks the UK falling behind the pace of AI development. Other jurisdictions, such as the EU and the US, are making considerable legislative progress towards regulating AI. The EU has proposed a new EU AI Act and has taken a risk-based approach, in contrast to the UK's proposed context-specific, principles-based approach. The US is also making progress in regulating AI, with the White House Office of Science and Technology Policy publishing a non-binding ‘Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People’ in October 2022, offering “… a set of five principles and associated practices to help guide the design, use and deployment of automated systems to protect the rights of the American public”. If the UK does not act quickly, it risks failing to meet its aspiration of becoming an AI governance leader.

Next steps and recommendations proposed by the Report

The Government's response to the Report is due by 31 October 2023.

According to the Report, the twelve challenges should form the basis for discussion at the summit on AI safety, due to take place on 1-2 November 2023. The summit will bring together AI companies, experts and international governments, and so will be key in advancing a shared international understanding of both the challenges and the opportunities of AI.

The Government has not yet confirmed whether specific legislation relating to AI will be addressed in the forthcoming King's Speech scheduled for November. The upcoming parliamentary session represents the final opportunity before the General Election for the UK to establish regulation of AI governance. It will be interesting to see whether any form of AI Bill is introduced in November.

The authors would like to thank Darcey Cunningham, Solicitor Apprentice, for her help in writing this article.