A Roadmap to a Resilient AI Ecosystem: Policy Considerations for the UK

According to a recent briefing paper published by the Centre for Emerging Technology and Security (CETaS) and the Centre for Long-Term Resilience (CLTR), the UK is inadequately resilient to the risks posed by AI. CLTR is an independent think tank with a focus on resilience, while CETaS is a research centre based at the UK’s national institute for AI and data science (the Alan Turing Institute).

Calling for decisive action by the UK government, the briefing paper (‘Strengthening Resilience to AI Risk: A guide for UK policymakers’) sets out a suggested framework for identifying and understanding the main sources of risk and their potential impacts at three key stages of the lifecycle of AI systems:

  1. design, training and testing
  2. immediate deployment and usage
  3. longer-term deployment and diffusion.

While some risks associated with AI may be specific to a particular sector or context, the briefing paper suggests that policy interventions are still needed to address risks arising from the increasing use of general-purpose systems. The briefing paper proposes the following three main goals for policy interventions and maps these to the three key AI lifecycle stages (on the basis that policy interventions will have the most effect if the intervention occurs at the point at which the risk first arises):

  1. creating visibility and understanding
  2. promoting best practices
  3. establishing incentives and enforcing regulation.

In addition to proposing domestic policy options, the briefing paper summarises challenges to be addressed by, and success criteria for, global AI policy and discusses the opportunity for the UK to take a leading role in setting the global agenda.

In this article, we summarise the risks, domestic policy options and global policy considerations described in the briefing paper (see here).

A. AI risk pathways

The briefing paper gives an overview of the main sources of risk at each key stage of the AI lifecycle, acknowledging that some risks will cut across these key stages.

1. Design, training and testing

According to the briefing paper, there are four main sources of risk at this stage:

  1. Where personal data has been included in the dataset on which an AI model is trained, this may give rise to data privacy concerns for individuals who have not been given the opportunity to object expressly to such use of their data.
  2. Security vulnerabilities introduced during the design or development of an AI model, or present in the resulting model, may increase the risk of cyberattacks, misuse or accidents involving AI systems as a result of unauthorised access.
  3. Environmental risks are exacerbated by the energy consumed by the considerable computing capacity and infrastructure needed to train and retrain an AI model throughout the development process.
  4. The briefing paper warns that general-purpose AI models may have dangerous capabilities that pose a risk to society and that could, if not addressed at the pre-deployment stage, lead to a range of harms which would be more difficult to address once deployed. In addition, if the way in which an AI system achieves the given goal cannot be predicted or controlled (the so-called alignment (or misalignment) problem), there is a risk that the system’s use of its powerful (and potentially dangerous) capabilities will result in unintended outcomes.

2. Immediate deployment and usage

The briefing paper lists the following as the three main sources of risk at this stage:

  1. Accidents could result from the irresponsible deployment of AI systems, including in safety-critical sectors. This could be due to systems malfunctioning in response to unanticipated inputs (failure of robustness), systems pursuing slightly different goals from those intended (failure of specification) or systems that cannot be adequately monitored or controlled once deployed (failure of assurance).
  2. Before deploying an AI system, the designers need to identify the potential for malicious uses of that system, which may result in digital, political or physical security risks.
  3. An AI system developed in a commercial context is likely to have a higher tolerance for error than one developed for high-stakes public sector decision-making, and the cumulative impact of multiple errors over time is difficult to quantify.

3. Longer-term deployment and diffusion

The following are identified as key possible structural impacts at this stage:

  1. Impacts on the economy and employment, with policy interventions needed to mitigate or avoid the risk of job losses for workers displaced by AI systems and the accelerated accumulation of wealth by those who own or control AI systems.
  2. Bias in the dataset on which an AI system is trained can have a discriminatory impact, resulting in the system making biased decisions and perpetuating existing inequities. Deploying an AI model in relation to a group that is over- or underrepresented in the training data can also result in bias.
  3. In order to respond to risks and crises properly, societies need to protect against the erosion of epistemic security and freedom of thought. In other words, societies need to be able to take collective action based on reliable and trustworthy information and to identify unsupported or untrue claims.
  4. The potential for strategic advantage to be conferred on those who ‘win’ the race in developing advanced forms of AI creates a powerful incentive for international competition, which may lead to geopolitical instability. The briefing paper warns that such “arms-racing dynamics” are at odds with AI safety, as actors may be incentivised to take larger risks to stay ahead of rivals or achieve higher payoffs.
  5. The concentration of wealth and power in the hands of those who own or control the most successful AI systems, contributing to an imbalance of power when governments seek to implement regulation.

B. Domestic policy interventions

If policymakers can improve visibility and understanding in relation to the capabilities, risks and impacts of AI systems, they will be in a better position to identify and promote best practices for the safe development and deployment of AI systems. In turn, this should mean that policymakers have a clearer basis for introducing incentives and accountability mechanisms in order to ensure that these best practices are adopted.

In order to achieve these three main goals for policy interventions, the briefing paper proposes a number of policy options for each of the three key AI lifecycle stages.

1. Creating visibility and understanding

The briefing paper suggests the following policy options for establishing better visibility and understanding in relation to AI systems:

  1. Design, training and testing
    1. Creating a model reporting and information sharing regime between AI developers and regulators, to enable oversight of an AI model’s utility, risks and trustworthiness. This is one of the key recommendations in the briefing paper.
    2. Establishing a third-party auditing ecosystem to assess risks associated with an AI model.
    3. Adopting a consistent approach to direct engagement between the UK government and industry bodies, ensuring that the national security community is appropriately represented.
  2. Deployment and usage
    1. Developing a systematic approach to incident sharing, to improve collective understanding of patterns in harms caused by AI. This is one of the key recommendations in the briefing paper.
    2. Supporting industry programmes that offer AI bounties to incentivise the identification and responsible disclosure of risks in AI systems.
  3. Longer-term deployment and diffusion
    1. Introducing tools to measure key developments within the AI ecosystem (one of the key recommendations in the briefing paper), including the following:
      1. measuring job displacement from AI systems, as well as the potential erosion of the quality of work
      2. evaluating the pace of global AI innovation and its impacts
      3. tracking and understanding public perceptions of AI and, in particular, incorporating the views of those most harmed by the deployment of AI.

2. Promoting best practices

The following policy options are proposed in the briefing paper to promote best practices for the safe development and deployment of AI systems:

  1. Design, training and testing
    1. Developing organisational governance and developer risk management guidelines, to address legal compliance, oversight and systems testing.
    2. Issuing model design standards which cover technical, ethical and professional requirements and make clear the tests and evaluations required as well as the results needed to demonstrate that an AI model meets a safety threshold.
    3. Promoting the adoption of privacy-preserving model training techniques and audits, to address data protection concerns in relation to the model training process. This is one of the key recommendations in the briefing paper.
  2. Deployment and usage
    1. Developing pre-deployment checklists and post-deployment monitoring requirements for AI systems, especially where there is a higher risk of misuse or accidents or where systems may behave unpredictably. This is one of the key recommendations in the briefing paper.
    2. Use of pre-deployment demonstrations and deliberative processes to obtain input from a broad range of perspectives, identify societal concerns and design mitigations and better governance mechanisms.
  3. Longer-term deployment and diffusion
    1. Funding pilot projects to demonstrate proof of concept in relation to a coordinated approach to watermarking of AI-generated content (in particular, visual content) and AI-enabled authorship detection, to protect the public’s ability to create, share, obtain and access reliable information. This is one of the key recommendations in the briefing paper.
    2. Exploring anti-trust exemptions for safety cooperation, to encourage safety-motivated collaboration between AI developers.
    3. Developing public sector skills to use AI tools in the efficient delivery of public services and to recognise and address AI impacts.

3. Establishing incentives and enforcing regulation

In order to establish incentives and accountability mechanisms for improving adoption of the best practices, the following policy options are suggested:

  1. Design, training and testing
    1. Establishing a robust AI assurance ecosystem to allow developers to demonstrate adherence to best practices and indicate how trustworthy their system is. This is one of the key recommendations in the briefing paper.
    2. Recalibrating public R&D funding allocation to reflect the way in which AI is integrated into the UK economy, including in relation to research focussed on reducing bias, protecting privacy and improving safety.
    3. Exploring how imposing registration requirements on developers or establishing a licensing regime for developers could be used to incentivise or hold accountable developers in relation to mitigating the risks of AI systems. This is one of the key recommendations in the briefing paper.
  2. Deployment and usage
    1. Articulating ‘red lines’ to make clear where humans need to retain control of functions (such as power supply and nuclear deterrents in the context of critical infrastructure) or where irreversible actions should not be taken by AI systems without direct human oversight or authorisation. This is one of the key recommendations in the briefing paper.
    2. Use of export controls to limit who can buy AI software developed in the UK (and advanced chips or other inputs for developing AI systems).
    3. Exploring how legal liability can be used to incentivise or hold accountable developers in relation to mitigating the risks of AI systems. This is one of the key recommendations in the briefing paper.
  3. Longer-term deployment and diffusion
    1. Use of investment screening to limit foreign influence over the direction of AI development in specific contexts (such as in a security context).
    2. Investing in public compute resources to reduce the costs associated with training, testing and evaluating AI models and, in turn, reduce the barriers to entry for smaller AI developers.
    3. Use of redistributive economic policies to impose ‘windfall clauses’ on companies with the greatest AI market share, if advanced forms of AI are expected to result in widespread job losses.

C. Global policy challenges

Having proposed policy options at a domestic level, the briefing paper describes the following challenges to be solved by global AI policy:

  1. The country in which an AI system is deployed is often different from the country in which that system was developed; the deploying country will therefore have had no insight into, or input to, the development of that system.
  2. Given the general-purpose nature of AI technology, there is a vast range of potential applications for AI, which creates challenges for collecting, analysing and reporting information about AI at a domestic level, and is complicated further at the global level by different governments having different approaches and systems.
  3. The country of deployment will have a limited ability to introduce or shape context-specific incentives and, in turn, a limited ability to prevent negative externalities (other than through withdrawal from the AI ecosystem).
  4. Misuse of AI and the harms of AI systems can cross into other jurisdictions, meaning that domestic policy can only take an individual country so far.
  5. Given the different approaches by governments at a domestic level, friction between global initiatives and domestic exceptions is inevitable, especially in relation to national security.
  6. Competition at a global level may fuel a ‘race to the bottom’ as a result of countries taking risks in order to obtain a first-mover advantage and, having obtained this advantage, extracting concessions from governments in return for investment.

The briefing paper suggests that, in order to increase the likelihood of its success, global AI policy needs to be:

  1. inclusive, by engaging with a broad range of countries,
  2. justice-seeking, by ensuring that important AI discussions include governments whose populations are at risk of being disproportionately affected by decisions made at the global level,
  3. interdisciplinary, meaning that policy should be guided by a wide variety of disciplines rather than commercial interests being given priority,
  4. information-democratising, meaning that there is a need for more transparent communication about capabilities, and
  5. adaptable, meaning that there needs to be a range of tools to shape behaviours, given the rapidly changing landscape.

Following the UK’s exit from the EU and given the UK’s ability to define its AI strategy separately from any other country or region, the briefing paper suggests that the UK may be well-positioned to build bridges between countries that would otherwise be at odds. In order for the UK to assume a leading role in international AI discussions, the briefing paper recommends that the UK pilots policy options across the different AI lifecycle stages at a domestic level and evaluates their viability for implementation by multiple countries, with the same three main goals applying to policy options at the global level.

However, the briefing paper reiterates the need for the UK to act decisively on the challenges posed by AI, warning that any further delay by the UK government will jeopardise the UK’s chances of assuming a global leadership role, as well as making it harder to prevent harms resulting from AI risks. This sentiment is echoed in a recent report published by the Commons Technology Committee (see here), which warns that, unless AI legislation is introduced in November 2023, the UK risks being left behind by other legislation (such as the EU’s AI Act), which may become the de facto standard.

The authors would like to thank Louise Marchington, Associate at CMS, for her assistance in writing this article.