“High-risk AI”: a European approach to excellence and trust


On 19 February 2020, the European Commission (the “Commission”) published a White Paper entitled ‘On Artificial Intelligence - A European approach to excellence and trust’. Building upon the European strategy for artificial intelligence (“AI”) issued in April 2018, the paper proposes a common regulatory approach that will, in the Commission’s view, avoid fragmentation of the market, promote the EU’s innovation capacity and speed up the uptake of ethical and trustworthy AI. While the much-anticipated comprehensive treatment of surveillance technology that appeared in the “leaked” version did not make it into the final White Paper, the document nonetheless represents a significant step towards Europe-wide regulation of AI.

We expect that lawyers interested in AI will study the White Paper closely, as it is an early indication of the Commission’s thinking on this topic. However, as one would expect for a document that opens an initial public consultation, the discussion is mainly indicative. The White Paper sets out some high-level ideas and concepts, but little detail as to what a draft AI Regulation would look like.

Structure of the White Paper

The White Paper focuses on two themes:

  • “Ecosystem of Excellence”: an investment-oriented policy framework designed to align AI efforts across Europe; and
  • “Ecosystem of Trust”: a risk-based regulatory framework centred on building consumers’ and businesses’ trust in order to foster a broader uptake of AI.

The White Paper is accompanied by a document titled “A European Strategy on Data”, which sets out policy measures to enable Europe to become a dynamic, data-agile economy, and by a further Commission report discussing the implications of AI, the Internet of Things and other technologies for safety and liability legislation (the “Data Report”). Our observations on the Data Report will follow.

Policy Framework

The Commission proposes to step up action at multiple levels, including boosting investment in research and innovation, enhancing the development of skills and supporting the uptake of AI by SMEs. For example, by the end of 2020 the Commission intends to revise the Coordinated Plan, taking into account the feedback received on the White Paper. The Coordinated Plan, first presented by the Commission in December 2018 in consultation with Member States, includes 70 joint actions for closer cooperation between the Commission and Member States in areas such as research, investment and data. The Commission and the European Investment Fund also intend to launch a pilot scheme of €100 million in the first quarter of 2020 to provide equity financing for innovative developments in AI.

The Future EU Regulatory Framework

The White Paper acknowledges that adjustments could be made to existing legislation, and that there is scope for discussion on further harmonisation of national liability laws. However, new legislation specifically designed for AI may be needed to keep the EU legal framework aligned with current and anticipated technological and commercial developments in the space.

The White Paper notes the following areas of uncertainty:

  • Limitation of scope of existing EU legislation: It is not clear whether standalone software is covered by EU product safety legislation, outside of certain sectors with explicit rules. In addition, general EU safety legislation applies to products and not services, and so not to services based on AI technology.
  • Changing functionality of AI systems: Existing legislation predominantly focuses on safety risks at the time of placing the product on the market and does not consider modification of products and integration of software, including AI, during their life cycle. Thus, according to the Data Report, the autonomous behaviour of certain AI systems may require risk assessment throughout their life cycle as well as human oversight. The Data Report also notes that EU product safety legislation could provide specific requirements addressing the safety risks of faulty data at the design stage, as well as a mechanism to ensure that data quality is maintained throughout the use of AI products and systems.
  • Allocation of responsibility in the supply chain: EU legislation on product liability becomes unclear if AI is added after the product is placed on the market by a party that is not the producer. That legislation only provides for liability of producers, thus leaving national liability rules to govern liability of others in the supply chain.
  • Changes to the concept of safety: The use of AI in products and services can give rise to risks, such as cyber security risks or risks resulting from a loss of connectivity, that EU legislation does not explicitly address. The Data Report notes that, given the increasing complexity of supply chains for new technologies, provisions specifically requiring cooperation between the economic operators in the supply chain and the users could provide legal certainty.

A further complication is that some of the above-mentioned legal uncertainty makes it difficult for persons who have suffered harm to obtain compensation under current EU and national liability legislation. The Commission states that a risk-based approach is needed to narrow the scope of the regulatory framework and to ensure that any regulatory intervention in new technologies is proportionate and not excessively prescriptive. The White Paper proposes to differentiate between AI applications by determining whether an application is ‘high-risk’; only high-risk applications would fall within the new framework.

High Risk AI Applications

The White Paper defines ‘high-risk’ applications as those which involve significant risks both in the sector and in the intended use – particularly from a safety, consumer rights and fundamental rights perspective. The two-limbed test to determine if an AI application is ‘high-risk’ is set out as follows:

  • the AI application is employed in a sector where significant risks can be expected to occur given the nature of the activities typically undertaken (such as healthcare, transport, energy and parts of the public sector, e.g. migration and social security); and
  • the AI application is used in a manner where significant risks are likely to arise (for example, uses of AI that produce legal or other significant effects for the rights of an individual, or that pose a risk of injury, death or immaterial damage).

The White Paper makes it clear that the rules should be proportionate, so the above test would be cumulative, i.e. both limbs must be met for an application to be high-risk. In addition, the Commission proposes that the new regulatory framework would include an exhaustive list of the sectors affected. However, it also makes clear that the use of AI applications for remote biometric identification and other surveillance purposes would always be considered high-risk, irrespective of the sector. Further, even where an application does not qualify as high-risk, economic operators could still be awarded a ‘quality label’ for their AI applications, signifying that their AI products and services are trustworthy and creating a standardised benchmark that users can easily recognise and rely upon.
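The logic of the cumulative test and the biometric-identification override can be sketched in a few lines of code. This is purely an illustrative simplification: the sector list and function names below are hypothetical, and the Commission envisages an exhaustive sector list and a far more nuanced assessment than a boolean flag.

```python
# Illustrative sketch of the White Paper's cumulative 'high-risk' test.
# The sector list is a hypothetical simplification, not the Commission's
# (intended to be exhaustive) list.
HIGH_RISK_SECTORS = {"healthcare", "transport", "energy", "migration", "social security"}

def is_high_risk(sector: str, use_poses_significant_risk: bool,
                 remote_biometric_id: bool = False) -> bool:
    """Both limbs must be met (cumulative test), except that remote
    biometric identification is always treated as high-risk,
    irrespective of the sector."""
    if remote_biometric_id:
        return True
    return sector in HIGH_RISK_SECTORS and use_poses_significant_risk

# A healthcare use with significant risk meets both limbs.
print(is_high_risk("healthcare", True))   # True
# A healthcare use without significant risk fails the second limb.
print(is_high_risk("healthcare", False))  # False
# Remote biometric identification is high-risk in any sector.
print(is_high_risk("retail", False, remote_biometric_id=True))  # True
```

The sketch makes the proportionality point visible: an application escapes the mandatory regime by failing either limb, unless the biometric override applies.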

It should be noted that the US has already criticised the test as being unhelpfully simplistic. Speaking the day after the White Paper was issued, the US CTO, Michael Kratsios, found “room for improvement” in an approach which “clumsily attempts to bucket AI-powered technologies as either ‘high risk’ or ‘not high risk’.” According to Kratsios, the US preferred a “spectrum of sorts”.

Types of Regulatory Requirements

The Commission proposes that the mandatory regulatory requirements on high-risk AI applications might cover:

  • Data sets used to train AI systems: providing reasonable assurances that use of the AI products/services meets applicable EU safety rules, that personal data is adequately protected, and that such use does not lead to outcomes entailing prohibited discrimination.
  • Data and record-keeping: keeping accurate records of training and testing the AI system.
  • Information provision: providing adequate information about the use of high-risk AI systems in a proactive manner.
  • Robustness and accuracy of AI systems: ensuring AI systems can adequately deal with errors and are resilient against, and can mitigate, both overt attacks and subtle manipulation.
  • Human oversight: implementing reviews and validations of AI systems by humans.
  • Specific requirements for certain AI applications: for example, remote biometric identification.

Unfortunately, there are a significant number of open questions about where the mandatory requirements sit in the broader regulatory context, for example:

  • To what extent will such mandatory requirements be incorporated in the draft ISO standards currently being developed (see below)?
  • How would those mandatory requirements interact with other AI frameworks, such as the High-Level Expert Group’s (HLEG) Ethics Guidelines for Trustworthy AI?

How to Distribute and Enforce Legal Requirements

In the White Paper, the Commission suggests that each obligation should be addressed to the actor(s) best placed to address any potential risks at each stage of the life cycle (e.g. developers, deployers), without prejudging the question of which party should be liable for any damage caused.

The Commission is also seeking views on whether, and to what extent, strict liability may be needed to achieve effective compensation for possible victims of damage caused by AI applications that are ‘high-risk’. The Commission has further suggested a possible obligation to take out insurance, following the example of the Motor Insurance Directive, in order to ensure compensation irrespective of the liable person’s ability to pay and to help reduce the costs of damage.

For the operation of all other AI applications that are not high-risk (which the Commission acknowledges would constitute most AI applications), the Commission is reflecting on whether the burden of proof concerning causation and fault needs to be adapted. One of the issues flagged by last year’s EC report is the difficulty of proving causation when the potentially liable party has not logged the data relevant for assessing liability or is unwilling to share it with the victim.

In terms of compliance and enforcement, a mandatory prior conformity assessment (such as a test, inspection or certification), as well as repeated assessments over the life cycle of the AI system, could be necessary to verify and ensure compliance with the mandatory requirements for high-risk applications. This may also extend to the algorithms and data sets used in the development of the AI application. Given that conformity assessments are a familiar part of EU product legislation, i.e. a check that a product meets requirements before it is placed on the market, the Commission’s proposed approach on this issue is not surprising. It could also work well with the draft AI standards that the International Organization for Standardization (ISO) is currently developing.

Usefully, the White Paper identifies a number of nuances that need to be considered further in a conformity assessment for AI applications. These include:

  • difficulties in testing conformity with some of the mandatory requirements;
  • dealing with the situation where the AI application evolves or learns from its experiences – would a repeat assessment then be needed?
  • ensuring that training data sets, and programming and training methodologies, are tested; and
  • incorporating a remediation process where an AI application fails its conformity assessment.

The Commission also suggests that the regulatory framework should apply to all relevant economic operators providing AI-enabled products or services in the EU, regardless of whether they are established in the EU.

What was not in the White Paper?

Earlier in the year, we discussed how a leaked version of the White Paper revealed that Europe was considering a five-year ban on facial recognition. That ban no longer appears in the final version of the White Paper. The Commission simply stated that, in order to address possible societal concerns about the use of AI for uniquely identifying a natural person in public places, it will launch a broad European debate on the specific circumstances, if any, which might justify such use, and on common safeguards.

Next steps

The Commission has invited comments on the proposals set out in the White Paper via an open public consultation, which runs until 19 May 2020. The Commission is seeking views from Member States, other European institutions, the general public and any other interested parties. We look forward to seeing the outcome of the consultation process.