Regulating high-risk AI: EU Parliament calls for a new civil liability regime


On 19 October 2020, the European Parliament (“EP”) adopted proposals on the regulation of Artificial Intelligence (“AI”) prepared by the EP’s Committee on Legal Affairs (“JURI Committee”). The proposals address ethics, liability and intellectual property rights in the context of AI and aim to improve innovation, ethical standards and trust in technology. In this article, we discuss the report on a civil liability regime for AI (the “Report”) and the accompanying proposed regulation (the “Proposed Regulation”).

Summary of the proposed changes

According to the Report, operators of “high-risk” AI-systems would be subject to strict liability for any harm to life or health, damage to property or harm resulting in economic loss. In practice, strict liability means that operators of high-risk AI-systems will be liable for any harm caused by an autonomous activity, device or process driven by their AI-system, even if they did not act negligently. The Report considers that where there is more than one operator, all operators should be jointly and severally liable, with a proportionate right of recourse against each other.

Is the White Paper relevant?

The Report follows the White Paper on AI and the accompanying report on the safety and liability framework, both published by the European Commission (“Commission”) in February 2020, as well as the subsequent public consultation. The White Paper proposed to regulate “high-risk” AI applications, namely those that involve significant risks both in the sector in which they are deployed and in their intended use – particularly from a safety, consumer rights and fundamental rights perspective. Amongst other matters, the Commission: (i) noted the limitations of the scope of existing EU legislation; (ii) sought views on whether strict liability may need to be applied in order to compensate for damage caused by “high-risk” AI applications; and (iii) suggested a possible obligation to take out insurance. As discussed in further detail below, the main focus of the Proposed Regulation is on AI-systems that operate in public spaces, so the definition of “high-risk” AI-systems appears to be narrower than the definition the Commission presented for public consultation.

Claims under the Product Liability Directive

In the Report, the JURI Committee suggests that the Product Liability Directive (“PLD”) has for over 30 years proven to be an effective means of securing compensation for harm caused by a defective product, but should nevertheless be revised to adapt it to the digital world and to address the challenges posed by emerging digital technologies. In particular, the Report suggests that the Commission consider certain changes to the PLD, including transforming it into a regulation, clarifying the definition of ‘products’ by determining whether digital content and digital services fall within its scope, and adapting concepts such as ‘damage’, ‘defect’ and ‘producer’ (the last to incorporate manufacturers, developers, programmers, service providers and backend operators). The Report notes that the PLD should continue to be used for civil liability claims against the producer of a defective AI-system where the AI-system qualifies as a product under that Directive. The Proposed Regulation then deals with the civil liability of operators of AI-systems.

Who is an operator?

“Operator” is defined under the Proposed Regulation as “both the frontend and the backend operator as long as the latter’s liability is not already covered by the Product Liability Directive”. The frontend operator is defined as “any natural or legal person who exercises a degree of control over a risk connected with the operation and functioning of the AI-system”. The backend operator is defined as “the natural or legal person who, on a continuous basis, defines the features of the technology, provides data and essential backend support service and therefore also exercises a degree of control over the risk connected with the operation and functioning of the AI-system”. Comparable to the owner of a car or a pet, the operator is able to exercise a certain level of control over the risk that the item poses. Exercising control should be understood as any action of the operator that affects the manner of the operation from start to finish by determining the input, output or results, or that changes specific functions or processes within the AI-system. Where the frontend operator is also the producer of the AI-system, the Proposed Regulation will prevail over the PLD; however, if the backend operator also qualifies as a producer under the PLD, the PLD will take precedence.

What is a “high-risk” AI-system?

Under the Proposed Regulation, “high-risk” means “a significant potential in an autonomously operating AI-system to cause harm or damage to one or more persons in a manner that is random and goes beyond what can reasonably be expected; the significance of the potential depends on the interplay between the severity of possible harm or damage, the degree of autonomy of decision-making, the likelihood that the risk materialises and the manner and the context in which the AI-system is being used”.

The Report suggests that when determining whether an AI-system is high-risk, the sector in which significant risks can be expected to arise and the nature of the activities undertaken must also be taken into account.

The Report suggests that high-risk AI-systems be listed exhaustively in a separate Annex, and that this Annex be reviewed by the Commission every six months and, if necessary, amended by delegated acts. An AI-system that has not yet been assessed by the Commission and included in the list set out in the Annex should nevertheless, by way of exception, be subject to strict liability if it has caused repeated incidents resulting in serious harm or damage. In initial drafts of the Report, the JURI Committee listed specific AI-systems that would be considered high-risk, including unmanned aircraft, specific categories of autonomous vehicles, autonomous traffic management systems, autonomous robots and autonomous public-places cleaning devices. The final version of the Report does not contain this list.
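Purely by way of illustration, the multi-factor test in the definition above can be read as a simple decision procedure. The Python sketch below is our own hypothetical model, not anything contained in the Proposed Regulation: the factor scales, weights and threshold are invented assumptions, chosen only to make concrete the idea that classification turns on the interplay of severity, autonomy, likelihood and context rather than on any single factor.

```python
from dataclasses import dataclass

@dataclass
class RiskProfile:
    """Factors drawn from the Proposed Regulation's definition of "high-risk".

    The 0.0-1.0 scales, the weights and the threshold below are illustrative
    assumptions only; they do not appear in the Proposed Regulation.
    """
    severity_of_harm: float    # how serious the possible harm or damage is
    degree_of_autonomy: float  # how autonomously the system takes decisions
    likelihood_of_harm: float  # how likely the risk is to materialise
    context_exposure: float    # manner and context of use (e.g. public spaces)

HIGH_RISK_THRESHOLD = 0.5  # assumed cut-off, for illustration only

def is_high_risk(profile: RiskProfile) -> bool:
    """Toy reading of the "interplay" of factors as a weighted average."""
    score = (0.4 * profile.severity_of_harm
             + 0.2 * profile.degree_of_autonomy
             + 0.2 * profile.likelihood_of_harm
             + 0.2 * profile.context_exposure)
    return score >= HIGH_RISK_THRESHOLD

# Example: an autonomous cleaning device operating in a busy public square.
device = RiskProfile(severity_of_harm=0.6, degree_of_autonomy=0.9,
                     likelihood_of_harm=0.4, context_exposure=0.8)
print(is_high_risk(device))  # True under these assumed weights
```

In reality, of course, classification would be effected by the Commission through the Annex and delegated acts rather than by any formula; the sketch simply shows the combined weighing of factors that the definition contemplates.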

What types of harm are covered?

The Report addresses harm to life and health, damage to property and harm that results in economic loss. The JURI Committee has urged the Commission to re-evaluate and align the thresholds for damages in Union law, whilst also analysing in depth the legal position in all Member States on granting compensation for economic loss, in order to assess whether its inclusion in AI-specific legislative acts is necessary and whether it would contradict the existing legal framework. The Report also notes that although AI-systems can cause harm to other rights, such rights should be addressed by existing laws (such as anti-discrimination law or consumer protection law). For the same reason, the use of biometric data or of face recognition techniques by AI-systems has not been addressed in the Report, as their unauthorised use is already covered by specific laws, such as the General Data Protection Regulation (“GDPR”). This does not align with the approach in the White Paper, where the Commission suggested that the use of AI applications for the purposes of remote biometric identification and other surveillance technologies would always be considered “high-risk” and should come within the scope of the proposed legislation.

At this stage, it is not clear whether the JURI Committee contemplated that loss of data resulting from the use of an AI-system should be captured under the Proposed Regulation as damage to property.

Compensation & limitation periods

The Report proposes that there should be a maximum compensation of:

  • €2 million payable in case of death or harm to a person’s physical health or integrity resulting from the operation of a high-risk AI-system; and
  • €1 million payable in case of harm that results in economic loss or damage to property.

In addition, the Report proposes limitation periods of:

  • 30 years for claims concerning harm to life or health; and
  • 10 years in cases of property damage or harm that results in economic loss.

These are very lengthy limitation periods, greatly exceeding those provided for under the PLD.
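For illustration, the proposed caps and limitation periods can be captured in a small lookup and applied mechanically. The Python sketch below is our own: the structure, function names and naive date arithmetic are assumptions, while the figures are those quoted above (any suspension or interruption rules a final text might contain are ignored).

```python
from datetime import date

# Figures as proposed in the Report, per type of harm.
CAPS_EUR = {
    "life_or_health": 2_000_000,        # death, or harm to physical health/integrity
    "property_or_economic": 1_000_000,  # property damage or economic loss
}
LIMITATION_YEARS = {
    "life_or_health": 30,
    "property_or_economic": 10,
}

def capped_compensation(harm_type: str, assessed_loss_eur: float) -> float:
    """Compensation payable, capped at the proposed maximum for that harm type."""
    return min(assessed_loss_eur, CAPS_EUR[harm_type])

def claim_time_barred(harm_type: str, harm_date: date, claim_date: date) -> bool:
    """Naive check against the proposed limitation period.

    Uses simple year arithmetic (leap-day edge cases and any suspension
    rules are ignored); purely illustrative.
    """
    deadline = harm_date.replace(year=harm_date.year + LIMITATION_YEARS[harm_type])
    return claim_date > deadline

# Example: a EUR 3.5m personal-injury claim would be capped at EUR 2m ...
print(capped_compensation("life_or_health", 3_500_000))  # 2000000
# ... and a property claim brought 11 years after the harm would be out of time.
print(claim_time_barred("property_or_economic", date(2021, 1, 1), date(2032, 1, 1)))  # True
```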

Strict liability preferred

According to the Proposed Regulation, the operator of a “high-risk” AI-system “shall be strictly liable for any harm or damage that was caused by a physical or virtual activity, device or process driven by that AI-system”. The Report states that the operator shall not be able to exonerate himself or herself by arguing that he or she acted with due diligence or that harm or damage was caused by an autonomous activity, device or process driven by his or her AI-system. The Proposed Regulation then goes on to state that the operator shall not be held liable if the harm or damage was caused by force majeure.

Practical implications

The current proposal would require providers of AI-systems to check whether they meet the definition of “operator” and to undertake a risk assessment of their technology. Operators would also need to consider carefully how their AI-systems are categorised, given that the proposal imposes strict liability on operators of high-risk AI-systems where damage is caused.
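Purely as an illustration of that exercise, the triage might be outlined as follows. This Python sketch is our own hypothetical reading of the precedence rules described above, not a procedure set out in the Proposed Regulation; the role flags, the residual category and the decision logic are all assumptions.

```python
from enum import Enum, auto

class Regime(Enum):
    PLD = auto()                  # claim proceeds under the Product Liability Directive
    PROPOSED_REGULATION = auto()  # strict liability under the Proposed Regulation
    OTHER = auto()                # residual regimes not discussed in this article

def applicable_regime(is_frontend_operator: bool,
                      is_backend_operator: bool,
                      qualifies_as_pld_producer: bool,
                      system_is_high_risk: bool) -> Regime:
    """Hypothetical triage reflecting the precedence rules described above."""
    # A backend operator who also qualifies as a producer: the PLD takes precedence.
    if is_backend_operator and qualifies_as_pld_producer:
        return Regime.PLD
    # Any operator of a high-risk AI-system faces strict liability; this also
    # covers the frontend operator who is the producer, where the Proposed
    # Regulation prevails over the PLD.
    if (is_frontend_operator or is_backend_operator) and system_is_high_risk:
        return Regime.PROPOSED_REGULATION
    return Regime.OTHER

# Example: a frontend operator of a high-risk system that it did not produce.
print(applicable_regime(is_frontend_operator=True, is_backend_operator=False,
                        qualifies_as_pld_producer=False, system_is_high_risk=True))
# Regime.PROPOSED_REGULATION
```

The point of the sketch is simply that a provider would need to answer each of these questions (operator status, producer status under the PLD, and the high-risk classification) before it can know which liability regime, compensation cap and limitation period applies.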

Insurance

The Report states that, as the European Union and its Member States do not require radical changes to their liability frameworks, cover for AI-systems should not move away from traditional insurance systems. Insurance can help to ensure that affected individuals receive effective compensation and can pool the risks of all insured persons. The Report considers, however, that publicly funded compensation mechanisms are not an adequate answer to the rise of AI.

Despite the lack of quality historical claims data involving AI-systems, we understand that European insurers are already developing new products area-by-area and cover-by-cover as the technology develops. If there is a need for new cover, the Report expresses confidence that the insurance market will adapt existing covers or create new products providing adequate solutions for different types of AI-systems in different sectors. The Report also suggests that the uncertainty surrounding risks should not make insurance premiums prohibitively high and thereby create an obstacle to research and innovation in this field.

Transitional Period

The draft explanatory note to the text of the Proposed Regulation suggests that, until the legislative response to the rise of AI becomes law, industry and researchers should be able to innovate according to the current rules and should benefit from what appears to be a very lengthy five-year transitional period.

Next Steps

The adopted Proposed Regulation will now be sent to the Commission, and any resulting legislative proposal will need to be confirmed by the Council before becoming an official Regulation of the EU. The Commission’s legislative proposal is expected early next year. In the meantime, the European Parliament has also set up a special committee on Artificial Intelligence in the Digital Age (AIDA). AIDA was set up to provide a “holistic approach providing a common, long-term position that highlights the European Union’s key values and objectives relating to artificial intelligence in the digital age”, and to ensure that the digital transition is human-centric and consistent with human rights. AIDA held its constitutive meeting on 3 September 2020, at which it elected its chair and vice-chairs, and held a series of working meetings in the weeks commencing 26 October and 9 November. The Committee is chaired by Romanian MEP Dragoș Tudorache. More information on the work and meetings of AIDA can be found here.