Regulating AI systems (Part 2): The significance of the Draft AI Liability Directive within employment law

Germany

Following the presentation of the AI Act in part 1 of this blog series, part 2 deals with the Draft AI Liability Directive and explains what it will mean for employers.

The growing willingness of companies to use AI to streamline and optimise work processes is often thwarted by concerns about the incalculable liability risks involved. To reduce this legal uncertainty, particularly in cross-border contexts, the European Commission published its Draft AI Liability Directive on 28 September 2022 (Draft AI Liability Directive), setting out uniform requirements for certain aspects of non-contractual civil liability for damage caused by the use of AI systems.

The Directive is also intended to counter what is sometimes called the "black box" effect: due to the technical and algorithmic complexity, autonomy and opacity of AI-supported processes, a party injured by means of AI may be able neither to identify the correct opposing party nor to prove a causal unlawful act or omission, and is therefore unable to successfully enforce a claim for compensation.

To achieve this purpose, the Directive revolves around two core provisions: Article 3 Draft AI Liability Directive deals with the disclosure of evidence, while Article 4 Draft AI Liability Directive establishes a rebuttable presumption of a causal link.

The Draft AI Liability Directive is intended in particular to facilitate the assertion of tortious claims

The Draft AI Liability Directive is intended to apply directly only to non-contractual fault-based claims under civil law for compensation for damage caused by AI systems. In particular, this includes the tortious liability regime of sections 823 ff. German Civil Code (BGB), with section 823 (1) BGB as the key liability provision. One example would be an operator of parcel-delivery drones who fails to comply with the instructions for use, resulting in personal injury.

Since the Court of Justice of the European Union (CJEU) tends to classify EU legislation as "statutes that are intended to protect another person" (Schutzgesetze) pursuant to section 823 (2) German Civil Code (BGB), it is reasonable to assume that liability may also arise from a breach of the obligations under the Regulation laying down harmonised rules on artificial intelligence (AI Act); for example, if a provider does not comply with the requirements for AI-supported recruitment services.

However, the Directive does not apply to contractual liability or matters of criminal law. When AI is used in human resources, therefore, claims for compensation under section 15 (1) German General Act on Equal Treatment (AGG), which the German Federal Labour Court (BAG) categorises as contractual claims, are not covered by the Directive. Nevertheless, as the Directive stands at present, a claimant could potentially benefit indirectly from evidence obtained by simultaneously bringing a non-contractual claim in tort.

For its definitions, the Directive refers to the AI Act (see Part 1), which has already advanced further through the legislative process. The two instruments form a conceptually interrelated set of regulations and represent two sides of the same coin: while the AI Act imposes precautionary obligations on providers and operators to reduce risk, the Draft AI Liability Directive governs the downstream liability issues.

Disclosure of evidence

The right to information described in Article 3 Draft AI Liability Directive serves to identify the correct opposing party for the injured party's claim. To this end, the injured party must first make every reasonable effort to obtain the relevant evidence about the system from the defendant. If this fails, for example because the defendant does not have access to the evidence, the court may, at the claimant's request, order the provider or operator to disclose the relevant evidence about a specific high-risk AI system that is suspected of having caused damage. The injured party must therefore present facts sufficient to support the plausibility of a claim for damage caused by a high-risk AI system.

Since employers using high-risk AI systems will often be classified as operators, their disclosure obligations will most likely relate to the obligations arising from Article 29 Draft AI Act, which include proper use in accordance with the instructions for use and ensuring appropriate input data. The records that must be kept when using high-risk AI systems are particularly suitable as evidence here.

To ensure that the disclosure of evidence does not make sensitive data and business information public, Article 3 Draft AI Liability Directive restricts the content of a request to the necessary minimum, determined by weighing the parties' interests in a necessity and proportionality test. The objective of this provision is to prevent blanket requests. Confidentiality on the one hand and effective enforcement of claims on the other are thus in tension. How this balancing exercise plays out in practice, and whether effective protection can be maintained, will depend on national implementation and the standards applied by the courts.

Article 3 (5) Draft AI Liability Directive establishes a rebuttable presumption of non-compliance with a duty of care where the defendant fails to comply with an order to disclose or preserve evidence.

Rebuttable presumption of causality

Because it is difficult in practice to prove a causal link between non-compliance with a duty of care (in particular under the AI Act) and the output produced by the AI system, or the AI system's failure to produce an output, Article 4 (1) Draft AI Liability Directive provides for a rebuttable presumption of causality to this effect.

While Article 3 Draft AI Liability Directive only relates to high-risk AI systems (which generative AI systems such as ChatGPT, Bard and Bing usually are not), Article 4 (1) Draft AI Liability Directive is intended to cover all AI systems in principle. However, for AI systems that are not high-risk AI systems, the presumption of causality only applies if it is excessively difficult for the claimant to prove the causal link (Article 4 (5) Draft AI Liability Directive).

Looking at this in the context of German tort law, the presumed causality within the meaning of the Directive does not directly correspond to the requirements of sections 823 ff. German Civil Code (BGB), as it concerns neither the causal link between the breach of duty and the violation of the injured party's protected right (haftungsbegründende Kausalität) nor the causal link between that violation and the resulting damage (haftungsausfüllende Kausalität). Instead, a further level of causality is added: the causal link between the breach of the duty of care by the provider or operator and the AI output. Injured parties benefit from this partial presumption of causality if they prove both non-compliance with a duty of care and causality between the AI output and the damage. By way of exception, a breach of the duty of care may itself be presumed pursuant to Article 3 (5) Draft AI Liability Directive.

According to recital 22, the direct purpose of the duty of care must be to prevent the damage that has occurred. Employers therefore do not have to fear liability triggered by the presumption of causality every time they breach any duty of care – for example, if they forget to submit the necessary documents to the competent authorities.

Measures employers should take right away

The obligations arising for companies and employers from the AI Act are relevant in the context of the Draft AI Liability Directive: records and documentation that must be created under the AI Act may have to be disclosed in the event of a claim, and their absence would work in the claimant's favour. Companies working with high-risk AI systems in particular are therefore advised to establish the necessary infrastructure and organisational arrangements now and to familiarise themselves with the EU-level rules. It will, however, be some time before the Draft AI Liability Directive comes into force. In view of the fast pace of technological development in AI and the ongoing debate about the right level of regulation, significant changes before then cannot be ruled out.