AI Act - Regulatory focus on high-risk AI systems

Germany

Providers, deployers, distributors, and importers of high-risk AI systems are the focus of the legislator and face extensive obligations under the AI Act.

The current draft of the Artificial Intelligence Act (AI Act) places regulatory emphasis on requirements for high-risk AI systems and obligations for their providers, deployers, distributors and importers, as well as other third parties along the AI value chain.

Obligations for providers of high-risk AI systems

If a generative AI system is classified as a high-risk AI system, its provider will be subject to extensive and complex obligations whose practical implementation will require considerable administrative effort and financial resources. A provider is a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trade mark, whether for payment or free of charge (Art. 3 No. 2).

Providers of high-risk AI systems are primarily required to ensure that the following requirements for high-risk AI systems (Art. 8 et seq.) are met throughout the system's lifecycle (Art. 16), supported by an appropriate risk-management system (Art. 9):

  • Data quality (Art. 10): For the development and deployment of generative AI systems, the requirements for the quality of the datasets used to train a high-risk AI system are particularly relevant. Data governance and data management procedures should ensure that training, validation, and testing datasets are sufficiently relevant, representative, error-free, and complete with respect to the intended purpose of the system; have appropriate statistical characteristics, including with respect to the individuals or groups of individuals on whom the high-risk AI system is intended to be used; and, where appropriate, correspond to the properties, characteristics, or elements that are typical of the particular geographic, behavioural, or functional settings or contexts in which the AI system is intended to be used. In addition, datasets must be examined for possible biases, and appropriate measures must be taken to detect, prevent, and mitigate them (a minimal sketch of such automated dataset checks follows this list). To the extent that personal data must exceptionally be processed for this purpose, Art. 10 imposes requirements for pseudonymisation, confidentiality, and data processing.

If the provider cannot meet the data quality and data governance requirements because it lacks access to the relevant data, and the deployer is the only party with such access, the deployer may be held contractually responsible for compliance with Art. 10 (Art. 10(6a)).

  • Documentation and record-keeping requirements (Art. 11, 12): With respect to the development and operation of high-risk AI systems, records and technical documentation conforming to the state of the art must be maintained that include information on the general characteristics, capabilities, and limitations of the system; the algorithms used; data; training, testing, and validation procedures; and documentation of the relevant risk-management system.
  • Transparency for users (Art. 13): Documentation and instructions for use must enable natural persons as users to interpret the system's output and use it appropriately.
  • Human oversight (Art. 14): Before placing a high-risk AI system on the market or putting it into service, the provider must define, during design and development, appropriate measures for human oversight by persons with sufficient AI competence (e.g. built-in operational restrictions, a stop button).
  • Accuracy, robustness, safety and cybersecurity (Art. 15): High-risk AI systems must achieve a level of accuracy, robustness, safety and cybersecurity corresponding to the generally accepted state of the art.
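To make the data-governance duties of Art. 10 more concrete, the following Python sketch shows the kind of automated dataset screening they point toward: a completeness check and a per-group label distribution as a crude first proxy for bias detection. The record layout, function name, and threshold are illustrative assumptions, not anything prescribed by the AI Act.

    # Illustrative sketch only: simple dataset checks in the spirit of Art. 10
    # (completeness, per-group label balance). The record layout and the 1%
    # missing-value threshold are assumptions made for this example.
    from collections import Counter

    def dataset_checks(records, group_key, label_key, max_missing=0.01):
        """Report the missing-value rate and the label distribution per group."""
        n = len(records)
        missing = sum(1 for r in records if None in r.values())
        report = {
            "missing_rate": missing / n,
            "complete_enough": missing / n <= max_missing,
        }
        # Label counts per group: a heavily skewed distribution can flag potential bias
        by_group = {}
        for r in records:
            by_group.setdefault(r[group_key], Counter())[r[label_key]] += 1
        report["labels_by_group"] = {g: dict(c) for g, c in by_group.items()}
        return report

    sample = [
        {"group": "A", "label": 1, "feature": 0.3},
        {"group": "A", "label": 0, "feature": 0.7},
        {"group": "B", "label": 1, "feature": None},  # incomplete record
    ]
    print(dataset_checks(sample, "group", "label"))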

Other duties of a provider of high-risk AI systems include, in particular:

  • Indicating their registered trade name or registered trade mark and contact address on the high-risk AI system itself or on the packaging or in the enclosed documentation (Art. 16(1a));
  • Establishment of a quality management system (Art. 17);
  • Carrying out and ensuring the prescribed conformity assessments (Art. 16(1e), Art. 43);
  • Retention of log data (Art. 20);
  • Corrective action and provision of relevant information to distributors, importers, competent authorities and, where possible, users (Art. 21);
  • Information and cooperation obligations (Art. 22, 23);
  • Drawing up and keeping ready an EU declaration of conformity (Art. 48);
  • Labelling obligations (CE conformity marking, Art. 49);
  • Retention requirements (Art. 20, Art. 50);
  • Registration requirement (Art. 51);
  • Establishing a robust post-market monitoring system and reporting requirements (Art. 61, 62).

For violations of the data and data governance (Art. 10) and transparency (Art. 13) requirements, a provider faces fines of up to EUR 20 million or 4% of the company's total annual worldwide turnover in the previous financial year; for other violations of the requirements and obligations, fines of up to EUR 10 million or 2% of that turnover.
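The arithmetic behind these turnover-based caps is shown in the short Python sketch below. It assumes, as under the draft's penalty provisions, that the higher of the fixed amount and the turnover-based amount marks the ceiling; the function and the example figures are purely illustrative.

    # Illustrative only: ceiling of a fine capped at a fixed amount or a
    # percentage of total annual worldwide turnover, assuming the higher
    # of the two amounts applies (as under the draft's penalty provisions).
    def max_fine(turnover_eur: float, fixed_cap_eur: float, pct_cap: float) -> float:
        return max(fixed_cap_eur, turnover_eur * pct_cap)

    # Example: EUR 2 billion turnover, Art. 10/13 tier (EUR 20 million or 4%)
    print(max_fine(2_000_000_000, 20_000_000, 0.04))  # 80000000.0, i.e. EUR 80 million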

Providers established outside the Union must appoint in writing an authorised representative established in the European Union (EU) before making their AI systems available in the Union and must provide the authorised representative with sufficient powers and resources to fulfil the obligations under the AI Act, in particular the provision of information to the competent authorities (Art. 25).

A product manufacturer is also subject to the provider obligations if it places a high-risk AI system on the market or puts it into service together with its product and under its own name (Art. 24).

Obligations for deployers, distributors, and importers of high-risk AI systems

In addition to the obligations for providers, the AI Act sets out further obligations for other third parties along an AI value chain.

Importers, i.e. natural or legal persons resident or established in the Union that place on the EU market or put into service an AI system bearing the name or trade mark of a natural or legal person established outside the EU (Art. 3 No. 6, Art. 26), must comply with the following:

  • ensure that high-risk AI systems comply with the provisions of the AI Act, in particular that the relevant conformity assessment procedure has been carried out on the part of the provider of the AI system, that the technical documentation has been drawn up, that the system bears the required conformity marking and is accompanied by the required documentation and instructions for use, and, if applicable, that an authorised representative has been appointed; in the event of a suspected breach of the requirements of the AI Act, the importer must not place the high-risk AI system on the market and must inform the competent authority if necessary;
  • indicate their registered trade name or registered trade mark and contact address on the high-risk AI system itself or on the packaging or in the enclosed documentation;
  • ensure that storage or transport conditions do not compromise the system's conformity with the requirements for high-risk AI systems;
  • cooperate with the competent authorities.

Distributors, i.e. natural or legal persons in the supply chain, other than the provider or the importer, that make an AI system available on the Union market (Art. 3 No. 7, Art. 27):

  • must verify that high-risk AI systems bear the required CE conformity marking, are accompanied by the required documentation and instructions for use, and that the provider or the importer, as applicable, has complied with its obligations under the AI Act;
  • must not place them on the market in the event of a suspected breach of the requirements for high-risk AI systems and, where appropriate, must inform providers or importers or the competent authority of any risk;
  • must, for AI systems already placed on the market for which they suspect a breach of the requirements for high-risk AI systems, take necessary corrective action and, where appropriate, inform providers or importers or the competent authority of a risk;
  • must cooperate with the competent authorities.

Deployers, i.e. natural or legal persons, public authorities, institutions or other bodies using an AI system under their authority, except where the AI system is used in the course of a personal, non-professional activity (Art. 3 No. 4, Art. 29), must, in particular:

  • use AI systems, especially high-risk AI systems, according to the instructions for use;
  • monitor the operation of those systems;
  • comply with record-keeping requirements, if applicable;
  • depending on their control over the high-risk AI system, implement human oversight, ensure that natural persons entrusted with it are competent, sufficiently qualified and resourced, and ensure that relevant and appropriate resilience and cybersecurity measures are regularly reviewed and adjusted or updated;
  • agree with employee representatives and inform affected employees prior to commissioning or use in the workplace;
  • irrespective of the transparency obligations (Art. 52), inform natural persons that a high-risk AI system is used to make, or assist in making, decisions concerning them;
  • cooperate with the competent authorities;
  • conduct an assessment of the impact of the high-risk AI system on fundamental rights (Art. 29a).

Conclusion

The requirements and obligations related to the use and development of high-risk AI systems are complex and extensive. In addition to the requirements for high-risk AI systems and the obligations for providers, distributors, importers and deployers, the AI Act, in the current version of the Council's negotiating position, now regulates further obligations and responsibilities along the value chain of high-risk AI systems and sets out requirements for foundation models and obligations for their providers.