The current version of the AI Act has been amended by parliament to include further regulation of duties and responsibilities along the AI value chain and to specify requirements for foundation models and obligations for their providers.
The Commission's proposal for an Artificial Intelligence Act (AI Act) already laid out, in addition to obligations for providers, distributors, importers and deployers of high-risk AI systems, further obligations for other third parties in the AI value chain of such systems. The current version of the parliament's negotiating position further specifies and expands these obligations.
New, and particularly relevant for generative AI models, are the provisions on requirements for foundation models and obligations for their providers where these models form part of the AI value chain of high-risk AI systems.
Duties and responsibilities along the AI value chain
To enhance confidence in the value chain of a high-risk AI system and strengthen assurance for all actors regarding compliance with high-risk AI system requirements and relevant duties, the AI Act governs the duties and responsibilities along the AI value chain.
Provider obligations can apply to distributors, importers, deployers and other third parties
If distributors, importers, deployers or other third parties place a high-risk AI system on the market or put it into service under their own name or trademark, they are considered providers within the meaning of the AI Act and are subject to the provider obligations of Art. 16 (Art. 28).
The same applies if they modify the intended purpose of a high-risk AI system or make a substantial modification to it and it remains a high-risk AI system, or if they make a substantial modification to any other AI system or its intended purpose such that it becomes a high-risk AI system as a result.
In these cases, the original provider is no longer considered a provider within the meaning of the AI Act and is thus not responsible for the high-risk AI system in question (Art. 28 (1), (2)). The original provider must, however, supply the new provider with all documentation and records necessary to fulfill the requirements and obligations of the AI Act.
The provision also applies to providers of foundation models where the foundation model is directly integrated into a high-risk AI system.
In order to avoid liability for a "misappropriated" placing on the market or putting into service of a generative AI system, providers of generative AI systems should therefore – irrespective of a classification as a high-risk AI system – specify an intended purpose for their AI systems within the meaning of the AI Act. The intended purpose is the use for which an AI system is intended according to the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation (Art. 3 No. 12).
Requirements for contracts between providers and other third parties in the AI value chain
In an AI value chain, tools, services, components or processes are supplied by a variety of providers, which the provider then incorporates into an AI system (e.g. data collection and preprocessing, model training, model retraining, model testing and evaluation, integration into software, or other aspects of AI system development). Given the complexity of AI value chains, the AI Act is intended to ensure that all relevant information, expertise and training is made available to a provider of high-risk AI systems (Recital 60).
Providers of high-risk AI systems and third parties that supply tools, services, components or processes (e.g. APIs, platform operators) that are used in or integrated into the high-risk AI system must specify by written agreement the information, capabilities, technical access and other assistance, based on the generally acknowledged state of the art, that the third party is required to provide in order to enable the provider of the high-risk AI system to fully comply with the obligations under the AI Act (Art. 28 (2a)). The Commission is to provide non-binding model contract clauses for this purpose.
Trade secrets must be protected and may only be disclosed if necessary measures to preserve their confidentiality have been taken in advance (Art. 28 (2b)).
Against the background of the negotiating imbalance that may exist for startups and SMEs, Art. 28a – comparable to the control of general terms and conditions (GTC) between consumers and companies – contains a catalog of contractual terms that may be considered unfair in connection with the supply of tools, services, components or processes for a high-risk AI system (i.e. an unfairness test). Such unfair terms (only the individual clause, not the entire contract) are not binding if they were imposed unilaterally by taking advantage of a stronger bargaining position (i.e. on a take-it-or-leave-it basis) vis-à-vis a startup or small or medium-sized enterprise. The AI Act identifies clauses that serve as a benchmark for assessing unfairness, such as exclusions or limitations of liability for intent and gross negligence, the exclusion of remedies for non-performance of contractual obligations or liability, and unilateral rights to determine whether technical documentation and information have been provided in accordance with the contract or to interpret contractual provisions.
Obligations for providers of foundation models
The growing relevance of foundation models within an AI value chain, as well as the potential impact of a lack of control, prompted parliament to clarify the legal situation of providers of foundation models (Recital 60g). The current version therefore now defines requirements for foundation models and obligations for their providers (Recital 60e et seq., Art. 28b).
Foundation models are AI system models that are trained on broad data at scale, are designed for generality of output and can be adapted to a wide range of distinctive tasks (Art. 3 No. 1c). AI systems with or without a specific purpose – and thus also high-risk AI systems – can be implementations of a foundation model, which can potentially be reused in countless downstream AI systems.
Providers of foundation models should ensure that the foundation model meets the requirements of Art. 28b before placing it on the market or putting it into service, regardless of whether it is provided stand-alone, as part of an AI system or product, as open source, as a service or via other distribution channels.
These include:
- Evidence and documentation requirements regarding the management of risks to health, safety and fundamental rights, the environment, democracy and the rule of law before and during the development of the foundation model;
- Ensuring that only data subject to appropriate data governance measures for foundation models are processed and incorporated, and establishing measures to examine suitability and potential bias and to provide for appropriate remediation;
- Designing and developing the foundation model to achieve appropriate levels of performance, predictability, interpretability, correctability, security and cybersecurity throughout its lifecycle;
- Compliance with standards to reduce energy and resource consumption and increase energy efficiency and overall efficiency;
- Preparation of technical documentation and instructions for use and their storage for ten years from the date of placing on the market or putting into service of the foundation model;
- Establishment of a quality management system;
- Registering the foundation model in an EU database (Art. 60).
In the context of generative AI systems, additional obligations for providers of foundation models used in generative AI systems and providers implementing a foundation model in a generative AI system are also regulated (Art. 28b (4)):
- Compliance with transparency obligations (Art. 52 (1));
- Ensuring, through training and, where appropriate, design and development in accordance with the generally accepted state of the art, that reasonable precautions are taken to avoid generating content that violates EU law, in particular fundamental rights, including freedom of expression (e.g. model evaluation, red-teaming or machine learning verification and validation techniques);
- Documenting and making publicly available a detailed summary of the use of copyrighted training data.
The obligations are not intended to lead to a blanket classification of foundation models as high-risk AI systems, but to ensure that the AI Act achieves its objectives (Recital 60g). Models developed for a narrower, less general and more limited set of applications that cannot be adapted to a wide range of tasks (e.g. simple multi-purpose systems) are therefore not intended to qualify as foundation models under the AI Act.
Monitoring and certification of high-risk AI systems
Notified bodies are to be commissioned as conformity assessment bodies to monitor and certify AI systems and to verify the conformity of AI systems with the requirements of the AI Act (Art. 33). A body seeking notification must submit an application to the notifying authority to be established by the member states and undergo a notification procedure (Art. 30 ff.).
Specific requirements for the conformity assessment of a high-risk AI system are regulated in Art. 40 ff. Certificates issued by a notified body are valid for a maximum of five years and may be suspended or revoked by the notified body (Art. 44). Member states shall establish an appeal procedure against decisions of the conformity assessment body (Art. 45).
Conclusion
In the current version, the development and use of high-risk AI systems is extensively regulated by the AI Act not only through the requirements and obligations for their providers, deployers, importers and distributors, but also through the expanded responsibilities and obligations of other third parties in an AI value chain, in particular providers of foundation models.
What other obligations providers and users, in particular of generative AI, may also be subject to, what rights those affected by an AI system have, and what sanctions may be imposed in the event of violations of the AI Act, will be outlined in the final part of this blog series.