AI Act - The Regulation of generative AI


The use of generative AI systems is likely to be regulated in the future by the AI Act, which is expected to come into force at the end of the year.

Artificial Intelligence (AI) and machine learning have developed rapidly in recent years. Generative AI models can simulate conversations, answer questions, and independently generate new content, and they have the potential to revolutionise everyday professional life in areas ranging from journalism to legal services. The use and development of generative AI is expected to be regulated in the future by the AI Act, whose original Commission draft has undergone some changes as a result of the European Parliament's negotiating position of 14 June 2023.

This blog post and the following parts of our "AI Act" series explore in more detail the impact of the AI Act, in its current draft version, on generative AI in particular.

Opportunities and challenges of generative AI

Generative Artificial Intelligence (AI) is the generic term for a form of AI that uses statistical methods to generate new content such as digital images, videos, audio, texts or even software code based on probabilities. Using machine learning techniques, such an AI system is trained by algorithms that analyse an existing data set, identify connections and relationships within it, and eventually use the resulting "model" to make decisions or predictions for the production of new content. Beyond text and content creation, generative AI systems have a wide range of potential applications, ranging from customer service, portfolio management, music composition, artwork creation, image editing, research and programming to virtual assistants.

As with any new technology, however, the development and use of generative AI involve risks. The quality of the content and results generated by a generative AI depends on the quality, content and quantity of the data with which the AI system is trained. It is therefore not surprising that AI models such as those behind common AI chatbots can provide answers to, for example, a legal question that look convincing at first glance but, on closer inspection, have nothing to do with German law if German law was not part of the training data set. The productive use of generative AI therefore requires (apart from an accurate prompt) that correct and appropriate data have been used to train it.

Depending on the nature and origin of the data, the use of datasets for training an AI can affect fundamental rights, the protection of intellectual property, privacy, and the protection of personal or sensitive data. If an AI model is trained with unfiltered data freely retrievable from the Internet, the content produced by the generative AI will also reflect the societal biases mirrored in those datasets (e.g. the content produced by an AI may be discriminatory or racist if the training data is not filtered or the training process is not corrected).

Regulation in the EU: AI Act

An ethical and legal framework for the development and use of AI is to be created within the European Union (EU) by the AI Act, complemented by the Directive on AI Liability. Following the publication of the proposal for an AI Act (Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), COM(2021) 206 final, 2021/0106(COD)) by the Commission on 21 April 2021, the AI Act is now being considered and discussed by the various EU institutions in the legislative process. On 14 June 2023, the European Parliament adopted its negotiating position, revising and supplementing the Commission's original proposal. For example, the scope of application has been extended once more and now includes deployers of AI systems established both within and outside the EU. In addition, AI systems in social media can be classified as high-risk, general principles for the use and development of AI systems are defined, and a new EU body, the AI Office, is to be established.

The next step is negotiations with the member states on the final text of the regulation. An agreement on a final version of the AI Act is expected at the end of the year.

The aim of the AI Act is, on the one hand, to establish clear rules for dealing with AI-based systems in order to avoid discrimination, surveillance and other potentially harmful effects, especially in areas relevant to fundamental rights. At the same time, it aims to promote competition in the EU and strengthen Europe's position in the global AI competition.

Scope of the AI Act: Generative AI is covered in principle

The scope of the AI Act is broad in both factual and personal terms. In the current version, AI systems are defined as machine-based systems that are designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations or decisions that influence physical or virtual environments (Art. 3 No. 1). While the definition in the Commission's proposal referred to Annex I of the AI Act, which listed techniques and concepts such as machine learning or deep learning, both the reference and Annex I have been deleted in the negotiating position.

The AI Act with the current version of Art. 2 is intended to apply to all providers and deployers of AI systems – regardless of whether they are established in the EU or in a third country – and all distributors and importers of AI systems, authorised representatives of AI system providers and manufacturers of certain products established or located in the EU, as well as EU-based data subjects whose health, safety or fundamental rights are significantly affected by the use of an AI system.

Generative AI systems as well as their providers, distributors, importers, and product manufacturers therefore generally fall within the scope of the AI Act.

AI systems and their risk potential: the risk-based approach of the AI Act

The AI Act categorises AI systems in terms of their risk potential for health, safety, and the fundamental rights of natural persons:

  • unacceptable risk (prohibited practices);
  • high risk (high-risk AI systems);
  • certain specified AI systems (transparency obligations);
  • low or minimal risk (general principles and voluntary codes of conduct).

A classification of generative AI within the risk categories cannot be made in a general way or on the basis of the mode of operation. Rather, it is the specific purpose and application modalities of the development or use of an individual generative AI system that are decisive. 

Addition of general principles for all AI systems to the AI Act proposal

The Commission's proposed regulation, which contains a basis for voluntary codes of conduct for providers of other AI systems, has been supplemented in the current version with general principles applicable to all AI systems. All actors falling within the scope of the AI Act must use their best efforts to develop and deploy AI systems or foundation models – regardless of their risk classification – only in accordance with general principles for ethical and trustworthy use of AI in line with fundamental rights and values of the EU (Art. 4a):

  • Human agency and oversight: AI systems must be developed and used as tools that serve people, respect human dignity and personal autonomy, and function in a way that can be appropriately controlled and overseen by humans.
  • Technical robustness and safety: the development and deployment of AI systems should minimise unintended and unexpected damage, as well as ensure robustness in the event of unintended problems and resilience against attempts to alter the use or performance of the AI system to enable unlawful use by malicious third parties.
  • Privacy and data protection: AI systems must be developed and used in compliance with existing privacy and data protection rules, while processing data that meets high standards in terms of quality and integrity.
  • Transparency: AI systems must be developed and used in a way that allows appropriate traceability and explainability, while making humans aware that they communicate or interact with an AI system, duly informing users of the capabilities and limitations of the AI system, and informing affected persons of their rights.
  • Diversity, non-discrimination and fairness: AI systems must be developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity while avoiding discriminatory impacts and unfair biases that are prohibited by EU or national law.
  • Social and environmental well-being: AI systems should be developed and used in a sustainable and environmentally friendly manner as well as in a way to benefit all human beings while monitoring and assessing the long-term impact on the individual, society and democracy.

These general principles are implemented in concrete terms for providers and deployers of high-risk AI systems in the specific requirements for high-risk AI systems (Art. 8-15) and the relevant obligations in each case. For foundation models, they are also formulated in the AI Act (Art. 28-28b) and must be observed by providers. 

For all AI systems, compliance with the provisions of Art. 28, the transparency obligations (Art. 52) or harmonised standards, technical specifications or codes of conduct (Art. 6) is intended to give effect to the general principles; the general principles themselves are not intended to create new obligations under the AI Act (Art. 4a (2)).

Providers and users of all AI systems must take appropriate measures to ensure sufficient AI literacy with regard to their employees and other persons entrusted with the operation and use of the AI systems (Art. 4b). In particular, technical knowledge, experience, training and the concrete application modalities of the AI system are relevant in this regard. Such appropriate measures include education in basic concepts and knowledge about AI systems and how they work, including the different types of products and uses, as well as their risks and benefits.


Generative AI will in principle be covered by the scope of the AI Act, and its use and development must comply with the Act's general principles. Whether a specific AI system is prohibited altogether, and which requirements and obligations apply to it and to its providers, deployers or other participants in the AI value chain, depend on the classification of the respective generative AI based on its intended purpose and concrete areas of application. While AI systems with unacceptable risk are banned altogether, the focus of regulation lies on high-risk AI systems, whose providers, deployers, importers, distributors and other third parties in the AI value chain must fulfil extensive obligations.