"AI systems" within the meaning of the AI Act: Term and definition

Germany

Two approaches to a better understanding of the term "AI system" within the meaning of the AI Act, and to its distinction from "artificial intelligence".

The Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence ("AI Act"), which came into force on 1 August 2024, is the first legislative attempt to establish uniform rules for the use of artificial intelligence systems.

Probably the most important term in the regulation is "AI system". It determines the material scope of application and thus the programme of obligations to be fulfilled by those who develop, distribute, put into service and/or use artificial intelligence (AI) systems in the Union.

Regulatory target: Artificial Intelligence

The European legislature was particularly motivated by its concern that without a standardised regulation for the internal market, many small-scale national regulatory concepts would emerge. The legal situation within the European Union would be almost impossible to keep track of. At the same time, the legislature felt that it was called upon to ensure "human welfare" by developing "artificial intelligence" in accordance with the values of the Union and the Charter of Fundamental Rights of the European Union (recital no. 6 AI Act).

The regulation leaves no doubt that all the levers available to the European legislature for such harmonisation should be used to ensure that this objective is achieved. It prominently states that not only the development, but also the placing on the market, putting into service and use of artificial intelligence systems ("AI systems") in the Union are subject to a harmonised legal framework (Article 1(2)(a) AI Act).

From early on, the legislature was clear about the object of regulation: Artificial intelligence was to be harmonised. A meaningful definition was essential for the desired regulation, for which a "High-Level Expert Group on Artificial Intelligence" (AI HLEG) was consulted. In its report dated 8 April 2019, it pointed out that the term "artificial intelligence" comes from computer science, where it does not just refer to a single, static (and therefore tangible) technology, but in fact encompasses an extensive, dynamic field of research. Wanting to regulate artificial intelligence would be like wanting to regulate "the internet". The AI HLEG therefore proposed a correction: It is not artificial intelligence that should be regulated, but the technologies that result from it.

The definition of AI systems: The text of the law

Due to the dynamic nature of artificial intelligence, it was not desirable to commit to specific technologies. Rather, the definition should leave room for interpretation that not only makes it possible to respond to new AI technologies in the future, but ideally makes such responses superfluous. However, the objectives of the AI Act ("to improve the functioning of the internal market [...]") are jeopardised if no clear definition is provided. Naturally, it is difficult to craft a definition that is at once as flexible as possible and as legally certain as possible, since the two objectives pull in different directions.

The result of this tension is the definition of "AI system" set out in Article 3 (1) AI Act. According to this, an AI system is

"a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".

The definition indicates that the legislature only wants to commit to the absolute minimum. However, it is clear from the text of the regulation itself and its genesis that the legislature had certain AI technologies in mind when it drafted the regulation. The common features of these technologies were distilled and set out as minimum requirements. 

This allows for two approaches to interpreting the definition: the legislature's guiding principles on the one hand and the abstract minimum requirements in the text of the law on the other can be used to determine which concrete forms of AI technology fall within the definition.

Legislature's guiding principles for AI

When the European legislature gave the AI HLEG the task of developing the definition of "artificial intelligence", the expert group built on a proposal that the European legislature had previously communicated. Among other things, the legislature's working thesis stated: 

"AI-based systems can be purely software-based, acting in the virtual world (e.g. voice assistants, image analysis software, search engines, speech and face recognition systems) or AI can be embedded in hardware devices (e.g. advanced robots, autonomous cars, drones or Internet of Things applications)." 

Although this (partial) definition did not make it into the final text of the law, the legislature indicated here which technologies it associated with "artificial intelligence" (AI technologies). It should not be overlooked, however, that this merely represents a snapshot from 2018. After all, no reference is made here to text-generating AI (in particular "large language models"), which plays a significant role in the later text of the regulation under the label "general-purpose AI model".

With a view to the final AI Act dated 12 July 2024, this range of specific technical AI applications was expanded and supplemented. The AI Act explicitly refers to online search engines (intermediation services), especially where they are designed as online chatbots; autonomous robots in manufacturing or personal assistance and care; diagnostic systems and systems to support human decision-making in healthcare; software for processing and improving documents/dossiers; translation software; language models/chatbots; and technologies with the ability to recognise biometric data or emotions of natural persons.

It is due to the dynamic nature of "artificial intelligence" that these guiding principles can only provide rough orientation. The fact that a technology does not fall under one of these example applications does not mean that it cannot be an AI system. Conversely, however, it can be said that the more a technology has in common with these examples, the more likely it is to qualify as an AI system.

Text of the law

The second approach to determining the material scope of application of the AI Act is the interpretation of the definition provided in the text of the law itself. Based on the specific example applications of AI systems described above, the legislature abstracted their common elements. This distillation of similarities (seven in number) subsequently found its way into the text of the law. Since 6 February 2025, interpretation of the term "AI system" has also been supported by guidelines issued by the European Commission.

According to the legislature's definition, every AI system is

a machine-based system [1] that is designed to operate with varying levels of autonomy [2] and that may exhibit adaptiveness after deployment [3], and that, for explicit or implicit objectives [4], infers [5], from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions [6] that can influence physical or virtual environments [7]

  • "Machine-based" primarily "refers to the fact that AI systems run on machines" (recital no. 12 AI Act). A machine can be understood as both the hardware and software components that enable the AI system to function; the term is to be interpreted broadly. It also covers emerging quantum computing systems.
  • Furthermore, an AI system is characterised by the fact that it can be operated autonomously to a certain degree, i.e. that it can act at least partially independently of actions by humans and is able to function without human intervention. A system that requires constant human intervention to function correctly is therefore not an AI system. However, it is sufficient for the system to be "designed" for autonomous operation. Whether it is ultimately operated autonomously is in general irrelevant.
  • An AI system is further characterised by the ability to "infer". Inference in this sense is the process by which the system generates outputs. AI systems differ from simple data processing primarily in that they go beyond the mere processing of data to enable learning, reasoning and modelling processes and to generate new outputs. An AI system differs from a conventional computer program in particular in that it does not simply automate or (independently) optimise an operation, but generates its own output.
  • The following list, according to which outputs can be "predictions, content, recommendations or decisions", merely serves to provide examples. Models or algorithms that are extracted by the system from inputs or data are also outputs. The reference to explicit or implicit goals emphasises that AI systems can work towards explicitly defined or implicit goals. The goals of the AI system are not necessarily synonymous with its purpose.
  • In addition, the outputs of the AI system can have an effect on the physical or virtual environment. In this sense, environments are contexts in which AI systems are operated, i.e. their "working environment" so to speak. 
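The seven elements above can be pictured as a simple checklist. The following Python sketch is purely illustrative: the field names and the all-mandatory-elements-met logic are our own simplification, not part of the Regulation, and a real legal assessment is never this mechanical.

```python
from dataclasses import dataclass, fields

@dataclass
class AISystemAssessment:
    """Illustrative checklist of the seven definitional elements of
    Article 3(1) AI Act. Field names are our own shorthand."""
    machine_based: bool             # [1] runs on hardware/software components
    designed_for_autonomy: bool     # [2] designed to operate with some autonomy
    may_exhibit_adaptiveness: bool  # [3] optional: may adapt after deployment
    has_objectives: bool            # [4] explicit or implicit objectives
    infers_from_input: bool         # [5] infers how to generate outputs
    generates_outputs: bool         # [6] predictions, content, recommendations, decisions
    can_influence_environment: bool # [7] physical or virtual environments

    def qualifies(self) -> bool:
        # Adaptiveness ([3]) is phrased as "may exhibit" in the definition
        # and is therefore not treated as a mandatory element here.
        mandatory = [f.name for f in fields(self)
                     if f.name != "may_exhibit_adaptiveness"]
        return all(getattr(self, name) for name in mandatory)

# A spam filter built on a trained classifier ticks every mandatory box:
spam_filter = AISystemAssessment(True, True, False, True, True, True, True)
print(spam_filter.qualifies())  # True

# A fixed rule table ("IF subject contains X THEN delete") does not infer:
rule_table = AISystemAssessment(True, False, False, True, False, True, True)
print(rule_table.qualifies())   # False
```

The sketch also makes the structural point of the bullet list visible: six of the seven elements are cumulative requirements, while adaptiveness is merely a possible feature.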

Other possible capabilities of AI systems

An AI system does not necessarily have to be pure software. This feature, which was still included in the draft AI Act, was deleted from the final text. In line with the legislature's intention, it should not matter whether the AI system is used on a stand-alone basis or as a component of a product. Where an AI system comes as part of a product, it is irrelevant whether the software is embedded in the product itself or merely serves the function of the product remotely without being integrated into it.

The definition provided in Article 3 (1) AI Act also makes clear that a system may be an AI system if it exhibits adaptiveness after deployment. Adaptiveness refers to the ability of the AI system to learn and change while in use; this refers in particular to "machine learning". At the same time, adaptiveness is an optional feature ("may exhibit"): the legislature has thus decided against the view that a system is only an AI system if it can change itself.
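What "adaptiveness after deployment" means in practice can be shown with a deliberately minimal sketch: a system whose decision boundary shifts in response to feedback received during operation. Everything here (the class, the threshold logic, the 0.05 step size) is our own illustration; real adaptive systems would use proper machine-learning methods.

```python
class AdaptiveSpamScore:
    """Toy model of adaptiveness: behaviour changes while in use."""

    def __init__(self) -> None:
        self.threshold = 0.5  # initial decision boundary

    def classify(self, score: float) -> bool:
        """Flag a message as spam if its score reaches the threshold."""
        return score >= self.threshold

    def feedback(self, score: float, was_spam: bool) -> None:
        # Nudge the boundary toward the user's correction: the system
        # learns and changes after it has been deployed.
        if was_spam and not self.classify(score):
            self.threshold = max(0.0, self.threshold - 0.05)
        elif not was_spam and self.classify(score):
            self.threshold = min(1.0, self.threshold + 0.05)

model = AdaptiveSpamScore()
print(model.classify(0.48))          # False before any feedback
model.feedback(0.48, was_spam=True)  # user marks the message as spam
print(model.classify(0.48))          # True: the system has adapted in use
```

A system with the `classify` method alone would be static; the `feedback` path is what the definition's "may exhibit adaptiveness after deployment" clause covers, without making it a prerequisite.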

The distinction pointed out by the AI HLEG between very simple "rational AI systems" and "learning rational systems" has not led the legislature to limit itself to one or the other. 

Filtering via risk assessment

With Article 3 (1) AI Act, the legislature has thus opted for a very broad and flexible definition of an AI system. This seems justified: after all, the research field of artificial intelligence is constantly reinventing itself and bringing new AI systems onto the market. Narrowing the material scope of application would harbour the constant risk of protection gaps.

Instead, the legislature applies the lever at the stage of risk assessment. An AI system that is so rudimentary that it poses little or no risk quickly escapes the "stranglehold" of the AI Act and is subjected only to minimal requirements. Only when an AI system qualifies as a high-risk AI system does the legislature take more far-reaching action.
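This filtering mechanism can be sketched as a decision cascade over the AI Act's risk tiers. The order of checks below is our own simplification: the real classification turns on Article 5 (prohibited practices), Article 6 in conjunction with Annexes I and III (high-risk), and Article 50 (transparency obligations), each of which involves a substantive legal assessment, not three booleans.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited practice (Article 5 AI Act)"
    HIGH_RISK = "high-risk AI system (Article 6 AI Act)"
    TRANSPARENCY = "limited risk: transparency duties (Article 50 AI Act)"
    MINIMAL = "minimal or no risk: essentially no obligations"

def filter_by_risk(is_prohibited: bool, is_high_risk: bool,
                   triggers_transparency: bool) -> RiskTier:
    """Grossly simplified cascade: the broad definition lets many systems
    in, and the risk assessment then sorts them into obligation tiers."""
    if is_prohibited:
        return RiskTier.PROHIBITED
    if is_high_risk:
        return RiskTier.HIGH_RISK
    if triggers_transparency:
        return RiskTier.TRANSPARENCY
    return RiskTier.MINIMAL

# A rudimentary system posing little risk drops straight to the bottom tier:
print(filter_by_risk(False, False, False).name)  # MINIMAL
```

The sketch mirrors the article's point: the broad definition of Article 3 (1) is not where the filtering happens; the programme of obligations is calibrated downstream, by tier.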