AI Act: Council and Parliament reach political deal


After lengthy negotiations, a political deal has been reached on the AI Act. This blog provides an overview of the current situation.

More than two and a half years passed between the EU Commission's proposal of April 2021 for the world's first legal framework for Artificial Intelligence (AI) and the press release of 9 December 2023, in which the EU Parliament announced that it had reached a political deal with the Council on the draft AI Act. The deal was preceded by several days of intense negotiations, starting on 6 December 2023, whose outcome long hung in the balance.

An overview of the key points

The AI Act retains the risk-based, graduated approach originally intended. AI systems must meet transparency requirements, for example compliance with EU copyright law and the publication of summaries of the training data used. In some cases, there are also reporting obligations with regard to energy efficiency and cyber security. Providers of high-risk AI systems will be subject to extensive and complex obligations. In addition, citizens will have the right to obtain information about AI systems and to lodge complaints about them. Violations of the new rules can lead to severe fines.

Catalogue of prohibited practices

According to the press release, and in line with its negotiating position of 14 June 2023, the Parliament succeeded in having biometric categorisation systems that classify people on the basis of sensitive characteristics (e.g. political views, skin colour or sexual orientation) included in the catalogue of prohibited practices. AI systems for social scoring based on social behaviour and personal characteristics, emotion recognition (e.g. in the workplace or in educational institutions), the scraping of facial images to create facial recognition databases, the manipulation of human behaviour and the exploitation of people's vulnerabilities (e.g. age, disability, social or economic situation) will also be prohibited in future.

In its negotiating position, the Parliament had called for a ban on AI systems that enable remote biometric identification (RBI) in publicly accessible spaces – both a general ban on real-time identification and, with narrow exceptions for law enforcement, a ban on 'post' (retrospective) remote identification. The deal now stipulates that identification systems are permitted in publicly accessible spaces both in real time and retrospectively, but only under strict conditions and subject to prior judicial authorisation.

The retrospective use of such identification systems is limited to the targeted search for a person convicted of, or suspected of having committed, a serious criminal offence. Real-time identification systems will be subject to strict requirements and may only be used for a limited time and in a limited area for the targeted search for victims of kidnapping, human trafficking or sexual exploitation, to prevent a specific and imminent terrorist threat, and to locate or identify suspects of pre-defined serious criminal offences.

High-risk AI systems

According to the Parliament's negotiating position, AI systems should only be categorised as high-risk if they have a significant impact on fundamental rights. So far, there is little information available on the specific obligations that providers of high-risk AI systems will be subject to in future, in particular on the data and data governance requirements that are particularly relevant in practice (currently Article 10 of the draft AI Act).

It remains to be seen what the final wording of the deal and its practical impact on the development and training of AI systems will be.

General-purpose AI systems (GPAI)

The final version of the AI Act will contain rules for "general-purpose" AI systems (GPAI). Even though, at the current stage of technical development, these AI systems are still a long way from being usable in all conceivable fields of application without additional training, the AI Act already addresses the potential risks that such systems can pose.

For example, special transparency obligations are defined for GPAI, and obligations are introduced with regard to comprehensive documentation, compliance with copyright law and a detailed summary of the training data used. For GPAI systems with a potentially very serious impact ("high impact"), additional requirements apply, for example with regard to tests and evaluations, reporting obligations, risk management and security requirements.

Lower fines for violations

Depending on the violation and the size of the company, violations of the AI Act can result in fines ranging from EUR 7.5 million or 1.5% of global turnover up to EUR 35 million or 7% of global turnover. Compared to the Parliament's negotiating position (EUR 40 million or 7% of global turnover and EUR 10 million or 2% of turnover), the potential fines have thus been reduced slightly.

What next?

The AI Act is intended to strike a balance between driving innovation and establishing Europe as a centre for AI on the one hand, and protecting fundamental rights, democracy and sustainability on the other. The EU's new data legislation, including the Data Act, which is intended to enable the comprehensive use of data in Europe and promote a flourishing internal market for data, is also of key importance in connection with AI. Following the political deal on the AI Act, both the Council and the Parliament must formally adopt the agreed text. Votes will take place in the EU Parliament's Committee on the Internal Market and Consumer Protection and in the Committee on Civil Liberties, Justice and Home Affairs.

More information on AI: Artificial Intelligence and law – Insights from CMS | CMS Germany