Regulating AI systems (Part 1): Effects of the AI Act on employers

This article presents the main provisions of the AI Act that are relevant for employers.

The European Union has taken a leading role in regulating artificial intelligence (AI). In response to the rapid spread of AI systems and their potential impact on society, the economy and ethics, the EU is currently working on two important instruments: the AI Act and the AI Liability Directive. Both are intended to build citizens' trust in AI, clarify accountability for AI systems and ensure ethical use. The AI Liability Directive is the subject of part 2 of this blog series.

Agreement on the content of the AI Act to be reached by the end of 2023

In April 2021, the European Commission presented an initial draft of the AI Act, the first attempt at creating robust regulation for AI. Since the European Parliament adopted its negotiating position on the planned AI Act on 14 June 2023, the draft has been in trilogue negotiations between the European Commission, the Council of the European Union and the European Parliament. The aim is to reach an agreement on the design of the regulation by the end of 2023.

Objective and substantive scope of the AI Act

The regulation aims to ensure safety, transparency and sustainability in the design and use of AI. In addition, it addresses the risks AI poses to fundamental rights.

The regulation will for the most part apply to both providers and users/deployers of AI systems, provided that the AI systems themselves or their outputs are used in the EU. Employers will generally qualify as users/deployers of AI systems.

Subject to the outcome of the trilogue negotiations on Article 3 (1), AI systems will generally be defined as autonomously acting systems that operate on the basis of various approaches (such as machine learning, logic- and knowledge-based approaches, or statistics) and generate outputs that influence their environment.

The obligations employers will be subject to depend on the risk posed by the AI system used

The regulation takes a risk-based approach, distinguishing between AI systems posing an unacceptable risk, high-risk AI systems and other AI systems.

A risk is unacceptable where an AI system poses a threat to people, in particular where it has the potential to manipulate them through subliminal techniques. Examples include social scoring, "real-time" remote biometric identification in publicly accessible spaces and predictive policing systems. Such systems are generally prohibited under Title II of the Act.

Title III, the core of the Act, deals with high-risk AI systems, i.e. those posing significant risks to the health and safety or fundamental rights of individuals. These include AI systems used in products covered by EU product safety legislation, such as medical devices, lifts and toys. Employers should note in particular that AI used in human resources will regularly qualify as high-risk, since Article 6 (2) in conjunction with Annex III (4) covers systems intended to be used for, among other things, recruitment, promotion and termination.

High-risk AI systems are subject to strict requirements before they can be placed on the market and must be assessed over their entire life cycle (Article 29). Users/deployers are therefore obliged to use a high-risk AI system in accordance with its instructions for use and to monitor its operation on that basis. If the user/deployer exercises control over the input data, they must also ensure that this data is relevant to the intended purpose of the system and must retain the records automatically generated by the system.

According to the European Parliament's proposal, users/deployers must first carry out a fundamental rights impact assessment in accordance with Article 29 a before putting a high-risk AI system into service. This assessment must take into account, among other factors, which groups of people will be affected by the use of the AI. Furthermore, in accordance with Article 29 (5) (a), workers' representatives must be consulted prior to the use of a high-risk AI system in the workplace, and the affected employees must be informed that they will be subject to the system.

In addition, the European Parliament wants users/deployers of all AI systems to comply with general data protection, transparency and equal treatment principles under Articles 4 a and 4 b. For employers, this means ensuring the possibility of human oversight and a sufficient level of AI competence among their employees. Where a high-risk AI system is used, Article 29 (1 a) requires human oversight actually to be put in place.

The measures employers should take right away

To keep pace with future technologies, employers should consider integrating AI into their operations at an early stage. At the same time, they should keep a close eye on the legislative development of the draft AI Act. The proposed record-keeping and fundamental rights impact assessment obligations will presumably demand considerable effort from organisations. Employers should therefore start building the necessary expertise early, by training existing staff accordingly and/or drawing on external support.