The following article explains the most important provisions of the AI Act that employers should be ready for.
The Council's final vote on the AI Act is an important milestone in the regulation of artificial intelligence. The AI Act will have far-reaching implications for businesses across Europe, particularly in terms of how employers use artificial intelligence-based technologies. In light of these developments, it is crucial that employers understand the requirements and obligations associated with this new legislation.
This blog article aims to provide an overview of the most important aspects of the AI Act that are particularly relevant for employers. We will look at which categories of AI are covered by the AI Act, how they are classified and what specific compliance requirements arise for businesses.
Objective of the AI Act
The objective of the AI Act is to promote human-centred and trustworthy AI and to ensure a high level of protection of health, safety and fundamental rights, including democracy, the rule of law and the environment. At the same time, the AI Act is intended to support innovation. To reconcile these objectives, the AI Act takes a risk-based approach: a tiered system with corresponding obligations, depending on the specific use of the AI concerned. While specific requirements apply to high-risk AI systems and general-purpose AI (GPAI) models, basic transparency rules apply to a broader range of AI systems. Certain AI practices are prohibited outright: practices that distort a person's behaviour through subliminal manipulation or deceptive techniques, exploit a person's vulnerabilities, or evaluate people based on their social behaviour to their detriment ("social scoring").
Employers frequently use AI systems as deployers
The Parliament and the Council base their definition of AI on the OECD concept. According to Art. 3 no. 1, an AI system is a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
The personal scope of the AI Act primarily covers providers and deployers of AI systems (collectively referred to as "operators"). Employers who merely use, under their own authority, AI systems developed by third parties are considered deployers within the meaning of the AI Act. An employer only qualifies as a provider if it develops an AI system, or has one developed, and puts that system into service under its own name or trademark.
The business use of AI can be classified as high-risk
Employers who do not develop AI models capable of performing a wide range of different tasks, but merely operate AI (in particular generative AI applications) or use AI for specific purposes in the workplace, need not concern themselves with the detailed requirements for providers of GPAI models under Art. 53. For employers, the focus is instead on determining whether an AI system must be categorised as high-risk.
With regard to employment and personnel management, Art. 6(2) in conjunction with Annex III classifies as high-risk those systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates. AI systems intended to be used to make decisions affecting the terms of work-related relationships or the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics, or to monitor and evaluate the performance and behaviour of persons in such relationships are also considered high-risk.
This is based on the consideration that these systems can have an appreciable impact on the future career prospects, livelihoods and rights of the employees concerned (recital 57). In particular, using such systems to monitor performance and behaviour has an impact on data protection and privacy. There is also a risk that historical patterns of discrimination, for example against women, will be perpetuated.
AI literacy and transparency are basic requirements
Regardless of the specific level of risk, employers, as providers or deployers of an AI system, are obliged under Art. 4 to take measures to ensure that their employees have a sufficient level of AI literacy. Their existing technical knowledge, experience, education and training, and the context in which the AI systems are to be used must be taken into account. In addition, Art. 50 imposes general transparency obligations where an AI system interacts directly with natural persons and where certain AI-generated texts are published.
The most extensive obligations apply to high-risk AI systems
Under Art. 26, deployers of high-risk AI systems are subject to a number of obligations to ensure safety, transparency and fairness in the use of these technologies. First, they must take appropriate technical and organisational measures to ensure that the AI system is used in accordance with its instructions for use. They must also ensure that the input data is relevant in view of the intended purpose of the high-risk AI system and "sufficiently representative" with regard to the intended use.
Information, storage and supervisory obligations
Employees affected by a high-risk AI system in the workplace must be informed in advance. If there is a works council, it must be informed as well. Irrespective of this, applicable EU and national rules must be observed, including the obligations to inform and consult the works council under the German Works Constitution Act (BetrVG).
Furthermore, deployers of high-risk AI systems are subject to comprehensive documentation duties. These include retaining automatically generated logs for at least six months.
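To make the retention minimum tangible, here is a minimal Python sketch of a deletion check; the function name and the conservative day count are our own assumptions for illustration, not figures prescribed by the AI Act:

```python
from datetime import datetime, timedelta, timezone

# Conservative over-approximation of six calendar months, so that logs
# are never deleted early (an assumption of this sketch, not an AI Act figure).
MIN_RETENTION = timedelta(days=184)

def may_delete_log(created_at: datetime, now: datetime | None = None) -> bool:
    """Return True only once the six-month minimum retention has elapsed."""
    now = now or datetime.now(timezone.utc)
    return now - created_at >= MIN_RETENTION

# Example: a log created on 1 January may not be deleted on 30 June.
print(may_delete_log(datetime(2025, 1, 1, tzinfo=timezone.utc),
                     now=datetime(2025, 6, 30, tzinfo=timezone.utc)))  # False
```

Note that six months is only the statutory minimum; longer retention periods may follow from other EU or national law.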
The duty of human oversight also poses a challenge, as it cannot be exercised by just anyone. The natural persons assigned must have the necessary competence, training and authority, and the deployer must provide them with the necessary support.
There is a special obligation to provide information when high-risk AI systems make decisions about natural persons or assist in making these decisions. In such cases, affected persons have a new right to an explanation of individual decision-making, as set out in Art. 86 AI Act.
Monitoring duties
Deployers must also continuously monitor the operation of the AI system in accordance with the instructions for use and suspend its use if there is reason to believe that it poses a risk to health, safety or fundamental rights. In addition, serious incidents must be reported.
In summary, deployers of high-risk AI systems bear significant responsibility, which concerns not only technical and organisational implementation but also the protection and information of affected persons, as well as the monitoring and control of the system to ensure its safe and fair use.
Obligations as a provider
In the probably less frequent case that an employer, as a provider, develops a high-risk AI system or has one developed, it is subject to the obligations of Art. 16-21. It must take measures that enable deployers to fulfil their own obligations in turn. In addition, a risk management system must be established for the regular, systematic review of the AI system, along with data governance. Before putting a high-risk AI system into service in the workplace, providers are obliged to register themselves and their systems in an EU database (Art. 49).
Fines may be imposed for violations
If an employer violates Art. 16 or Art. 26, it can be fined up to EUR 15,000,000 or, if the offender is an undertaking, up to 3 % of its total worldwide annual turnover for the preceding financial year, whichever is higher (Art. 99(4)).
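As a rough illustration of how this cap works, the following Python sketch (our own illustration; the function name is assumed) returns the higher of the two amounts for an undertaking:

```python
# Fine cap under the AI Act for violations of Art. 16 or Art. 26:
# EUR 15 m or 3 % of total worldwide annual turnover, whichever is higher.
FIXED_CAP_EUR = 15_000_000
TURNOVER_SHARE = 0.03

def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper limit of the administrative fine for a given turnover."""
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * worldwide_annual_turnover_eur)

# An undertaking with EUR 2 bn turnover faces a cap of EUR 60 m,
# since 3 % of turnover exceeds the fixed EUR 15 m amount.
print(max_fine_eur(2_000_000_000))  # 60000000.0
```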
Employers need to be prepared
Following the Council's formal approval, the AI Act will enter into force 20 days after its publication in the Official Journal. In accordance with Art. 113, its provisions will then apply in stages: the prohibitions of certain AI practices after just six months, the rules for general-purpose AI after 12 months, most other provisions, including those on high-risk AI systems in the employment context, after 24 months, and certain rules regarding high-risk AI (those under Art. 6(1)) only after 36 months. In order not to be taken by surprise by the AI Act's comprehensive regulatory concept, employers should familiarise themselves now with the catalogue of obligations that awaits them; the sketch below illustrates the timeline.
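A minimal Python sketch of the staged timeline, assuming a hypothetical publication date (the real dates follow from the actual publication in the Official Journal):

```python
import calendar
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Shift a date by whole calendar months, clamping the day if needed."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    return date(year, month, min(d.day, calendar.monthrange(year, month)[1]))

publication = date(2024, 7, 12)                     # hypothetical date
entry_into_force = publication + timedelta(days=20)  # 20 days after publication

milestones = {
    "prohibited AI practices":           add_months(entry_into_force, 6),
    "general-purpose AI rules":          add_months(entry_into_force, 12),
    "most provisions (incl. Annex III)": add_months(entry_into_force, 24),
    "high-risk AI under Art. 6(1)":      add_months(entry_into_force, 36),
}
for rules, applicable_from in milestones.items():
    print(f"{rules}: applicable from {applicable_from}")
```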