What employers must be aware of when using AI and how AI can revolutionise HR and boost efficiency

Germany

The hype created by the rise of generative artificial intelligence (AI) is enormous and continues to this day. In the workplace, it is primarily text-based large language models (LLMs) that will make inroads because of their ability to generate human-like texts and conversations. Used as such, LLMs allow employees to significantly reduce their workload. For example, employees can easily obtain information on a specific topic they are researching to complete their tasks faster. In HR, the software can also be used to check job applications against predefined criteria or draft employment contracts and notices of termination.

As with any new technology, however, there are certain considerations that both employees and employers must take into account. The goal of this article is to highlight the potential legal challenges of generative AI so that both employers and employees can use it safely as part of their work practices going forward.

Implementation

Due to its wide range of possible applications, employers need to ensure that generative AI is implemented in a way that is compatible with business needs, especially when integrating it into other systems via APIs (application programming interfaces). APIs will enable existing software to communicate with LLMs to ensure smooth and fully incorporated business operations. Generative AI can thus be used both as a work tool for employees to perform their tasks and for dealing with customers.
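To illustrate what such an API integration might involve, the sketch below wraps an LLM HTTP endpoint behind a small helper that existing software could call. The endpoint URL, model name and request format are hypothetical assumptions for illustration only, not any specific vendor's API:

```python
import json

# Hypothetical placeholder endpoint - not a real provider's URL.
API_URL = "https://llm.example.com/v1/completions"

def build_llm_request(prompt: str, model: str = "example-model") -> bytes:
    """Serialise a chat-style request body for the (assumed) LLM API."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return json.dumps(payload).encode("utf-8")

# Sending the request could then use the standard library, e.g.:
# import urllib.request
# req = urllib.request.Request(
#     API_URL,
#     data=build_llm_request("Summarise this job description"),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     answer = json.load(resp)
```

Keeping the LLM behind such a wrapper also gives the employer a single point at which to log requests, strip personal data before it leaves the company, and enforce internal usage policies.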

In addition to economic considerations, the implementation process must also comply with legal requirements. For instance, national regulations might require the employer to consult employee representatives before introducing generative AI as a work tool. Under current German law, for example, the mere possibility of monitoring performance or behaviour is sufficient to trigger co-determination rights when IT systems are introduced, and it can generally be assumed that a generative AI system has the corresponding technical capabilities. A right of co-determination is ruled out, however, if the employer has no access to the data that could be used to monitor performance and behaviour. This is the case if employees use external service providers with whom the employer has no contractual relationship. Co-determination is also required if the AI's use leads to downsizing affecting a significant portion of the workforce. By contrast, merely instructing employees to use an AI application is not per se subject to co-determination, since this concerns only how the work is performed.

Data protection

Once generative AI is successfully implemented, its use carries the inherent legal risk that personal data, confidential business information and trade secrets will be disclosed. LLMs usually use the data provided by users to train the system. It can thus be assumed that the data are stored on the providers' servers, at least temporarily.

Across the member states of the EU, this use might conflict with the European General Data Protection Regulation (GDPR), which is arguably the strictest data protection legislation in the world. Therefore, when using generative AI systems in the EU, it is recommended not to enter any personal data at all – especially since the EU has concerns about the privacy of data stored on servers in the US, which applies to data collected by many LLMs. (The EU-US Data Privacy Framework could resolve this issue).

If LLMs are used to generate warnings and notices of termination, Article 22 (1) of the GDPR must be observed. The provision prohibits decisions based solely on automated processing that produce legal effects concerning the data subject or similarly significantly affect him or her. The final decision-making authority must therefore lie with a human being.
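In practice, this "human in the loop" requirement can be enforced at the software level. The following is a minimal sketch, under assumed data structures of our own invention, of a review gate in which an AI-suggested HR decision only takes effect once a named human reviewer has approved it:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftDecision:
    """An AI-suggested HR decision awaiting human review (illustrative only)."""
    subject: str                      # employee or applicant concerned
    ai_recommendation: str            # e.g. "issue warning"
    reviewed_by: Optional[str] = None  # name of the human reviewer, if any
    approved: bool = False

def finalise(decision: DraftDecision, reviewer: str, approve: bool) -> DraftDecision:
    """Record the human review; the AI output alone never becomes final."""
    decision.reviewed_by = reviewer
    decision.approved = approve
    return decision

def is_effective(decision: DraftDecision) -> bool:
    """A decision takes legal effect only after human review and approval."""
    return decision.reviewed_by is not None and decision.approved
```

The point of the design is that no code path turns an AI recommendation into an effective decision without recording who reviewed it, which also produces an audit trail should the decision later be challenged.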

Employers should ask themselves early in the process how they can ensure a GDPR-compliant implementation of AI in the workplace. Once they have become accustomed to using generative AI, a return to “the old ways of working” seems unlikely due to the increasing options the systems offer.

Discrimination and bias

The software behind LLMs is fed and trained on a large number of texts from the internet, in particular from Wikipedia, social media and online forums, newspaper articles, and books. The software is thus exposed to all the human errors contained in those sources. This can lead to biased results and discriminatory effects relating to age, ethnicity, sex or disability. Employers are therefore advised to take steps to ensure that discriminatory or biased output never reaches customers, since such statements could cause lasting damage to the business’s reputation.

If HR uses LLMs as an autonomous tool for hiring and firing decisions, their use can easily give rise to discrimination claims by rejected applicants or dismissed employees (under the General Act on Equal Treatment in Germany or Title VII of the Civil Rights Act of 1964, as amended, in the US).

In April 2021, the European Commission proposed a harmonised legal framework for artificial intelligence, also known as the AI Act (COM(2021) 206 final). Following a risk-based approach, the proposed AI Act is set to stipulate requirements intended to reduce risks to safety and fundamental rights such as personal dignity (Articles 1 and 4 of the EU Charter of Fundamental Rights), respect for private and family life (Article 7), the right to equality (Article 20) and non-discrimination (Article 21).

Conclusion

With the above in mind, employers should put in place guidelines that regulate the work-related use of generative AI, especially LLMs, by their employees to manage the inherent risks. Additionally, employees should be trained on how to interact with LLMs and how to communicate with customers who are interacting with LLMs so that their use is profitable and legally secure in the long run. Although still limited in some respects, most generative AI systems provide a reliable minimum level of plausibility at a speed well above that of humans and offer a significant increase in efficiency in this respect. Used properly, generative AI can thus be a valuable tool for improving business operations.

It is therefore important for both employers and employees to understand the capabilities of generative AI: they need to be aware of what it can and cannot do, and employees should be prepared to take over when it reaches its limits. To ensure consistent quality, employers should have their human employees regularly review the LLM's responses for accuracy, identify any underlying issues and avoid potential liability.

The creation of generative AI is indisputably a historic milestone in human development. Technological development is advancing rapidly and will continue to change the world of work. Employers are well advised to examine the opportunities AI presents and weigh them against potential risks. Now is a good time to see what it has to offer.

For more information on the legal implications of using generative AI in the workplace, contact your CMS client partner or these CMS experts.


Our experts will be closely monitoring these developments and predictions during the course of the year, providing regular updates and analysis through Law-Now, the CMS subscription service. Sign up today and ensure that you never miss an important update again.