There are many artificial intelligence solutions that can be used in the workplace. Whether at the recruitment stage using cognitive games (Goshaba), writing emails (TextCortex), managing skills development (Neobrain), anticipating turnover (Team Opportunity Prediction), organizing work, or improving productivity, employers have a wide range of tools for integrating artificial intelligence into HR.
Nevertheless, the development of artificial intelligence raises legal issues and concerns, and the workplace is no exception. While AI offers great promise, employers should anticipate the changes its development implies.
Use of AI by HR
Artificial intelligence in HR can be used for many traditional tasks:
- Sorting CVs: artificial intelligence systems can analyze several thousand CVs to determine, according to criteria predetermined by the company, the profiles best suited to the job description.
- Anticipating resignations and staff turnover: artificial intelligence systems can determine which employees are most likely to resign, when and why, and send an alert when a certain number of criteria are met, allowing recruitment needs to be anticipated.
- Optimizing the organization of employees' working time: this is particularly common in parcel delivery and logistics companies (DHL, Amazon, La Poste, etc.).
While these tools clearly save time and improve efficiency, such algorithmic or "predictive" solutions carry a risk of discriminatory practices behind a promise of neutrality. Experience has shown that, in practice, a predictive recruitment system can be discriminatory due to bias in the training data fed into the algorithm: a model trained on historical hiring decisions will faithfully reproduce past discriminatory patterns, as the sketch below illustrates.
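To make that mechanism concrete, here is a minimal Python sketch using entirely hypothetical data, made-up group labels, and a deliberately naive "model"; it is not any vendor's actual system. It simulates historical hiring decisions that were biased against one group, then shows that a screener "trained" on those decisions simply learns a stricter threshold for that group.

```python
# Minimal sketch: how bias in historical hiring data propagates into a
# "predictive" CV screener. All data and thresholds are hypothetical.
import random

random.seed(0)

def past_hiring_decision(skill, group):
    # Historical bias: group "B" candidates needed a higher skill score
    # than group "A" candidates to be hired at all.
    threshold = 0.5 if group == "A" else 0.7
    return skill > threshold

# Build a simulated history of 10,000 past applicants.
history = []
for _ in range(10_000):
    skill = random.random()          # skill score in [0, 1)
    group = random.choice(["A", "B"])
    history.append((skill, group, past_hiring_decision(skill, group)))

def learned_threshold(group):
    # The naive "model" infers a per-group decision boundary from past
    # outcomes: the midpoint between the weakest hire and strongest reject.
    hired = [s for s, g, ok in history if g == group and ok]
    rejected = [s for s, g, ok in history if g == group and not ok]
    return (min(hired) + max(rejected)) / 2

for group in ("A", "B"):
    print(f"group {group}: learned screening threshold = {learned_threshold(group):.2f}")
# Prints roughly 0.50 for group A and 0.70 for group B: the "neutral"
# algorithm has simply automated the old discriminatory practice.
```

Because the model's only ground truth is past decisions, the promised algorithmic neutrality amounts, here, to faithfully reproducing the discrimination embedded in those decisions.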
According to a recent article by Terra Nova (a French independent think tank) dated 3 February 2025, generative AI also offers opportunities throughout professional life.
Contrary to some alarmist studies, Terra Nova's article highlights that generative AI can facilitate access to the job market, reduce discrimination, and promote professional advancement, provided that public authorities, employers, and social partners make appropriate choices. The think tank emphasizes the need to adapt continuous training policies and to anticipate the impact of AI in companies, suggesting that AI could become a lever for inclusion and for improved quality of working life. For example, during the recruitment process, AI tools can help overcome barriers such as illiteracy, dyslexia, lack of digital skills, and disabilities by assisting with the writing of CVs and cover letters, voice-to-text conversion, and translation.
Although AI constitutes an opportunity, ethical and legal issues may arise because it can affect employees' health and safety through work intensification. Similarly, AI is likely to raise issues concerning the organization of working time (optimization of hours worked in the company, with fewer breaks) and possible economic dismissals (could the large-scale use of AI systems qualify as a "technological change" justifying redundancies?).
Reshaping labor laws to protect employees?
The use of AI in the workplace will require regulatory frameworks to maximize its benefits while mitigating its potential negative effects. But AI moves faster than lawmakers, and until dedicated provisions are enacted, or existing ones adapted to tackle the new challenges, some of the legal issues raised by AI can already be managed under existing rules.
The General Data Protection Regulation (GDPR) already sets certain limits: under Article 22, individuals have the right not to be subject to a decision based solely on automated processing, including profiling (the use of an individual's personal data to analyze and predict their behavior), where it produces legal or similarly significant effects.
Likewise, under the French Labor Code, the works council must be informed and consulted before any new technology is introduced in the company, and may also request an assessment of the impact of the proposed AI on employment conditions (Article L.2312-8 of the Labor Code).
More recently, the EU Regulation 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (known as the AI Act) provides for rules protecting employees. The AI Act applies to deployers of AI systems within the EU, which includes many employers using AI systems under their authority. This means employers who use AI for tasks such as recruitment, employee management, and other operational activities are subject to the Act's requirements.
In this respect, the AI Act prohibits AI systems used to infer the emotions of a natural person in the workplace (Art. 5(1)(f)) and biometric categorization systems that individually classify natural persons on the basis of their biometric data (...) (Art. 5(1)(g)).
The AI Act classifies as high-risk (Annex III) systems relating to:
- employment, worker management and access to self-employment
- education and vocational training
Under the AI Act, high-risk AI systems trigger an information obligation toward staff representatives. Before putting into service or using a high-risk AI system in the workplace, deployers who are employers must inform the staff representatives and the individuals concerned that they will be subject to the use of a high-risk AI system. This information must be provided, where applicable, in accordance with the rules and procedures laid down in EU and national law and practice regarding the provision of information to employees and their representatives.
Use of AI by employees and risks for employers
While some consider AI a possible threat to employees, one should also take into account the risks that employees' own use of AI creates for employers.
Employees may produce texts and images as part of their work without informing their employer and without the employer being aware of it. For the time being, there is no traceability of AI output (for example, a label that would clearly indicate that a text or image was produced by an AI system).
By way of illustration, ChatGPT, whose primary function is to produce synthetic text in response to a prompt without human intervention, could be used by an employee to write articles or to automate time-consuming, low-value tasks (writing reports, emails, etc.). Some employees have gone so far as to integrate generative AI into their daily work tools, mainly to avoid tedious tasks such as drafting standard emails, generating marketing content, searching for scientific content, or writing commercial prospecting messages.
This is not without risk: feeding data into such tools can compromise personal data and consequently expose the employer to non-compliance with the GDPR.
There are also confidentiality risks, as submitted data can be reused to train the software without the user receiving any specific information about the fate of the data thus processed. This led the "GPDP" (the Italian data protection authority) to temporarily suspend the use of ChatGPT in Italy, since the General Data Protection Regulation (GDPR) requires that everyone be informed of any processing of data concerning them, that the personal data provided be accurate, and that the age of users can be verified.
Finally, issues relating to the ownership and quality of the output may arise. Automatically generated text does not reveal the sources used, which creates a risk for the company if the employee does not disclose this use and part of the output is in fact covered by copyright.
AI and social dialogue
There is no real consensus among economists on the consequences of AI on productivity and employment. AI has the potential to automate a substantial portion of current jobs. Estimates suggest that roughly two-thirds of jobs could be affected by AI automation, with generative AI potentially substituting up to one-fourth of current work. This could lead to job displacement in certain sectors, particularly those involving repetitive tasks.
Nevertheless, studies show that the majority of managers consider AI to be an opportunity, while around 20% consider it to be a threat (see survey “Managers and AI: which knowledge and professional uses?”, APEC, May 2024).
In practice, AI can assist in data analysis, customer service, and decision-making, allowing employees to focus on higher-value tasks that require creativity and problem-solving skills. But the integration of AI is also likely to create skill gaps in the workforce: employees will need to acquire new skills to work effectively with AI systems, and companies will need to invest in reskilling and upskilling programs to keep up with technological advancements.
In this context, staff representatives have a major role to play in the roll-out of AI in the workplace.
Negotiations with trade unions could also be conducted at sector level, in particular through the conclusion of a national inter-professional agreement.
In France for example, the journalists' branch recently concluded one of the first branch agreements taking into account the evolution of the profession due to artificial intelligence (Branch agreement of January 17, 2024, relating to the professional ethics charter in the journalists' branch).
According to Terra Nova, the deployment of AI should be a central theme in social dialogue, potentially becoming a mandatory topic of discussion with staff representatives.
As of today, to protect employees' rights and mitigate risks for employers, companies are clearly encouraged to implement AI policies regulating the use of AI in the workplace, just as they did when emails and internet access entered the workplace and required the roll-out of IT policies.