Artificial Intelligence at Work: Legal Issues in French Labour Law

By Laure Joncour on August 26, 2025

Artificial Intelligence (AI) is increasingly present in our professional lives—whether in recruitment, human resources management, task automation or decision-making. Employees use AI to enhance their efficiency or to assist with repetitive tasks, often without informing their managers.

AI in the workplace utilises a range of tools, including:

  • Virtual assistants (e.g. ChatGPT) for drafting emails, summarising documents, or generating ideas;
  • Automated translation tools;
  • Predictive data analysis software;
  • Systems for automating repetitive tasks; and
  • Chatbots.

What are the legal issues in labour law arising from the use of AI in a work context?

While the EU Artificial Intelligence Act (AI Act), a regulation whose first provisions became applicable on 2 February 2025, seeks to regulate both providers and deployers of AI systems, the core legal principles applicable to employers are already enshrined in the French Labour Code.

Review of the core principles:

Consultation with the Works Council (CSE – for companies with at least 50 employees):

The CSE may need to be consulted on several grounds, most notably when new technologies are introduced within the company. Any such implementation of AI requires consultation with the CSE, which must be informed and consulted about the project and its potential impact on employees and their employment. It is the employer’s responsibility to assess the impact of any AI use and to determine whether, and on what grounds, the CSE should be consulted.

Transparency:

Both the AI Act and the Labour Code mandate transparency in the workplace. In the context of recruitment and employee evaluation, the Labour Code specifies that the methods and techniques applied must be clearly disclosed to both employees and job applicants.

Accordingly, when a candidate or employee is evaluated, hired or dismissed based on (or influenced by) a decision made by an algorithm, they have the right to be informed of the use of AI and to understand how it is being applied.

Algorithmic Discrimination:

Research indicates that recruitment processes are susceptible to “unconscious bias.” Our opinions are subjective, shaped by culture and upbringing, and can unconsciously influence hiring decisions. While it might be assumed that AI could ensure objectivity in recruitment, algorithms trained on biased data may perpetuate or even amplify existing discrimination. Whether the discrimination is human-driven or generated by AI, labour law prohibits any form of discrimination based on unlawful criteria such as gender or origin.

Consequently, it is essential to conduct regular audits of AI tools to detect and correct any systemic biases.
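As an illustration of what such an audit might involve in practice, the sketch below computes selection rates across a protected attribute and compares them using the "adverse impact ratio" heuristic. This is a simplified, hypothetical example, not a legal test under French law or the AI Act; the group labels and threshold are assumptions for illustration only.

```python
# Illustrative bias-audit sketch (hypothetical, not a legal standard):
# compare hiring selection rates across groups and compute the
# adverse impact ratio (rate of the least-favoured group divided
# by the rate of the most-favoured group).

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected_bool) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratio(decisions):
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: group A selected 40/100, group B 20/100.
outcomes = ([("A", True)] * 40 + [("A", False)] * 60
            + [("B", True)] * 20 + [("B", False)] * 80)

ratio = adverse_impact_ratio(outcomes)
# A ratio well below 1.0 (here 0.5) would flag the tool for
# closer human review and possible retraining of the model.
```

A metric like this is only a starting point: a low ratio does not by itself prove unlawful discrimination, and a high one does not rule it out, which is why audits should combine statistical checks with human review.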

Training:

Employers are legally required to ensure that their employees receive adequate training to help them adapt to new technologies, and this obligation extends to AI tools introduced into their work.

Data Protection:

AI frequently relies on the analysis of sensitive data, such as personal information, employment history and online behaviour. The collection and processing of such data must comply with strict GDPR rules. Data must be collected lawfully and transparently. Additionally, organisations are required to obtain informed consent from employees where necessary and to implement measures to ensure the security of such data.

Regulation of AI Use by Employees:

If an AI system makes a mistake, such as an HR tool wrongly rejecting a candidate or a virtual assistant providing incorrect information, the question of accountability arises: is the employer or the software provider responsible? Similarly, an employee might misuse AI or inadvertently disclose sensitive data through a virtual assistant. These risks must be addressed through clear internal policies, and employees must be educated on the risks of using AI.

Health and Safety:

AI can be a valuable asset for improving occupational health and safety, particularly regarding dangerous or physically demanding tasks. However, its implementation can also become a source of stress—through social isolation caused by increased digitalisation, human-robot interaction or employee surveillance. Employers are under an enhanced duty of care to ensure the safety of their employees and to assess all workplace risks. The company’s mandatory risk assessment documentation (“Document Unique d’Évaluation des Risques”) must include any risks associated with the presence or introduction of AI in the workplace.

The integration of AI in the workplace is not just a technological challenge—it is also a legal one. As the saying goes, “an ounce of prevention is worth a pound of cure.” It is better to proactively invest time and effort upfront to prevent misuse, safeguard employees and minimise legal disputes.

  • Posted in:
    Employment & Labor, International
  • Blog:
    Global Workplace Insider
  • Organization:
    Norton Rose Fulbright
