The protection of personal data in the context of AI systems used for professional purposes: a neglected issue
Ludovica Robustelli 1
1 : Nantes Université
CNRS : UMR6230, CNRS : UMR6502

The risks associated with the use of AI in the workplace have long been well known: the technostress caused by information overload due to digitization, the fear of being replaced by robots and the resulting job losses, the difficulty of keeping personal and professional life separate, and the anxiety generated by video or algorithmic surveillance are just a few examples.

When it comes to how these systems are designed and trained, data, including personal data, is an essential component. The GDPR sets out specific rules for the processing of employees' personal data by employers. For example, online recruitment based on a solely automated decision is prohibited, and the use of profiling to assess employees' performance is subject to strict rules. Article 88 of the GDPR also provides that Member States may adopt more protective provisions for the protection of workers' personal data, provided they notify them to the European Commission. This shows that the protection of workers' personal data is an area that still needs to be explored and where there is still leeway for improvement. Nevertheless, this issue has tended to be sidelined since the AI Regulation came into force, despite the importance of ensuring its proper articulation with the GDPR in order to safeguard the protection of workers' personal data.

On 1 August 2024, one of the world's first instruments attempting to regulate AI systems within the EU internal market (and beyond) came into force. Since then, attention has focused on the obligations binding the operators involved in the lifecycle of these systems. Depending on the risks they pose to safety, health and the protection of fundamental rights, AI tools are classified into four levels of risk, ranging from prohibited practices to systems presenting no risk. AI systems deployed in the workplace are a prime object of observation, since many of their uses are classified as high-risk.

This article aims to highlight the risks associated with the processing of workers' personal data by AI systems (I.), examines how European law is evolving to minimize those risks (II.), addresses the gaps left by current legislation (III.), and concludes with suggestions for improvement (IV.).
