EU Commission issues Guidelines on Prohibited AI Practices – Part II

CEE, UK

The European Commission has issued long-awaited comprehensive guidelines on prohibited artificial intelligence (AI) practices, as established by Regulation (EU) 2024/1689 (AI Act). These practical guidelines, which include examples and “use cases”, are crucial for businesses and organisations involved in the development, deployment and use of AI systems within the EU. They have been issued to support compliance with the AI Act, which seeks to balance innovation with the protection of fundamental rights and Union values.

This is the second of two articles on these guidelines. In our first article, we summarised the guidelines on harmful manipulation and exploitation, social scoring, and individual risk assessment and prediction of criminal offences. In this second article, we summarise the guidelines on the remaining prohibited AI practices.

Untargeted scraping to develop facial recognition databases (Article 5(1)(e))

This prohibition covers the creation or expansion of facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.

  • Examples: AI systems scanning live CCTV feeds to compile unauthorised biometric databases; or AI companies collecting millions of photos from social media without user consent to train facial recognition models.

Based on the guidelines, the following AI systems fall outside the scope of this prohibition:

  • Untargeted scraping of biometric data other than facial images (such as voice samples).
  • AI systems that harvest large numbers of facial images from the internet to build AI models that generate new images of fictitious persons.

Emotion recognition in workplaces and schools (Article 5(1)(f))

AI systems fall under this prohibition if they are used to identify or infer the emotions (e.g. happiness, sadness, anger, surprise, disgust, embarrassment, excitement, shame, contempt, satisfaction, amusement, etc.) of natural persons on the basis of their biometric data in workplaces and educational institutions, with exceptions for medical and safety reasons.

  • Examples: a call centre using webcams and voice recognition systems to track its employees’ emotions, such as anger; using emotion recognition AI systems during the recruitment process; or an educational institution using an emotion recognition AI system during admission tests for new students.

Examples not falling under this prohibition:

  • AI systems inferring emotions from written text rather than from biometric data.
  • AI systems detecting physical states, such as pain or fatigue (e.g. of professional pilots for the purpose of preventing accidents).
  • AI systems for emotion detection outside workplaces and educational institutions, such as a call centre using voice recognition systems to track its customers’ emotions, or emotion detection used in an online language course.
  • Emotion-detecting AI systems used in workplaces and educational institutions for medical or safety reasons.

Biometric categorisation (Article 5(1)(g))

This prohibition covers AI systems categorising individuals based on biometric data to infer sensitive characteristics like race, political opinions, or sexual orientation.

  • Examples: an AI system that categorises persons active on a social media platform according to their assumed political orientation by analysing the biometric data in the photos they have uploaded to the platform, so that they can be sent targeted political messages; or a biometric categorisation system that claims to be able to deduce an individual’s race from their voice.

Examples not falling under this prohibition:

  • Biometric categorisation ancillary to another commercial service and strictly necessary for objective technical reasons.
  • Labelling, filtering, or categorisation of biometric data sets acquired in line with EU or national law, which may be used, for example, for law enforcement purposes.

Real-time remote biometric identification (RBI) for law enforcement (Article 5(1)(h))

This prohibition covers the use of real-time RBI systems in publicly accessible spaces for law enforcement purposes, with specific exceptions for targeted searches for victims, the prevention of imminent threats, and the localisation of suspects of certain serious crimes. These exceptions may be relied on only subject to strict safeguards, such as conducting a fundamental rights impact assessment and registering the system in the EU database. The guidelines contain detailed rules on the fundamental rights impact assessment. All other uses of RBI systems not covered by this prohibition are classified as high-risk AI systems under the AI Act.

This concludes the second Law-Now article in this series on the EU’s guidelines on prohibited AI practices.

Conclusion

The Commission's guidelines provide essential clarity on the prohibited AI practices under the AI Act, aiming to protect fundamental rights while fostering innovation. Businesses and organisations involved in AI should carefully review these guidelines to ensure compliance and avoid substantial penalties.

For advice on navigating these regulations and to ensure that your AI systems meet required standards, contact your CMS client partner or these CMS experts: