EU Commission issues Guidelines on Prohibited AI Practices – Part I

CEE, UK

The European Commission has issued long-awaited, comprehensive guidelines on prohibited artificial intelligence (AI) practices, as established by Regulation (EU) 2024/1689 (AI Act). These practical guidelines, which include examples and “use cases”, are crucial for businesses and organisations involved in the development, deployment, and use of AI systems in the EU. They have been issued to ensure compliance with the AI Act, which seeks to balance innovation with the protection of fundamental rights and Union values.

This is the first of two Law-Now articles summarising these guidelines.

Background and objectives

The AI Act, which entered into force on 1 August 2024, introduces harmonised rules for AI systems in the EU, categorising them into four risk levels: unacceptable risk, high risk, transparency risk, and minimal to no risk. The guidelines focus on AI systems posing unacceptable risks, which are prohibited under Article 5 of the AI Act. These prohibitions are designed to protect fundamental rights, including human dignity, privacy, and non-discrimination.

Prohibited AI practices – use cases, deadlines and penalties

The guidelines outline the conditions for seven categories of prohibited AI practices and illustrate them with practical use cases that fall under each prohibited practice, along with additional examples that constitute exceptions to the main rule.

The prohibitions apply from 2 February 2025 and, in principle, to all AI systems, regardless of whether they were placed on the market or put into service before or after that date.

Non-compliance can result in significant penalties, which will apply from 2 August 2025 and include fines of up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher. Before 2 August 2025, the provisions on penalties for non-compliance with the prohibitions in Article 5 AI Act will not apply since, in this interim period, there will be no market surveillance authorities to monitor compliance with the prohibitions.

1. Harmful manipulation and exploitation (Article 5(1)(a) and (b)):

This prohibition covers AI systems that deploy subliminal, manipulative, or deceptive techniques to materially distort behaviour, causing or being reasonably likely to cause significant harm, including physical, psychological, financial, and economic harm.

  • Examples: visual or auditory subliminal messages; subvisual and subaudible cueing; embedded images; misdirection; sensory or personalised manipulation by an AI system; or an AI chatbot that uses a synthetic voice to impersonate a person's friend, pretending to be that person in order to perpetrate scams and cause significant harm.

This prohibition also covers the exploitation of vulnerabilities due to age, disability, or socio-economic situation, where this causes or is reasonably likely to cause significant harm.

  • Examples: an AI-powered toy that encourages children to complete increasingly risky challenges; a therapeutic chatbot offered to persons with mental disabilities that exploits their limited intellectual capacities to influence them to buy expensive medical products; an AI predictive algorithm that targets people living in low-income postcodes with advertisements for predatory financial products.

Lawful AI systems that do not fall under these prohibitions include:

  • Lawful persuasion: an AI system engages in persuasion using personalised recommendations based on transparent algorithms, user preferences and user controls.
  • AI systems not likely to cause significant harm: a therapeutic chatbot uses subliminal techniques to steer users towards a healthier lifestyle and to quit harmful habits, such as smoking.

2. Social scoring (Article 5(1)(c)):

Social scoring refers to AI systems that evaluate or classify individuals based on their social behaviour or personal characteristics, leading to detrimental treatment in unrelated social contexts or treatment that is disproportionate to the gravity of the behaviour.

  • Examples: an authority for migration and asylum implements a partly automated surveillance system at refugee camps, built on a range of surveillance infrastructure including cameras and motion sensors; tax authorities use an AI predictive tool across all taxpayers’ tax returns in a country to select returns for closer inspection based on unrelated data, such as taxpayers’ social habits or internet connections.

Examples of lawful AI systems that do not fall under this prohibition:

  • Evaluations that are not based on individuals' personal or personality characteristics or social behaviour, even if individuals may in some cases be indirectly affected by the score;
  • Financial credit-scoring systems used by creditors or credit information agencies to assess a customer’s financial creditworthiness or outstanding debts, which provide a credit score or creditworthiness assessment based on the customer’s income, expenses and other financial and economic circumstances.

3. Individual risk assessment and prediction of criminal offences (Article 5(1)(d)):

This category covers AI systems used to assess or predict the risk of a person committing a crime based solely on profiling or on personality traits and characteristics.

  • Examples: AI that predicts future criminal behaviour using personality traits or psychological profiling rather than objective, verifiable facts directly linked to criminal activity; or AI that automatically classifies individuals as potential suspects in criminal investigations without human oversight or objective verification.

Based on the guidelines, the following AI systems fall outside the scope of this prohibition:

  • AI systems supporting human assessment based on objective and verifiable facts directly linked to a criminal activity; these systems, however, will be classified as high-risk AI systems;
  • Location-based, geospatial or place-based crime predictions;
  • AI systems used for crime predictions and assessments in relation to legal entities; and
  • AI systems used for individual predictions of administrative offences.

See the second Law-Now article in this series for more information on these guidelines.

Conclusion

The Commission's guidelines provide essential clarity on the prohibited AI practices under the AI Act, aiming to protect fundamental rights while fostering innovation. Businesses and organisations involved in AI should carefully review these guidelines to ensure compliance and avoid substantial penalties.

For expert advice on navigating these regulations and to ensure your AI systems meet required standards, contact your CMS client partner or these CMS experts: