AI Act - Prohibited Practices and High-Risk AI Systems

Germany

The classification of AI systems as prohibited practices or high-risk AI systems depends on their intended purpose and specific areas of application.

While parliament's current version retains the risk-based approach of the Commission's draft of the Artificial Intelligence Act (AI Act), parliament seeks to provide more clarity on the classification of AI systems, particularly high-risk AI systems, by adding the criterion of significant risk.

Unacceptable risk: classification of generative AI as prohibited practices.

The AI Act prohibits the placing on the market, putting into service, and use of AI systems deemed incompatible with European Union (EU) fundamental rights (Art. 5), with the catalog of prohibited practices expanded by parliament:

  • Social scoring systems: AI systems that score people based on personal characteristics such as race, gender, religion, or political beliefs (i.e. social scoring) are prohibited because such scoring can lead to discrimination, stigmatisation, and injustice.
  • Manipulation of behaviour: AI systems that aim to manipulate or influence people's behaviour to guide their decisions are prohibited. This includes, for example, the targeted manipulation of content on social media or other platforms to pursue political or commercial goals.
  • Real-time facial recognition in public spaces: the AI Act provides for a ban on the use of AI systems for real-time facial recognition in public spaces.
  • Risk assessment and profiling for delinquency: AI systems that assess natural persons or groups for the risk of delinquency, or that predict the occurrence or recurrence of a crime or misdemeanour on the basis of profiling of a natural person or an assessment of personality traits and characteristics, including a person's location or the past criminal behaviour of natural persons or groups of natural persons, are prohibited.
  • Database creation or expansion for facial recognition: Another prohibition applies to AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or from video surveillance footage.
  • Inference of emotions: AI systems that infer the emotions of a natural person in the areas of law enforcement, border management, the workplace, and education are also considered an unacceptable risk and are prohibited.

A violation of the prohibition of AI practices under Art. 5 is subject to fines of up to EUR 40 million or 7% of a company's total worldwide annual turnover in the preceding financial year (Art. 71). The fines provided for by the AI Act can thus be considerably higher than under the European General Data Protection Regulation (GDPR), which provides for a maximum of EUR 20 million or 4% of a company's total worldwide annual turnover in the preceding financial year.
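For illustration only, the following short Python sketch (not part of the AI Act or of this guide) compares the two fine ceilings for a hypothetical company with EUR 1 billion in worldwide annual turnover, assuming that, as under the GDPR, the higher of the fixed amount and the turnover-based amount marks the upper limit:

    # Illustrative sketch only: compares the maximum fine ceilings under the
    # AI Act (parliament's version, Art. 71) and the GDPR for a given turnover.
    # Assumption: the higher of the fixed cap and the turnover-based cap applies,
    # mirroring the GDPR's "whichever is higher" mechanism.

    def fine_ceiling(turnover_eur: float, fixed_cap_eur: float, turnover_share: float) -> float:
        """Return the upper fine limit: the greater of the fixed cap and the
        given share of total worldwide annual turnover."""
        return max(fixed_cap_eur, turnover_share * turnover_eur)

    turnover = 1_000_000_000  # hypothetical: EUR 1 billion worldwide annual turnover

    ai_act_cap = fine_ceiling(turnover, 40_000_000, 0.07)  # EUR 70 million
    gdpr_cap = fine_ceiling(turnover, 20_000_000, 0.04)    # EUR 40 million

    print(f"AI Act ceiling: EUR {ai_act_cap:,.0f}")
    print(f"GDPR ceiling:   EUR {gdpr_cap:,.0f}")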

High-risk AI systems: Generative AI is captured depending on the intended purpose and application modalities.

High-risk AI systems are subject to stricter rules on risk management systems, data, data governance, technical documentation and record keeping, transparency and provision of information to users, human oversight and robustness, accuracy and cybersecurity. The classification of high-risk AI systems in the Commission's proposed regulation has been clarified in parliament's negotiating position.

The AI Act further distinguishes between two categories of high-risk AI systems.

One category comprises AI systems that are themselves products, or safety components of products, covered by the EU harmonisation legislation listed in Annex II and that, as such products or safety components, are subject to a third-party conformity assessment with regard to health and safety before being placed on the market or put into service under that harmonisation legislation. Basic examples include toys, aircraft, two- or three-wheeled vehicles, automobiles, medical devices, railway systems, and elevators. In the area of generative AI models, virtual assistants or personalised recommendation features could be considered components of such products.

In addition, AI systems that fall under one or more of the areas listed in Annex III of the AI Act are considered high-risk AI systems if they pose a significant risk of harm to the health, safety or fundamental rights of natural persons, or of harm to the environment. The restrictive criterion of an actual significant risk, as well as an obligation for the Commission to draw up guidelines six months before the AI Act enters into force that set out in more detail the circumstances in which the use of AI systems gives rise to a significant risk to health, safety or fundamental rights, were added to parliament's negotiating position (Art. 6 (2)). Annex III was also expanded; the areas it lists now include:

  • AI systems in safety-critical areas: AI systems used in safety-critical areas such as the transport sector, the energy industry (e.g. water, gas, heat and power supply) or the healthcare sector are considered high-risk AI systems because errors or malfunctions can lead to serious damage or injury to people.

For example, if generative AI is used to review X-rays, CT scans, and MRIs for signs of cancer, heart disease, or neurological disorders, or to help physicians make diagnostic and treatment decisions by analysing patient records, these AI systems would qualify as high-risk AI systems.

  • Biometric identification systems: For AI systems that use biometric data such as facial recognition, voice recognition, or behavioural analysis to identify individuals, the high-risk classification results from the serious data-protection and privacy implications of any misuse of these systems.

However, biometric identification systems are unlikely to use generative AI components in most cases.

  • Evaluating AI systems in certain areas: Where AI systems are used to evaluate performance, rank an individual, or provide access to and use of certain essential private and public services and benefits, an erroneous or biased evaluation may have a significant impact on the career prospects, participation in society, standard of living, and livelihoods of the individuals concerned. In particular, the following areas are therefore considered high risk:
    • Education (e.g. assessment of student performance);
    • Automated classification of applicants in employment, personnel management and access to self-employment;
    • Evaluation of credit scores or of the creditworthiness of natural persons (with an exception for AI systems put into service by small providers for their own use);
    • Evaluation of natural persons for the conclusion of health and life insurance policies.

Such evaluating high-risk AI systems will often be generative AI systems that, for example, analyse data such as income and employment history in order to predict the creditworthiness of a company or an individual, or that review the resumes and professional qualifications of applicants for their suitability for a particular employment relationship and, under certain circumstances, already make a pre-selection of qualified applicants. Another example is the classification and evaluation of employees for promotion or termination.

  • AI systems in law enforcement and justice: AI applications used by law enforcement or EU authorities (e.g. as lie detectors or to assess the reliability of evidence in criminal proceedings) are considered high-risk.
  • AI systems related to migration, asylum, and border control: AI systems used in entry or asylum procedures (e.g. predicting security or health risks of a person entering the country, checking travel documents) are likewise intended to be classified as high-risk AI systems.
  • AI systems in the context of elections: If AI systems are used to influence election results or the voting behaviour of natural persons, they impair voters' freedom of choice and therefore pose a high risk.
  • AI systems in the context of social media platforms: AI systems used in the recommender systems of social media platforms designated as very large online platforms to recommend user-generated content to users are considered high-risk because they can be used in ways that affect online safety, the formation of public opinion, elections, democratic processes, and societal concerns.

Parliament's negotiating position also adds that providers of AI systems used in the areas covered by Annex III who believe that their AI systems do not pose a significant risk to health, safety, fundamental rights or the environment may submit a reasoned notification to the competent authority (Art. 6 (2a)). If the competent authority considers the AI system to be a high-risk AI system, it may object to the notification within a certain period, and the provider may in turn contest that objection. If a provider places the AI system on the market before the period expires, fines may be imposed.

Conclusion

AI systems that pose an unacceptable risk are banned outright as prohibited practices. For high-risk AI systems, the AI Act sets out requirements and obligations for providers, deployers, distributors, importers and other third parties along the AI value chain. It is therefore worth examining the requirements of the AI Act now.