Over the last two years, the EU has paved the way for a uniform legal framework for the development, marketing and use of AI that conforms with Union values.
As a result, on 21 April 2021 the European Commission put forward its proposal for a Regulation laying down harmonised rules on artificial intelligence (the Artificial Intelligence Act), the EU's first legal framework on AI, which both addresses the risks of AI and positions Europe to play a leading global role in this sector. The Proposal will be reviewed and debated by the Council of the European Union and the European Parliament. Once the Proposal is adopted, organisations will have 24 months to prepare for compliance.
The most important elements of the Proposal include the following:
- A list of prohibited AI practices considered unacceptable because they contravene EU values. The Proposal follows a risk-based approach, differentiating between uses of AI that create (i) an unacceptable risk; (ii) a high risk; and (iii) a low or minimal risk. The prohibitions cover practices with significant potential to manipulate persons through subliminal techniques beyond their consciousness, or to exploit the vulnerabilities of specific groups such as children or persons with disabilities, in order to materially distort their behaviour in a manner likely to cause psychological or physical harm to them or to another person. The Proposal also prohibits AI-based social scoring for general purposes by public authorities. The use of "real-time" remote biometric identification systems in publicly accessible spaces for law-enforcement purposes is likewise prohibited unless certain limited exceptions apply.
- Specific rules for AI systems that pose a high risk to health and safety or fundamental rights. These rules are based on the intended purpose of the AI system in line with existing product-safety legislation. The specific legal requirements for high-risk AI systems include data governance, documentation and record keeping, transparency and provision of information to users, human oversight, robustness, accuracy and security. Providers will be required to register their stand-alone high-risk AI systems (impacting fundamental rights) in an EU-wide database before placing them on the market or otherwise putting them into service.
- Transparency obligations for systems that (i) interact with humans (e.g. chatbots); (ii) are used to detect emotions or determine association with categories (e.g. social) based on biometric data; or (iii) generate or manipulate content (e.g. deep fakes).
- The European Artificial Intelligence Board, composed of representatives from the member states and the Commission, will facilitate a harmonised implementation of the regulation.
- National competent authorities, including a designated national supervisory authority in each member state, will supervise the application and implementation of the regulation.
- Providers of AI systems must report serious AI-related incidents and malfunctions and investigate them.
- There are three levels of sanctions: fines of up to EUR 10 million, EUR 20 million or EUR 30 million or, if the offender is a company, up to 2%, 4% or 6% respectively of its total worldwide annual turnover for the preceding financial year, whichever is higher.
For more information on this Proposal and opportunities in the European AI sector, contact your CMS client partner or CMS experts:
Our expert guide to “AI strategies in CEE” is available on the website here.