Igniting the Path to Trustworthy AI: European Standards and the AI Act


In the realm of AI, all eyes are on the ground-breaking AI Act (the “AIA”), a momentous piece of EU legislation that will apply to providers, users, importers, and distributors of AI systems within the European Union. Amidst all the enthusiasm surrounding the AIA, a quiet yet pivotal development has taken place: the European Commission's (the “Commission”) request to the European Standardisation Organisations to craft European Standards championing "safe and trustworthy" AI. This significant request has largely flown under the radar of the mainstream media. These standards will foster technical harmonisation and lay the groundwork required for the creation of a trustworthy AI landscape. Once completed, they will play a paramount role in implementing the AIA, crystallising crucial requirements and providing much-needed clarity to the stakeholders affected by this transformative legislation.

What is the European Standards Framework?

When it comes to standardisation, there are three main types of regimes: national, regional, and international. Let's dive into each one to understand their significance.

At the national level, organisations like the British Standards Institution (the “BSI”) in the UK take charge of creating national standards. These standards serve as crucial guidelines within their respective countries.

On a regional scale, various European Standardisation Organisations (the “ESOs”) step in to prepare European standards within the EU. These standards play a vital role in harmonising practices and ensuring consistency across member states. There are three ESOs: the European Committee for Standardisation (the “CEN”); the European Committee for Electrotechnical Standardisation (the “CENELEC”); and the European Telecommunications Standards Institute (the “ETSI”).

At the global level, we have the International Organization for Standardization (the “ISO”) and the International Electrotechnical Commission (the “IEC”). These organisations develop standards with global reach, applying across borders and continents.

Recognised and formalised by EU Regulation (1025/2012), which took effect in 2013, ESOs hold a special place in the standardisation landscape. In fact, a remarkable 30% of ESO standards are mandated by the Commission itself. Where this is the case, by default, there is a “presumption of conformity”. In other words, when an organisation, institution or government follows a voluntary standard that was mandated by the Commission, it is presumed also to conform to the legislation that the standard was designed to complement.

The aim of the ESOs is to promote harmonisation within the EU, uniquely placing them in a position where the creation of standards bridges any existing or potential gaps in EU law. In this way, standardisation provides many benefits to the EU such as consolidating the single market, strengthening competition and facilitating cross-border trade, all resulting in greater economic strength.

What is the plan for the European Standards on AI?

On 22 May 2023, the Commission published a standardisation request to CEN and CENELEC in order to support the upcoming AIA (the “Standardisation Request”). The Commission has requested that CEN and CENELEC draft new European standards (the “European Standards”), or European standardisation deliverables (the “Deliverables”) as detailed in Annex II of the Standardisation Request, in support of the “key technical areas covered” by the AIA.

CEN and CENELEC are required to develop European Standards and Deliverables (collectively referred to as “ESDs”) that address the following areas:

  • Risk management system for AI systems: These ESDs will specify the requirements for a risk management system for AI systems. The aim is to establish a continuous iterative process throughout the AI system's lifecycle that prevents or minimises risks to health, safety, or fundamental rights.
  • Governance and quality of datasets used to build AI systems: These ESDs will include specifications for adequate data governance and data management procedures to be implemented by AI system providers. They will focus on data generation and collection, data preparation operations, addressing biases, and ensuring the quality of datasets used to train, validate, and test AI systems.
  • Record keeping through logging capabilities by AI systems: These ESDs will define specifications for automatic logging of events by AI systems. The aim is to enable traceability throughout the system's lifecycle, monitor operations, and facilitate post-market monitoring by providers.
  • Transparency and information provisions to the users of AI systems: These ESDs will provide specifications for design and development solutions that ensure the transparency of AI system operations, enabling users to understand the system's output and use it appropriately. They will also include instructions for use, including system capabilities and limitations, as well as maintenance and care measures.
  • Human oversight of AI systems: These ESDs will specify measures and procedures for human oversight of AI systems. Providers will be required to identify and build these measures into the system before placing it on the market or putting it into service. Users should also be able to implement appropriate oversight measures.
  • Accuracy specifications for AI systems: These ESDs will outline specifications to ensure an appropriate level of accuracy for AI systems. Providers will be able to declare relevant accuracy metrics and levels, and appropriate tools and metrics will be defined to measure accuracy against defined levels.
  • Robustness specifications for AI systems: These ESDs will lay down specifications for the robustness of AI systems, taking into account sources of errors, faults, inconsistencies, and interactions with the environment. They will also consider AI systems that continue to learn after being deployed.
  • Cybersecurity specifications for AI systems: These ESDs will provide suitable organisational and technical solutions to ensure that AI systems are resilient against attempts by malicious third parties to alter their use, behaviour or performance, or to compromise their security properties, by exploiting the AI systems’ vulnerabilities.
  • Quality management system: These ESDs will be drafted for providers of AI systems to be implemented within their organisation, with particular consideration given to small and medium size organisations.
  • Conformity assessment for AI systems: These ESDs will provide procedures and processes for conformity assessment activities related to AI systems and the quality management system of AI providers. They will also include criteria for assessing the competence of individuals involved in conformity assessment activities, considering scenarios where the assessment is carried out by the provider or a professional external third-party organisation.

In drafting the ESDs, CEN and CENELEC must consider the Commission’s AI policy objectives, which reflect those of the AIA. These include ensuring the safety of AI systems, promoting investment and innovation in AI, and strengthening international co-operation on AI standardisation consistent with the values and interests of the EU. Consideration of the public interest is also a high priority in drafting the European Standards, “given its importance for the development and the deployment of AI”.

The Commission envisages that the standardisation process will be a collaborative one: ESOs must ensure the effective participation of EU SMEs and civil society organisations in drafting the European Standards, and must gather the relevant expertise in the area of fundamental rights. For SMEs specifically, standardisation aims to promote competition and encourage innovation in the design and development of AI systems and solutions.

The Standardisation Request imposes reporting obligations on CEN and CENELEC, requiring them to regularly indicate the progress made in implementing the standardisation. Specifically, CEN and CENELEC are required to report to the Commission every six months and to submit an initial joint report no later than 22 March 2024.

In addition to the benefits of the European Standards, the Commission believes that collective international standardisation will aid in consolidating a common, global vision of trust in AI and further promote trade by removing barriers to products and services powered by AI. One example of this is the potential EU adoption of standards developed by the ISO and the IEC. For this collaboration to be effective in bridging gaps in international AI governance, CEN and CENELEC must co-operate with the ISO and IEC without prejudicing their own obligations to safeguard the interests of the EU.

Next Steps

CEN and CENELEC have until 22 September 2023 to submit a work programme to the Commission covering all the ESDs listed in Annex I of the Standardisation Request, allocate the responsible technical bodies and set out a timetable for the execution of the requested standardisation activities so that the ESDs can be completed by 30 April 2025. We understand the intention is to leverage existing and upcoming ISO standards on AI as much as possible; otherwise, the Commission's deadline of 30 April 2025 may be difficult to achieve, given that most standards take three to four years to develop. Clearly, it makes sense for the ESDs to be developed before the AIA comes into force (currently expected to be no earlier than the beginning of 2026).

The authors would like to thank Hanisha Kanani, Graduate Solicitor Apprentice at CMS, for her assistance in writing this article.