Managing AI: What Businesses Should Know About the Proposed ISO Standard

England and Wales


The International Organisation for Standardisation (“ISO”) has published a draft international standard on AI management systems, ISO 42001 (the “AI MSS”). The draft AI MSS is designed to help organisations act responsibly in their use of, and roles in relation to, AI systems. The AI MSS is a welcome addition to a range of ISO standards dealing with AI, including ISO 38507, which addresses the governance implications of the use of AI, and the recently published ISO 23894, which addresses AI risk management.

As discussed in our previous Law-Now, ISO standards go through a set development process, with the AI MSS currently scheduled to be published in August 2023.

What is a Management Systems Standard?

Management system standards are a type of standard that can support organisations in implementing an integrated system for addressing matters such as senior management support, training, governance processes and risk management. As with other management system standards, the AI MSS is built around a circular process of establishing, implementing, maintaining and continually improving an AI management system.

In essence, a management system standard helps organisations:

  • to improve their performance by specifying repeatable steps that they can implement to achieve their goals and objectives; and
  • to create an organisational culture that reflexively engages in a continuous cycle of self-evaluation, correction and improvement of operations and processes through heightened employee awareness and management leadership and commitment.

Examples of well-known management system standards are ISO 9000 (quality) and ISO 27001 (information security).

The AI MSS is intended to be a management system standard for the responsible development and use of AI, comparable in approach to these established standards.

The AI MSS will be an auditable and certifiable standard. Audits are a vital part of the management system approach, as they enable an organisation to check how well it is meeting its objectives and to demonstrate conformity to the standard.

Why is the AI MSS needed?

The AI MSS notes that it is applicable to any organization, regardless of size, type and nature, that provides or uses products or services that utilize AI systems. The use of AI systems (whether to develop, use, monitor or provide products or services that use AI) involves considerations that go beyond those of other systems. The AI MSS focusses on:

  • AI systems that have the potential to change their behaviour through use, presenting a challenge to ensure continuing monitoring and compliance with rules and/or accepted practices;
  • AI systems involved in automatic decision making (possibly in a non-explainable or transparent way), which require specific management beyond that of a traditional system; and
  • the replacement of human interaction with machine learning, insight and data analysis, which increases the opportunities for applying AI systems while also changing the way those systems are justified, developed or deployed. 

Overall Structure

The AI MSS has two key parts:

  • the Management part (clauses 5 to 10), which sets out the requirements on how to manage an AI system in a responsible way; and
  • the Controls part (Annex A) and the Implementation guidance (Annex B), which set out the technical and organisational measures in support of the management requirements.

Requirements on how to manage an AI system in a responsible way

These requirements are mandatory for compliance and include the following:

  1. Context: organisations should understand the specific issues relevant to their purpose that relate to their use of AI systems, and the purpose behind their use of AI more generally. This involves considering the expectations and needs of relevant parties (such as anyone using or affected by the system), determining the scope of the system and, more broadly, establishing an AI management system.
  2. Leadership: organisations should ensure that management demonstrates a commitment to and leadership of AI systems, establishes an AI policy and properly delegates roles, responsibilities and authorities.
  3. Planning: when planning for the system, organisations should take the context of its use (as described above) into account by, for example, establishing and maintaining AI risk criteria, planning how to address risks and opportunities associated with the system, carrying out an AI risk assessment and, importantly, planning how to achieve the objectives of the system and manage any changes.
  4. Support: organisations should ensure that adequate resources, information, communications and awareness are in place appropriate to the system in use. This could also involve support for updating any information in line with changes to the system. 
  5. Operation: closely related to Planning, organisations should ensure that the plans put in place for the use of the system (taking into account the risk assessment and any other similar documentation) are adequately followed and utilised. 
  6. Performance evaluation: organisations should ensure that their use of AI systems is regularly monitored, audited and ultimately reviewed by management, to confirm that use remains as planned, that processes and plans remain relevant and that the goals of the system are being met.
  7. Continual improvement: organisations should ensure that the effectiveness, adequacy and suitability of AI systems are continually improved. Where any nonconformity takes place, organisations should be proactive in correcting the issue, understanding its root cause, implementing any responses and making any necessary changes to the system. Documented information shall be available as evidence of (i) the nature of the nonconformities and any subsequent actions taken; and (ii) the results of any corrective action.

Controls (Annex A) and Implementation (Annex B)

Implementation of these controls is optional for compliance, but organisations must provide a rationale where a control is not implemented. These sections of the AI MSS are structured in terms of:

  • objectives;
  • controls to achieve an objective; and
  • implementation guidance.

Annex A - Reference control objectives and controls

This annex includes tables of suggested objectives (similar to the considerations listed above), breaking each objective down into specific features and assigning each feature a control. For example:

B.2 Policies related to AI

Objective: To provide management direction and support for AI systems according to business requirements and applicable legal obligations (including contractual obligations).

  • AI policy: The organization shall document a policy for the development or use of AI systems.
  • Alignment with other organizational policies: The organization shall determine where other policies can be affected by, or apply to, the organization’s objectives with respect to AI systems.
  • Review of the AI policy: The AI policy shall be reviewed at planned intervals, or additionally as needed, to ensure its continuing suitability, adequacy and effectiveness.

* This is an excerpt from table A.1, Annex A

Annex B - Implementation guidance for AI controls

This goes into more detail on how to implement the controls suggested in Annex A and attempts to assist organisations in the actual implementation of the controls. For example, when discussing the implementation of an AI policy, the annex states:

‘The AI policy should be informed by:

  • business strategy;
  • organisational values and culture and the amount of risk the organization is willing to pursue or retain;
  • the level of risk posed by the AI systems;
  • legal obligations, including pursuant to contract;
  • the risk environment of the organization;
  • impact to relevant interested parties.

The AI policy should include:

  • principles that guide all activities of the organization related to AI;
  • processes for handling deviations and exceptions to policy.

The AI policy should consider topic-specific aspects where necessary to provide additional guidance or provide cross-references to other policies dealing with these aspects. Examples of such topics include:

  • AI resources and assets;
  • need for AI system impact assessments;
  • AI system development.

Relevant policies should guide the development, purchase, operation and use of AI systems.’

Example: how the AI MSS can be used when considering an AI system impact assessment


Requirement (Clause 6.1.4)

The organisation shall assess the potential consequences for individuals and societies that can result from the development or use of AI systems. The AI system impact assessment shall determine the potential consequences an AI system’s deployment and intended use has on individuals and societies. The result of the system impact assessment shall be documented and made available to relevant interested parties where appropriate.

The organisation should consider whether an AI system affects:

  • the legal position or life opportunities of individuals;
  • the physical or psychological well-being of individuals;
  • universal human rights;
  • society.

Objective (Annex B.5)

To assess system impacts to interested parties of the AI system throughout its life cycle.

Control (Annex B.5.2)

The organisation shall assess the potential consequences for individuals and societies that would result from the development or use of AI systems.

Implementation guidance

Topics the organisation should consider can include, but are not limited to:

a) circumstances under which an AI system impact assessment should be performed, which can include, but are not limited to:

  • criticality of the intended purpose and the context in which the AI system is used, or any significant changes to these;
  • complexity of AI technology and the level of automation of AI systems, or any significant changes to that;
  • sensitivity of data types and sources processed by the AI system, or any significant changes to that;

b) elements that are part of the AI system impact assessment process, which can include:

  • identification (e.g. sources, events and outcomes);
  • analysis (e.g. consequences and likelihood);
  • evaluation (e.g. acceptance decisions and prioritisation);
  • treatment (e.g. mitigation measures);
  • documentation, reporting and communication;

c) who performs the AI system impact assessment;

d) how the AI system impact assessment can be utilised (e.g. how it can inform the design or use of the system, whether it can trigger reviews and approvals);

e) individuals and societies that are considered based on the system’s intended purpose, use and characteristics (e.g. assessment for individuals, groups of individuals or societies).

The AI MSS also contains two additional informational annexes:

Annex C - Potential AI-related organisational objectives and risk sources

This annex includes a variety of objectives and risk sources organisations can consider when managing AI system risk. Objectives, for example, include privacy, robustness, security and fairness while risk sources include the level of automation, complexity of the environment and lack of transparency and explainability.

Annex D - Use of AI management system across domains or sectors

This annex acknowledges that the advice and steps described in the AI MSS are deliberately broad, and that AI systems may therefore be in use in sectors with other important standards, obligations or commitments, such as defence, energy or health. The annex accordingly addresses how organisations should integrate their AI management system with other management system standards and gives advice on certification schemes.


In the view of the authors, the AI MSS provides a very useful set of considerations to assist organisations implementing AI in systematically managing, controlling and documenting their use of the technology. In addition, the annexes provide practical advice to organisations. Prior to final publication, we expect comments to be submitted on the draft, which the ISO editor will respond to in due course. Following this and further work, a revised draft will be published, at which point we will have a clearer view of how the final standard will look, and whether it will be approved or require a further stage of comments and consultation.

The authors would like to thank Jake Sargent, Associate at CMS, for his assistance in writing this article.