In April 2022 the International Organization for Standardization (“ISO”) issued a new standard, ISO/IEC 38507, titled “Governance implications of the use of artificial intelligence by organizations” (“ISO 38507” or the “Standard”).
A brief history of governance standards for IT
Since the 1960s, businesses have made increasing use of computers as the technology has advanced: the shift from host-based to distributed systems, the shrinking size of the machines themselves, and ever-greater processing speed. Since 1990, the use of computers in corporate business (referred to as "IT use"), exemplified by the technology revolution, has advanced and brought about major changes in the economy and society.
However, as IT use progressed, its adverse effects began to surface as business problems, and cases of negative impacts on corporate management, such as the "Year 2000 Problem", became widely known to the public. Since 2000, IT has been positioned as essential infrastructure for corporate activities, and it has become necessary to recognise the adverse effects and risks of IT use as an issue for corporate management. It has therefore become necessary to establish a governance framework for IT, integrating the management and evaluation of IT, so that corporate governance can function.
COBIT[1] and ISO/IEC 38500[2] are representative international standards for IT governance. The first edition of COBIT was published in 1996 by ISACA[3] and the IT Governance Institute (“ITGI”), a research organization of ISACA. Since then, the scope of COBIT has expanded from audit, control, management, and IT governance to the governance of enterprise IT.
ISO/IEC 38500 is an international standard for IT governance, published in 2008. A self-regulatory framework with its roots in the OECD's principles of corporate governance, it sets out the principles for realising IT governance, complementing detailed frameworks for introducing IT governance and control such as COBIT. ISO/IEC 38500 states that management should follow six principles (Responsibility, Strategy, Acquisition, Performance, Conformance, and Human Behaviour). Since its publication in 2008, ISO/IEC 38500 has been used as a framework for IT use, pre-implementation assessment, and post-implementation review, and has evolved through the 2015 revision and the development and publication of the 38500 series. ISO 38507 (see below) is the latest member of this series.
From IT governance to AI governance
Since 2010, with the development of data-collaboration businesses among related companies, the use of AI to exploit data in business (referred to as "AI use") has been spreading worldwide. At the same time, problems have surfaced in AI use that are difficult to address with conventional IT governance, so it was decided that a new governance framework needed to be established. However, since AI is technically based on IT, AI governance was framed as an extension of IT governance, built in a form that takes the characteristics of AI into account.
The existing IT governance standards required accountability and transparency frameworks based on management self-regulation. AI governance, on the other hand, was recognised as requiring a framework for establishing trust in management, since interpretations of the ethical and other issues raised by AI use vary with the economic and cultural background of each country. Hence the need for ISO 38507.
What is the purpose of ISO 38507?
The purpose of ISO 38507 is to provide guidance for the governing body of an organization that is using, or considering the use of, artificial intelligence (AI), and to encourage organizations to use appropriate standards to underpin their governance of the use of AI.
ISO 38507 addresses the nature and mechanisms of AI only to the extent necessary to understand the governance implications of its use: what additional opportunities, risks and responsibilities does the use of AI bring? The emphasis in ISO 38507 is on governance (which is done by humans) of the organization’s use of AI, not on the technologies making up any AI system. However, the Standard acknowledges that such governance requires an understanding of the implications of those technologies.
What are the new implications posed by the use of AI?
ISO 38507 notes the responsibility of the governing body in an organisation that uses AI: it is the governing body that sets the goals which shape the organisation’s objectives and its financial and non-financial outcomes. ISO 38507 emphasises that the governing body is central to the organisation, setting its purpose and approving the strategies necessary to achieve that purpose. The governing body therefore has significant influence over the use and impact of AI within an organisation, and must continually assess whether existing governance remains fit for purpose as the use of AI changes. ISO 38507 demonstrates the importance of this through a list of new implications arising from the use of AI, including for instance:
- an increased reliance on technology and systems for the acquisition of data and assurance of its quality;
- the impact of AI on the workforce, from discrimination concerns to redundancy due to automation, but also improvements in the quality of work achieved by delegating tasks to AI systems; and
- the impact on commercial operations and brand reputation.
The governing body’s accountability is emphasised as being maintained across the full lifecycle of the AI technology, from purchase, implementation, deployment and testing through the various project phases all the way to decommissioning. The diagram below (Figure 2), taken from ISO 38507, shows how the AI system life cycle runs from inception to decommissioning.
ISO 38507 reminds the user that AI can be distinguished from other technologies by the vast quantities of data involved, which can be too complex for humans to process: rather than a human working through each logical step to solve a problem, an AI system is data-driven. The complex nature of AI ecosystems means that the degree of oversight required of governing bodies depends on a variety of factors, including the following:
- the intended use of the AI system;
- the type of AI used;
- the potential benefit the AI system will deliver;
- the new risks that can accompany the AI system;
- the stage of implementation of the AI system, amongst others.
What are the key takeaways?
ISO 38507 recommends that organisations take the following actions, amongst others, to place necessary constraints on the use of AI:
- Increase oversight of compliance: Governance oversight within organisations should be based on policies set by the organisation and should identify effective individual and collective accountability in an appropriate chain of responsibility, set against the context in which AI is used. This includes putting policies in place to make sure AI is used appropriately, that there is sufficient human oversight, and that any persons using AI are properly trained and know how to raise concerns. Legal requirements or obligations for using such technologies may be determined alongside the risk appetite of the organisation. ISO 38507 notes that governing bodies should be aware of new sources of risk posed by AI technologies, including unwanted bias, cyber-threats and a lack of AI expertise. The proposed new AI risk management standard, ISO 23894, could be useful for this - please see our summary here.
- Address the scope of use of AI: This sits alongside the importance of governance of data use addressed in ISO 38507, ensuring that data is used for the correct purpose and that sensitive data is protected and secured. This involves formulating relevant assumptions about the data, conducting a prior assessment of its availability, quality, quantity and suitability, and examining it for potential biases. Formulating a description of the AI system, by way of its algorithms, data and models, would help provide enough transparency to ensure the AI technology is being deployed for its intended use.
- Assess and address the impact on stakeholders: ISO 38507 notes that, even outside the context of AI, the governing body is responsible for shaping and defining the organisation’s desired culture, which has an impact on stakeholders connected to the organisation. ISO 38507 notes the human influence on an organisation’s culture and values, which are implicitly embedded in the behaviour of staff, and advocates a degree of human involvement in the AI process, ensuring that AI systems can be monitored and corrected when needed. Conversely, ISO 38507 highlights that an AI system can itself identify where human decision-making is flawed through bias and discrimination. A “Cultures and Values Board” or an “Ethics Review Board” might be set up to supervise the impact of AI systems and make sure they are aligned with the organisation’s values and culture.
What is the significance of ISO 38507?
The governing body shapes the purpose, mission, vision, ethos, values and culture of an organisation and plays a central role in steering strategy, resources and oversight. Governance of AI is itself key to the adoption of AI. The statistics differ widely depending on how ‘AI adoption’ in the EU is measured (7% per Eurostat, 2021, or 42% per the European Commission, 2020), but the fact remains that a key barrier to increased uptake of, and trust in, AI is the question of how exactly AI should be governed. Whilst there is no universal standard on what AI governance should look like, this presents a significant opportunity for legislators globally to map out what they want AI regulation to look like.
ISO has embraced the development of a separate ISO standard on AI risk management, which fits alongside the UK’s National AI Strategy and its strong emphasis on the development of global technical standards. Earlier this year, the UK government announced the creation of a new AI Standards Hub (a summary of which can be found here) to help organisations better utilise and benefit from AI. We hope that ISO 38507 will be promoted by the AI Standards Hub as an additional tool that can be offered to the UK AI community.
Like all ISO standards, the publication of ISO 38507 is just the starting point, and the standard will continue to evolve as it is used globally. However, we expect that it will become the general governance framework to which management refers, and as a result some form of governance assessment will be required. ISO is currently developing a standard for the evaluation of IT governance, and similar standards development can be expected for AI governance in the near future.
The authors would like to thank Jessica Wilkinson, associate, for her assistance in writing this article.
[1] COBIT (Control Objectives for Information and related Technology).
[2] ISO/IEC 38500 “Information technology — Governance of IT for the organization”.
[3] Since its establishment in the U.S. in 1976, ISACA has played a leading role globally in the areas of IT governance, control, security, and information systems auditing, by creating information systems auditing standards and certifying certified information systems auditors.