How to draft an AI policy

AI has been around for decades, but recent advances, particularly in generative AI tools, have driven a substantial increase in the use (or, at least, talk about the use) of AI within the workplace.

Organisations need to ensure clear and appropriate governance of the use of AI within their businesses, so that they can harness the many benefits and exciting opportunities AI presents whilst mitigating its potential risks. In this article, we set out some of the key issues to consider when preparing and implementing an AI policy.

Is a new policy required?

Start by considering whether a specific AI policy is required. If a business is proposing to ban or substantially limit the use of AI, it may be more appropriate (not to mention easier) to update existing policies.

Even if a specific AI policy is needed, it is likely existing policies will also need to be updated in order to accommodate the use of AI. For example, the use of certain AI tools will expose the business to new and different data security risks, requiring the implementation of more robust cybersecurity measures and a corresponding update to the IT security and data protection policies.

What is the scope and purpose of the policy?

A business should set out clearly the scope and purpose of its AI policy. For example, the policy should make clear whether it covers all uses of AI within the business or is focussed on the use of generative AI by certain parts of the business. It is also helpful to explain the reasons for putting the policy in place, including the ethical and legal considerations and the risks associated with the use of AI, and to make clear at the outset that there are defined structures and processes in place for the use of AI. We would recommend keeping the “scope and purpose” section concise: we have seen AI policies in which this section is so long that it detracts from the substance of the policy and makes it a difficult read.

Which AI tools can be used?

A business should investigate any current or proposed use of AI tools within the organisation, consulting with staff as part of that exercise.

The policy should:

  • list any AI tools that are banned completely;
  • list any AI tools that are approved for use and, importantly, the specified purposes for which such AI tools can be used; and
  • set out the process to follow if staff wish to use an approved AI tool for a different purpose or an AI tool that does not appear on the approved list.

What should staff bear in mind when using approved AI tools?

The policy should include clear guidelines on how to use AI tools, including how to address the risks associated with each tool (after reviewing that tool’s terms and conditions). Although some of the risks will depend on the size of the business, its sector and the context in which AI tools are used, the following are likely to be relevant to many businesses:

  • requiring staff, before making use of any AI tools, to read and acknowledge the AI policy and complete training on use of AI tools;
  • requiring staff to report any breach of policy or any AI-related incident (such as a data or security breach);
  • making clear the need to check and comply with the third-party terms of use of each AI tool, and setting out any specific conditions for use of particular AI tools (for example, some AI tools may require the insertion of watermarks);
  • providing examples of how AI tools may be used in the context of the business, with examples of approved use cases for different roles;
  • requiring staff to disclose whether, and to what extent, their work has been produced or undertaken using AI tools; and
  • restricting the input of certain data and materials into AI tools.

Who is responsible, and what happens if the policy is breached?

The policy should:

  • define the roles and responsibilities of different stakeholders, ensuring a clear chain of accountability, including with respect to responsibility for raising awareness of the policy;
  • set out the process to follow if there is a breach of the AI policy, or a data or security breach associated with the use of AI, making clear which individual(s) or team(s) should be informed and who will be responsible for remedying any breaches; and
  • state clearly the consequences for breach of the AI policy.

Businesses will need to do more than simply draft an AI policy and then leave it in a drawer. To help ensure compliance with the policy, businesses should consider how best to communicate and raise awareness of the policy, including providing mandatory training to staff on the policy and any approved AI tools (and repeating this training as appropriate).

How often should the policy be reviewed and updated?

AI technology is developing at a rapid rate, with new and updated AI tools being made available on a regular basis. In addition, the introduction of AI-related legislation and regulatory requirements will affect the way in which businesses can use AI. For example, the EU AI Act imposes a number of restrictions and obligations on the development and use of AI systems. It follows that a business will need to put in place a mechanism to ensure that its AI policy is reviewed and updated on a regular basis, including by inviting feedback from staff.

A business should consider the best way to keep staff updated on changes to its AI policy, including allocating responsibility for communication of policy updates to an appropriate individual or team. The policy should make clear that each member of staff is responsible for keeping up to date with the AI policy. The latest version of the AI policy should be accessible to all staff, with major policy changes communicated across the business.