Financial services outsourcings and AI

United Kingdom

With last week's AI summit at Bletchley Park, at which the UK, EU, US and China (among others) reached a world-first agreement on the opportunities and risks posed by AI, we thought it a good time to share our initial thoughts on AI and its potential impact on regulated outsourcing arrangements. As outlined in the Bletchley Declaration, AI presents a significant opportunity for regulated firms to enhance and automate processes and tasks, both internally and externally via third party service providers, alongside some complex challenges and risks.

Ultimately, our view is that the challenges are similar to those raised each time a new technological revolution is rolled out. When cloud technology was first offered, it was difficult to apply the regulatory outsourcing requirements as they then stood, and they had to be adapted to cloud models. Digitalisation is necessary if the UK financial services industry is to be world leading and competitive, and the UK government is trying to position itself as a world leader by leading the charge on the Bletchley Declaration. However, being a first mover can also carry risk, particularly where regulation has not caught up with the new technology. In this case, the AI revolution is moving at such a rapid pace that regulators will need to act quickly to deal with some of the challenges mentioned below if they are to align with the UK government's ambitions in this area.

There is a great deal of content on AI and its potential impact, and there seem to be more questions than answers. This article therefore sets out our initial thoughts on the opportunities and challenges of AI for regulated outsourcing arrangements. We will comment further in due course as the AI revolution is rolled out.

Opportunities

1. Efficiency and automation: Where a firm enters into a long term partnership with a third party provider, we expect that provider to demonstrate continuous improvement throughout the term to ensure that it retains a market-leading proposition, and this requires investment. Typically, we consider continuous improvement to be a cost of doing business for the service provider. However, where the service involves significant build or bespoke requirements, or is for a strategically important customer, how much budget is allocated to the enhancement and automation of existing processes to drive reduced fees over the long term may increasingly become a point for negotiation.

2. Drafting and negotiation of non-material outsourcing agreements: In our experience, the tools available to date are not sophisticated enough to deal with the complexity of detailed bespoke contractual drafting for long term material outsourcing arrangements. However, over the longer term we expect generative AI to be able to assist in generating legal contracts, compliance documents and regulatory reports. These tools may in future become helpful to in-house legal teams supporting standard non-material contractual arrangements with limited negotiation.

In addition, AI-powered tools could assist in reviewing and analysing outsourcing agreements, ensuring that the terms align with applicable legal and regulatory requirements, e.g. PRA SS2/21, the EBA Guidelines and DORA, and providing helpful audit trails. This could expedite contract negotiations, contract remediation processes and reduce the risk of misunderstandings.

3. Regulatory change: The terms around regulatory change can often be heavily negotiated in material outsourcing arrangements, particularly around who takes responsibility for monitoring changes in law, how the provider implements the change in law and who pays for it. We expect that in the future AI could assist in monitoring changes in law, regulations and industry standards.

Providers typically resist any contractual obligation to monitor the laws applying to their clients, but we may see them becoming more willing to agree to monitor changes in laws or regulations applicable to clients if they have an AI tool that can help predict or identify regulatory change.

4. AI-generated insights for audits: Audit rights over subcontractors down the supply chain have become increasingly challenging for firms and providers, particularly given the rules in the PRA’s SS2/21 and the EBA Guidelines. Generative AI may be able to provide insights by analysing historical data, transactions, and patterns for the purposes of firms’ audits and information gathering which support compliance with the outsourcing requirements. Over time, AI might help reduce the need for on-site audits over ‘nth’ subcontractors whilst still providing firms with the information they need to sufficiently demonstrate oversight of their supply chain.

5. Training and onboarding materials: Training may be needed for employees in certain outsourcing arrangements, particularly where there are customer facing functions (for example, call centres dealing with claims management). Generative AI could be used to create training materials and onboarding content for new employees, clients or partners within outsourcing arrangements. This may be particularly useful where new regulatory requirements come into force that require additional training; for example, we expect that the Consumer Duty will necessitate additional training for consumer-facing staff. (See our article, The Consumer Duty and outsourcing: remediation or not?)

6. Customised communications: Complex outsourcing arrangements sometimes require a detailed client communications workstream; for example, where underlying investors are requested to move from one custodian to another. We expect that generative AI could assist in crafting personalised and relevant communications for clients, as well as potentially for regulators and stakeholders. This could help to support firms in obtaining customer consents for any transfer which may streamline the exercise of moving from one provider to another.

Challenges

1. Internal process compliance: Where a firm is using AI to facilitate its internal processes as they relate to an outsourcing with a third party, there is a question as to whether this sufficiently discharges the firm’s regulatory obligations and how compliance is demonstrated. By way of example, if AI is used by a firm to facilitate a due diligence process with a prospective provider for a material outsourced service, what extra steps will need to be taken by the firm to achieve compliance with the relevant outsourcing requirements? Will the output of the AI be sufficient to demonstrate to regulators that appropriate steps have been taken to ensure that the provider has the necessary capability, capacity and expertise to perform the outsourced function?

It has long been the case that firms have used different types of technology to support compliance. Indeed, certain types of technology, e.g. cloud arrangements, provide additional security for certain arrangements, particularly in terms of customer data. However, over-reliance on technology as the sole means of compliance could cause unintended issues of control over, and understanding of, the steps that have been taken to achieve that compliance and whether or not it aligns with the regulatory intent.

2. Oversight over third parties: There are numerous challenges that will need to be considered by regulators in the context of outsourced services. For example, where a firm uses a third party provider who uses AI, how does the firm have oversight of this technology? Will the AI 'show its workings' in order to demonstrate the steps taken? If AI is more intelligent than humans, steps may be taken that lead to an outcome that is commercially desirable to the third party provider (or even the firm) but which does not necessarily lead to good outcomes for customers or compliance with regulatory intent. This tension between conflicting interests in the event of over-reliance on AI, and which interests prevail, is likely to be a serious challenge for the financial services industry as a whole.

3. Responsibility for AI: In terms of responsibility, the parties to an outsourcing agreement will need to consider who takes the 'AI risk' and how responsibility is allocated between them. This becomes particularly challenging from a liability perspective where service providers push for a contractual liability standard based on negligence only, capped at a multiple of annual fees. Where there is no reasonable industry standard, how can a negligence standard meaningfully be applied? From a customer perspective, a breach of contract standard will be preferable, but that relies on the firm writing the service description in full, which may be challenging where there is reliance on AI. As to the liability cap, it will be interesting to see whether the market norm moves to 'AI losses' sitting outside the cap (as we often see for data protection / intellectual property loss events).

4. Scoping of service descriptions: In terms of scoping the service description, there is a risk that we reach a point where providers do not accept amendments to their standard service descriptions because the technology that sits behind them cannot adapt to each client's requirements. Standardisation presents significant challenges for firms and regulators, as it becomes more difficult to demonstrate that a firm has looked holistically at the risk profile of the arrangement and amended the service descriptions accordingly to mitigate its risk. How can a firm demonstrate to regulators that this has been done where providers offer only a 'one size fits all' approach based on the technology available?

Please let Angela Greenough or Joy Davey know if you have any questions from a financial services outsourcing perspective.