ESMA issues first formal guidance on use of AI in retail financial services


The European Securities and Markets Authority (ESMA) recently released a public statement on the use of artificial intelligence (AI) in the provision of retail investment services. The statement provides initial guidance to investment firms and credit institutions using, or planning to use, AI so they can ensure compliance with their obligations under MiFID II. It also outlines live use cases and ESMA’s understanding of the “known challenges” currently facing firms in adopting AI.

Emerging ESMA approach?

While this is by no means ESMA’s first publication or statement on AI, it is its first formal guidance on the topic. It is noteworthy that ESMA has focused this initial intervention on retail investment services and retail investor protection.

The statement, issued on May 30, was made without prejudice to the broader EU framework on digital governance, including the EU AI Act and the Digital Operational Resilience Act (known as DORA). Instead, ESMA focuses on the use of AI under the existing MiFID II framework. Like the UK regulators, ESMA will approach AI using its existing regulatory toolkit (and it expects firms to do the same).

There is also evidence of an emerging proportionality principle in ESMA’s recognition that firms are expected to evaluate and monitor the impact of AI in a manner that reflects the scale, complexity, and potential risks of such systems. Specific attention should be given to areas where AI has the most significant influence on the firm’s processes and client services.

Importantly, the statement addresses not only scenarios where AI tools are officially adopted by institutions, but also those where staff use third-party AI technology with or without senior management’s knowledge or approval. ESMA expects firms to have appropriate measures in place to control the use of AI systems in any form.

Application of MiFID II to AI systems

ESMA considers that the use of AI systems demands a heightened degree of attention and emphasises that AI-specific processes and procedures are required under key areas of the MiFID II framework.

Client best interests rule and information requirements

ESMA expects any client communications regarding firms’ use of AI to be presented in a clear, fair and non-misleading manner.

Firms should be transparent about the role of AI in decision-making processes, and any use of AI for client interactions (whether chatbots or other AI-related automated systems) should also be disclosed.

Organisational requirements

The board is “pivotal” in ensuring compliance. Boards are expected to have an appropriate understanding of how AI is applied and used within their firms, ensure appropriate oversight, and establish robust governance structures. ESMA also expects firms to develop and maintain robust processes and procedures that are specifically tailored to address the “unique challenges and risks” of AI.

Where AI tools are used for investment decision-making, firms must take a “meticulous approach” to sourcing data, ensuring that algorithms are trained on accurate and sufficiently broad data sets, and that there is “rigorous oversight” over the use of data.

Using AI requires “heightened vigilance”. Firms should therefore ensure that adequate training programmes are in place, covering not only the operational aspects of AI but also its potential risks, ethical considerations, and regulatory implications. This training should, in turn, feed into a culture of continuous learning that keeps pace with the rapid development of AI technologies and the regulatory landscape.

Finally, strong ex-ante controls must ensure the accuracy of information supplied to and used by AI systems, together with sufficiently frequent ex-post controls on any process that uses AI to deliver information to clients.

Conduct of business requirements

ESMA points out that firms’ use of AI technologies in relation to investment advice and portfolio management requires a “heightened level of diligence”, particularly as firms are required to ensure the suitability of services and financial instruments provided to each client. Product governance and suitability assessment procedures should be adapted accordingly. Firms are expected to have in place rigorous quality assurance processes; this includes testing algorithms and conducting periodic stress tests.

ESMA expects firms to maintain comprehensive records on AI utilisation and on any related complaints. These records should encompass AI deployment, including the decision-making processes, data sources used, algorithms implemented, and any modifications made over time.

Next steps

ESMA stresses that firms will need to strike a balance between harnessing the potential of AI and safeguarding investors’ confidence and protection. ESMA is clear that firms’ current focus should be on fostering transparency, implementing strong risk management practices, and complying with legal requirements. ESMA intends to keep the position under review, to determine if further action is needed.


This article was first published by Reuters Regulatory Intelligence