EBA publishes new FinTech Report on big data and advanced analytics


The European Banking Authority has published a new Report on big data and advanced analytics. The Report follows the EBA’s FinTech Roadmap for 2018/2019, in which the EBA set out its intention to identify and assess prudential risks and opportunities stemming from the use of FinTech. In this new Report, the EBA provides its observations on the current uses of big data and advanced analytics in the financial services sector. The Report identifies a number of key risks associated with the deployment and use of these technologies. It also sets out what the EBA has termed four ‘key pillars’ for the development, implementation and adoption of big data and advanced analytics, together with a number of ‘trust elements’ that the EBA considers need to be properly and adequately addressed across those four pillars when applying artificial intelligence and machine learning technologies.

The Report will be of interest to FinTechs and financial institutions that have deployed or are considering deploying advanced analytics because it gives an indication of what the EBA sees as the main risks and challenges associated with the use of these technologies. Whilst the Report does not make policy recommendations or set supervisory expectations, the EBA has said that it will continue to observe and consider the pace at which big data and advanced analytics are deployed in financial services and, where appropriate, will issue opinions and/or proposals for guidance.

The four ‘key pillars’

  1. Data management

This pillar is about understanding the sources and types of available data, data security and protection, and data quality. The Report highlights the need to consider the sources of data and the legal rights and obligations that may relate to that data. The EBA observes that institutions which are already using big data and advanced analytics predominantly collect and use ‘internal’ data (for example, customer transaction data and use of other products such as credit cards and loans), and that the use of external data is limited. Where external data is used, the Report stresses the importance of validating that data before use (for example, checking for redundancy, inconsistency and incompleteness).
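By way of illustration, the following minimal sketch (in Python, using pandas) shows the kind of redundancy, inconsistency and incompleteness checks the Report describes; the dataset, column names and the ‘customer_id’ key are hypothetical, not drawn from the Report.

```python
import pandas as pd

def validate_external_data(df: pd.DataFrame, required_columns: list) -> dict:
    """Run basic quality checks on an external dataset before use.

    Checks for the issues the EBA Report mentions: redundancy
    (duplicate records), inconsistency (conflicting values for the
    same key) and incompleteness (missing fields). Column names and
    the 'customer_id' key are illustrative only.
    """
    return {
        # Redundancy: fully duplicated rows.
        "duplicate_rows": int(df.duplicated().sum()),
        # Incompleteness: missing values per required column.
        "missing_values": {
            col: int(df[col].isna().sum()) for col in required_columns
        },
        # Inconsistency: keys mapped to more than one distinct record.
        "conflicting_keys": int(
            df.drop_duplicates().duplicated(subset="customer_id").sum()
        ),
    }

# Toy example: customer 1 appears with two different postcodes.
data = pd.DataFrame({
    "customer_id": [1, 1, 2, 3],
    "postcode": ["EH1", "EH2", "G1", None],
})
print(validate_external_data(data, ["customer_id", "postcode"]))
```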

Data management is also about appropriately protecting the data using technological and organisational measures and the Report suggests that these measures could be addressed through an information security management strategy or by establishing dedicated roles and a security management framework. The Report says that, in particular, institutions must comply with the GDPR throughout the entire lifecycle of a big data application when using personal data for training models or other purposes. (This would include, for example, not only ensuring that you have the necessary legal rights to use personal data but also implementing privacy by design and default and considering how ‘explainability’ requirements would be satisfied, if those apply.)

  2. Technological infrastructure

This pillar is about institutions having the right technology foundation in place to support big data and advanced analytics. The Report notes that the integration of machine learning solutions with existing legacy systems may raise IT risks, including in relation to data security, change management and business continuity and resilience (a significant area of focus for UK regulators – see further below).

  3. Organisation and governance

This pillar is about institutions establishing appropriate internal governance structures and measures to manage the risks of big data and advanced analytics, as well as having sufficient skills and knowledge to support these technologies. The Report describes areas that can support this, including:

  1. assigning clear roles and responsibilities within the governance structure and ensuring that the board of directors has an adequate understanding of the use (and risks) of these technologies;
  2. having in place a risk management framework and risk control measures, as well as contingency plans in case of a system malfunction;
  3. adhering to the fundamentals of explainability and interpretability; and
  4. adhering to the EBA’s Guidelines on outsourcing arrangements when using externally developed and/or sourced applications, in particular where data is obtained from external sources. (You can read more about the EBA Guidelines on outsourcing here.) The Report warns of the risks that arise from reliance on third-party suppliers, including a lack of control, vendor lock-in and concentration risk.

The Report highlights how these types of technology can be complex and difficult to understand, and says that, as staff come to rely on them more, there should be sufficient training in, and understanding of, these technologies’ strengths and limitations. The Report says that senior management should have appropriate levels of competence and expertise and be able to understand the explanations provided to them by specialist members of staff. The EBA considers that there is currently a shortage of skills in the financial services sector relating to these technologies.

  4. Analytics methodologies

This pillar describes the high-level methodology that the EBA considers is generally followed when advanced analytics solutions are used: (1) data collection, (2) data preparation, (3) analytics, and (4) operations. This section of the Report focusses on model training, tuning, validation/selection, testing and monitoring, all of which can be used to mitigate the risks of adopting these technologies as well as addressing the trust elements that the EBA identifies. It also notes the importance of creating a documented audit trail (for example, in relation to the data transformation steps taken when developing features in a machine learning model).
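To illustrate what a documented audit trail of data transformation steps might look like in practice, here is a minimal Python sketch; the pipeline class, step names and toy data are our own illustrative assumptions, not taken from the Report.

```python
import json
from datetime import datetime, timezone

import pandas as pd

class AuditedPipeline:
    """Applies data preparation steps and records each one, so the
    transformations behind a model's features can be traced later.
    The steps themselves are illustrative."""

    def __init__(self):
        self.audit_trail = []

    def apply(self, df: pd.DataFrame, name: str, func) -> pd.DataFrame:
        rows_before = df.shape[0]
        df = func(df)
        # Record what was done, when, and its effect on the data.
        self.audit_trail.append({
            "step": name,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "rows_before": rows_before,
            "rows_after": df.shape[0],
            "columns": list(df.columns),
        })
        return df

# Hypothetical preparation steps for a toy credit dataset.
pipeline = AuditedPipeline()
df = pd.DataFrame({"income": [30000, None, 52000], "age": [34, 41, 29]})
df = pipeline.apply(df, "drop_missing_income",
                    lambda d: d.dropna(subset=["income"]))
df = pipeline.apply(df, "add_income_band",
                    lambda d: d.assign(income_band=d["income"] // 10000))
print(json.dumps(pipeline.audit_trail, indent=2))
```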

The elements of trust

The Report finds that rolling out artificial intelligence and machine learning technologies raises issues around trustworthiness, and it describes what the EBA calls ‘fundamental trust elements’ that should be dealt with. These elements include:

  1. Ethics

The Report recommends having an ethical policy in place to enforce ethical principles (such as prevention of harm, fairness and explainability) from the start of an AI project (‘ethical by design’) and set the boundaries for acceptable and unacceptable use cases. It also recommends having an ethics committee to oversee this.

  2. Explainability and interpretability

The Report states that the need for explainability is higher whenever decisions have a direct impact on customers, and also when a human is required to make the final decision based on the results produced by the model (and therefore needs to understand why a particular result was generated). The Report also refers to the rights of data subjects under the GDPR to receive meaningful information about the logic involved in reaching automated decisions (sometimes referred to as the ‘right to explanation’). It considers that lack of explainability is a prominent risk where artificial intelligence and machine learning systems are provided to financial institutions by third parties and sold as opaque ‘black boxes’. The EBA considers that institutions need the ability to validate results without relying too heavily on the service provider.
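As a purely illustrative aside, one model-agnostic technique sometimes used to probe an otherwise opaque model is permutation importance (measuring how much performance degrades when each input is shuffled). The Report does not prescribe this method; the sketch below, using scikit-learn on synthetic data, is only an assumption of how such validation might begin.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for an opaque model: we only observe its
# predictions, not its internals.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle one feature at a time and measure how much accuracy
# degrades; features whose shuffling hurts most are the ones the
# model leans on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```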

  3. Fairness and avoidance of bias

The Report identifies how bias can be introduced by the input data, the training of the model and the coding of the rules (i.e. algorithmic bias), which may result in discrimination, and it discusses various techniques that can be used to prevent or detect bias (for example, oversampling under-represented classes or reducing the size of over-represented classes). It states that having a diverse workforce can help to ensure early detection of bias/discrimination issues. It also suggests that a ‘human in the loop’ should be periodically involved in the decisions taken by the model to check whether it is performing correctly.
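As a simple illustration of the resampling techniques the Report mentions, the sketch below oversamples an under-represented class until the classes are balanced; the ‘outcome’ labels and toy data are hypothetical.

```python
import pandas as pd

def oversample_minority(df: pd.DataFrame, label_col: str,
                        random_state: int = 0) -> pd.DataFrame:
    """Oversample under-represented classes (with replacement) until
    every class matches the size of the largest one, one of the
    simple rebalancing techniques the Report mentions."""
    counts = df[label_col].value_counts()
    target = counts.max()
    parts = []
    for cls, n in counts.items():
        subset = df[df[label_col] == cls]
        if n < target:
            extra = subset.sample(target - n, replace=True,
                                  random_state=random_state)
            subset = pd.concat([subset, extra])
        parts.append(subset)
    # Shuffle so the duplicated rows are not clustered together.
    return pd.concat(parts).sample(frac=1, random_state=random_state)

# Toy example: 'approved' outcomes dominate 'declined' ones.
df = pd.DataFrame({"outcome": ["approved"] * 8 + ["declined"] * 2,
                   "x": range(10)})
print(oversample_minority(df, "outcome")["outcome"].value_counts())
```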

  4. Traceability and auditability

To enable oversight of advanced analytics, all the steps and choices made throughout the entire process should be clear, transparent and traceable. The EBA suggests that institutions should keep a register of the evolution of models (it observes that some platforms have built-in version-logging features for this purpose).
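A model register of the kind the EBA suggests might, in its simplest form, look something like the Python sketch below (in practice, platforms such as MLflow provide built-in model versioning); the model names, parameters and storage path are illustrative assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone

class ModelRegister:
    """A bare-bones append-only register of model versions: who
    trained what, when, on which data, with which parameters. Real
    deployments would typically use a platform's built-in
    versioning instead."""

    def __init__(self, path: str = "model_register.jsonl"):
        self.path = path

    def record(self, model_name: str, version: str,
               training_data_ref: str, hyperparameters: dict,
               trained_by: str) -> None:
        entry = {
            "model": model_name,
            "version": version,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "training_data": training_data_ref,
            # Hash the parameters so later audits can confirm the
            # recorded configuration has not been altered.
            "params_hash": hashlib.sha256(
                json.dumps(hyperparameters, sort_keys=True).encode()
            ).hexdigest(),
            "hyperparameters": hyperparameters,
            "trained_by": trained_by,
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

# Illustrative usage; names and parameters are hypothetical.
register = ModelRegister()
register.record("credit_scoring", "1.2.0", "s3://bucket/training/2020-01",
                {"max_depth": 6, "n_estimators": 200}, "data-science-team")
```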

  5. Data protection

This trust element is closely related to the ‘Data management’ pillar. It sets out a number of areas where compliance with the GDPR should be considered when using personal data. This includes ensuring that the use of personal data in advanced analytics models is compatible with the original purpose for which it was collected; that the appropriate rights and safeguards are in place to share data with third parties (for example, when all or part of a platform is outsourced); and the rights of data subjects to be informed about the processing of their personal data, including the right to explainability where decisions are automated.

  6. Consumer protection

Advanced analytics techniques may give rise to misconduct issues if customers’ data patterns are used to maximise profit without considering customers’ interests. The Report notes that some customers may be unfairly excluded from financial services if they do not share the required data or do not have the data to share (for example, non-digital customers). It also notes how higher risk customers may become excluded and other customers may be treated more favourably if they learn how to behave so that the analytics model produces a particular outcome. (These concerns are also relevant for InsurTech – for a discussion of that, see further here.) The EBA highlights the right of customers to file a complaint due to unsatisfactory service and to receive a plain language response that can be clearly understood (which links to the issue of implementing solutions that produce ‘explainable’ decisions).

FCA and Bank of England survey on Machine learning in UK financial services

Publication of the EBA Report follows the release of the Financial Conduct Authority’s and Bank of England’s joint survey on Machine learning in UK financial services in October 2019. Both the EBA Report and the survey found that around two thirds of financial institutions have deployed some form of machine learning or advanced analytics solution, with the most common uses including anti-money laundering/customer onboarding and fraud detection. They also both identified the challenge that legacy IT systems pose to the implementation of these technologies, the important role that third-party providers (in particular, cloud service providers) play in their deployment, and the need to adapt governance and risk management frameworks in line with these technologies as their levels of maturity and sophistication increase.

Operational resilience

Several of the themes discussed in the EBA’s Report resonate with comments about IT resilience made by the House of Commons Treasury Committee in its Report into IT failures in the financial services sector. This includes, for example: that senior managers should have appropriate accountability for IT operations; the care needed to ensure that customers are not exposed to risks due to legacy IT systems; the risks posed by implementing new technology and having poor change management processes; the important role that third-party providers play and the importance of managing them to a good standard; and the need to ensure that institutions have a workforce with sufficient skills and experience.

Operational resilience is high on the regulatory agenda for 2020 (and beyond). The Prudential Regulation Authority, FCA and Bank of England have recently published joint and coordinated Consultation Papers on new requirements to strengthen operational resilience in the financial services sector. For further reading about recent developments in relation to operational resilience, see here.

What’s next?

The UK Information Commissioner’s Office has recently closed a consultation on draft guidance, produced in conjunction with The Alan Turing Institute, entitled Explaining decisions made with AI. The guidance is intended to help organisations explain to individuals how AI-related decisions that affect them are made. The final version of the guidance is expected to be published later in 2020 and should be read by FinTechs and financial institutions whose AI systems use personal data, especially those which make automated decisions.

News reports published in January 2020 indicate that the European Commission is considering new regulatory measures in relation to the development and use of AI. According to those reports, the options under review include legal obligations on the developers and users of AI both at the development stage (to reduce risks created before products or services that rely on AI are placed on the market) and following the deployment of AI (to facilitate enforcement or to provide possibilities for redress or other remedies in situations where harm has materialised). Further details are expected in the coming weeks.

Building on their joint survey on Machine learning in UK financial services, the FCA and the Bank of England are establishing a forum to further dialogue with the public and private sectors to better understand the use and impact of AI and machine learning – see further here. They have named this the ‘Financial Services AI Public Private Forum’ (AIPPF) and have said that it will explore means of supporting the safe adoption of these technologies within financial services, including whether principles, guidance, regulation and/or industry good practice could help. Membership of the AIPPF is at the invitation of the FCA and Bank of England. Anyone interested in becoming a member should apply before the deadline, 21 February 2020.

The PRA, FCA and Bank of England consultation on operational resilience closes on 3 April 2020, with final guidance not expected until later this year or in 2021. In the meantime, FinTechs and financial institutions should review the Consultation Papers, understand the proposals and what they will mean for their businesses, and respond to the consultation where appropriate before the regulators decide upon their final policy.