LawFlash

ESMA Issues Guidance on AI in Retail Financial Services as EU AI Act Takes Effect

19 August 2024

The European Securities and Markets Authority (ESMA) recently published its first formal guidance on the use of artificial intelligence (AI) in the provision of retail investment services. The guidance outlines AI’s potential risks and benefits for investment firms and their clients and covers various aspects of AI use, including customer service, the provision of investment advice and portfolio management services, compliance, risk management, and fraud detection. ESMA’s guidance was issued shortly ahead of the landmark EU AI Act’s 1 August 2024 effective date.

ESMA’s guidance addresses how the key obligations on firms under the EU Markets in Financial Instruments Directive (MiFID II) apply when firms use AI tools in the provision of investment services. The guidance focuses on organisational requirements, conduct of business requirements, and the general obligation to act in the client’s best interest.

The EU AI Act, the world’s first AI-focused law, will also apply to firms’ use of AI in retail financial services in the European Union and establishes obligations parallel to those under the EU/UK General Data Protection Regulation (GDPR).

Potential Use of AI and Related Benefits in Retail Financial Services

ESMA envisions various potential uses of AI, including:

  • Customer service and support, such as through AI-powered chatbots or virtual assistants
  • Supporting firms in the provision of investment advice/portfolio management services, as AI tools could be used to (1) analyse client information at an individual level to provide personalised investment recommendations and (2) develop investment strategies given AI’s ability to process vast amounts of financial data
  • Compliance, as financial regulations could be summarised and analysed by AI, and AI tools could then be used to compare the analysis with a firm’s internal policies and procedures
  • Risk management, as AI could be used to evaluate the risk associated with different investment options
  • Fraud detection, as AI systems could monitor data on transactions, internal communications between staff, or external communications between staff and clients/counterparts for unusual patterns that may indicate fraudulent activity
  • Operational efficiency, as AI can be used to automate tasks such as data entry, report generation, and transaction processing

In addition to technologies developed or adopted by firms, ESMA’s guidance addresses third-party AI technologies (such as ChatGPT and Google Gemini) and their use by firm staff (with or without their managers’ knowledge/approval), reminding firms that there should be appropriate measures in place to control the use of those third-party systems.

AI Risks for Firms and Clients

Risks for firms identified by ESMA in the context of using AI include:

  • Lack of accountability and oversight (overreliance), as the importance of human judgment may be neglected through overreliance on AI tools (which can be particularly risky in financial markets, where AI may produce inaccurate predictions)
  • Lack of transparency and explainability/interpretability, as many AI tools are “black boxes” with unclear decision-making processes (which could lead to difficulties in adjusting underperforming strategies)
  • Security/data privacy, with the use of AI tools raising concerns around the collection, storage, and processing of large amounts of data
  • Robustness/reliability of the output, quality of training data, and algorithmic bias, as AI natural language tools may “hallucinate” and produce factually incorrect—but realistic-sounding—outputs, resulting in misleading advice with unexpected risks or missed opportunities (and the training data used to develop an AI tool could result in predictions polluted by biases that are difficult to identify and/or correct)

Utilisation of AI Tools in the Context of MiFID II

ESMA’s statement is intended to guide firms using or planning to use AI technologies so they can ensure compliance with the key MiFID II requirements, particularly those pertaining to organisational requirements, conduct of business requirements, and the general obligation to act in the client’s best interest.

Client Best Interest and Information to Clients

ESMA expects firms to be transparent about the role of AI in investment decision-making processes and generally expects firms to present information on how AI tools are used in investment services in a clear, fair, and not-misleading manner. Firms should also transparently disclose any use of chatbots or other types of AI-related automated systems.

Organisational Requirements: Governance, Risk Management, Knowledge, Competence, and Staff Training

ESMA notes that management bodies are “pivotal” in ensuring compliance with organisational requirements. It expects management bodies to have an appropriate understanding of how AI technologies are used within their firm and to ensure appropriate oversight of those technologies. ESMA highlights that this oversight is crucial to ensuring that AI systems align with a firm’s overall strategy, risk tolerance, and compliance framework. Firms are therefore expected to develop robust governance structures that monitor the performance and impact of AI tools, and to foster a culture of risk ownership, transparency, and accountability in which the implications of AI deployment are regularly assessed and appropriate adjustments are made in response to evolving market conditions and the regulatory landscape.

ESMA also notes that firms should ensure that any data used as input for AI systems is relevant, sufficient, and representative, so that algorithms are trained and validated on accurate, comprehensive, and sufficiently broad datasets. ESMA highlights the importance of meticulous data sourcing.

Finally, ESMA expects firms to put in place adequate training programs that cover the operational aspects of AI and its potential risks, ethical considerations, and regulatory implications. Relevant staff should be equipped with the knowledge to identify and address issues such as data integrity, algorithmic bias, and unintended consequences of AI decision-making. ESMA highlights that fostering an organisational culture that encourages continuous learning and adaptation is vital, given the rapid evolution of AI technologies and the associated regulatory landscape.

Conduct of Business Requirements

A “heightened level of diligence” is required in the context of investment advice and portfolio management, as firms are required to ensure the suitability of services and financial instruments provided to clients. Firms should implement rigorous quality assurance processes for their AI tools, including thorough testing of algorithms and their outcomes for accuracy, fairness, and reliability, as well as periodic stress tests. Firms should also ensure strict adherence to data protection regulations.

Recordkeeping

ESMA expects firms to maintain comprehensive records on AI utilisation and on any related complaints from clients and potential clients. Those records should document the utilisation of AI technologies across the various aspects of the provision of investment services, covering AI deployment in detail (including the decision-making processes, data sources used, algorithms implemented, and any modifications made over time).

Analysis of ESMA Guidance

ESMA has outlined its expectations for firms to have in place appropriate controls, policies, and procedures to comply with an existing regime—in this case MiFID II requirements—in respect of their deployment of AI. This aligns with a broader theme of firms leveraging what they already have in place to document and effectively manage the risks arising from their use of AI, as highlighted in our recent webinar discussing financial regulators’ global focus on AI.

In its February 2023 report Artificial Intelligence in EU Securities Markets, ESMA found overall that “[AI] does not seem to be leading to a fast and disruptive overhaul of business processes.” In that report, ESMA highlighted that complexity and lack of transparency, which hindered effective human oversight, represented barriers to the uptake of AI tools. It is clear from ESMA’s first formal guidance that those risks remain a key concern.

While this guidance focuses on the application of AI in the context of a firm’s MiFID II obligations, it recognises the broader EU framework on digital governance and, specifically, the AI Act and Digital Operational Resilience Act (DORA). DORA, which will come into effect on 17 January 2025, establishes a binding, comprehensive risk management framework for financial services firms’ information and communication technology (ICT).

DORA will require EU-based investment firms and other financial entities to implement appropriate operational risk management processes and controls around planning and due diligence, security and privacy, oversight and accountability, and resilience and business continuity, as well as specific contractual provisions with third-party ICT service providers, which will extend to most vendors offering AI tools. Several of these principles align with ESMA’s Guidelines on Outsourcing to Cloud Service Providers, which may already apply to firms’ use of cloud-based AI tools and with which many firms will be familiar.

Implications of the EU’s AI Act for Retail Financial Services

Many of the AI risks that ESMA has identified have been considered in connection with the AI Act. Notably, the AI Act will prohibit (effective February 2025) a number of AI applications, including several relevant to retail financial services, such as “AI systems” that:

  • deploy subliminal techniques beyond a person’s consciousness to materially distort a person’s behaviour;
  • purposefully manipulate or deceive to materially distort a person’s behaviour;
  • exploit the vulnerabilities of a person (e.g., age or disability) to materially distort a person’s behaviour;
  • evaluate, classify, or score persons based on social behaviour and personal or personality characteristics;
  • assess the risk of a person committing a criminal offence;
  • conduct facial recognition by untargeted scraping of facial images from the internet or CCTV footage;
  • perform emotion inference in the workplace or in educational institutions;
  • use biometric categorisation systems to infer demographic background; or
  • use real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes.

Further, even if an AI system is not prohibited, firms may be subject to potentially rigorous obligations when developing or using such AI systems. For example, firms which are “providers” (typically, firms which develop AI) of “high-risk” AI systems may be subject to significant obligations, including (in summary):

  • Registration in a public EU database
  • Implementation of risk and quality management systems
  • Effective data governance processes (e.g., relating to bias mitigation and the use of representative training data)
  • Transparency obligations (e.g., instructions for use and technical documentation)
  • Human oversight (e.g., relating to explainability, auditable logs, and human-in-the-loop processes)
  • Accuracy, robustness, and cybersecurity measures, and the reporting of “serious incidents”

Likewise, firms which are “deployers” (typically, firms which use AI) of “high-risk” AI systems may also be subject to significant obligations. For example, banks may need to conduct a “Fundamental Rights Impact Assessment” when evaluating individuals’ creditworthiness.

We have published several helpful resources about the new AI Act, including LawFlashes on 10 key takeaways for business and legal leaders, its extraterritorial reach in certain circumstances, and its notable cybersecurity obligations, as well as a step-by-step compliance checklist.

Next Steps

ESMA seeks to help firms harness AI’s potential while safeguarding investor confidence and protection by fostering transparency, implementing robust risk management practices, and complying with legal requirements. ESMA, together with national regulators, will continue to monitor developments in relation to AI to determine whether further action is needed in this area.

Many of ESMA’s objectives have been expressly considered under the EU’s new AI Act. Indeed, many firms will need to consider whether, and if so to what extent, their retail financial services will be subject to the AI Act, and accordingly design and implement an AI Act compliance program alongside their GDPR compliance program.

Contacts

If you have any questions or would like more information on the issues discussed in this LawFlash, please contact any of the following: