Tech & Sourcing @ Morgan Lewis

TECHNOLOGY TRANSACTIONS, OUTSOURCING, AND COMMERCIAL CONTRACTS NEWS FOR LAWYERS AND SOURCING PROFESSIONALS

AI in Financial Services: Bank of England and UK FCA Highlight Key Challenges and Risks

The Bank of England (Bank) and the UK Financial Conduct Authority (FCA) published the final report of the UK Artificial Intelligence Public-Private Forum on February 17. Through quarterly meetings and several workshops held since October 2020, the Bank and the FCA jointly facilitated dialogue among the public sector, the private sector, and academia to deepen their collective understanding of artificial intelligence (AI) and explore how to support its safe adoption. This initiative was incorporated into the UK National AI Strategy.

The report does not set out any new regulatory guidance; instead, it explores the barriers to adoption of, challenges of, and risks arising from the use of AI in financial services, and signals certain themes in the Bank’s and the FCA’s thinking.

Key Takeaways

  • AI begins with data: The availability and quality of the data used by AI systems is a key theme of the report. Notably, unstructured data sourced from third-party providers is called out as presenting additional challenges of quality, provenance, and, potentially, legality. The changing role of data across the AI lifecycle raises questions about how organizations should adapt their governance structures (see below) and whether they need AI-specific data standards.
  • Model risk: The report notes that most of the risks related to the use of AI models in financial services are not new and can also arise with non-AI models. What is new are the scale at which AI is beginning to be used, the speed at which AI systems operate, and the complexity of the underlying models. Complexity is the main challenge in managing risks arising from AI models: in particular, the complexity of inputs (such as many input layers and dimensions), the relationships between variables, the intricacies of the models themselves (e.g., deep learning models), and the types of outputs. Identifying and managing change in AI models, as well as monitoring and reporting on their performance, are also key to ensuring that models behave as expected.
  • Explainability: Being able to explain model outputs is described in the report as “vital.” The Bank and the FCA suggest that approaches to explainability should focus not only on the features and parameters of models but also on consumer engagement and clear communication. This issue brings together both model risk and governance considerations.
  • Governance: Existing governance frameworks and structures provide a good starting point for AI models and systems, though the report notes that they should reflect the risk and materiality of each use case and cover the full range of functions and business units. A centralized body within each firm should set AI governance standards, while individual business areas remain accountable for outputs, compliance, and execution against those standards.

Next Steps

To support further discussion with a wider set of stakeholders, the Bank and the FCA will publish a Discussion Paper on AI later in 2022. Beyond this, the report does not detail any specific next steps for either the Bank or the FCA. It does suggest that regulators continue to monitor and support the safe adoption of AI in financial services, provide clarity on how existing regulations and policies apply to AI, and coordinate, both domestically and internationally, with other regulators and government departments to catalyze progress.

In the private sector, the Bank and the FCA suggest that an AI industry body could serve as a next step toward developing voluntary codes of conduct and an auditing regime to help foster trust in AI systems.

Morgan Lewis will continue to monitor regulatory developments around the use of AI in financial services.