Approximately one year on from our June 2024 briefing, which considered some of the steps firms could take to meet future regulatory expectations on the governance of AI, in this briefing we outline the significant steps the FCA has taken over the past twelve months or so in shaping the regulatory agenda for AI, and evaluate the potential enforcement risk on the horizon.
1. Key FCA developments and initiatives in AI
We set out below a chronological overview of the key developments and initiatives undertaken by the FCA in the AI regulatory landscape over the past 12-14 months, highlighting the scale and momentum of recent activity.
- On 22 April 2024 the FCA published its AI Update in response to the (then) Government’s pro-innovation approach to AI regulation and initial guidance for regulators. The FCA’s AI Update included a high-level outline of how it envisaged the existing regulatory framework mapping across to each of the government’s five key principles.[1]
- Starting in May 2024, the FCA ran a three-month Market Abuse Surveillance TechSprint to explore how advanced solutions leveraging AI and machine learning (ML) could help detect evolving forms of market abuse. The solutions demonstrated included: large language models to filter out false positives and improve the accuracy of alerts; anomaly detection techniques to identify unusual trading patterns; and models to identify subtle price changes, all contributing towards more reliable identification of genuine cases of market abuse.
- In October 2024 the FCA launched its AI Lab, the first components of which comprise the following initiatives: the Supercharged Sandbox, AI Live Testing, AI Spotlight, AI Sprint, and the AI Input Zone (more on each of these below).
- From 4 November 2024 to 31 January 2025 the FCA opened its ‘AI Input Zone’ to allow stakeholders to provide feedback on what they saw as the most transformative use cases of AI in financial services, what the barriers to adopting these were, and whether the current regulatory framework was sufficient to support firms in embracing the benefits of AI in a safe way.
- In December 2024 the FCA announced an initiative to undertake research into AI bias to inform public discussion and published its first research note following its literature review on bias in supervised machine learning. The review identified several sources of bias in supervised machine learning and found that bias can lead to unfair or discriminatory outcomes, particularly for protected or vulnerable groups, but acknowledged that there are techniques that may mitigate bias in machine learning systems.
- On 29-30 January 2025 the FCA hosted an ‘AI Sprint’ at its London office, bringing together 115 participants from industry, academia, regulators, technologists and consumer representatives to discuss the opportunities and challenges of AI in financial services, with the aim of informing the FCA’s regulatory approach to AI whilst creating an environment for growth and innovation. This had been preceded by a ‘Showcase Day’ on 28 January, at which firms selected by the FCA showcased their AI-related proposals and solutions across themes relating to bias and fairness, explainability, data quality and compliance.
- In March 2025 the FCA published its 5-year strategy which reaffirmed its “increasingly tech-positive approach” and priority to support growth, including by enabling investment in innovation.
- In April 2025 the FCA published a summary of the feedback it received during the AI Sprint in January, which identified four common themes:
- regulatory clarity: the importance of understanding how existing regulatory frameworks apply to AI, and the need for the FCA to clarify or enhance existing requirements to help firms understand regulatory expectations whilst supporting beneficial innovation;
- trust and risk awareness: the need for consumer trust in order to unlock the full benefits of AI in financial services;
- collaboration and coordination: the need for cross-functional, cross-border collaboration between international regulators, government, financial services firms, academics, model developers, and end users;
- safe AI innovation through sandboxing: the need for a safe testing environment to promote responsible innovation.
- This was followed in the same month by the FCA’s announcement of its “Supercharged Sandbox”, in partnership with NVIDIA (an AI and computing company), the aim of which is to provide a secure environment for firms to experiment with AI. Successful applicants will be able to explore their AI innovations in the Supercharged Sandbox from October 2025.
- In May 2025 the FCA published its second research note, which explored the potential usefulness and limitations of large language models (LLMs), such as OpenAI’s GPT series, in consumer-facing contexts in financial services. This followed the FCA’s testing of generative AI applications in consumer financial services in two pilot projects, the three key takeaways of which were: (i) LLMs have strong potential to simplify complex information, but validating their outputs requires robust oversight from both human judgment and automated tools; (ii) the effectiveness of LLMs is context dependent, subject for example to the point at which the LLM is embedded in the customer journey; and (iii) there is clear appetite amongst consumers for AI-driven assistance.
In summary, recent developments indicate that the FCA is actively pursuing its strategy to promote innovation within the financial sector, and that this work forms a key component of the FCA’s broader agenda to foster the continued competitiveness of UK financial services. However, it is important to note that the FCA continues to prioritise consumer protection and the prevention of financial crime. The regulator’s support for innovation should not be construed as a relaxation of its expectations regarding regulatory compliance. Firms are therefore reminded that the adoption of AI solutions, whether developed internally or sourced from external vendors, must be accompanied by a careful assessment of associated risks.
Accordingly, firms should ensure that robust governance frameworks are in place to manage the deployment of AI technologies. This includes conducting thorough due diligence on third-party providers, maintaining effective oversight of AI-driven processes, and implementing appropriate controls to safeguard consumer interests and uphold regulatory standards. By taking these steps, firms can mitigate the risk of enforcement actions and ensure that their use of AI aligns with regulatory expectations.
Below we consider what these enforcement actions might look like.
2. FCA intervention or enforcement: Key risk areas
Given the regulator’s support for innovation, including the adoption of AI, and the technology-agnostic nature of the UK’s financial regulatory regime, the potential intervention or enforcement risk for firms using AI is likely to arise from failures to meet existing regulatory obligations. Below is an overview of the key areas where we consider enforcement risk may materialise:
- Governance and Oversight Failures: The FCA expects firms to have robust governance arrangements, encompassing not only effective oversight at board and senior management level but also the implementation of effective systems and controls. This includes in relation to the firm’s deployment and use of AI tools. If a firm fails to demonstrate adequate governance, for example by not understanding how the AI models it uses make decisions (a lack of ‘explainability’) or by not monitoring their outputs, this could lead to breaches of the FCA’s Principles for Businesses (such as Principle 2: skill, care and diligence, and/or Principle 3: management and control) and subsequent enforcement action, particularly if this presents a potential or actual risk of consumer harm.
- Consumer protection: AI systems can present considerable risks to consumers as well as benefits. For example, decision making affected by bias related to protected characteristics (arising from the underlying data on which an AI model is trained) could inadvertently lead to discriminatory pricing, inappropriate product recommendations, or the exclusion of certain groups of customers from access to certain products. Firms should be able to withstand regulatory scrutiny of how they ensure AI does not undermine consumer protection. If AI systems result in unfair outcomes for consumers, firms may face enforcement action.
Of note, the FCA trailed its concerns surrounding AI in its final non-Handbook guidance for firms on the Consumer Duty in July 2022, which highlighted that using algorithms, including machine learning or artificial intelligence, which embed or amplify bias could lead to worse outcomes for some groups of customers, and that firms doing so might not be acting in good faith towards their customers.
- Financial crime and market integrity: One key benefit of AI is its potential to prevent financial crime, but it can also introduce new vulnerabilities, for instance in the context of securities trading, where it has the potential to undermine the integrity of markets. The Bank of England, for example, flagged concerns in a speech last year that developments in AI could affect financial stability. A trading algorithm, without the proper control environment and human oversight, could influence asset prices in an illegitimate way and/or influence another AI system’s actions in the trading ecosystem, thereby leading to a potentially systemic (albeit unintended) impact on price fluctuations in the market.
- Operational resilience: There has been increasing focus by financial services regulators on operational resilience in recent years, especially regarding IT and systems failures, and a willingness to take enforcement action where firms fail to manage operational risks, particularly when such failures result in customer harm or market disruption. These regulatory principles and enforcement trends are highly relevant to the governance of AI within financial services. As AI systems become increasingly integral to the delivery of important business services, the risks associated with their failure or misuse—such as algorithmic errors, data bias, or lack of transparency—can be as significant as those arising from traditional IT failures.
- Outsourcing and third-party risk: When firms outsource services to third-party providers, the FCA expects them to exercise robust due diligence and maintain effective oversight of those arrangements. This expectation applies equally where third parties deploy AI systems on a firm’s behalf. Enforcement risk may arise if a firm is unable to demonstrate that it has taken reasonable and proportionate steps to identify, assess, and manage risks associated with third-party use of AI, particularly where such risks result in regulatory breaches. Taking steps to implement comprehensive vendor due diligence processes and ensure that contractual arrangements require third-party providers to comply with all relevant local laws, regulations, and ethical standards in their use of AI, may help mitigate this risk.
- Individual accountability: If a firm’s systems and controls are found to be inadequate—such as failing to properly assess, monitor, or manage the risks associated with AI—individuals with responsibility for these areas may be held personally accountable. This could include, for example, the Chief Technology Officer, Chief Risk Officer, or any senior manager with oversight of technology, risk, or compliance functions.
For firms using AI, FCA enforcement risk is closely tied to the firm’s ability to demonstrate compliance with existing regulatory requirements. The regulator has not created a separate regime for AI, but it expects firms to apply established principles and rules to their use of new technologies. Enforcement action is most likely where the use of AI leads to consumer harm, market disruption, or breaches of core regulatory obligations. Firms should therefore ensure that their AI strategies are underpinned by strong governance, effective risk management, and a clear focus on consumer outcomes. This underscores the critical importance of explainability, as firms must be prepared to justify their models and processes to regulators and demonstrate that they are managing risks appropriately.
[1] Security, safety, robustness; appropriate transparency and explainability; fairness; accountability and governance; contestability and redress