The report covers the private law issues that may arise when firms use AI to assist, or directly execute, market activities.

By Nell Perks

In late October, the Financial Markets Law Committee (FMLC) released a report that considers the private law issues that may arise from the use of artificial intelligence (AI) in wholesale financial markets.1 The report offers non-binding guidance on how to conceptualise AI’s impact in private law domains. While the technology may be revolutionary, the FMLC has emphasised that the English commercial courts are able to adapt to new technologies within existing commercial law, “which is essentially ‘technology-neutral’”. Accordingly, the report maps out how the FMLC sees AI fitting within the legal terrain: where existing principles already fit AI, where evidential and doctrinal questions may arise, and where targeted regulatory clarification could sensibly complement private law.

In particular, the report covers the private law issues that the FMLC has identified as potentially arising when firms use AI to assist with, or directly execute, market activities. It considers how legal responsibility for AI‑enabled statements and actions could be attributed, how contracts formed or performed with AI could be treated, the standards applicable to professional duties, and how market abuse rules may engage with autonomous behaviour. Further, the report highlights the data protection and intellectual property questions that affect AI development and cross‑border deployment.

As the report states, the role of the FMLC is to identify issues of legal uncertainty or misunderstanding in the framework of the wholesale financial markets which might give rise to material risks and to consider how such issues should be addressed. Some of the report’s key findings are summarised below.

AI’s Impact Across Private Law Domains

A Tool, Not an Agent

Given that AI is not a legal person, responsibility for AI-mediated conduct will rest with the human or corporate deployer. However, the deployer is not necessarily strictly liable for the acts of the AI; liability will turn on attribution and fault in the usual way. Because risk allocation is not automatic, financial market participants should have contractual arrangements in place governing the use of AI.

Contract Law

When examining AI-mediated contracts, the court's focus will be on the purposes, frameworks, and parameters that the natural or legal persons set in advance, rather than on reconstructing an AI's internal "state of mind". While the use of AI may lead to novel situations and considerations, and the law will need to develop in some areas, the FMLC believes that this development can occur within existing commercial law. Legislative intervention is therefore not required.

Professional Duties

The use of AI does not change the standard of care applicable to financial institutions. Firms should be able to justify the choice of system, its training, and the parameters chosen for production use. Verifiability is key: AI reasoning often functions as an opaque "black box", so its use for high‑stakes functions will be hard to defend if effective, more transparent alternatives exist.

Fiduciary Duties

The FMLC's view is that AI cannot be a fiduciary, since conscience requires consciousness. Where fiduciary obligations do arise, AI may appropriately inform decisions, but the wholesale delegation of discretion to automation is unlikely to satisfy the duties of loyalty and no-conflict. Where AI is used to inform human judgment, governance is essential to ensuring that AI use aligns with the beneficiary's interests.

Market Abuse

Liability for market abuse sits firmly with the deployer of AI. Existing frameworks regulating algorithmic trading require firms to test, monitor, and override automated systems to prevent manipulative practices. However, it remains unclear how this applies to intent-based offences when AI carries out actions that would usually involve human intent. The FMLC anticipates targeted regulatory clarification rather than recognition of AI's legal personality or wider private law reform.

Data Protection and Intellectual Property

Data protection and intellectual property issues affect every stage of the AI lifecycle. Furthermore, the rules around web scraping, data subject rights, and model training diverge across jurisdictions and remain in flux. The FMLC urges the UK government to clarify the regulatory landscape for data and intellectual property issues relating to AI.

Key Takeaways

Whether an action is AI-mediated or not, firms will continue to be judged on how reasonably they select, configure, monitor, and control the systems they choose to use.

Firms should allocate risks explicitly in contracts involving the use of AI. For instance, such contracts should define permitted uses, require secure access to logs and evidence, and set out who bears liability for defects and errors.

Market participants should expect iterative development rather than sweeping reform in this space. Currently, English private law can accommodate the use of AI in wholesale financial markets. Regulation may be required, but only in targeted areas where further protection is desirable for market certainty.

We would like to thank August Chen for his contribution to this blog post.