Artificial intelligence (AI) is increasingly being integrated into mergers and acquisitions (M&A), supporting negotiations, valuing targets, drafting relevant contracts and, most importantly, helping perform due diligence. Yet its role in this context raises questions about liability, fiduciary duties, and the validity of corporate transactions. In a new paper, we offer a framework for addressing these questions and the legal implications of AI in M&A.
We focus on both external liabilities, such as those arising from faulty tools or third-party service providers, and internal governance questions, including the duties of corporate directors. Drawing on legal standards in Germany, France, Belgium, the United Kingdom, and the United States (with a focus on Delaware), we identify where traditional doctrines are challenged by the probabilistic nature of machine decision-making.
Why AI Is Rewriting the M&A Playbook
According to a 2025 Deloitte survey, 97% of companies and private equity firms reported using AI or automation technologies in their due diligence work, up from 69% in 2022. Many professionals now anticipate that AI will play a key role across the transaction lifecycle, from target screening to post-merger integration.
The reasons are obvious. AI systems offer speed, scale, and a degree of consistency that even the most seasoned analyst teams struggle to match. These systems excel at scanning contracts, flagging liabilities, modeling deal structures, and extracting high-risk clauses from dense data rooms. In short, they make transactions more efficient, and they may assist boards of directors in carrying out their duties, such as obtaining the best price in a merger.
But AI systems are not perfect. Their “decisions” are based on data patterns, rather than an understanding of the law. AI tools can misclassify key clauses or overlook critical risks. They may attribute an incorrect value to a target, draft a flawed contract, or disrupt negotiations.
The Nature of AI Systems: Autonomy, Error, and Opacity
Under the European Union’s AI Act, an AI system is a machine-based system that operates with varying levels of autonomy, may adapt after deployment, and infers from its inputs how to generate outputs such as predictions, recommendations, or decisions. Most M&A-related tools fall into the category of “weak” or “narrow” AI. They are good at specific tasks, such as classifying documents, identifying terms, and extracting insights, but they lack any general reasoning capacity. Some systems function as “augmented intelligence,” assisting human decision-makers. Others operate with more autonomy, requiring only intermittent oversight.
Autonomy, though, creates a triad of legal complications: unpredictability, opacity, and occasional error. First, we cannot anticipate precisely how an AI system will respond to new inputs. Second, even if we can audit its mathematical logic, we often cannot conceptually understand why it generated a certain output. And third, no AI system operates flawlessly. The big question is who is responsible when AI support leads to “bad” or poorly informed deals.
Third-Party Liability in Case of Harmful AI Use
When AI tools lead to harmful outcomes, liability often lies with two parties: the provider of the AI tool and the professional who deployed it in support of their M&A-related services.
If an acquiring company relies on an AI tool from a third party, and the tool fails to perform as promised, liability may depend on the contract between the provider and the user. Explicit performance guarantees matter here. But in many cases, providers include contractual disclaimers. Moreover, the situation is complicated by the fact that all AI tools can make mistakes. Unless the provider breached an express regulatory standard or supplied a tool wholly unfit for its stated use, liability is difficult to establish.
The situation is different when the acquirer hires a law firm or consultant to assist in the transaction. Here, the service provider may be held liable if their use of AI falls short of applicable professional standards. We note that professionals are typically required to make reasonable efforts, not deliver perfect results. However, as courts and regulators become more familiar with AI, the standard of care may evolve to include duties around tool selection, training, and supervision.
In a Canadian case, for instance, a court reduced a lawyer’s fee on the grounds that AI could have completed the same work more efficiently. This example underscores how AI is shaping not just what professionals can do, but what they are expected to do.
Contract Integrity and the Limits of the Mistake Doctrine
We also examine whether an M&A transaction itself might be invalidated if an AI tool provides incorrect information that influences the acquirer’s decision.
In many jurisdictions, a mistaken belief must be “excusable” to qualify for legal protection. This threshold is difficult to meet when the user knowingly relied on an imperfect AI system. In some cases, courts may hold that such reliance was unreasonable, especially when known limitations of the tool were ignored.
However, there are exceptions. If the seller provided faulty information, whether through a chatbot or data-room interface powered by AI, the buyer may have an easier time invalidating the contract. In cases of deliberate deception, doctrines of fraudulent misrepresentation are relevant. Even absent fraudulent intent, if the seller’s AI system conveyed inaccurate information, the buyer may rely on the rule of mistaken belief, especially where the error relates to essential deal characteristics.
Still, most M&A agreements are designed to preclude such remedies. Through representations and warranties, parties allocate the risk of information asymmetry. In many cases, these clauses explicitly limit reliance on extrinsic statements or tools. As a result, the doctrine of mistake rarely provides a basis for unwinding a completed deal.
Inside the Boardroom: Fiduciary Duties and AI-Driven Decisions
Some of the most significant legal implications of AI use in M&A arise not from external parties but from within the acquiring company. When directors rely on AI to inform key transactional decisions, particularly in pricing, due diligence, or market analysis, they remain fully bound by fiduciary duties, especially the duty of care.
In Delaware, directors are protected by the business judgment rule. While Smith v. Van Gorkom is often cited as a cautionary tale, holding directors liable for approving a merger without adequately informing themselves, the case law has evolved. In particular, Cinerama, Inc. v. Technicolor, Inc. (commonly referred to as Cinerama II) clarified that Delaware courts will generally presume directors are adequately informed unless plaintiffs can rebut that presumption with particular facts. Thus, the emphasis remains not on the outcome of a decision but on the quality of the decision-making.
In other legal systems, similar standards apply. German corporate law ties business judgment protection to the board’s use of “appropriate information,” while Belgian and UK law emphasize procedural diligence and reasoned evaluation.
This has important implications for the use of AI in corporate governance. Directors cannot abdicate their oversight responsibilities to algorithms. Instead, they must ensure they understand an AI tool’s capabilities, limitations, and the basis for its outputs. Blind reliance on algorithmic analysis may be incompatible with the board’s obligation under the duty of care to be adequately informed. AI must support, not substitute for, human judgment. Boards should carefully document their rationale for selecting an AI tool, verify its relevance and performance, and ensure that its results are reviewed and understood before they serve as the basis for proposing a transaction for approval. In all cases, AI deployment does not lower the bar. It raises the expectation that directors will exercise meaningful oversight over the tool and its output.
We believe that as AI becomes more common in board processes, courts will scrutinize whether directors asked the right questions about the tools they relied on. The central issue is not whether AI was used, but how, and whether its role was clearly integrated into a diligent and reasoned decision process.
Conclusion
AI is reshaping how deals are structured, how risks are detected, and how decisions are made. But in embracing it, we must not lose sight of legal responsibilities.
While AI may be the engine of modern due diligence, the humans involved remain subject to existing contractual and fiduciary duties. These standards already set thresholds for responsible AI deployment, whether external or internal, without requiring new regulatory regimes. As with other important corporate decisions, the risks of using AI rest with the shareholders of the acquiring company.
Floris Mertens is a PhD candidate at Ghent University, Belgium, and Maarten Herbosch is an assistant professor at KU Leuven, Belgium. This post is based on their recent paper, “The Future of Mergers & Acquisitions? Risk Allocation in AI-Guided Transactions,” available here.