Artificial intelligence (“AI”) tools are expanding into every aspect of consumers’ daily lives. That proliferation has the potential to dramatically increase bank liability under the Electronic Funds Transfer Act (“EFTA”) and Regulation E.
The consulting firm McKinsey & Company recently reviewed the potential for “agentic” AI to conduct consumer transactions. An “agentic” AI model is one that is given agency to take actions on behalf of a user. This has opened the possibility of agentic AI not only providing recommendations for consumers, but actually executing transactions:
And what begins as AI-mediated discovery increasingly carries through to execution, as AI agents compare options, assemble baskets, and complete checkout via emerging payment protocols and merchant integrations. The AI platform Perplexity, for example, launched an agentic shopping tool, “Perplexity Buy with Pro,” in late 2024. OpenAI’s Operator, launched in January 2025 and now integrated into ChatGPT, uses agents to help users automate tasks like booking travel and restaurant reservations. More recently, OpenAI announced an Agentic Commerce Protocol, codeveloped with Stripe, which allows users to complete purchases within ChatGPT without leaving the chat.
McKinsey & Company, October 17, 2025 Report (emphasis added). As McKinsey describes it, agentic AI developers are looking forward to a day when entire transactions are conducted through agentic AIs interacting with each other on behalf of their users.
Anyone who has used AI today, however, is familiar with the potential for errors. AI tools generated a list of fake books for a summer reading list, ordered $222 worth of chicken nuggets at McDonald’s, and approved serving food that had been nibbled on by rats. No doubt, many of these errors will be worked out over time.
The stakes are low when an agentic AI accidentally tells a restaurant with humans in the kitchen to cook 260 chicken nuggets. Humans can easily intervene to correct the obvious error. But what happens when, instead of humans in the kitchen, another agentic AI is in charge of fulfilling the order and charging the customer? McKinsey anticipates a day when different agentic AI tools negotiate and consummate transactions with minimal human oversight.
Banks may unwittingly find themselves underwriting the risk that agentic AIs get these transactions wrong. Banks are already well aware that, under the EFTA and Regulation E, they can be liable for losses resulting from unauthorized transactions from consumer accounts. Regulation E defines "errors" to include:
(i) An unauthorized electronic fund transfer;
(ii) An incorrect electronic fund transfer to or from the consumer’s account;
12 C.F.R. § 1005.11. Is an agentic AI transaction authorized by the consumer? Presumably, the consumer will provide a debit card number to the agentic AI tool to facilitate transactions. If the agentic AI then uses that card to buy 260 chicken nuggets, does that mean the debit card has been stolen?
These questions are about to be posed to financial institutions all over the country as agentic AI tools proliferate. Banks should consult with their compliance staff and legal counsel to ensure they have a consistent response to claims of unauthorized agentic AI transactions. One hurdle for a consumer claiming a transaction was unauthorized will be showing that the consumer received no benefit from it, as 12 C.F.R. § 1005.2(m) requires. And what if the consumer was fraudulently induced to provide account information to the agentic AI on the premise that the agentic AI would never buy 260 of anything?
Someone is going to be left holding the bag for agentic AI misfiring. Hopefully, the bag will come with some fries.