With innovation comes regulatory scrutiny. On December 9, 2025, the Financial Industry Regulatory Authority, Inc. (“FINRA”) released its 2026 Annual Regulatory Oversight Report (the “2026 Report”), which includes a new section dedicated to generative artificial intelligence (“GenAI”). The 2026 Report is the latest iteration of FINRA’s yearly summary of insights from its regulatory oversight activities, designed to aid member firms in strengthening their compliance programs.
Below we summarize FINRA’s key observations on GenAI use by member firms and related risks. We also review certain compliance considerations for member firms integrating GenAI tools into their operations. Our takeaway: FINRA recognizes that GenAI is here to stay, and the regulator expects member firms to adopt and deploy GenAI responsibly and in compliance with existing regulatory obligations.
Observed GenAI Use Cases
According to the 2026 Report, member firms have implemented GenAI solutions largely focused on efficiency gains in internal processes and on information retrieval. FINRA lists the following use cases as among the most commonly observed:
- Summarization and information extraction (the top observed use case)
- Conversational AI & question answering
- Sentiment analysis
- Translation
- Content generation & drafting
- Classification & categorization
- Workflow automation & processing intelligence
- Coding
- Query
- Synthetic data generation
- Personalization & recommendation
- Analysis & pattern recognition
- Data transformation
- Modeling & simulation
FINRA explains in the 2026 Report that it published this list in part to promote the use of shared GenAI terminology across the financial services industry.
Emerging Risks Highlighted by FINRA
While the use of GenAI opens the door to efficiency gains, it also introduces novel risks for member firms, including:
- Hallucinations, a term that the 2026 Report describes as “instances where the model generates information that is inaccurate or misleading, yet is presented as factual information”;
- Bias, where a model produces skewed or inaccurate outputs due to model design decisions or data that are limited or inaccurate, including outdated training data;
- Cybersecurity risks, including deepfakes, synthetic identities, and polymorphic malware that can be deployed against a member firm or its customers;
- Unsupervised execution of regulated actions by agentic AI, which refers to autonomous systems capable of executing tasks without predefined rules; and
- Data sensitivity concerns.
Member firms must develop suitable approaches to identifying and mitigating these risks, whether GenAI is deployed internally or through third-party vendors; additional due diligence and ongoing monitoring of both internal and vendor use of GenAI may be necessary.
Compliant Use of GenAI: More of the Same, and Human Oversight is Key
FINRA notes unequivocally in the 2026 Report that its rules, and the securities laws generally, are intended to be technologically neutral and continue to apply when firms use GenAI. The design and implementation of supervisory systems remain critical in this context. Member firms should consider whether updates are necessary to their review and approval processes, governance frameworks, and risk management protocols in light of their current and proposed use of GenAI. In addition, member firms should implement regular testing of GenAI tools as well as ongoing monitoring of the prompts into and outputs from such tools.
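For firms building this kind of prompt-and-output monitoring, the following Python sketch illustrates one way such logging could be structured for later supervisory review. It is an illustrative outline only, not anything prescribed by FINRA or the 2026 Report: the `supervised_generate` wrapper, the `model_fn` stand-in for the firm's GenAI interface, the log file location, and the record fields are all hypothetical choices made for the example.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical location for an append-only prompt/output audit log.
AUDIT_LOG = Path("genai_prompt_output_log.jsonl")


def supervised_generate(model_fn, prompt: str, user_id: str, use_case: str) -> str:
    """Call a GenAI tool and record the prompt/output pair for supervisory review.

    `model_fn` stands in for whatever GenAI interface the firm uses; here it is
    simply a callable that takes a prompt string and returns a response string.
    """
    response = model_fn(prompt)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "use_case": use_case,
        "prompt": prompt,
        "response": response,
        "reviewed": False,  # flag for later human review in the supervisory workflow
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return response


if __name__ == "__main__":
    # Stand-in model function used only to demonstrate the wrapper.
    demo_model = lambda p: f"[model output for: {p}]"
    supervised_generate(
        demo_model,
        "Summarize account activity for Q3.",
        user_id="analyst-042",
        use_case="summarization",
    )
```

An append-only log along these lines makes it straightforward to sample prompt/output pairs during periodic testing or to route unreviewed records into a supervisory queue; the specific fields and review workflow would need to fit the firm's existing supervisory system.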
Member firms also must consider how FINRA’s record retention requirements apply to AI-generated content, including chatbot outputs. Depending on audience and distribution, such outputs may constitute correspondence, retail communications, or institutional communications, each requiring appropriate review and retention consistent with FINRA and SEC obligations.
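As a rough illustration of how audience-based classification and retention tagging might be wired into a workflow, the sketch below assigns each AI-generated output a communication category and basic retention metadata. The audience thresholds follow the familiar FINRA Rule 2210 definitions, but the function names, the `RetentionRecord` structure, and the `source_system` field are hypothetical; neither Rule 2210 nor the 2026 Report prescribes any particular implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class CommunicationType(Enum):
    # Categories track FINRA Rule 2210's audience-based definitions.
    CORRESPONDENCE = "correspondence"                  # 25 or fewer retail investors in 30 days
    RETAIL_COMMUNICATION = "retail_communication"      # more than 25 retail investors in 30 days
    INSTITUTIONAL_COMMUNICATION = "institutional_communication"  # institutional investors only


def classify_output(retail_recipients_30d: int, institutional_only: bool) -> CommunicationType:
    """Classify an AI-generated communication by its audience."""
    if institutional_only:
        return CommunicationType.INSTITUTIONAL_COMMUNICATION
    if retail_recipients_30d > 25:
        return CommunicationType.RETAIL_COMMUNICATION
    return CommunicationType.CORRESPONDENCE


@dataclass
class RetentionRecord:
    """Minimal retention metadata attached to an AI-generated output."""
    content: str
    comm_type: CommunicationType
    created_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    source_system: str = "genai-chatbot"  # hypothetical identifier for the generating tool


# Example: a chatbot answer sent to a single retail customer is treated as correspondence.
record = RetentionRecord(
    content="Your account statement is available in the client portal.",
    comm_type=classify_output(retail_recipients_30d=1, institutional_only=False),
)
print(record.comm_type.value)  # -> "correspondence"
```

Tagging outputs at creation time in this way can feed both the review workflow (which category of review applies) and the archival system (how long the record must be kept), though the actual retention periods and review procedures would be set by the firm's compliance policies.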
For more information, see the 2026 FINRA Annual Regulatory Oversight Report, available at FINRA.org.