Introduction
On January 6, 2025, the FDA released draft guidance on using artificial intelligence (AI) in regulatory decision-making for drugs and biological products. The draft guidance – the first of its kind from the agency – aims to enhance the efficiency and accuracy of the drug approval process, ensuring that applications incorporating AI meet rigorous standards for safety and effectiveness. Public comments on the draft guidance can be submitted through April 7, 2025.
The draft guidance provides recommendations for establishing and maintaining trust in AI systems used across the drug product lifecycle, focusing on safety, effectiveness, and quality. In preparing the document, the FDA considered input from the community, including feedback from “a number of interested parties including sponsors, manufacturers, technology developers and suppliers, and academics” and input provided at an FDA-sponsored expert workshop convened by the Duke-Margolis Institute for Health Policy in December 2022.[1]
Key Highlights from the Guidance
While the FDA’s proposed guidance is procedural in nature, it is not without intrigue: the convergence of artificial intelligence and biopharmaceuticals has not only been a long time coming, but also represents a nexus with the potential to usher in landmark changes. Thus, the FDA’s guidance – while tentative at this stage – will undoubtedly be closely scrutinized as AI-generated data is further integrated into the biopharma regulatory framework.
The FDA’s guidance applies specifically to AI models used to produce data supporting regulatory decisions on drug safety, effectiveness, and quality.[2] The guidance does not address the use of AI models (1) in drug discovery or (2) for operational efficiencies (e.g., internal workflows, resource allocation, drafting a regulatory submission) that do not impact patient safety, drug quality, or the reliability of results from a nonclinical or clinical study. Id. Notably, the FDA encourages sponsors to engage with the agency early if they are uncertain whether their use of AI falls within the scope of this guidance. Id.
A critical aspect of the guidance is its seven-step risk-based credibility assessment framework, designed to evaluate AI models based on their context of use (COU) and the risk their output poses to regulatory decision-making:
- Define the question of interest that will be addressed by the AI model;
- Define the COU for the AI model;
- Assess the AI model risk;
- Develop a plan to establish the credibility of AI model output within the COU;
- Execute the plan;
- Document the results of the credibility assessment plan and discuss deviations from the plan; and
- Determine the adequacy of the AI model for the COU.
Id. at 5-6. In summary, the framework instructs (a) defining the question of interest and the context of use; (b) assessing AI model risk by considering the model’s influence on the decision and the potential consequences of errors; (c) developing and executing a credibility plan tailored to the model’s risk level; and (d) documenting and evaluating results, allowing for iterative adjustments based on those results.
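For readers approaching the framework from a technical angle, the sketch below illustrates the risk-assessment step by combining the guidance’s two stated risk factors, model influence and decision consequence, into a single risk tier. The three-level scale, the scoring thresholds, and the function names are our own illustrative assumptions; the guidance describes these factors qualitatively and prescribes no numeric formula.

```python
# Hypothetical sketch of step 3 (assess the AI model risk). The guidance
# frames model risk in terms of two factors: model influence (how much the
# AI output contributes to the decision) and decision consequence (the
# significance of an adverse outcome). The three-level scale and scoring
# thresholds below are illustrative assumptions, not FDA-prescribed values.

from enum import IntEnum

class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def model_risk(model_influence: Level, decision_consequence: Level) -> Level:
    """Map the two risk factors to a risk tier; higher tiers call for a
    more rigorous credibility assessment plan (step 4)."""
    score = model_influence * decision_consequence  # ranges from 1 to 9
    if score <= 2:
        return Level.LOW
    if score <= 4:
        return Level.MEDIUM
    return Level.HIGH

# Example: AI output is the primary evidence (high influence) bearing on a
# patient-safety decision (high consequence), yielding high model risk.
print(model_risk(Level.HIGH, Level.HIGH).name)  # -> HIGH
```

The point of the exercise is proportionality: the higher the tier, the more extensive the credibility activities (steps 4 through 7) the guidance contemplates.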
Lifecycle Management and Ongoing Challenges
A key focus of the guidance is AI model lifecycle maintenance, particularly addressing challenges like data drift, where an AI model’s performance degrades over time as new input data diverges from the data used to develop the model (a simple monitoring illustration appears after the list below). The FDA recommends continuous monitoring and updating of AI models to ensure they remain effective and reliable throughout their lifecycle. The guidance also addresses broader AI-related concerns:
- Dataset quality and integrity: AI models require high-quality, representative data to produce reliable outcomes.
- Algorithmic bias: The FDA acknowledges the risk of bias in AI-generated results and stresses the importance of bias mitigation strategies.
- Transparency and explainability: Regulatory decisions must be interpretable, necessitating AI models that provide clear, understandable justifications for their outputs.
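To make the data drift concern concrete, the following sketch compares the distribution of a single model input feature at development time against production data using a two-sample Kolmogorov-Smirnov test. The choice of test, the significance threshold, and the synthetic data are our own assumptions, offered only to show what continuous monitoring can look like in practice; the guidance does not prescribe a particular statistical method.

```python
# Minimal sketch of input-data drift monitoring for a deployed AI model.
# The feature, synthetic data, KS test, and 0.01 threshold are all
# illustrative assumptions, not an FDA-specified monitoring method.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)   # inputs seen at development time
production = rng.normal(loc=0.4, scale=1.0, size=5_000)  # newer inputs with a shifted mean

result = ks_2samp(reference, production)
if result.pvalue < 0.01:
    print(f"Drift detected (KS statistic={result.statistic:.3f}, p={result.pvalue:.2e}); "
          "re-evaluate the model per its lifecycle maintenance plan.")
else:
    print("No significant drift detected.")
```

In a real deployment, a check of this kind would run on each production feature on a schedule, with flagged drift feeding back into the documented lifecycle maintenance plan rather than triggering an automatic model change.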
Potential Questions and Issues to Address
While the draft guidance is a strong signal of the FDA’s efforts to integrate technological change into biopharmaceutical regulation, these efforts will not come without strain. For example, it remains to be seen how stakeholders will consistently apply the guidance’s model risk matrix across diverse use cases, given the relatively scant guidance available at present and the numerous (and rapidly multiplying) ways AI can be leveraged in a scientific setting. While the FDA guidance does not – and perhaps cannot – attempt to cover all implementations of AI, the speed at which the technology progresses, and at which the FDA responds, will need to be monitored closely, as several open questions remain:
- Consistency in AI risk assessment: How will different stakeholders interpret and apply the AI risk matrix across diverse regulatory scenarios?
- Regulation of self-evolving AI models: The FDA highlights concerns about AI systems that adapt over time, but the extent of post-approval oversight is still unclear.
- Impact on smaller companies: Meeting the documentation and validation requirements could pose challenges for startups and smaller biopharma companies with limited regulatory experience.
Industry Engagement and Next Steps
The FDA is positioning this guidance as the beginning of an ongoing conversation, strongly encouraging sponsors to engage early to discuss AI model risk and credibility assessment plans. Depending on the AI model’s intended use, sponsors and stakeholders can pursue a range of engagement options, including:
- INTERACT meetings (for early-stage regulatory discussions)
- Pre-IND meetings (for investigational new drug applications)
- Digital Health Technologies Program (for AI and digital tool integration)
- Complex Innovative Trial Design (CID) Program (for AI-driven trial methodologies)
- Emerging Drug Safety Technology Program (EDSTP) (for post-market AI surveillance)
Id. at 17-20.
Conclusion
Publishing the guidance is the beginning, not the end, of the process, which the FDA acknowledges will require ongoing dialogue. The FDA “strongly encourages” early engagement between sponsors and the agency to discuss the use of AI models in drug and biological products, emphasizing the importance of setting expectations for credibility assessments and addressing potential challenges early.
While the FDA will consider public feedback on whether its risk-based framework meets industry expectations (and whether current engagement opportunities are sufficient), this newly published guidance is a clear reflection of the FDA’s commitment to incorporating AI into regulatory processes while upholding safety and reliability standards.
[1] “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products.” Federal Register, vol. 90, no. 4, 7 Jan. 2025, pp. 851-855. U.S. Government Publishing Office, https://www.federalregister.gov/documents/2025/01/07/2024-31542/considerations-for-the-use-of-artificial-intelligence-to-support-regulatory-decision-making-for-drug.
[2] “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products.” U.S. Food and Drug Administration, Jan. 2025, https://www.fda.gov/media/184830/download, p. 3.