April 14, 2025
by Ferdous Al-Faruque

Stakeholders seek clarity in FDA’s AI in regulatory decision-making draft guidance

Stakeholders want the US Food and Drug Administration (FDA) to address topics such as third-party vendor versus sponsor responsibility, generative artificial intelligence (AI), and labeling requirements in its recent draft guidance on using AI in regulatory decision-making when developing drugs and biological products. They also asked for clarification on terminology, risk assessments, and when and how to engage with the agency.
 
In January, FDA proposed using a risk-based credibility assessment framework for AI use in product submissions based on context of use (COU). The proposed framework would apply when AI produces information or data that could be used in regulatory decisions regarding product safety, effectiveness, and quality. The agency received more than 100 comments on the guidance, including from multiple industry groups. (RELATED: AI in drug development: FDA draft guidance addresses product lifecycle, risk, Regulatory Focus 7 January 2025)
 
The drug lobby group PhRMA agreed with FDA's decision to exclude AI models that do not affect patient safety, drug quality, or reliability of studies from the guidance. However, it asked the agency to explicitly state that such AI models are outside its regulatory authority and purview.
 
PhRMA noted that different types of AI models operate with different levels of autonomy and asked FDA to be more nuanced in how it applies its recommendations to different models rather than broadly applying the same approach to all AI models. The group also asked the agency to define the term 'model' and align the definition with other guidance, and asked how it plans to evaluate AI systems that use multiple AI models or that interface with other software, hardware, or users.
 
“PhRMA encourages FDA to include the definitions of other key terms in the Draft Guidance, including validation, federated learning, locked data, and machine-based systems, drawing upon definitions in other FDA guidance documents and FDA’s Digital Health and Artificial Intelligence Glossary as appropriate to ensure consistency,” said PhRMA. “FDA should consider adding a glossary to the Draft Guidance to define important terms and updating the Digital Health and Artificial Intelligence Glossary as needed with any modifications.”
 
PhRMA said it agrees with the draft guidance's proposed risk-based credibility assessment framework but proposed a seven-step credibility framework to provide more clarity and certainty. Some of the proposals include clarity on how the agency plans to assess AI model risk, develop credibility assessment plans, and document the results of those plans.
 
“We encourage the Agency to continue discussions with stakeholders as the Agency gains experience in making regulatory decisions that involve AI models,” said PhRMA. “Engagement opportunities are particularly useful in the context of AI, where there may be novel regulatory questions associated with emerging technologies.
 
“To enhance these discussions, FDA should focus on complex, high-risk AI models and share learnings across engagement options to foster consistent regulatory decision-making,” the group added. “Additionally, FDA should provide clearer guidance on the stage(s) of the Credibility Framework at which sponsors should engage with the Agency, the appropriate engagement option(s) for each stage, and how to engage in more targeted or limited conversations with the Agency on certain aspects of credibility assessment plans.”
 
The Critical Path Institute (CPI) also commented on the guidance, asking FDA how it would be used to evaluate generative AI and requesting more clarification on the model risk matrix proposed by the agency. The group also asked for clarification on handling missing and out-of-bounds (OOB) data.
 
CPI asked for discussions in the guidance on several topics, including model stability/sensitivity during the model development process and risks associated with AI modeling when external data is used for pretraining.
 
The Duke Margolis Institute for Health Policy commented that while the guidance had been well-received by academic and practicing communities, some areas need more clarification. One such area was risk types: the organization said the evaluation of credibility assessments should be balanced against the type of AI model risk involved.
 
“In some parts of the present guidance, a broader interpretation of risk seems implied in the discussion of scope," said Duke Margolis. “Yet, the examples provided highlight risk to patients rather than risk to the information presented to support a regulatory decision.
 
“Therefore, we encourage FDA to clarify what or which types of risk should be assessed and/or offer additional examples that may further demonstrate how a multi-dimensional, risk-based framework could be applied,” the organization added.
 
Duke Margolis also asked for more information in the guidance on how the use of AI tools should be addressed in the product label, especially in the context of clinical trials. The organization also wanted clarity on labeling so that providers and patients are informed about whether AI was used in clinical studies and whether there is any potential risk to patients.
 
The biotechnology lobby group BIO commented on the guidance and asked FDA to clarify how it fits in with other regulatory frameworks, such as good clinical practice (GCP) and good manufacturing practices (GMP). The group said it supports the agency excluding AI discovery and operational efficiencies from the guidance and being open to the idea of AI models that self-evolve or autonomously adapt.
 
“BIO recommends that the guidance specifically state that the guidance applies broadly to AI-derived outputs (e.g., biomarkers, endpoints, and diagnostic tools) intended to support regulatory decision-making for drug and biological products,” said BIO. “Further, BIO recommends AI models that are considered to have little to no risk will not be required to submit a credibility assessment plan (CAP) and credibility assessment report (CAR), and this should be stated explicitly in the guidance.
 
“This would align with a risk-based approach and help ensure that the deployment of credible AI models is encouraged and not delayed unnecessarily,” the group added.
 
While the draft guidance recommends that sponsors engage with FDA early, BIO, like PhRMA, asked for more clarity about when such early engagement should happen, the scenarios in which it could occur, and the types of questions that could be asked depending on the development stage. The group also asked the agency to work with international regulators to ensure they agree on terminology, approaches, methodologies, and reporting mechanisms when reviewing AI models in drug development.
 
BIO also noted that sponsors often rely on third-party vendors to develop, train, and maintain the AI models used in their studies. In such situations, however, the sponsor does not own or have complete access to the AI models.
 
“BIO recommends that FDA identify mechanisms by which FDA may access information the Agency considers appropriate to inform its regulatory decision-making that help protect any third-party proprietary information,” said BIO. “Given the proliferation of third-party vendors developing AI solutions, BIO recommends the final guidance address how sponsors can appropriately use these solutions and the type of data (and ways of communicating such data to FDA) to help enhance innovation.”
 