December 8, 2025
by Joanne S. Eglovitch

Experts call on FDA to provide greater clarity on its approach to AI

Washington, DC – Experts said that the US Food and Drug Administration (FDA) should provide more clarity on how it intends to regulate the use of artificial intelligence and large language models when they are used in the development or manufacture of a product but are not part of the product itself.
 
Speaking at the Food and Drug Law Institute (FDLI) Enforcement, Litigation, and Compliance Conference on 4 December, Anthony Schiavone, associate vice president of regulatory and healthcare compliance counsel for Agilent Technologies, said there is a “mish-mash” of global regulations surrounding AI. He noted that the EU has established the most comprehensive regulatory framework for AI compared to other jurisdictions.
 
During the meeting, industry panelists provided an update on AI submissions to FDA’s Center for Drug Evaluation and Research (CDER) and discussed some of the new challenges that are emerging around the agency’s internal use of AI.
 
More than 800 CDER submissions
 
Tala Fakhouri of Parexel, formerly the associate director for data science and policy analysis in the Office of Medical Policy (OMP) within CDER, shared insights on the increasing use of AI in regulatory submissions.
 
Fakhouri said CDER has received more than 800 regulatory submissions incorporating AI components since the first such submission in 2016. The majority of these submissions are in oncology.
 
She added that these AI tools are often employed to assist with aspects of clinical trials, such as selecting patients, predicting outcomes, performing confounding adjustment, conducting pharmacometric modeling, creating digital twins, and monitoring postmarket safety.
 
FDA issued two guidance documents related to AI in January 2025, one on the use of AI in drug and biologic development and the other related to AI-enabled software functions in medical devices across the product lifecycle.
 
Fakhouri stated that the AI drug guidance document focuses on the use of AI to assist in regulatory decision-making regarding safety, effectiveness, or drug quality. This includes the use of AI-derived biomarker endpoints. “The FDA will want to know about that,” she said.
 
Using AI for drug discovery, such as predicting how a protein folds, is out of scope. Additionally, a firm's use of AI for business practices or operational studies is excluded unless it relates to product safety or effectiveness. Regulatory authoring is also considered out of scope.
 
Yet she said there are significant gray areas in the middle, which is the “tricky” part: uses of AI in the operations of a clinical trial or in the manufacturing of a drug that will not necessarily appear in the regulatory submission but are still covered by an inspection.
 
“That is the part where I think we would benefit if we got more FDA guidance in that domain,” Fakhouri said.
 
Schiavone agreed on the necessity for more guidance from the FDA regarding AI. “FDA needs to get more guidance out there in deciding where to move forward,” he said, including in the area of drug advertising and promotions.
 
Schiavone noted that this gap in guidance has resulted in a lack of governance for these systems, prompting states to devise their own strategies.
 
Global AI landscape
 
Schiavone also observed that the landscape for regulating AI is inconsistent from jurisdiction to jurisdiction.
 
“The consequences of a fractured landscape are that regulatory schemes are siloed and fractured, and national security and privacy concerns constrain companies to solutions that may only work in one or a handful of locales. For example, certain jurisdictions disallow use of LLMs like ChatGPT in favor of solutions originating from within like DeepSeek,” he said.
 
Schiavone noted that “the EU AI Act has a solid implementation deadline, and it takes a risk-based approach,” which FDA is moving towards.
 
There is also a global expansion of AI regulation and legislation. He stated that China, Brazil, South Korea, Singapore, Japan, and other countries have implemented AI regulations or strategies.
 
Schiavone also addressed China’s approach to AI in the regulatory space. “This is a very large market, and they are constantly evolving. They are very concerned with content in relation to respect for the Chinese government. They want to understand the underlying workings of your algorithms, as there are certain algorithms they do not want to see. China is still behind the US, but they are catching up,” he said.
 
FDA’s internal AI usage
 
Nathan Brown, a partner in the healthcare and life sciences practices at Akin Gump, stated that FDA should provide more insight into how it plans to utilize AI internally. This includes the use of Elsa and its recent implementation of an agentic AI tool for staff.
 
Last week, FDA announced the launch of agentic AI capabilities for all agency employees, describing agentic AI as advanced systems designed to achieve specific goals by planning, reasoning, and executing multi-step actions. FDA said use of the new tool is entirely optional for staff and is meant to complement Elsa, which the agency says is voluntarily used by more than 70% of its staff.
 
“My overall pitch is that it would be helpful if the FDA would provide more clarity about their approach. In the FDA’s press release, they said that 70% of staff are using Elsa in some way…. I don’t know what that means exactly,” Brown said.
 
Brown emphasized the need for sponsors to align with FDA on AI tool usage before reviews begin.
 
“I am seeing situations in which a regulatory decision may not be wrong, but the feedback you are getting back from the agency, maybe it was not the reviewer that read that or wrote that because it does not make sense…. In general, for a company that wants to formally disagree with the agency, you really want to work it out. Think about it now rather than waiting until you are on the clock and puzzling about this for the first time,” Brown said.
 