
August 10, 2023
by Ferdous Al-Faruque

Stakeholders seek risk-based approach from FDA on regulating AI/ML for drug development

Industry groups and researchers are calling on the US Food and Drug Administration (FDA) to harmonize its thinking with other regulators, coordinate with its own experts, and apply a risk-based approach when considering how transparent artificial intelligence and machine learning (AI/ML) tools used in drug development need to be.
 
These comments were made in response to FDA’s recent white paper on the use of AI/ML in drug development, as well as in the development of medical devices intended to be used with drugs. Several organizations and individuals wrote to the agency urging it to take a risk-based approach to regulating such tools. (RELATED: FDA publishes discussion paper on AI/ML in drug development, Regulatory Focus 12 May 2023)
 
The International Society for Pharmaceutical Engineering (ISPE) said transparency is a “major barrier” because end users don’t understand how the AI/ML software was trained, validated and updated throughout its lifecycle. The group added that end users must be able to trust the relationship between input and output data for the software to work.
 
ISPE said it is important that AI/ML software be transparent about how it was trained and how it operates, and that it be described in a way that lets the end user understand how it works. The group added that the software should also disclose the rationale behind its operation and how much external information it draws on in producing results.
 
While emphasizing the need for transparency, ISPE also said that FDA should use a risk-based approach when determining how much human involvement is necessary to operate the AI/ML software.
 
“The level of human involvement should be proportional to the utilization and risk of the AI/ML model,” said ISPE.
 
“Models that provide suggested or optional direction for human follow-up would have less need for human involvement than one that makes decisions, for example as related to process control,” the group added. “Models that make independent decisions related to product quality, safety, or efficacy (e.g., batch release) would have the highest level of risk, and thus merit the greatest human oversight.”
 
The Biotechnology Innovation Organization (BIO) also commented on how much transparency is necessary when overseeing AI/ML software used in developing new drugs and biologics. It said transparency around elements such as model architecture, data sources, validation, ethical considerations, and safeguards should all be considered by FDA.
 
“BIO notes that different levels of transparency are needed for different stakeholders and for different use cases,” the group wrote. “For example, a higher level of transparency may be required for FDA, while a different level of transparency may be appropriate for patients. To support transparency between sponsor and regulator, BIO believes a comprehensive set of documentation should be communicated to the regulators to serve as an audit trail through the AI/ML lifecycle.”
 
BIO also asked FDA to continue to update stakeholders on the AI/ML-related projects it is working on and to develop a decision tree or quick-start guide so they know where to go with their AI/ML-related questions.
 
The Computing Community Consortium (CCC), a group of academic computing researchers, said that the level of transparency needed for the AI/ML software should depend on how it is used in the drug development process.
 
“For instance, if AI is used in compound optimization and a researcher follows up with experimental verification, there isn’t a compelling need for a high level of transparency due to the subsequent experimental work,” said the group. “However, if a researcher wants to speed up a clinical trial, there needs to be more transparency into the code and data that purports to justify the more rapid schedule.”
 
One of the CCC co-signers is Kevin Fu, a computer engineering professor at Northeastern University and a former acting director for medical device cybersecurity at FDA’s Center for Devices and Radiological Health (CDRH). He and his co-signers noted that FDA already has substantial AI/ML expertise, including at CDRH, and said the agency should draw on those experts to create a program dedicated to developing AI/ML regulations across its centers.
 
“AI/ML is rapidly becoming a pervasive aspect of all drug development applications, and so we believe that it is more important to focus on conversations related to the use of AI not being siloed,” said CCC.
 
Similarly, BIO said FDA and other agencies should align their standards for AI/ML with those developed by the National Institute of Standards and Technology (NIST). The group added there needs to be “high-level principles and standards” across different sectors so that all agencies in the federal government are on the same page.
 
ISPE went a step further, asking FDA to ensure that the AI/ML regulations it develops are harmonized with those of other regulators and federal agencies. It said aligning regulations will help drug developers around the world adopt AI/ML models more readily.
 
“International alignment is needed to advance the use of AI/ML in advanced pharmaceutical manufacturing,” the group said. “Without a common regulatory approach, implementation of new technology will likely be limited, since most pharmaceutical manufacturing is international.”