
Industry Groups Question Aspects of CDRH’s AI/ML-Based SaMD Framework

Posted 10 June 2019 | By Ana Mulero 


Feedback on the US Food and Drug Administration’s (FDA) proposed regulatory framework for artificial intelligence (AI)- and machine learning (ML)-based software as a medical device (SaMD) underscores the uncertain environment for developing such products.

The comment period on a discussion paper that proposed the framework in April closed last week with more than 100 comments, most of which were made public on Friday. Many commenters tout the efforts of the FDA’s Center for Devices and Radiological Health (CDRH) to adapt to issues unique to AI/ML SaMD but say the proposed framework may be premature.

Groups including the Medical Imaging & Technology Alliance (MITA), Consumer Technology Association (CTA), Electronic Health Record Association (EHRA), Connected Health Initiative (CHI), Healthcare Information and Management Systems Society (HIMSS) and the Personal Connected Health (PCH) Alliance take issue with the framework’s reliance on regulatory proposals that are untested, still under development, or both, such as good machine learning practices (GMLPs). Critics also include the American Medical Association (AMA), Microsoft and electronic health record system vendor Allscripts, among others.

With CDRH’s software precertification (Pre-Cert) program still in its pilot phase, some commenters call into question the interdependent relationship between the Pre-Cert program and the AI/ML regulatory framework described in the discussion paper. Synergies with the Pre-Cert program include the International Medical Device Regulators Forum’s (IMDRF) SaMD risk categorizations, greater reliance on real-world evidence (RWE) and applying a total product lifecycle (TPLC) approach to SaMD based on a threshold of organizational excellence.

The discussion paper relies on assumptions that the Pre-Cert model “is a necessary and sufficient base from which to build a more encompassing regulatory model around all AI software” and that the “existing regulatory review processes are insufficient for AI software,” MITA executive director Patrick Hope says.

Except for IMDRF’s risk categorizations, Pre-Cert concepts have not yet been put into practice outside of the pilot and lack further guidance to support widespread adoption among product developers. Absent a fully operational program, comments request either nixing the synergies or further clarifying the connection.

Another reason to avoid an interdependent relationship between Pre-Cert and the framework on modifications to AI/ML-based SaMD is the limited scope of the IMDRF framework, commenters argue.

On the issue of flexibility, several commenters ask for both the Pre-Cert program and the AI/ML framework to be made available to all manufacturers once established and fully operationalized. Similarly, some request allowing class III AI/ML-based SaMD to make use of the framework. “Class III AI/ML SaMD can and should be developed using the same Quality System and therefore should be able to take advantage of this proposed regulatory framework,” AdvaMed argues.

Matthew Diamond, medical officer for digital health at FDA, clarified during a fireside chat at AdvaMed’s Digital MedTech Conference in May that the intention is not to limit the AI/ML regulatory framework, once implemented, to those who choose to participate in the Pre-Cert program. Diamond also highlighted GMLPs, the new concept of a “focused review” and transparency as the three “fundamental components” of the AI/ML discussion paper. Most commenters also offer recommendations on each of these three components to inform their potential direction.

Others recognizing the similarities with the Pre-Cert model recommend testing the newly proposed framework as an expansion of the Pre-Cert pilot or as a separate pilot.

MITA, AdvaMed and others request clarifications around how existing review pathways, including the Traditional and Special 510(k) pathways, and related guidances would fit into the framework.

Many commenters also argue that the discussion paper is incorrect in assuming that AI and ML are well understood in the medical care community and that such technologies are prevalent in the market.

Additional recommendations unrelated to Pre-Cert speak to the pressing need for standardized nomenclature and terminology—with some raising concerns with the discussion paper’s definitions of “locked” versus “continuously adaptive” algorithms as well as the interchangeable use of AI and ML—and developing agency-wide policies on prescription drug-related software. Initial clearance or approval processes for AI/ML-based SaMD and IMDRF language in the US regulatory paradigm also need clarification, commenters argue.

Public Docket
