FDA’s AI/ML action plan includes ‘tailored’ regulatory framework for SaMD

Regulatory News | 18 January 2021

The US Food and Drug Administration has released a five-part action plan for its oversight of artificial intelligence/machine learning (AI/ML)-based software as a medical device (SaMD) based on feedback from developers and manufacturers on an April 2019 discussion paper.
Top of the action-plan list is the agency’s intention to develop a “tailored” regulatory framework for the medical software by issuing draft guidance on the Predetermined Change Control Plan outlined in the discussion paper, which mapped out a regulatory premarket review for AI/ML-based SaMD modifications. (RELATED: FDA proposes regulatory framework for AI- and machine learning-driven SaMD, Regulatory Focus 2 April 2019; FDA’s steps toward a new review framework for medical devices using Artificial Intelligence Algorithms, Regulatory Focus 7 May 2019).
The change control plan allows manufacturers to prespecify the modifications to the AI/ML-based SaMD as it changes over time through learning, while an Algorithm Change Protocol (ACP) explains how the algorithm “learns” and changes without compromising the medical software’s safety and effectiveness. The draft guidance will detail which elements need to be included in the pre-specifications and ACP to ensure continued safety and effectiveness. It will also address the identification of types of modifications, the submission and review processes for the premarket review, and submission content. The agency aims to publish the draft guidance this year.
The remaining action-plan items address:
  • Good machine learning practice (GMLP);
  • Use of a patient-centered approach that includes transparency to SaMD users;
  • Improving methods for evaluating and addressing algorithmic bias to ensure algorithm robustness; and
  • Real-world performance (RWP) monitoring for AI/ML software.

Regarding GMLP, the agency said it plans to foster harmonization of good practice development through consensus standards efforts and by reaching out to US and international AI/ML-focused groups.
The proactive, patient-centered transparency efforts are aimed at building user trust around functionality by ensuring patients fully understand device benefits, risks, and shortcomings, especially as they change over time. The agency will also hold a public workshop on the role of labeling in supporting transparency to users. The workshop content will be based on patient input during an October 2020 Patient Engagement Advisory Committee meeting to establish what factors affect patients’ trust in these devices.
Efforts to offset algorithmic bias and ensure robustness will draw on collaborative regulatory science research methods to evaluate AI/ML software. In particular, FDA noted that AI/ML software uses historic datasets that might reflect data variations along racial, ethnic, and socio-economic status lines. The agency emphasized the importance of addressing those inherent biases to ensure medical devices are well suited for a racially and ethnically diverse patient population.
Real-world data (RWD) for gauging RWP are a key component of the total product lifecycle (TPLC) approach, which was outlined in the discussion paper. Stakeholders requested clarity on a range of aspects relating to RWP, including the type of reference data that should be used for measuring real-world performance, the developer or manufacturer’s role in oversight of RWP, the quantity of data that should be collected and how often it should be submitted to the agency, and how the algorithms and data should be validated and tested.
In response, the agency said it would work with developers and manufacturers wanting to voluntarily pilot an RWP monitoring process. The collaboration would be in concert with existing RWD-focused programs within the FDA, which would allow the agency to develop a more comprehensive framework for collecting RWP parameters. The agency would also be able to use evaluations made during the process to set thresholds for the RWP metrics, especially those relating to safety and usability, and garner user feedback during the collaborative process.
The 2019 discussion paper drew on practices from existing premarket programs in the US as well as risk categorization principles from the International Medical Device Regulators Forum, other risk-focused FDA documents on benefit-risk and software modifications, and the total lifecycle regulatory approach outlined in the Digital Health Software Precertification Program.


© 2022 Regulatory Affairs Professionals Society.

